A Comparison of Machine Learning Tools That Model the Splitting Tensile Strength of Self-Compacting Recycled Aggregate Concrete

Several types of research currently use machine learning (ML) methods to estimate the mechanical characteristics of concrete. This study aimed to compare the capacities of four ML methods: eXtreme gradient boosting (XG Boost), gradient boosting (GB), Cat boosting (CB), and extra trees regressor (ETR), to predict the splitting tensile strength of 28-day-old self-compacting concrete (SCC) made from recycled aggregates (RA), using data obtained from the literature. A database of 381 samples from literature published in scientific journals was used to develop the models. The samples were randomly divided into three sets: training, validation, and test, with each having 267 (70%), 57 (15%), and 57 (15%) samples, respectively. The coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE) metrics were used to evaluate the models. For the training data set, the results showed that all four models could predict the splitting tensile strength of SCC made with RA because the R2 values for each model had significance higher than 0.75. XG Boost was the model with the best performance, showing the highest R2 value of R2 = 0.8423, as well as the lowest values of RMSE (=0.0581) and MAE (=0.0443), when compared with the GB, CB, and ETR models. Therefore, XG Boost was considered the best model for predicting the splitting tensile strength of 28-day-old SCC made with RA. Sensitivity analysis revealed that the variable contributing the most to the split tensile strength of this material after 28 days was cement.


Introduction
Currently, concrete, as a construction material, is in great demand due to the rapid and advanced growth of infrastructure development in many countries, typically utilized in engineered buildings throughout the globe [1][2][3]; this requires the technology surrounding it to permanently change, looking for improvements and innovations. This is why particular types of concrete have recently emerged, such as self-compacting concrete (SCC), representing an acceptable construction potential while also attracting interest in the use of recycled aggregates (RA) [4][5][6][7][8] from construction and demolition waste (CDW) as a substitute to conventional aggregates [9][10][11], minimizing or potentially eliminating the environmental impacts produced by these CDW [12] and allowing the combination of economic development with sustainability and environmental protection [13].
SCC made with RA is one of the most widely used building materials in construction [14,15] due to its compaction characteristics (without mechanical vibration) and its fluidity. It is a high-strength and efficient concrete that guarantees uniformity. However, its complex

Machine Learning Methods
ML methods learn from data and then perform classification and prediction. They are becoming increasingly popular in the construction sector, where growing computational power is used to estimate the performance of materials [32,37]. The present study applied four ML methods to predict the splitting tensile strength of SCC made with RA: XG Boost, GB, CB, and ETR. These methods were selected based on their extensive usage in related investigations. The ML process is presented in Figure 1. A summary overview of these methods is presented below.

eXtreme Gradient Boosting (XG Boost)
eXtreme gradient boosting (XG Boost) was developed by Chen and Guestrin [50] in 2016 as a scalable tree-boosting ensemble learning method, helpful for both ML and data mining. XG Boost employs a more regularized formalization of the technique to control overfitting and achieve better performance. As a result, model complexity decreases, and overfitting is largely avoided [51,52]. XG Boost can be employed as an advanced GB method with distributed-parallel processing; this follows from the comparison of XG Boost with GB performed by Chen and Guestrin [50]. GB, by contrast, suffers from the drawbacks of overfitting and slowness. XG Boost also presents two self-compatible regulatory functions (column shrinkage and undersampling), making it more reliable [53].
Moreover, it presents better prediction capability: when there is a large volume of data, the processing time is shorter for XG Boost than for GB. Marani et al. [54] have pointed out that XG Boost employs a regularization function together with a loss function to evaluate the "goodness" of fit of the model. Figure 2 shows the schematic diagram of XG Boost.
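For reference, the regularized objective that XG Boost minimizes, as formulated by Chen and Guestrin [50], combines a loss term with a per-tree complexity penalty:

```latex
\mathcal{L} = \sum_{i} l\left(y_i, \hat{y}_i\right) + \sum_{k} \Omega(f_k),
\qquad
\Omega(f) = \gamma T + \tfrac{1}{2}\lambda \lVert w \rVert^{2}
```

where $l$ is a differentiable loss, $f_k$ is the $k$-th tree, $T$ is its number of leaves, $w$ its leaf weights, and $\gamma$, $\lambda$ are the regularization terms that penalize complex trees; this is the mechanism by which XG Boost keeps model complexity down and limits overfitting.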

Gradient Boosting (GB)
Gradient boosting (GB) is a supervised ML method used for both regression and classification problems [54,55]. It was designed in 2001 by Friedman [56] as a method that combines a set of weak models into a more robust model using additive modeling. GB connects numerous base learners as a weighted sum to reduce bias and variance and to reweight misclassified data [53,57]. The loss function is minimized by employing base learners at each boosting iteration [53,57,58]. Several recently developed supervised ML methods, such as XG Boost, LightGBM, and Cat boost, use GB as a basis and improve on its scalability and adaptability [57]. Figure 3 shows the schematic diagram of gradient boosting.
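The additive-model idea behind GB can be made concrete with a minimal from-scratch sketch for squared loss, using depth-1 regression stumps as the weak learners. This is illustrative only; all function names are ours, and a production model would use a tuned library implementation:

```python
import numpy as np

def fit_stump(X, r):
    """Find the depth-1 split (feature, threshold, left/right means) minimizing SSE on residuals r."""
    best, best_sse = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or (~left).all():
                continue
            lm, rm = r[left].mean(), r[~left].mean()
            sse = ((r[left] - lm) ** 2).sum() + ((r[~left] - rm) ** 2).sum()
            if sse < best_sse:
                best_sse, best = sse, (j, t, lm, rm)
    return best

def stump_predict(stump, X):
    j, t, lm, rm = stump
    return np.where(X[:, j] <= t, lm, rm)

def gradient_boost(X, y, n_rounds=50, lr=0.1):
    """Additive model: each round fits a stump to the negative gradient,
    which for squared loss is simply the current residual y - pred."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_rounds):
        stump = fit_stump(X, y - pred)
        pred += lr * stump_predict(stump, X)
        stumps.append(stump)
    return y.mean(), stumps

def gb_predict(model, X, lr=0.1):
    base, stumps = model
    pred = np.full(len(X), base)
    for s in stumps:
        pred += lr * stump_predict(s, X)
    return pred
```

The weighted sum of base learners mentioned above is visible in `gb_predict`: the final prediction is the base value plus `lr` times each stump's output.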


Cat Boosting (CB)
Cat boosting (CB) is an implementation of GB, proposed by Prokhorenkova et al. [59], that uses binary decision trees as the base predictor. Two fundamental algorithmic advances introduced in CB were the implementation of ordered boosting (an alternative to the classical algorithm, based on permutations) and an innovative algorithm for processing categorical features [59,60]. CB employs one-hot max size (OHMS) permutation techniques and object-based statistics focusing on categorical columns [61]. Through the use of the greedy method, tree splitting handles the exponential growth of feature combinations [59]. For each feature that has more categories than OHMS (an input parameter), CB randomly splits the records into subsets, converts the labels into integers, and encodes the categorical features by converting them into numbers [61], meaning categorical features are handled with minimal loss of information [60].

Extra Trees Regressor (ETR)
Extra trees regressor (ETR) is another supervised ML method, proposed by Geurts et al. [62] in 2005, which can be used in regression and classification problems. ETR randomly selects features and cut points when splitting a tree node to train the estimators [62][63][64]. ETR was developed as an extension of GB, employing the same principle [64]; however, it is less likely to overfit a data set [62]. One of the critical differences between the two algorithms is that GB selects the best feature and split value for each node, employing a more discriminative splitting, while ETR splits nodes more randomly [54]. In addition, ETR, unlike GB, uses the entire training data set to train each regression tree and does not use bootstrapping [62][63][64]. Figure 4 shows the schematic diagram of the extra trees regressor.


Experimental Database
The database for this study was made up of 381 samples of SCC made with RA, drawn from research articles published in scientific journals, as shown in Table 1, which indicates the author, the number of mixtures (# mix), and the proportion (% data) contributed to the database. From these published papers on the splitting tensile strength of SCC made with RA, Table 2 shows the minimum, maximum, mean, standard deviation, skewness, and kurtosis values of the input variables, cement (Cmt), mineral admixture (MA), water (W), fine aggregate (FA), coarse aggregate (CA), and superplasticizer (SP), and of the output, splitting tensile strength (fst), which were employed to model the splitting tensile strength of SCC made with RA through ML techniques. In addition, the normal frequency-distribution curve of every input variable is displayed in Figure 5, where the behavior of each variable can be seen.

Data Pre-Processing
The pre-processing of data is necessary to make data suitable for an ML model. Normalization is a data pre-processing procedure; it eliminates the influence of scales, since features often have different scales and dimensions [92,93]. Normalization ensures that all features are on the same scale: the data of each feature are converted into a number between zero and one, which prevents variables in a higher numerical range from dominating those in a lower numerical range. This process is fundamental to eliminating the influence of a particular dimension and avoiding errors during model development [92,94]. To normalize the input and output variables used to model the splitting tensile strength of the SCC made with RA, MaxAbs Scaler was used to scale each feature by its maximum absolute value, according to Equation (1):

x_scaled = x / |x|_max (1)

where x is the original value of a feature and |x|_max is the maximum absolute value of that feature over the data set.
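The MaxAbs scaling step can be sketched with NumPy (scikit-learn's `MaxAbsScaler` performs the same transform); the mixture values below are illustrative, not taken from the paper's database:

```python
import numpy as np

def maxabs_scale(X):
    """Scale each column by its maximum absolute value, mapping features into [-1, 1]
    (into [0, 1] for non-negative mix quantities such as cement or water content)."""
    X = np.asarray(X, dtype=float)
    max_abs = np.abs(X).max(axis=0)
    max_abs[max_abs == 0] = 1.0  # guard against all-zero columns
    return X / max_abs

# Example: three hypothetical mixtures (cement, water, superplasticizer in kg/m^3)
X = np.array([[380.0, 180.0, 4.0],
              [450.0, 165.0, 6.5],
              [500.0, 190.0, 5.0]])
X_scaled = maxabs_scale(X)
```

Because each column is divided by its own maximum absolute value, the largest entry of every scaled column is exactly 1, and no feature dominates purely by its numerical range.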


Data Visualization
The correlation between the input characteristics (independent variables) was analyzed to determine whether there was any dependence between the different features; this statistical analysis contributes to the optimization of the predictive model [95]. For this purpose, the Pearson correlation matrix (heat map) of the input variables was calculated (Figure 6). Even though there was a relatively high correlation between some of the characteristics, such as mineral admixture and cement (r = −0.608) and coarse aggregates and fine aggregates (r = −0.685), no correlation between the characteristics exceeded 0.80 in absolute value, which indicates that there is no multicollinearity [3,96].
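The multicollinearity check can be reproduced with NumPy's `corrcoef`; the feature names and synthetic data below are illustrative:

```python
import numpy as np

def check_multicollinearity(X, names, threshold=0.80):
    """Pearson correlation matrix of the input features; flag any off-diagonal
    pair whose absolute correlation exceeds the threshold."""
    corr = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    flagged = []
    n = corr.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) > threshold:
                flagged.append((names[i], names[j], corr[i, j]))
    return corr, flagged
```

An empty `flagged` list corresponds to the paper's conclusion that no pair of inputs is collinear enough (|r| > 0.80) to distort the models.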


Data Split
To model the 28-day splitting tensile strength of SCC made with RA, the data were randomly partitioned into three different sets, training, validation, and test, which helped to evaluate the generalization capacity of the predictive models. The training data set consisted of 267 mixtures (70%), the validation data set of 57 mixtures (15%), and the test data set of 57 mixtures (15%). Table 3 shows the range and description of the input and output variables for the three data sets.
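The random 70/15/15 partition can be sketched as follows (a plain NumPy shuffle; the paper does not state the exact seed or tooling used, so the seed here is illustrative):

```python
import numpy as np

def split_indices(n, train_frac=0.70, val_frac=0.15, seed=42):
    """Randomly partition n sample indices into training / validation / test sets."""
    rng = np.random.default_rng(seed)  # seed chosen for reproducibility, not from the paper
    idx = rng.permutation(n)
    n_train = round(n * train_frac)
    n_val = round(n * val_frac)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(381)
# For 381 samples this yields 267 / 57 / 57, matching the paper's split.
```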


Model Evaluation
Three metrics were used to evaluate the performance of the models: the coefficient of determination (R2) (Equation (2)), the root mean square error (RMSE) (Equation (3)), and the mean absolute error (MAE) (Equation (4)). These metrics estimate errors in the predictions of the splitting tensile strength (of the SCC made with RA after 28 days) when compared with actual observations [9,53,55,97].

R2 = 1 − [Σ_i (y_i − ŷ_i)²] / [Σ_i (y_i − ȳ)²] (2)

RMSE = √[(1/n) Σ_i (y_i − ŷ_i)²] (3)

MAE = (1/n) Σ_i |y_i − ŷ_i| (4)

where y_i = fst (output variable), ŷ_i = estimated fst, ȳ = mean experimental fst, and n = number of samples. Currently, the R2 value is thought to be the best metric for assessing the model [95,97]. Table 4 shows the range of R2 values for prediction-model evaluations [54,98,99].
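Equations (2)-(4) translate directly into code:

```python
import numpy as np

def r2(y, y_hat):
    """Coefficient of determination, Equation (2)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 1.0 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

def rmse(y, y_hat):
    """Root mean square error, Equation (3)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.sqrt(((y - y_hat) ** 2).mean())

def mae(y, y_hat):
    """Mean absolute error, Equation (4)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.abs(y - y_hat).mean()
```

A perfect predictor gives R2 = 1 and RMSE = MAE = 0; values of RMSE and MAE close to zero, together with R2 close to 1, indicate close agreement between predictions and experiments.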

Comparison of the Predictive Performance of ML Models
Since the R2 metric is more intuitive and convenient for comparing the performance of different ML models [95,97], the following analysis adopts it as the primary metric. Prediction accuracy is reflected in the value of R2: a high value for this metric indicates that a model has exhibited high prediction accuracy. Values for the RMSE and MAE metrics were also considered; values less than 0.05 indicate that an ML model presents a good fit [95,101] for predicting the splitting tensile strength of 28-day-old SCC made with RA. Table 5 shows the R2 results for the overall data set as well as for the training and test data sets for the XG Boost, GB, CB, and ETR models. The R2 values of the four models on the global data set ranged from 0.7717 to 0.8428, all greater than 0.75. These values indicate that the models have good predictive capability according to the statistical criteria established for R2 [98,99]. Additionally, the root mean square error and mean absolute error values ranged between 0.0225 and 0.0270 MPa and between 0.0066 and 0.0078 MPa, respectively. These values, which are close to zero, indicate that the predictions of the XG Boost, GB, CB, and ETR models are in high agreement with the actual experimental data obtained from the SCC made with RA. Concerning the training data, the R2 values range from 0.9292 to 0.9421 (Table 5), with all values higher than 0.90; this shows that the four models are good predictors of splitting tensile strength for SCC made with RA.
To select the best-fitting model for predicting the 28-day splitting tensile strength of SCC made with RA, the metrics from the test data were compared. The XG Boost model had the best predictive performance, with the highest R2 value of 0.8423 (Table 5). This, together with the lowest RMSE and MAE values (0.0581 MPa and 0.0443 MPa, respectively), indicates that XG Boost predicts splitting tensile strength with excellent accuracy [98,99] and is a well-fitted model with high generalizability. According to Guo et al. [44], the high accuracy of the XG Boost model can be attributed to its architecture, which allows for a better representation of the relationship between the input and output variables. Figure 7 shows the predictive behavior of the XG Boost model, which outperforms the GB, CB, and ETR models with regard to the R2 value and has the lowest root mean square error and mean absolute error values, indicating that the XG Boost model presents a good fit for the prediction of the 28-day splitting tensile strength of SCC made with RA [19,37,44].
On the other hand, Figure 8 shows the correlation between the experimental and predicted tensile strength for the test data, where it can be seen that all models predict the actual measurements well. However, the scatter plot of the XG Boost model (Figure 8a) has values more closely clustered around the prediction line than the other models, thus presenting less scatter. These results show that the XG Boost model made reasonably accurate predictions of splitting tensile strength, similar to findings in previous studies [19,37,44]. In contrast, gradient boosting (GB) was the model that showed the lowest accuracy, with an R2 value of 0.9292 (Table 5); this is reflected in the scatter plot (Figure 8b), where a higher dispersion of the values around the prediction line is visible. This result agrees with those found by Nguyen et al. [37] when comparing the performance of XG Boost with gradient boosting.
Figure 9 shows the experimental splitting tensile strength of SCC made with RA together with the values predicted by the XG Boost, GB, CB, and ETR models; sample number 267 marks the boundary between the training and test data, represented by the vertical blue dashed line. The curves illustrate that the values predicted by the XG Boost, GB, CB, and ETR models correlate well with the experimental values of splitting tensile strength; these models allow the recognition of patterns embedded in the experimental data. In each graph, the blue lines reflect the behavior of the experimental data, while the red lines show the predicted values. The larger the difference between the lines of the observed and predicted values, the more notable the errors. Thus, the best-fitting graph is that of the XG Boost model (Figure 9a). This suggests that the XG Boost model predicts the splitting tensile strength more accurately than GB, CB, and ETR and is therefore the best model.

Sensitivity Analysis
Sensitivity analysis helps to understand the influence of each input variable on the output variable. The higher the sensitivity value, the more significant the impact of the input variable on the output variable. According to Shang et al. [27], the input variables have a notable effect on the prediction of the output variables. To evaluate the impact of each input variable (cement, mineral admixture, water, fine aggregates, coarse aggregates, and superplasticizer) on the splitting tensile strength of SCC made with RA, sensitivity analysis was implemented using Equations (5) and (6):

N_i = f_max(x_i) − f_min(x_i) (5)

S_i = [N_i / Σ_j N_j] × 100 (6)

where f_max(x_i) and f_min(x_i) are the maximum and minimum predicted fst values obtained by varying the i-th input over its range while the remaining inputs are held fixed, and S_i is the percentage contribution of the i-th input variable to the prediction.
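A common implementation of such a profile-based sensitivity measure sweeps one input over its observed range while holding the others at their means, records the spread of the predictions, and normalizes the spreads into percentage contributions. The sketch below assumes this procedure; the `model` function here is a hypothetical stand-in, not the trained XG Boost predictor:

```python
import numpy as np

def sensitivity(model, X, n_points=50):
    """For each input j, sweep x_j from its min to its max with the other inputs
    fixed at their means, record N_j = f_max - f_min, and report the percentage
    contribution S_j = 100 * N_j / sum(N)."""
    X = np.asarray(X, dtype=float)
    base = X.mean(axis=0)
    N = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        grid = np.tile(base, (n_points, 1))
        grid[:, j] = np.linspace(X[:, j].min(), X[:, j].max(), n_points)
        preds = model(grid)
        N[j] = preds.max() - preds.min()
    return 100.0 * N / N.sum()

# Hypothetical surrogate predictor for illustration (NOT the trained model):
model = lambda X: 0.01 * X[:, 0] + 0.002 * X[:, 1]
```

By construction the percentage contributions sum to 100, which is how results such as cement = 30.07% and water = 2.39% can be read as shares of the total sensitivity.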

Each of the above input variables plays an essential role in predicting the splitting tensile strength of SCC made with RA, as shown in Figure 10. Cement (30.07%), fine aggregate (22.83%), and mineral admixture (22.08%) made the most significant contributions to the prediction of the fst of SCC made with RA. In relation to this, Shang et al. [27] stated that cement is an element that decisively influences the prediction of the split tensile strength of self-compacting concrete made with RA. It can also be observed that the input variables of coarse aggregate and superplasticizer made similar contributions of 13.02% and 9.61%, respectively. Finally, water (2.39%) was the least influential variable in predicting splitting tensile strength; this result agrees with the findings of previous research [27].


Conclusions
This study aimed to compare the capacities of four ML methods: XG Boost, GB, CB, and ETR, to predict the splitting tensile strength of 28-day-old SCC made with RA. In addition, the contribution of each input variable in predicting the 28-day splitting tensile strength of SCC made with RA was investigated through sensitivity analysis. For this purpose, the following input variables were implemented: cement, water, mineral admixture, fine aggregates, coarse aggregates, and superplasticizer. To evaluate the predictive capacity of the models, R2, RMSE, and MAE metrics were used. The following conclusions were drawn from this research:

• For the development of the ML models (XG Boost, GB, CB, and ETR), a database of 381 samples from literature published in scientific journals was used. The samples were randomly divided into three data sets: training, validation, and test, with 267 (70%), 57 (15%), and 57 (15%) samples, respectively.

• The four ML methods predicted the splitting tensile strength of SCC made with RA with satisfactory accuracy; the R2 values from the training data for XG Boost, GB, CB, and ETR were 0.9421, 0.9292, 0.9382, and 0.9484, respectively, with all models achieving a value greater than 0.75.
• XG Boost was the best-performing model, with the highest R2 value (0.8423) on the test data set and the lowest values of RMSE (0.0581) and MAE (0.0443) in comparison with the GB, CB, and ETR models.

• The developed XG Boost model is therefore considered the best for predicting the 28-day splitting tensile strength of SCC made with RA.

• Sensitivity analysis revealed that cement is the input variable that contributes the most (30.07%) to predicting the splitting tensile strength of 28-day-old SCC made with RA. In contrast, water is the parameter that contributes the least (2.39%) towards the same prediction.

Funding: This research has been financed by the University of León.
Institutional Review Board Statement: Not applicable.