Next Article in Journal
Recent Developments in Effective Antioxidants: The Structure and Antioxidant Properties
Next Article in Special Issue
Dynamic Properties of Pretreated Rubberized Concrete under Incremental Loading
Previous Article in Journal
Corrosion Protection of Q235 Steel Using Epoxy Coatings Loaded with Calcium Carbonate Microparticles Modified by Sodium Lignosulfonate in Simulated Concrete Pore Solutions
Previous Article in Special Issue
Mechanical and Durability Properties of Portland Limestone Cement (PLC) Incorporated with Nano Calcium Carbonate (CaCO3)
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Supervised Learning Methods for Modeling Concrete Compressive Strength Prediction at High Temperature

1
Department of Civil Engineering, University of Engineering and Technology Peshawar (Bannu Campus), Bannu 28100, Pakistan
2
College of Civil Engineering and Architecture, China Three Gogres University, Yichang 443002, China
3
State Key Laboratory of Coastal and Offshore Engineering, Dalian University of Technology, Dalian 116024, China
4
Department of Civil Engineering, University of Engineering and Technology, Peshawar 25120, Pakistan
5
Department of Transportation Engineering, Pak-Austria Fachhochschule: Institute of Applied Sciences and Technology, Haripur 22620, Pakistan
*
Author to whom correspondence should be addressed.
Materials 2021, 14(8), 1983; https://doi.org/10.3390/ma14081983
Submission received: 24 February 2021 / Revised: 1 April 2021 / Accepted: 7 April 2021 / Published: 15 April 2021
(This article belongs to the Collection Concrete and Building Materials)

Abstract

:
Supervised learning algorithms are a recent trend for the prediction of mechanical properties of concrete. This paper presents AdaBoost, random forest (RF), and decision tree (DT) models for predicting the compressive strength of concrete at high temperature, based on the experimental data of 207 tests. The cement content, water, fine and coarse aggregates, silica fume, nano silica, fly ash, super plasticizer, and temperature were used as inputs for the models’ development. The performance of the AdaBoost, RF, and DT models are assessed using statistical indices, including the coefficient of determination (R2), root mean squared error-observations standard deviation ratio (RSR), mean absolute percentage error, and relative root mean square error. The applications of the above-mentioned approach for predicting the compressive strength of concrete at high temperature are compared with each other, and also to the artificial neural network and adaptive neuro-fuzzy inference system models described in the literature, to demonstrate the suitability of using the supervised learning methods for modeling to predict the compressive strength at high temperature. The results indicated a strong correlation between experimental and predicted values, with R2 above 0.9 and RSR lower than 0.5 during the learning and testing phases for the AdaBoost model. Moreover, the cement content in the mix was revealed as the most sensitive parameter by sensitivity analysis.

1. Introduction

Concrete is one of the most versatile materials used in the construction of buildings, subway systems, and many other civil engineering structures. With the rapid development of urbanization, the demand for structural concrete is increasing. As a core aspect of these structures, concrete may encounter aberrant results such as abrasion, freezing, and chemical erosion during the whole life of the structure. One of the aberrant results is high temperature and fire. Some examples of concrete structure that are vulnerable to high temperature include industrial structures, such as chimneys working at high temperature, as well as factories dealing with chemicals with high fire risk [1]. The fire causes the concrete temperature in the concrete structure to be extremely high. If the concrete surface reaches above 100 °C, it can be observed that heat transfer can increase the internal temperature of concrete to 300–700 °C [2].
Concrete is a non-combustible material, but when subjected to high temperatures, its chemical, physical, and mechanical properties change with the impact of high temperatures [3]. Chemical and physical reactions in hot concrete, such as dehydration, decomposition [4,5], and rapid increase in vapor pressure and thermal stress, result in concrete spillage, cracking, and perforation, resulting in a deterioration of the mechanical properties of concrete [6]. Tanyildizi [7] showed that as the temperature of concrete increased, the width and length of cracks increased as well. Despite the fact that disasters such as fire or explosion do not cause direct damage, such events may, in the long or short term, damage the structure’s stiffness or structural strength [8].
There is a direct relationship between the temperature increase and the decrease in the compressive strength of concrete, according to National Institute of Standards and Technology (NIST) Technical Note 1681 [9]. In this guideline, concrete with a compressive strength less than 83 MPa is referred to as normal-strength concrete (NSE), and concrete with a compressive strength greater than 83 MPa is referred to as high-strength concrete (HSC). The relationship between concrete compressive strength and temperature was studied by Malhotra [10], who found a relationship between temperature increasing and concrete strength decreasing. Numerous variables can influence the actual behavior of concrete at high temperatures, such as the properties of the constituent concrete materials, the rate of increase in temperature, and the maximum temperature [11].
Deterioration of concrete exposed to high temperatures is attributed to three factors: physicochemical changes in the cement paste, aggregates, and the thermal incompatibility between them. Concrete deterioration is influenced by fire-related factors such as temperature and heating rate, as well as structural element conditions such as applied load and humidity [12]. As a result, it is critical to discuss the effects of high temperatures on concrete, with a focus on aggregate microstructural changes, hydrated cement paste, and the transition zone. The transformations that occur before the temperature reaches 1200 °C—at which concrete begins to melt—will be investigated [12]. It is worth noting that real fire can reach temperatures of over 900 °C; however, it is limited to the surface layers of structural elements, with the internal temperature remaining relatively low [13].
Several studies in the literature have investigated the numerical analysis of concrete exposed to high temperatures, such as Ožbolt et al. [14], who investigated 3D thermo-mechanical numerical analysis of concrete beams that had been exposed to high temperatures. For high temperature concrete failure analysis, a coupled thermo-mechanical interface model was used by Caggiano and Etse [15]. In the interface model, the coupled thermal-mechanical effect was taken into account by formulating a temperature-dependent maximum strength criterion and a fracture-energy-based softening or post-cracking law. A model of the elasto-thermo-plastic interface was proposed in this way to simulate the behavior of concrete cracking and failure. The surface of the cross-section exposed to high temperatures heats up rapidly, but the inner sections of the cross-section have slightly lower temperatures. Restrained stresses cause the concrete to crack as a result of such temperature gradients [14]. It is important to note that the weakening of the strength criterion is strictly related to the cracking of concrete due to temperature effects. The disadvantage of numerical modeling is the complexity of the model preparation, the numerical modeling calculations, and the evaluation of the results.
Machine learning (ML), which includes supervised learning methods, are an increasing trend in various fields for the prediction of different properties. Similarly, the civil engineering construction industry has also adopted such techniques for the prediction of mechanical properties of concrete to overcome cumbersome experimental procedures. The artificial neural network (ANN) method was employed by Trtnik et al. [16] to measure the compressive strength of concrete. It has been shown that the experimental values are correctly expressed by ANN; hence, it proves to be an exceptional prediction method. Keshavarz et al. [17] predicted the compressive strength of concrete with ANN and adaptive neural-fuzzy inference system (ANFIS) models. The authors show that ANFIS offers a more generalized and better correlation than the ANN model. By performing an experimental and literature-based analysis, Javed et al. [18] predicted the compressive strength of sugar cane bagasse ash concrete. Hadzima-Nyarko et al. [19,20] investigated ANN, k-nearest neighbor, regression trees, and random forests models for predicting the compressive strength of concrete. Zhang et al. [21] developed a model that combines beetle antennae search (BAS) and multi-output least square support vector regression (MOLSSVR) to predict concrete compressive strength and pervious permeability coefficient. Their proposed model outperforms support vector regression (SVR), MOLSSVR, logistic regression, and modified ANN with firefly algorithm, according to the findings of this study. To estimate the compressive strength of concrete with ground granulated blast-furnace slag, Kandiri et al. [22] developed a hybrid ANN with multi-objective salp swarm algorithm. Golafshani et al. [23] showed that an ANN-based model was more effective than modified ANFIS by combining ANFIS and ANN with the grey wolf optimizer. Ali Khan et al. [24] used gene expression programming (GEP) for prediction of the compressive strength of geopolymer concrete (GPC), and found that the GEP model possesses a higher predictive capability and is appropriate to practice in the preliminary design of fly-ash-based GPC. The results showed that the aforementioned ML models re able to obtain the experimental observations with an acceptable performance. However, this field continues to be further explored.
This paper focuses on the use of computational intelligence techniques—especially AdaBoost, random forest (RF), and decision tree (DT) algorithms—to analyze the prediction of concrete’s compressive strength at high temperature, emphasizing accuracy and efficiency, and each technique’s potential to deal with experimental data. This study also aims to contribute to the knowledge of the application of computational models in the prediction of compressive strength of concrete at high temperature, using machine learning and comparing the obtained results with other studies in the available literature. The primary significance of this study is that the data division in the training and testing datasets has been made with due regard to statistical aspects such as maximum, minimum, mean, and standard deviation. The splitting of the datasets is made to determine the predictive capability and generalization performance of established models, and it later helps to better evaluate them. Finally, a sensitivity analysis is also carried out on input parameters.
The rest of this article is structured as follows: The next section introduces the data catalog and the selection of input variables. Section 3 presents the preliminaries of the algorithms used in the proposed approach, and discusses the model evaluation metrics. Development of AdaBoost, RF, and DT of proposed models are described in Section 4. Section 5 describes the results and discussion. Finally, Section 6 draws conclusions and outlines promising directions for future work.

2. Data Catalog and Input Variables Selection

The data used in the study comprise a total of 207 experimental results on the residual compressive strength from the synthesis of previously published “source catalogs.” The source catalogs are those of Ergün et al. [3], Cülfik and Özturan [25], Behnood and Ziari [26], Bastami et al. [27], Chen et al. [28], Xiong et al. [29], Mousa [30], Fu et al. [31], and Husem [32]. The data catalog is presented in Table 1 (the entire database can be found in Appendix A, Table A1).
It should be noted that the samples that were chosen from the mentioned references were taken at the age of 28 days. In this study, 165 (80%) and 42 (20%) samples were selected based on statistical consistency—such as minimum (Min.), maximum (Max.), and mean—for the training and testing, respectively, of the proposed models. The statistical consistency of the datasets for training and testing optimizes the performance of the models and eventually helps to analyze them better.
A significant step in predictive modeling in data mining is the selection of the input variables that represent the system to be modelled. The input variables of a data-driven model should contain all relevant information about the target output. On the other hand, they rely to a large extent on the information available in the form of input-output data pairs. The proportions of the mix, the temperature, and the compressive strength associated with the temperature are the data available from the literature related to concrete’s compressive strength when exposed to high temperature. Consequently, the temperature proportions of the mix (the quantity of different materials, such as water, cement, fine and coarse aggregates, and admixtures) may be the correct choice of the input variables to predict the compressive strength of concrete for 28 days at any temperature. In this study, the input variables are cement (C), water (W), fine aggregates (FA), coarse aggregates (CA), silica fume (SF), nano silica (NS), fly ash (F), super plasticizer (SP), and temperature (T); and the output variable is compressive strength of concrete at the temperature T (fc,T). The descriptive statistics of each input and output are listed in Table 2.

3. Methodology

In this section, each of the algorithms used in the proposed methodology is briefly described.

3.1. AdaBoost

AdaBoost is a commonly used boosting algorithm that constructs an ensemble by performing multiple iterations each time with different instance weights, and adapts to the errors returned by classifiers from previous iterations [33,34] adaptively. In each iteration, changing the weight of training instances forces the learning algorithms to put more emphasis on instances previously incorrectly classified, and less emphasis on instances previously correctly classified. In other words, for misclassified instances, weights are increased, while for correctly classified instances, weights are reduced. This will make sure that misclassification errors count more heavily in the next iterations for those misclassified instances. AdaBoost utilizes the predictions of several weak classifiers, and a final prediction is given by a combined vote on techniques.
The principal concept of the AdaBoost learning algorithm is to create a strong classifier that has high performance detection by joining weak classifiers. The AdaBoost algorithm learns, and has two functions with repetitive calculations: selecting the function and learning the classifier. By reiterating the calculation, the simple classifying ability is strengthened, because weak classifiers that are an index of performance are added to the powerful iteration classifier [35].

3.2. Random Forest

Random forest (RF) is a supervised learning algorithm which is used for both classification and regression. It is, however, primarily used for problems with classification. Breimanan presented the theoretical development of RF [36]. On data samples, the RF algorithm generates decision trees and then gets the prediction from each of them and eventually chooses the best solution by voting. It is an ensemble approach that is better than a single decision tree because by averaging the result, it decreases the over-fitting. An RF algorithm’s working procedure consists of the following steps:
  • Step 1—Start by selecting random samples from a given dataset.
  • Step 2—Next, for each sample, this algorithm creates a decision tree. Then, from any decision tree, it gets the prediction result.
  • Step 3—For any predicted outcome, voting is carried out in this step.
  • Step 4—Eventually, the outcome of the most voted prediction is selected as the final result of the prediction. Figure 1 presents RF working architecture.

3.3. Decision Tree

The decision tree (DT) constructs classification or regression models in the form of a tree diagram. It divides a dataset into smaller and smaller subsets while simultaneously building an associated decision tree [37]. To predict a class label for a record in DT, we start at the root of the tree. The values of the root attribute are compared to the attributes of the record. We jump to the next node after following the branch of this value based on the relation. Decision trees classify instances by sorting the tree from the root to a specific leaf or terminal node and supplying the instance classification to the leaf node. Every tree node serves as a test case for a specific attribute, and the possible responses to the test case correspond with each edge descending from the node. This is a recursive method that is replicated for all sub-trees that are rooted in the new node. A minimum number of leaf instances, splitting into smaller subsets, a maximum number of depths, and stopping nodes from splitting before the required majority threshold has been reached are all included in the DT parameters.

4. Construction of Prediction Models

The proposed models for prediction of the compressive strength of concrete at high temperature (f’c,T) were developed using Orange software. The model structure was based on an input matrix (x) defined by x = [C, W, FA, CA, F, SP, SF, NS, and T], which provided the predictor variables, while the target variable (y) was f’c,T. The most important task is to find the acceptable size of the training data and testing dataset in every modeling phase. Therefore, 80% of the total dataset was selected and used to create the models in this analysis, and the developed models were evaluated on the remaining dataset. In other words, to build and evaluate the models, 165 and 42 datasets were used, respectively. Based on the trial and error process, all models (AdaBoost, RF, and DT) were tuned in order to optimize the f’c,T prediction. The construction of the prediction models is shown in Figure 2.

4.1. Hyperparameter Optimization

Hyperparameters that need to be tuned are found in most of the ML algorithms. In order to obtain the best prediction accuracy, the optimization process aims to determine the best parameters for AdaBoost, RF, and DT. In this research, as shown in Table 3, some critical hyperparameters in AdaBoost, RF, and DT algorithms are tuned. Table 3 also clarifies the specific meanings of these hyperparameters. First, the values for the tuning parameters of the models were selected, and then subsequently varied in the trials until the best fitness measures provided in Table 3 were achieved.

4.2. Model Evaluation Indexes

The coefficient of determination (R2), root mean square error (RMSE)-observations standard deviation ratio (RSR), mean absolute percentage error (MAPE), and relative root mean square error (RRMSE), as more common criteria in the literature, are used in this study to evaluate the results of the proposed models.
R 2 = 1 i = 1 n y i y ^ i 2 i = 1 n y i y ^ 2
R S R = i = 1 n y i y ^ i 2 i = 1 n y i y ¯ 2
M A P E = 1 n i = 1 n y i y ^ i y i × 100 %
R R M S E = i = 1 n y i y ^ i 2 n y ¯ 2 × 100 %
In the equations, n is the number of data; yi and y ^ i are the actual and predicted output of ith sample of the data, respectively; y ¯ is the averaged actual output of the data. The R2 coefficient ranges from 0 to 1, and the model a with higher quantity of R2 has more efficiency. MAPE and RRMSE criteria measure the percentage error of the model in two different forms, which range from 0 to 100. The model is deemed effective when the value of R2 is greater than 0.8 and is close to 1 [38]. The RMSE-observations standard deviation ratio (RSR) is calculated as the ratio of the RMSE and standard deviation of measured data. The RSR varies from an optimal value of 0 to a significant positive value. A lower RSR presents a lower RMSE, indicating the higher predictive efficiency of the model. RSR classification ranges are described as very good, good, acceptable, and unacceptable with ranges of 0.00 ≤ RSR ≤ 0.50, 0.50 ≤ RSR ≤ 0.60, 0.60 ≤ RSR ≤ 0.70, and RSR > 0.70, respectively [39]. It is obvious the lower the values of RSR, MAPE, and RRMSE criteria, the better the model.

5. Results and Discussion

In this section, using training and test datasets, the predictive performance of the established AdaBoost, RF, and DT models is assessed. The training dataset is used to assess the model structure and parameters. As a result, the performance of the models on the training dataset can be used to determine which model is well trained. However, the test dataset is used only after the model has been determined to evaluate the quality of the model. According to the predicted values, the values of the different statistical measures of the models for both the training and test phases are shown in Table 4 and Table 5. Figure 3 and Figure 4 display the scatter plot of the experimental (actual) and the predicted compressive strength of concrete at high temperature involving three supervised learning techniques for the phases of training and testing. From these findings, it is clear that, in terms of statistical performance measures, all models performed effectively in predicting the compressive strength of concrete at high temperatures. In the training dataset, the R2 was determined to be higher in the case of AdaBoost (R2 = 0.999) as compared with the other two models, RF (R2 = 0.965) and DT (R2 = 0.968). Similarly, in the testing phase, the R2 was determined to be higher in the case of AdaBoost (R2 = 0.938) as compared with the other two models, RF (R2 = 0.935) and DT (R2 = 0.911)
Furthermore, in terms of the statistical measures in training, the lowest value was found for AdaBoost (RSR = 0.032, MAPE = 1.357%, RRMSE = 1.666%) compared to RF (RSR = 0.190, MAPE = 11.306%, RRMSE = 9.869%) and DT (RSR = 0.178, MAPE = 9.747%, RRMSE = 9.265%). Similarly, regarding the prediction results in the testing, the lowest value was found for AdaBoost (RSR = 0.248, MAPE = 12.523%, RRMSE = 11.622%) compared to RF (RSR = 0.256, MAPE = 13.076%, RRMSE = 11.661%) and DT (RSR = 0.324, MAPE = 16.100%, RRMSE = 14.753%). This superiority may be owing to the fact that the AdaBoost model excellently captures the nonlinear relationships between concrete mix proportions and temperature with compressive strength. It can therefore be concluded that, based on statistical analysis checks, the AdaBoost model had the best results.
Additionally, the R2, MAPE, and RRMSE of the predicted values using the ANFIS method [40] were 0.94, 14%, and 13%, respectively, for the training dataset. The R2, MAPE, and RRMSE of the predicted values using the ANN method [20] were 0.96, 9%, and 10%, respectively, for the training dataset. Similarly, the R2, MAPE, and RRMSE of the predicted values using the ANFIS method [40] were 0.89, 20%, and 15%, respectively, for the testing dataset. The R2, MAPE, and RRMSE of the predicted values using the ANN method [40] were 0.92, 12%, and 12%, respectively, for the testing dataset. The performance has been improved by the AdaBoost model compared with the ANFIS and ANN models in terms of R2, MAPE, and RRMSE values. In particular, the AdaBoost model yielded the best result in the section of training and testing datasets.
Finally, it can be seen, the performance accuracy of the AdaBoost model is higher than the RF and DT models. In general, this study may assist engineers in selecting appropriate supervised learning models and parameters for the production of high-temperature concrete.
The values obtained from the three models and the experimental values are presented in Figure 5. It can be inferred from this figure that using the AdaBoost model might be sufficient and have reasonable precision with nine input variables for the estimation of the compressive strength of concrete at high temperature. Based on the findings, using a set of nine input variables could be justifiable and useful for practical and engineering applications.
Furthermore, a sensitivity analysis was also conducted using Yang and Zang’s [41] method to evaluate the influence of input parameters on f’c,T based on the AdaBoost algorithm. This approach has been used in several studies [42,43,44], and is formulated as
r i j = m = 1 n y i m × y o m m = 1 n y i m 2 m = 1 n y o m 2
where n is the number of data values (this study used 165 data values), and yim and yom are the input and output parameters. The rij value ranged from zero to one for each input parameter, and the highest rij values suggested the most efficient output parameter (which was f’c,T in this study). The rij values for all input parameters are presented in Figure 6. The cement (C) content (rij = 0.895) in the mix was revealed as the most sensitive parameter, followed by FA (rij = 0.852), CA (rij = 0.846), W (rij = 0.805), SP (rij = 0.754), SF (rij = 0.508), T (rij = 0.505), F (rij = 0.432), and NS (rij = 0.343), by sensitivity analysis.
Similar to other artificial intelligence techniques, supervised learning models have a limited domain of applicability and are mostly case dependent. Therefore, their generalization is limited, and they are only applicable in the range of training datasets. Furthermore, the developed AdaBoost model, in comparison to the other models, is suitable to accurately and efficiently predict the NSC and the HSC compressive strength at high temperature. However, this model can always be updated to yield better results as new data becomes available.

6. Conclusions and Future Prospect

Robustness and sensitivity analyses of three supervised learning models (i.e., AdaBoost, RF, and DT) were performed in this study for prediction of the compressive strength of concrete at high temperature. Statistical measure criteria such as R2, RSR, MAPE, and RRMSE were used to test the predictive abilities of the aforementioned models. In addition, the developed models were compared with ANIFS and ANN models from the literature in order to evaluate robustness. The testing phase results revealed that the supervised learning models built in this study performed well in predicting concrete compressive strength at high temperature, but the most effective model compared to other supervised learning models was the AdaBoost model (R2 = 0.938, RSR = 0.248, MAPE = 12.523%, and RRMSE = 11.622%). Statistical analysis checks reveal that the AdaBoost model shows enhancement in model accuracy by minimizing the error difference between targeted and predicted values. The results of the sensitivity analysis show that five parameters—namely, the cement content, the fine and coarse aggregate, the water, and the super plasticizer—were found to be the most sensitive and important factors for predicting the compressive strength of the concrete at high temperature. It can therefore be inferred that the AdaBoost model is a promising method for predicting concrete compressive strength at high temperature, which can be extended to predict other significant concrete properties, such as elasticity modulus, flexural strength, or tensile strength. Thus, the application of an AdaBoost in the field of predicting the compressive strength at high temperature against destructive testing methods is appropriate, and can be seen as an alternative and suitable approach.
Different artificial intelligence (AI) techniques, such as fuzzy logic, response surface methodology (RSM), support vector machine (SVM) analysis, random forest regression (RFR), recurrent neural network (RNN), may also be applied for a better understanding and predicting of the compressive strength of concrete at high temperature. Furthermore, to improve the performance results of prediction models, more experimental data should be collected in future work.

Author Contributions

Conceptualization, M.A. (Mahmood Ahmad), J.-L.H., and X.-W.T.; methodology, M.A. (Mahmood Ahmad), F.A., and J.-L.H.; software, M.A. (Mahmood Ahmad), A.F., and F.A.; validation, X.-W.T., A.F., M.A. (Maaz Amjad), and F.A.; formal analysis, M.A. (Mahmood Ahmad) and M.A. (Maaz Amjad); investigation, F.A., X.-W.T., M.A. (Maaz Amjad), M.J.I., and M.A. (Muhammad Asim); resources, X.-W.T.; data curation, M.A. (Maaz Amjad) and M.J.I.; writing—original draft preparation, M.A. (Mahmood Ahmad); writing—review and editing, M.A. (Maaz Amjad), X.-W.T., A.F., M.J.I., and M.A. (Muhammad Asim).; supervision, J.-L.H. and X.-W.T.; project administration, X.-W.T.; funding acquisition, J.-L.H. and X.-W.T. All authors have read and agreed to the published version of the manuscript.

Funding

The work presented in this paper was part of the research sponsored by the Key Program of the National Natural Science Foundation of China under Grant No. 51639002, and by the National Key Research and Development Plan of China under Grant No. 2018YFC1505300-5.3.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Experimental data catalog.
Table A1. Experimental data catalog.
S. No.Cement (kg/m3)Water (kg/m3)Sand
(kg/m3)
Gravel
(kg/m3)
Fly Ash
(kg/m3)
Super Plasticizer
(kg/m3)
Silica Fume
(kg/m3)
Nano Silica
(kg/m3)
Temperature
(°C)
Compressive Strength (MPa)
1250123417168100002028.16
22501234171681000020023.4
32501234171681000040018.57
42501234171681000060015.26
5250123417168100008008.01
6350172373150700002048.99
73501723731507000010044.58
83501723731507000040034.12
93501723731507000060024.41
103501723731507000080015.24
11500385082006002038
125003850820060020036
135003850820060080012
14450346.50805065002046
15450346.508050650020041.5
16450346.508050650040036.2
1740030807900610002050
18400308079006100040042
19400308079006100080021
20350269.507750615002033
21350269.5077506150020029
22350269.5077506150080012.5
234003080103804.8002032
244003080103804.80020029.5
254003080103804.80040028.5
26360277.20102804.84002035
27360277.20102804.840040029
28360277.20102804.840080011
29320246.40101504.88002038
30320246.40101504.880020035
31320246.40101504.880080012
32280215.60100504.812002028
33280215.60100504.8120020027
34280215.60100504.8120040021
3550013570011100143002082.47
36500135700111001430060042.58
37500135700111001430080022.03
38500135700111001522.57.52084.14
39500135700111001522.57.540068.99
40500135700111001522.57.580023.39
41500135700111001615152085.84
425001357001110016151540076.62
435001357001110016151580025.28
4450013570011100187.522.52085.21
4550013570011100187.522.540079.12
4650013570011100187.522.560051.11
4747013570011100166002087.38
48470135700111001660060047.39
49470135700111001660080018.82
50470135700111001852.57.52087.61
51470135700111001852.57.540068.94
52470135700111001852.57.580020.06
53470135700111002045152090.6
544701357001110020451540075.71
554701357001110020451560051.12
56470135700111002237.522.540078.22
57470135700111002237.522.560052.49
58470135700111002237.522.580025.72
593261846591124583002095.8
6032618465911245830065057.9
6132618465911245830080040
6232618465911245830095021.3
633911796891172693.50020114.4
643911796891172693.50040084.8
653911796891172693.50080036.8
663911796891172693.50095025.4
674421666891125785.30020115.1
684421666891125785.30040085.2
694421666891125785.30065073.5
704421666891125785.30095025.5
7144014970210991106.60020133.6
7244014970210991106.60040098.1
7344014970210991106.60065084.9
7444014970210991106.60080043.1
754371707831016491.90030057.2
764371707831016491.90040058
774371707831016491.90050047.2
784371707831016491.90060036.5
794371707831016491.90070028.3
805001506301260010002249
8150015063012600100030041
8250015063012600100040023
835001506301260010006008
845001506301260010008003
8545015063012600105002252
86450150630126001050010553
87450150630126001050040027
88450150630126001050060011
8945015063012600105008006
904251506301260012.57502257
914251506301260012.575010566
924251506301260012.575030061
934251506301260012.575060021
944251506301260012.575080012
95400150630126001510002264
964001506301260015100010578
974001506301260015100030065
984001506301260015100040037
994001506301260015100080021
10030818593396806002037.5
101308185933968060010031.5
102308185933968060015029.4
103308185933968060020029.2
104308185933968060025034.7
10531018694097607.73102044.5
10631018694097607.73105044.3
10731018694097607.731015046.5
10831018694097607.731020048.9
10931018694097607.731025047.1
1105121547111106018002080.6
1115121547111106018005080.5
11251215471111060180010067.8
11351215471111060180020078.9
11451215471111060180025083.7
1155111537091122020.45102085.1
1165111537091122020.45105085.2
1175111537091122020.451010089.6
1185111537091122020.451015094.6
1195111537091122020.4510250101.3
1205001507501068000010075.3
1215001507501068000020068.9
1225001507501068000040066
1235001507501068000060035.4
12435015075010231500002375.2
125350150750102315000020073.3
126350150750102315000040060.4
127350150750102315000060039.2
1284751507501065025002375.7
12947515075010650250010075.4
13047515075010650250040068.5
13147515075010650250060034.2
132390195585120900002334.1
1333901955851209000010035.6
1343901955851209000020031.6
1353901955851209000060016.8
1363901955851209000040026.6
13757228613450000060043.4
13878623612860025.978.6080041.3
1395722861345000002358.3
14057228613450000020055
14157228613450000040052.2
14257228613450000080031.5
14357228613450000010006.5
14478623612860025.978.602371
14578623612860025.978.6020058
14678623612860025.978.6040065.4
14778623612860025.978.6060062.9
14878623612860025.978.60100021
149430172687103001.6002061.8
150430172687103001.60010053.3
151430172687103001.60020055.5
152430172687103001.60030046.5
153430172687103001.60060020.6
154441164653111502.928010062.8
155441164653111502.928020064.7
156441164653111502.928030056.5
157441164653111502.928060021.8
158495149615116801.9002067.4
159495149615116801.90020059.7
160495149615116801.90030049
161495149615116801.90060021
162465149615116803.13002080.3
163465149615116803.130010068
164465149615116803.130030056.5
165465149615116803.130060023.4
166450149615116803.74502084.2
167450149615116803.745010070.8
168450149615116803.745020071.7
1692501234171681000010025.74
1703501723731507000020040.35
1715003850820060040034.5
172450346.508050650080021
173400308079006100020044
174350269.5077506150040027
1754003080103804.8008007.5
176360277.20102804.840020032
177320246.40101504.880040030
178280215.60100504.812008008.5
179500135700111001430040069.87
180500135700111001522.57.560045.23
1815001357001110016151560048.79
18250013570011100187.522.580027.38
183470135700111001660040069.86
184470135700111001852.57.560047.07
1854701357001110020451580022.32
186470135700111002237.522.52091.24
18732618465911245830040069.2
1883911796891172693.50065066.9
1894421666891125785.30080037.9
19044014970210991106.60095029.4
1914371707831016491.9002071.2
19250015063012600100010551
193450150630126001050030049
1944251506301260012.575040032
1954001506301260015100060028
19630818593396806005037.2
19731018694097607.731010044.1
19851215471111060180015072.8
1995111537091122020.451020095.3
200500150750106800002375.5
201350150750102315000010073.7
20247515075010650250020073.4
203441164653111502.92802073.9
204495149615116801.90010057.6
205465149615116803.130020069
206450149615116803.745030057.9
207450149615116803.745060022.6

References

  1. Düğenci, O.; Haktanir, T.; Altun, F. Experimental research for the effect of high temperature on the mechanical properties of steel fiber-reinforced concrete. Constr. Build. Mater. 2015, 75, 82–88. [Google Scholar] [CrossRef]
  2. Choe, G.; Kim, G.; Gucunski, N.; Lee, S. Evaluation of the mechanical properties of 200MPa ultra-high-strength concrete at elevated temperatures and residual strength of column. Constr. Build. Mater. 2015, 86, 159–168. [Google Scholar] [CrossRef]
  3. Ergün, A.; Kürklü, G.; Serhat, B.M.; Mansour, M.Y. The effect of cement dosage on mechanical properties of concrete exposed to high temperatures. Fire Saf. J. 2013, 55, 160–167. [Google Scholar] [CrossRef]
  4. Handoo, S.; Agarwal, S. Physicochemical, mineralogical, and morphological characteristics of concrete exposed to elevated temperatures. Cem. Concr. Res. 2002, 32, 1009–1018. [Google Scholar] [CrossRef]
  5. Li, M.; Qian, C.; Sun, W. Mechanical properties of high-strength concrete after fire. Cem. Concr. Res. 2004, 34, 1001–1005. [Google Scholar] [CrossRef]
  6. Balázs, G.L.; Lublóy, É. Post-heating strength of fiber-reinforced concretes. Fire Saf. J. 2012, 49, 100–106. [Google Scholar] [CrossRef]
  7. Tanyildizi, H. Variance analysis of crack characteristics of structural lightweight concrete containing silica fume exposed to high temperature. Constr. Build. Mater. 2013, 47, 1154–1159. [Google Scholar] [CrossRef]
  8. Hamdia, K.M.; Arafa, M.; Alqedra, M. Structural damage assessment criteria for reinforced concrete buildings by using a Fuzzy Analytic Hierarchy process. Undergr. Space 2018, 3, 243–249. [Google Scholar] [CrossRef]
  9. Phan, L.T.; McAllister, T.P.; Gross, J.L.; Hurley, M.J. Best practice guidelines for structural fire resistance design of concrete and steel buildings. NIST Tech. Note 2010, 1681, 199. [Google Scholar]
  10. Malhotra, H.L. The effect of temperature on the compressive strength of concrete. Mag. Concr. Res. 1956, 8, 85–94. [Google Scholar] [CrossRef]
  11. Crook, D.N.; Murray, M.J. Regain of strength after firing of concrete. Mag. Concr. Res. 1970, 22, 149–154. [Google Scholar] [CrossRef]
  12. Khoury, G.A. Effect of fire on concrete and concrete structures. Prog. Struct. Eng. Mater. 2000, 2, 429–447. [Google Scholar] [CrossRef]
  13. Schneider, V. Repairability of fire damaged structures: CIB W14 report. Fire Saf. J. 1990, 16, 251–338. [Google Scholar]
  14. Ožbolt, J.; Bošnjak, J.; Periškić, G.; Sharma, A. 3D numerical analysis of reinforced concrete beams exposed to elevated temperature. Eng. Struct. 2014, 58, 166–174. [Google Scholar] [CrossRef]
  15. Caggiano, A.; Etse, G. Coupled thermo–mechanical interface model for concrete failure analysis under high temperature. Comput. Methods Appl. Mech. Eng. 2015, 289, 498–516. [Google Scholar] [CrossRef]
  16. Trtnik, G.; Kavčič, F.; Turk, G. Prediction of concrete strength using ultrasonic pulse velocity and artificial neural networks. Ultrasonics 2009, 49, 53–60. [Google Scholar] [CrossRef] [Green Version]
  17. Keshavarz, Z.; Torkian, H. Application of ANN and ANFIS models in determining compressive strength of concrete. J. Soft Comput. Civ. Eng. 2018, 2, 62–70. [Google Scholar]
  18. Javed, M.F.; Amin, M.N.; Shah, M.I.; Khan, K.; Iftikhar, B.; Farooq, F.; Aslam, F.; Alyousef, R.; Alabduljabbar, H. Applications of gene expression programming and regression techniques for estimating compressive strength of bagasse ash based concrete. Crystals 2020, 10, 737. [Google Scholar] [CrossRef]
  19. Hadzima-Nyarko, M.; Nyarko, E.K.; Lu, H.; Zhu, S. Machine learning approaches for estimation of compressive strength of concrete. Eur. Phys. J. Plus 2020, 135, 1–23. [Google Scholar] [CrossRef]
  20. Hadzima-Nyarko, M.; Nyarko, E.K.; Ademović, N.; Miličević, I.; Kalman Šipoš, T. Modelling the influence of waste rubber on compressive strength of concrete by artificial neural networks. Materials 2019, 12, 561. [Google Scholar] [CrossRef] [Green Version]
  21. Zhang, J.; Huang, Y.; Ma, G.; Sun, J.; Nener, B. A metaheuristic-optimized multi-output model for predicting multiple properties of pervious concrete. Constr. Build. Mater. 2020, 249, 118803. [Google Scholar] [CrossRef]
  22. Kandiri, A.; Golafshani, E.M.; Behnood, A. Estimation of the compressive strength of concretes containing ground granulated blast furnace slag using hybridized multi-objective ANN and salp swarm algorithm. Constr. Build. Mater. 2020, 248, 118676. [Google Scholar] [CrossRef]
  23. Golafshani, E.M.; Behnood, A.; Arashpour, M. Predicting the compressive strength of normal and high-performance concretes using ANN and ANFIS hybridized with Grey Wolf Optimizer. Constr. Build. Mater. 2020, 232, 117266. [Google Scholar] [CrossRef]
  24. Khan, M.A.; Zafar, A.; Akbar, A.; Javed, M.; Mosavi, A. Application of Gene Expression Programming (GEP) for the prediction of compressive strength of geopolymer concrete. Materials 2021, 14, 1106. [Google Scholar] [CrossRef]
  25. Cülfik, M.S.; Özturan, T. Mechanical properties of normal and high strength concretes subjected to high temperatures and using image analysis to detect bond deteriorations. Constr. Build. Mater. 2010, 24, 1486–1493. [Google Scholar] [CrossRef]
  26. Behnood, A.; Ziari, H. Effects of silica fume addition and water to cement ratio on the properties of high-strength concrete after exposure to high temperatures. Cem. Concr. Compos. 2008, 30, 106–112. [Google Scholar] [CrossRef]
  27. Bastami, M.; Baghbadrani, M.; Aslani, F. Performance of nano-Silica modified high strength concrete at elevated temperatures. Constr. Build. Mater. 2014, 68, 402–408. [Google Scholar] [CrossRef]
  28. Chen, L.; Fang, Q.; Jiang, X.; Ruan, Z.; Hong, J. Combined effects of high temperature and high strain rate on normal weight concrete. Int. J. Impact Eng. 2015, 86, 40–56. [Google Scholar] [CrossRef]
  29. Xiong, Y.; Deng, S.; Wu, D. Experimental study on compressive strength recovery effect of fire-damaged high strength concrete after realkalisation treatment. Procedia Eng. 2016, 135, 476–481. [Google Scholar] [CrossRef] [Green Version]
  30. Mousa, M.I. Effect of elevated temperature on the properties of silica fume and recycled rubber-filled high strength concretes (RHSC). HBRC J. 2017, 13, 1–7. [Google Scholar] [CrossRef] [Green Version]
  31. Fu, Y.F.; Wong, Y.L.; Poon, C.S.; Tang, C.A. Stress–strain behaviour of high-strength concrete at elevated temperatures. Mag. Concr. Res. 2005, 57, 535–544. [Google Scholar] [CrossRef]
  32. Husem, M. The effects of high temperature on compressive and flexural strengths of ordinary and high-performance concrete. Fire Saf. J. 2006, 41, 155–163. [Google Scholar] [CrossRef]
  33. Freund, Y.; Schapire, R.E. Experiments with a new boosting algorithm. In Proceedings of the Machine Learning: Thirteenth International Conference, Bari, Italy, 3–6 July 1996; pp. 148–156. [Google Scholar]
  34. Schapire, R.E. Explaining AdaBoost. In Empirical Inference; Springer: Berlin/Heidelberg, Germany, 2013; pp. 37–52. [Google Scholar]
  35. Freund, Y.; Schapire, R.; Abe, N. A short introduction to boosting. J. Jpn. Soc. Artif. Intell. 1999, 14, 1612. [Google Scholar]
  36. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  37. Saad, H.; Nagarur, N. Data mining techniques in predicting breast cancer. arXiv 2020, arXiv:2011.11088. [Google Scholar] [CrossRef]
  38. Gandomi, A.H.; Babanajad, S.K.; Alavi, A.H.; Farnam, Y. Novel approach to strength modeling of concrete under triaxial compression. J. Mater. Civ. Eng. 2012, 24, 1132–1143. [Google Scholar] [CrossRef]
  39. Khosravi, K.; Mao, L.; Kisi, O.; Yaseen, Z.M.; Shahid, S. Quantifying hourly suspended sediment load using data mining models: Case study of a glacierized Andean catchment in Chile. J. Hydrol. 2018, 567, 165–179. [Google Scholar] [CrossRef]
  40. Akbari, M.; Deligani, V.J. Data driven models for compressive strength prediction of concrete at high temperatures. Front. Struct. Civ. Eng. 2020, 14, 311–321. [Google Scholar] [CrossRef]
  41. Yang, Y.; Zhang, Q. A hierarchical analysis for rock engineering using artificial neural networks. Rock Mech. Rock Eng. 1997, 30, 207–222. [Google Scholar] [CrossRef]
  42. Faradonbeh, R.S.; Armaghani, D.J.; Majid, M.A.; Tahir, M.M.; Murlidhar, B.R.; Monjezi, M.; Wong, H.M. Prediction of ground vibration due to quarry blasting based on gene expression programming: A new model for peak particle velocity prediction. Int. J. Environ. Sci. Technol. 2016, 13, 1453–1464. [Google Scholar] [CrossRef] [Green Version]
  43. Chen, W.; Hasanipanah, M.; Rad, H.N.; Armaghani, D.J.; Tahir, M.M. A new design of evolutionary hybrid optimization of SVR model in predicting the blast-induced ground vibration. Eng. Comput. 2021, 37, 1455–1471. [Google Scholar] [CrossRef]
  44. Rad, H.N.; Bakhshayeshi, I.; Jusoh, W.A.W.; Tahir, M.M.; Foong, L.K. Prediction of flyrock in mine blasting: A new computational intelligence approach. Nat. Resour. Res. 2020, 29, 609–623. [Google Scholar] [CrossRef]
Figure 1. Schematic illustration of RF structure.
Figure 1. Schematic illustration of RF structure.
Materials 14 01983 g001
Figure 2. Framework of the proposed study.
Figure 2. Framework of the proposed study.
Materials 14 01983 g002
Figure 3. Scatter plots displaying the experimental (actual) compressive values versus the predicted compressive values of concrete at high temperature of the training dataset using the (a) AdaBoost, (b) RF, and (c) DT algorithms.
Figure 3. Scatter plots displaying the experimental (actual) compressive values versus the predicted compressive values of concrete at high temperature of the training dataset using the (a) AdaBoost, (b) RF, and (c) DT algorithms.
Materials 14 01983 g003aMaterials 14 01983 g003b
Figure 4. Scatter plot presenting the experimental (actual) compressive values versus the predicted compressive values of concrete at high temperature of the testing dataset using the (a) AdaBoost, (b) RF, and (c) DT algorithms.
Figure 4. Scatter plot presenting the experimental (actual) compressive values versus the predicted compressive values of concrete at high temperature of the testing dataset using the (a) AdaBoost, (b) RF, and (c) DT algorithms.
Materials 14 01983 g004aMaterials 14 01983 g004b
Figure 5. Compressive strength results of AdaBoost, RF, and DT models in training and testing phases.
Figure 5. Compressive strength results of AdaBoost, RF, and DT models in training and testing phases.
Materials 14 01983 g005
Figure 6. Sensitivity analysis results.
Figure 6. Sensitivity analysis results.
Materials 14 01983 g006
Table 1. Concrete compressive strength catalog.
Table 1. Concrete compressive strength catalog.
S. No.Cement (kg/m3)Water (kg/m3) Sand
(kg/m3)
Gravel
(kg/m3)
Fly Ash
(kg/m3)
Super Plasticizer
(kg/m3)
Silica Fume
(kg/m3)
Nano Silica
(kg/m3)
Temperature
(°C)
Compressive Strength (MPa)
1250123417168100002028.16
22501234171681000020023.4
32501234171681000040018.57
205465149615116803.130020069
206450149615116803.745030057.9
207450149615116803.745060022.6
Table 2. Statistics of input and output parameters for the training and testing datasets used in the development of the supervised learning model.
Table 2. Statistics of input and output parameters for the training and testing datasets used in the development of the supervised learning model.
DatasetStatistical ParameterInput and Output Parameters
CWFACAFSPSFNSTf’c,T
TrainingMin.250123000000203
Max.7863851345168115025.915022.51000133.6
Mean437.788182.307618.1391051.79412.7588.53328.6361.636344.23049.795
Standard deviation96.69058.811314.867315.34233.1657.63436.9125.110289.74025.985
TestingMin.250123000000207.5
Max.7863851345168115025.915022.595095.3
Mean437.286185.338578.6671053.42912.2388.76931.9902.143394.95247.411
Standard deviation91.75764.714329.096288.53233.1007.53838.0925.806278.99121.855
Table 3. Hyperparameter optimization results.
Table 3. Hyperparameter optimization results.
AlgorithmHyperparameterExplanationOptimal Value
AdaBoostNumber of estimatorsNumber of trees50
Learning rateIt determines to what extent the newly acquired information will override the old information1
RFNumber of treesNumber of trees in the forest10
Do not split subsets smaller thanSmallest subset that can be split05
DTMin. number of instances in leavesMinimum number of samples for split nodes2
Do not split subsets smaller thanForbids the algorithm to split the nodes with less than the given number of instances.5
Limit the maximal tree depthLimit the depth of the classification tree to the number of node levels specified.100
Table 4. Comparison of statistical results obtained from the applied predictive models using training phase with available ANFIS and ANN models.
Table 4. Comparison of statistical results obtained from the applied predictive models using training phase with available ANFIS and ANN models.
ModelsStatistical Performance ResultsReference
R2RSRMAPE (%)RRMSE (%)
AdaBoost0.9990.0321.3571.666This study
RF0.9650.19011.3069.869
DT0.9680.1789.7479.265
ANFIS0.941413[40]
ANN0.96910
“−” represents that this performance statistic is not included in the reference.
Table 5. Comparison of statistical results obtained from the applied predictive models using testing phase with available ANFIS and ANN models.
Table 5. Comparison of statistical results obtained from the applied predictive models using testing phase with available ANFIS and ANN models.
ModelsStatistical Performance ResultsReference
R2RSRMAPE (%)RRMSE (%)
AdaBoost0.9380.24812.52311.622This study
RF0.9350.25613.07611.661
DT0.9110.32416.10014.753
ANFIS0.892015[40]
ANN0.921212
“−” represents that this performance statistic is not included in the reference.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Ahmad, M.; Hu, J.-L.; Ahmad, F.; Tang, X.-W.; Amjad, M.; Iqbal, M.J.; Asim, M.; Farooq, A. Supervised Learning Methods for Modeling Concrete Compressive Strength Prediction at High Temperature. Materials 2021, 14, 1983. https://doi.org/10.3390/ma14081983

AMA Style

Ahmad M, Hu J-L, Ahmad F, Tang X-W, Amjad M, Iqbal MJ, Asim M, Farooq A. Supervised Learning Methods for Modeling Concrete Compressive Strength Prediction at High Temperature. Materials. 2021; 14(8):1983. https://doi.org/10.3390/ma14081983

Chicago/Turabian Style

Ahmad, Mahmood, Ji-Lei Hu, Feezan Ahmad, Xiao-Wei Tang, Maaz Amjad, Muhammad Junaid Iqbal, Muhammad Asim, and Asim Farooq. 2021. "Supervised Learning Methods for Modeling Concrete Compressive Strength Prediction at High Temperature" Materials 14, no. 8: 1983. https://doi.org/10.3390/ma14081983

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop