Article

Back-Propagation Neural Network Optimized by K-Fold Cross-Validation for Prediction of Torsional Strength of Reinforced Concrete Beam

1. School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia
2. Centre for Infrastructure Engineering, Western Sydney University, Penrith, NSW 2751, Australia
3. School of Engineering, University of Southern Queensland, Springfield Central, QLD 4300, Australia
* Authors to whom correspondence should be addressed.
Materials 2022, 15(4), 1477; https://doi.org/10.3390/ma15041477
Submission received: 29 December 2021 / Revised: 2 February 2022 / Accepted: 12 February 2022 / Published: 16 February 2022
(This article belongs to the Special Issue Emerging Trends in Structural Health Monitoring)

Abstract

Due to the limitation of sample size in predicting the torsional strength of Reinforced Concrete (RC) beams, this paper discusses the feasibility of employing a novel machine learning approach with K-fold cross-validation in a small sample range, which combines the advantages of a Genetic Algorithm (GA) and a Neural Network (NN) to predict the torsional strength of RC beams. This research study not only applies the Back Propagation (BP) neural network and the Genetic Algorithm-Back Propagation (GA-BP) neural network to the prediction of the torsional strength of RC beams, but also investigates the optimization of the network parameters, including connection weights and thresholds, using K-fold cross-validation. The root mean square error (RMSE), mean absolute error (MAE), mean square error (MSE), mean absolute percentage error (MAPE) and coefficient of determination (R²) are used as evaluation metrics to assess the performance of the trained models. To elaborate on the superiority of the proposed network models in predicting the torsional strength of RC beams, a parametric study is conducted by comparing the proposed models to three commonly used empirical formulae from existing design codes. The comparative findings of this research study demonstrate that the performance of the BP neural network is highly similar to that of the design codes; however, its accuracy is inadequate. After the weights and thresholds are improved by k-fold cross-validation and GA, the predictions of the BP neural network show higher consistency with the actual measured values. The outcome of this study can be used as a theoretical reference for the optimal design of RC beams in practical applications.

1. Introduction

Reinforced Concrete (RC) is a complex construction material owing to the heterogeneity of its constituent properties and its demanding maintenance conditions. Over the past few years, a large number of studies have examined the shear and flexural capacities of RC beams, whereas the torsional strength has received far less attention. Several empirical/analytical formulae from structural design codes (e.g., ACI-318-14, TBC-500-2000, BS-8110, JSCE-04 and CSA-14) are available for calculating the torsional strength of RC beams. In these models, at least 10 design parameters related to member dimensions, reinforcement arrangement and material properties are normally required for an accurate calculation, including the section size of the RC beam as well as the longitudinal and transverse reinforcement. The design codes provide different formulations for torsional strength in different regions. The American design code (ACI-318-14) [1] ignores the contribution of concrete to torsional strength and only considers the role of transverse and longitudinal reinforcement; the Canadian standard [2] follows a similar approach. The Turkish standard [3] and British standard [4] are also commonly used, in which the calculation of the torsion angle has been simplified. In the Japanese standard [5], the maximum torsional strength of RC members is assessed based on the ratio of torsional reinforcement. These codes provide useful reference values for the strength of RC structures, but each imposes its own limits of applicability in different cases, so the appropriate code must be selected on a situational basis. Additionally, based on a large number of research studies on RC structures, factors such as combined stress states, initial crack angles, dislodgement of concrete and aggregate damage are continuously incorporated into the calculations and optimized to obtain better results [6,7,8,9]. This literature provides more accurate predictions but also increases the complexity of calculating the torsional strength of RC beams.
In recent years, machine learning (ML) technology has been widely developed and applied to various scenarios of force analysis of RC beams. Cevik et al. [10] used genetic programming to simulate the torsional strength of RC beams and proposed an empirical formulation. ML methods such as decision trees, random forests and fuzzy logic have also been used to simulate the compressive strength and slump of concrete with high accuracy [11]. Ling et al. [12] employed k-fold cross-validation to optimize a support vector machine (SVM), reducing the average relative error and improving the prediction accuracy when predicting the degradation of concrete strength. In addition, neural network-based models have gained more attention due to their autonomous learning capability and their insensitivity to the type of input parameters. Tanarslan et al. [13] collected the parameters of 84 RC beams and improved the accuracy of an ANN model, which achieved excellent prediction accuracy in comparison with national guidelines. Naderpour et al. [14] and Yang et al. [15] trained neural network models to predict the shear strength of RC beams with high accuracy. Amani and Moeini [16] selected six significant parameters of RC beams as inputs to a BP neural network and an adaptive neuro-fuzzy inference system (ANFIS) to predict the shear strength of RC beams; the predictions of the ANN and ANFIS were found to be more accurate than those of the ACI code. For RC beams under torsion, Arslan [17] applied an artificial neural network to predict the ultimate torsional strength of beams and compared the results with design code calculations. The results showed that the ANN outperformed the design codes in predicting the torsional strength, confirming the potential feasibility of ANNs for this task. On the other hand, the optimization of neural networks has been studied by many researchers, and different types of optimization algorithms have been derived. Among ANNs, the back-propagation neural network has also been applied in engineering practice. Lv et al. [18] combined a BP neural network and the Grey model to predict foundation settlement. Wu et al. [19] noted a common problem of the BP neural network, namely the inaccuracy of the initial weights and thresholds, which affects the prediction accuracy, and used GA to optimize the BP neural network, improving the accuracy for the energy consumption of copper electrowinning by 14.25%. Based on this, Liu et al. [20] used the Grey Verhulst model to improve the GA-BP neural network model and reported an accurate model for settlement prediction. Furthermore, Cevik et al. [21] used genetic programming for modelling torsional strength, and Ilkhani et al. [22] proposed a novel approach to predict the torsional strength of RC beams. In addition, Arslan [23] compared the prediction of the torsional strength of RC beams between ANNs and different design codes to examine the feasibility of ANNs. Among ML modelling approaches, fuzzy logic, random forests and support vector machines have been reported to predict concrete mechanical properties such as compressive strength and elastic modulus with results largely consistent with those of neural networks [11,24,25]. However, these methods, apart from neural networks, usually require a significant computational effort to find an optimal solution to a complex problem.
Therefore, researchers have turned to neural networks for complex nonlinear problems in civil engineering, such as the shear strength and torsional strength of RC beams.
Although the optimized neural networks employed in previous studies generated positive results in the engineering field, there are few applications of neural networks in the prediction of the torsional behavior of RC beams, particularly of derived networks such as BP neural networks, GA-BP neural networks and convolutional neural networks. Moreover, the BP neural network has limitations in optimizing its weights and thresholds when the testing and validating sample datasets are insufficient. Therefore, in this paper, the k-fold cross-validation method is used to select the best model and to collect its thresholds and weights as initial values, which can significantly improve the error correction of the BP neural network model. Then, GA is utilized to optimize the weights and thresholds to further improve the accuracy of the model. In addition, this paper discusses the variations in the prediction accuracy of the BP neural network and the GA-BP neural network optimized by k-fold cross-validation for the torsional strength of RC beams. Five statistical evaluation metrics (RMSE, MAE, MSE, MAPE and R²) are employed to appraise the prediction accuracy of the developed models. It is found that the prediction accuracy of the BP neural network improves when the optimized thresholds and weights extracted by the k-fold validation method are used, whereas this approach has less impact on the GA-BP neural network model. In addition, the design codes from different sources, namely ACI-318-14 [1], TBC-500-2000 [3] and BS-8110 [4], are used to predict the torsional strength, and their results are compared with those predicted by the BP neural network models.

2. Data Collection and Analysis

A high-accuracy BP neural network requires a large amount of data to train the model and to test it with new data samples. Since the experimental data on the torsional strength of RC beams are limited, it is necessary to make adequate use of the available data for each parameter in order to improve the accuracy of the model. Lv et al. [18] noted that BP neural network models need to consider the parameters relevant to the actual problem. Additionally, according to [13,26,27], in a neural network model for predicting the strength of RC beams, too few input neurons can make the network fitting process more complex and difficult, or even cause it to fail. Therefore, in this paper, 11 different parameters of RC beams were selected, comprising the section width (b) and depth (h) of the RC beam, the width (b), depth (h) and spacing (s) of the closed stirrup, the concrete compressive strength (f_c), the yield strength of the longitudinal reinforcement (f_yl), the longitudinal reinforcement ratio (ρ_l), the yield strength of the transverse reinforcement (f_yt), the transverse reinforcement ratio (ρ_t) and the torsional strength (T_u). The detailed information of the dataset used in this study, collected from references [2,3,17,27,28,29,30,31], is summarized in Table 1 and Figure 1.
In general, the inputs of a neural network should have small correlations with one another. Strongly correlated variables can degrade the predictions of the BP neural network model if all 10 variables are employed directly as inputs in this research. Campbell and Atchley [32] suggested using the mathematical tool principal component analysis (PCA) to reduce the number of correlated variables by transforming the correlated variables in the dataset into uncorrelated ones. Furthermore, PCA ranks the importance of the 10 newly generated principal components (PCs). The PCA results are shown in Table 2. The first seven PCs are sufficient to represent approximately 99% of the information in the original dataset; therefore, these seven PCs were selected as the inputs of the BP neural network. Although the number of model inputs is reduced, the quality of the data is improved because the inputs are uncorrelated, as shown in Figure 2.
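As an illustration of this preprocessing step, a minimal sketch written in Python with scikit-learn is given below; the file name and column labels are assumptions introduced here for illustration rather than the authors' implementation.

```python
# Sketch of the PCA step described above, assuming the 240 samples are stored
# in a CSV file with the ten input columns of Table 1; the file name and the
# column labels are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("rc_beam_torsion.csv")            # hypothetical file name
X = data[["b", "h", "b_stirrup", "h_stirrup", "fc",
          "fyl", "rho_l", "fyt", "rho_t", "s"]].values
y = data["Tu"].values                                 # torsional strength (kN·m)

X_std = StandardScaler().fit_transform(X)             # standardize before PCA
pca = PCA(n_components=10).fit(X_std)

cum_var = np.cumsum(pca.explained_variance_ratio_)
print(cum_var)                  # the first 7 PCs should cover roughly 99% here
X_pc = pca.transform(X_std)[:, :7]                    # uncorrelated network inputs
```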

3. Methodology

3.1. Design Code

Because building standards differ between regions, three widely used design codes are selected as comparison candidates. The details of these codes are shown in Table 3. In addition, according to the applicable conditions of each design code, some parameters are limited, and the calculation results may therefore deviate.
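For illustration, the sketch below evaluates the ACI-318-14 expression listed in Table 3 programmatically; the function name, unit choices and the 45° default crack angle are assumptions made for this example, not values taken from the paper's calculations.

```python
# Illustrative evaluation of the ACI-318-14 expression in Table 3,
# T_n = 2 * A_o * A_t * f_yt * cot(theta) / s; units and the 45-degree
# default angle are common conventions assumed here.
import math

def aci_318_14_torsion(A_o_mm2, A_t_mm2, f_yt_MPa, s_mm, theta_deg=45.0):
    """Nominal torsional strength in kN·m."""
    cot_theta = 1.0 / math.tan(math.radians(theta_deg))
    T_n_Nmm = 2.0 * A_o_mm2 * A_t_mm2 * f_yt_MPa * cot_theta / s_mm
    return T_n_Nmm / 1e6          # convert N·mm to kN·m

# example beam with hypothetical dimensions
print(aci_318_14_torsion(A_o_mm2=60000, A_t_mm2=78.5, f_yt_MPa=420, s_mm=100))
```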

3.2. K-Fold Cross-Validation

The flow chart in Figure 3 shows that k-fold cross-validation starts by randomly splitting the data into K groups (folds), after which the following operations are performed for each group:
  • Select one of the K folds as the testing (validation) dataset.
  • Use the remaining K−1 folds as the training set.
  • Train the model on the training folds and evaluate it with the held-out testing fold.
For the small sample dataset of this work, k is set to 10, an empirical value obtained through extensive experimental trials that generally yields low bias and modest variance in the resulting estimates. In this simulation, the complete dataset was randomly divided, with 170 samples selected as the training set and the remaining 70 samples as the testing set. The 170 training samples were then divided into 10 folds, and a different fold, from D1 to D10, was selected each time as the validation set. These 10 data splits were fed into the BP neural network model sequentially. Inaccurate model evaluation caused by an accidental division of the sample dataset can thus be excluded via 10-fold cross-validation.
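A minimal sketch of this split is shown below, assuming a scikit-learn workflow and the PCA inputs from the earlier sketch; the `build_bp_network` helper is a hypothetical placeholder for the BP model described in Section 3.3.

```python
# 170 training / 70 testing samples, with 10-fold cross-validation over the
# training part; build_bp_network() is a hypothetical model factory.
import numpy as np
from sklearn.model_selection import KFold, train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X_pc, y, train_size=170, test_size=70, shuffle=True, random_state=0)

kf = KFold(n_splits=10, shuffle=True, random_state=0)
fold_results = []
for k, (tr_idx, va_idx) in enumerate(kf.split(X_train), start=1):
    model = build_bp_network()                       # hypothetical helper
    model.fit(X_train[tr_idx], y_train[tr_idx])
    resid = model.predict(X_train[va_idx]) - y_train[va_idx]
    rmse = np.sqrt(np.mean(resid ** 2))              # validation RMSE of fold k
    fold_results.append((k, rmse, model))

best_fold, best_rmse, best_model = min(fold_results, key=lambda t: t[1])
# the weights and thresholds of best_model are reused as initial values later
```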

3.3. BP Neural Network and Genetic Algorithm

Based on the advantages of the BP neural network, such as its nonlinear mapping capability, self-learning and self-adaptive capability, generalization capability and fault tolerance, this paper discusses the applicability of the BP neural network in predicting the torsional strength of RC beams. The forward and backward computations follow [26].
The activity level for neuron j in layer l is
v_j^{(l)}(n) = \sum_{i=0}^{p} w_{ji}^{(l)}(n) \, y_i^{(l-1)}(n),
The logistic sigmoid activation function is
y_j^{(l)}(n) = \left( 1 + \exp\left( -v_j^{(l)}(n) \right) \right)^{-1},
The weight update of the neural network is
w_{ji}^{(l)}(n+1) = w_{ji}^{(l)}(n) + \alpha \left[ \Delta w_{ji}^{(l)}(n-1) \right] + \eta \, \delta_j^{(l)}(n) \, y_i^{(l-1)}(n),
where δ in the output layer and hidden layer are, respectively,
\delta_j^{(l)}(n) = e_j^{(l)}(n) \, o_j(n) \left[ 1 - o_j(n) \right],
\delta_j^{(l)}(n) = y_j^{(l)}(n) \left[ 1 - y_j^{(l)}(n) \right] \sum_{k} \delta_k^{(l+1)}(n) \, w_{kj}^{(l+1)}(n),
where the momentum constant α is chosen empirically between 0 and 1 and the learning rate is set to η = 0.5, as suggested by [33,34].
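The fragment below is a direct numerical transcription of the forward pass and the momentum-based weight update above, written with NumPy; the array shapes and the value α = 0.9 are illustrative assumptions rather than the authors' code.

```python
# One hidden layer: forward pass and output-layer delta rule with momentum.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def forward(x, W1, b1, W2, b2):
    """x: input vector; W, b: weights and thresholds of hidden/output layers."""
    v1 = W1 @ x + b1              # activity level of the hidden neurons
    y1 = sigmoid(v1)              # logistic activation
    v2 = W2 @ y1 + b2
    o = sigmoid(v2)               # network output
    return y1, o

def update_output_layer(W2, dW2_prev, e, o, y1, eta=0.5, alpha=0.9):
    """Delta rule with momentum (alpha assumed 0.9, within (0, 1))."""
    delta2 = e * o * (1.0 - o)                      # output-layer local gradient
    dW2 = alpha * dW2_prev + eta * np.outer(delta2, y1)
    return W2 + dW2, dW2, delta2
```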
The BP neural network has a nonlinear mapping capability, which makes it suitable for problems with complex mechanisms, so it can approximate nonlinear function outputs. It starts from random weights and thresholds obtained for the divided samples and then trains the model. Using the BP algorithm, the partial derivatives (gradients) of the loss function with respect to the weights and thresholds of each layer are computed [33]. These gradients are used to update the initial weights and thresholds until the loss function is minimized or the set number of iterations is reached, and they thereby provide the best parameters for the neural network. The genetic algorithm stage then carries out fitness evaluation, selection, crossover, mutation and related steps to evolve the population towards the optimal solution [35,36,37]. In general, the GA uses binary coding and divides the chromosome into four parts: the input-to-hidden layer connection weights, the hidden layer thresholds, the hidden-to-output layer connection weights and the output layer thresholds. Each weight and threshold is encoded as an M-bit binary string, and the optimized weights and thresholds are then fed into the BP neural network. Figure 4 demonstrates the flowchart of the BP neural network optimized by K-fold cross-validation and GA.

3.4. Model Parameter Setting

In this work, 240 groups of data are selected as training and testing samples for model development. The sum of the absolute values of the prediction errors on the training data is taken as the individual fitness value; the smaller the fitness value, the better the individual.
To reach the optimal simulation of a BP neural network model, the number of hidden-layer neurons needs to be varied together with the learning rate, the learning algorithm, etc., and determined after several experimental trials [26]. According to the models and experimental methods in the literature [16,20,26], the number of neurons in the hidden layer was varied from 1 to 20, and the simulation results of the BP neural network (BPNN) were used to identify the optimal number of neurons (the prediction results are shown in Figure 5). Since the main objective of this study is to improve the prediction model with the k-fold validation method, it would be difficult to determine whether changes in the prediction results were caused by the k-fold validation method if the number of hidden-layer neurons also varied; fixing the number of neurons in the hidden layer therefore provides a more direct view of the effect of this approach. Figure 6 shows the final network architecture used in this study for torsional strength prediction.
In the BP neural network, the samples are randomly divided into two groups: the first group contains 170 samples for training and the remaining 70 samples are used for testing, which gives a more realistic indication of the simulation performance. In the GA-BP neural network, the samples are divided in the same way, but the weights and thresholds are varied as the fittest individuals are selected. The GA parameters are set as follows: the population size is 10, the maximum number of iterations is 50, the crossover rate is 0.4 and the mutation probability is 0.2.
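A compact sketch of the GA stage with these parameter values is given below. It uses a real-coded chromosome and a fixed hidden-layer size purely to keep the example short, whereas the paper describes binary coding and selects the hidden-layer size from Figure 5; the fitness is the sum of absolute training errors, as stated above.

```python
# Real-coded GA sketch for seeding the BP network's weights and thresholds
# (population 10, 50 generations, crossover 0.4, mutation 0.2); the 10 hidden
# neurons and the real coding are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 7, 10, 1
n_genes = n_hidden * n_in + n_hidden + n_out * n_hidden + n_out

def decode(chrom):
    """Split a flat chromosome into weight matrices and threshold vectors."""
    i = 0
    W1 = chrom[i:i + n_hidden * n_in].reshape(n_hidden, n_in); i += n_hidden * n_in
    b1 = chrom[i:i + n_hidden]; i += n_hidden
    W2 = chrom[i:i + n_out * n_hidden].reshape(n_out, n_hidden); i += n_out * n_hidden
    b2 = chrom[i:i + n_out]
    return W1, b1, W2, b2

def fitness(chrom, X, y):
    """Sum of absolute prediction errors on the training data (smaller is better)."""
    W1, b1, W2, b2 = decode(chrom)
    h = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))
    pred = (h @ W2.T + b2).ravel()
    return np.sum(np.abs(pred - y))

def evolve(X, y, pop_size=10, n_gen=50, p_cross=0.4, p_mut=0.2):
    pop = rng.normal(0.0, 0.5, size=(pop_size, n_genes))
    for _ in range(n_gen):
        f = np.array([fitness(c, X, y) for c in pop])
        pop = pop[np.argsort(f)]                 # best individual first (elitism)
        for i in range(1, pop_size):
            mate = pop[rng.integers(0, i)].copy()
            child = pop[i].copy()
            cross = rng.random(n_genes) < p_cross
            child[cross] = mate[cross]           # uniform crossover with a fitter mate
            mut = rng.random(n_genes) < p_mut
            child[mut] += rng.normal(0.0, 0.1, size=mut.sum())
            pop[i] = child
    f = np.array([fitness(c, X, y) for c in pop])
    return decode(pop[np.argmin(f)])             # weights/thresholds handed to BP training
```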

3.5. Evaluation Metrics

In this paper, five statistical evaluation metrics were used to assess the performance of the different models: the mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), coefficient of determination (R²) and mean absolute percentage error (MAPE) [38,39]. These metrics are calculated as follows:
\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \hat{y}_i - y_i \right|,
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2,
\mathrm{RMSE} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2 },
R^2 = \frac{ \sum_{i=1}^{n} \left( \hat{y}_i - \bar{y} \right)^2 }{ \sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2 },
\mathrm{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{\hat{y}_i - y_i}{y_i} \right|,
where n is the number of data groups, \bar{y} is the mean of the measured (testing) torsional strength, \hat{y}_i is the predicted torsional strength and y_i is the measured (testing) torsional strength.
MSE, MAE and RMSE are convenient measures of the 'mean error' and are used to evaluate the degree of variability of the predictions. Although RMSE is more sensitive to large errors, it provides a smooth loss function. Furthermore, R² characterizes the goodness of fit through the variation in the data; its normal range is [0, 1], and the closer it is to 1, the better the variables of the equation explain y and the better the model fits the data. MAPE can also be used to compare how well different models evaluate the same data; the lower its value, the better the prediction.
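These five metrics translate directly into code; the sketch below computes them for a set of predictions, with R² evaluated in the explained-variance form given above.

```python
# Straightforward implementation of the five evaluation metrics above.
import numpy as np

def evaluate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    # R^2 in the form used by the paper: explained variation / total variation
    r2 = np.sum((y_pred - y_true.mean()) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    mape = 100.0 * np.mean(np.abs(err / y_true))
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2, "MAPE": mape}

# example usage: evaluate(y_test, best_model.predict(X_test))
```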

4. Results and Discussion

The K-fold cross-validation method is used to sequentially select the training samples as the input data, and then the BP and GA-BP neural networks are used to predict the torsional strength of the RC beams. The results are shown in Table 4, which provides the K-fold cross-validation results for the 10 different data folds. In this step, the model with the best prediction performance is selected by comparing the evaluation metrics. Although some of the test groups have correlation values close to 1, they perform poorly in the RMSE and MSE metrics.
After the K-fold cross-validation is conducted, the results in Table 4 show that the best model is that of group 10, as its MSE, RMSE and MAPE values are lower than those of most other groups. The evaluation indicators in Table 4 can be used to assess the prediction performance of each group: the smaller the MAE, RMSE, MSE and MAPE values, the better the generalization capacity of the prediction model, and, similarly, the closer R² is to 1, the better the model. Therefore, groups 2, 3, 4 and 5, where the MAE exceeds 10 kN·m, are excluded first. Similarly, since the MSE values of group 7 (542.109 kN²·m²) and group 8 (289.448 kN²·m²) are much greater than those of groups 1, 6, 9 and 10, the models in groups 7 and 8 are also excluded. The RMSE indicator then provides a reason for excluding group 1, since its RMSE (11.241 kN·m) is larger than that of group 6 (9.083 kN·m), group 9 (7.977 kN·m) and group 10 (7.145 kN·m). The MAPE of group 9 (45.789%) is much greater than that of group 6 (27.160%) and group 10 (18.477%), and the coefficient of determination of group 10 (0.979) is closer to 1 than that of group 6 (0.968). Therefore, the tenth group is selected on comprehensive consideration. Its weights and thresholds are recorded and input into the BP neural network model for comparison on the test set. After changing the initial weights and thresholds in this way, the predictions of both the GA-BP and BP neural networks improve, although the improvement for the GA-BP neural network is not significant (Figure 7).
As shown in Figure 8, both GA and k-fold cross-validation perform well in improving the prediction accuracy of the BPNN, and the error ranges of the different models can be observed there. In Figure 8a, the error of the GA-BPNN model is reduced from 78 kN·m to 50 kN·m. Similarly, Figure 8b shows a reduction in the model error from 78 kN·m to 40 kN·m using the k-fold validation method. The error is also reduced after using the k-fold validation method in Figure 8d. Furthermore, it can be observed from Figure 8c that, over the 70 testing data, the k-fold validation method outperforms the genetic algorithm in terms of error reduction, with the absolute maximum error reduced by approximately 15 kN·m through k-fold cross-validation. With the K-fold cross-validation method, the prediction error values of the BPNN and GA-BPNN are almost the same, and the maximum errors of both networks are close to approximately 25 kN·m.
The prediction values of the BP neural network model have higher errors than those of the GA-BP neural network. In particular, the BP neural network predicts negative values in the range of sample numbers 20 to 25 and 45 to 50. In addition, the predictions of the GA-BP and BP models have a low deviation range between 0 and 10. The reason for this result is the small number of data selected for training and the limited tuning of thresholds and weights by the BP neural network. It is worth noting that the BP neural network also performs well after the thresholds and weights are optimized by GA (as shown in Figure 9), and these two models produce better results for the last 10 testing data. In Figure 10, the simulation line shows high agreement between the forecasted and actual values, especially in the ranges of group numbers 12–17, 40–45, 51–58 and 60–65. However, the optimized BP neural network model does not provide accurate predictions for the data groups numbered 45–50. Overall, the optimized BP neural network predictions show a high degree of agreement with the actual values.
In general, BP neural networks perform poorly without parameter optimization due to the random generation of thresholds and weights. Similarly, although GA can be employed to find the best weights and thresholds, the prediction results are often similar to those of the plain BP neural network when the population is initialized randomly. However, the network can be improved by inputting weights and thresholds filtered by k-fold cross-validation. This phenomenon is validated by the samples (indices 0 to 10) in Figure 8, Figure 9, Figure 10 and Figure 11. In Figure 8 and Figure 9, the prediction results for the testing data with indices 1–6 are lower than the actual values, which is the opposite of the simulation results for the design codes: the design code predictions are mostly higher than the actual values in this dataset, especially for TBC-500-2000, where the predicted line almost envelops the actual line. This phenomenon can also be seen in Figure 12, Figure 13 and Figure 14. Comparing the predictions of the three building standards, the correlation coefficients of ACI-318-14 and BS-8110 are similar to each other and higher than that of TBC-500-2000. The results in Table 5 show that the predictions from ACI-318-14 and BS-8110 are similar to those of the BP neural network. This is due to the small sample dataset; in this case, most machine learning methods (e.g., decision tree, random forest, support vector machine, etc.) give simulations comparable to those of BP neural networks [11]. Figure 15 shows the radar diagram of the evaluation metrics for model performance evaluation. The accuracy of the model is improved via GA optimization, with the MSE reduced from 315.363 to 240.046, but R² only improves by about 0.04. In addition, the BP neural network optimized by k-fold cross-validation achieves better results than the plain BP neural network model: the MAE, MSE, RMSE and MAPE values are all reduced, and the coefficient of determination increases by about 0.1. This result is similar to that of the GA-BP neural network optimized by k-fold cross-validation; compared with the plain GA-BP neural network, the k-fold cross-validation has a significant impact on the optimization of the GA-BP neural network by providing better initial weights and thresholds. The simulation results show that, although GA also has an optimizing effect on the BP neural network, the improvement is neither adequate nor stable, whereas the RMSE, MSE, MAE and MAPE values of the neural network models are all reduced after optimization using the k-fold cross-validation method (Table 5). In particular, the MSE of the BPNN and GA-BPNN decreases from 315.363 and 240.046 to 103.100 and 103.988, respectively, after optimization. The correlation coefficients of the BP neural network and GA-BP neural network models are also improved.

5. Conclusions

This study investigates the performance of an optimized BP neural network in predicting the torsional strength of RC beams. Ten input variables covering four aspects were investigated: section details, concrete strength, longitudinal bars and transverse bars. In this paper, to make the dataset easier to use and to remove noise, the raw data were processed using PCA, and the seven most important components were retained for the prediction. The 240 groups of experimental data collected from existing publications were randomly divided into two groups: the first group contains 170 data samples for model training and validation, and the remaining data were used to verify the accuracy of the model. The BP neural network was used in this paper, and the network parameters of this model were optimized using GA and k-fold cross-validation, respectively.
Design codes are widely used in the construction sector as the traditional method for calculating the torsional strength of reinforced concrete beams. However, in order to obtain accurate predictions, the individual variables of a reinforced concrete beam are required, and the applicability conditions of the beam under different circumstances need to be taken into account. The BPNN is able to disregard the applicability conditions of the various variables of reinforced concrete beams and still obtain predictions similar to those of the conventional design codes, which gives the BPNN an advantage in predicting the torsional strength of reinforced concrete beams. However, the method has limitations regarding its weights and thresholds, and, due to the complexity of the construction conditions of reinforced concrete beams, it is difficult to obtain accurate and sufficient data. The application of the k-fold cross-validation and GA methods can effectively address this situation. The k-fold cross-validation optimizes the initial thresholds and weights of the BPNN after modelling the 10 sets of data in turn, while GA finds the optimal thresholds and weights through continuous iterations. While both methods improve the accuracy of the BPNN predictions, k-fold cross-validation is more suitable for the case of insufficient data (R² increased from 0.846 to 0.943). At the same time, the GA-BPNN model is further optimized on the basis of the thresholds and weights provided by k-fold cross-validation, and the improvement is significant. Based on the statistical results of MAE, MSE, RMSE, MAPE and R², the k-fold cross-validation-optimized GA-BPNN is the best prediction model for the torsional strength of reinforced concrete beams.
In future work, in addition to the k-fold cross-validation-optimized GA-BPNN and the existing design codes, other soft computing approaches, such as the support vector machine (SVM), extreme learning machine (ELM), adaptive neuro-fuzzy inference system (ANFIS) and genetic programming (GP), will be investigated to compare and determine the optimal data-driven model for the torsional strength prediction of RC beams.

Author Contributions

Conceptualization, Y.Y.; methodology, Z.L. and Y.Y.; software, Z.L.; validation, B.S., M.R. and M.M.; formal analysis, Z.L.; investigation, T.N.N. and A.N.; resources, B.S., M.R. and M.M.; data curation, Z.L. and A.N.; writing—original draft preparation, Z.L.; writing—review and editing, Y.Y., B.S., M.R., M.M., T.N.N. and A.N.; supervision, Y.Y., B.S. and M.R.; project administration, Y.Y. and T.N.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. ACI Committee 318; Building Code Requirements for Structural Concrete (ACI 318-14) and Commentary. American Concrete Institute: Farmington Hills, MI, USA, 2014.
  2. CSA A23.3-14; Design of Concrete Structures. Canadian Standards Association: Toronto, ON, Canada, 2014.
  3. TBC-500-2000; Requirements for Design and Construction of Reinforced Concrete Structures. Turkish Standards TS-500: Ankara, Turkey, 2000.
  4. BS 8110; Structural Use of Concrete—Part 1: Code of Practice for Design and Construction. British Standards Institution: London, UK, 1997.
  5. JSCE. Standard Specifications for Concrete and Structures—2007. Materials and Construction; Japan Society of Civil Engineers (JSCE): Tokyo, Japan, 2007.
  6. Ali, M.A.; White, R.N. Toward a rational approach for design of minimum torsion reinforcement. ACI Struct. J. 1999, 96, 40–45.
  7. Ju, H.; Han, S.; Kim, K.S. Analytical model for torsional behavior of RC members combined with bending, shear, and axial loads. J. Build. Eng. 2020, 32, 101730.
  8. Ju, H.; Lee, D.H.; Kim, K.S. Minimum torsional reinforcement ratio for reinforced concrete members with steel fibers. Compos. Struct. 2018, 207, 460–470.
  9. Ju, H.; Lee, D.H.; Hwang, J.-H.; Kang, J.-W.; Kim, K.S.; Oh, Y.-H. Torsional behavior model of steel-fiber-reinforced concrete members modifying fixed-angle softened-truss model. Compos. Part B Eng. 2013, 45, 215–231.
  10. Cevik, A.; Arslan, M.H.; Köroğlu, M.A. Genetic-programming-based modeling of RC beam torsional strength. KSCE J. Civ. Eng. 2010, 14, 371–384.
  11. Cihan, M.T. Prediction of Concrete Compressive Strength and Slump by Machine Learning Methods. Adv. Civ. Eng. 2019, 2019, 3069046.
  12. Ling, H.; Qian, C.; Kang, W.; Liang, C.; Chen, H. Combination of Support Vector Machine and K-Fold cross validation to predict compressive strength of concrete in marine environment. Constr. Build. Mater. 2019, 206, 355–363.
  13. Tanarslan, H.; Secer, M.; Kumanlioglu, A. An approach for estimating the capacity of RC beams strengthened in shear with FRP reinforcements using artificial neural networks. Constr. Build. Mater. 2012, 30, 556–568.
  14. Naderpour, H.; Haji, M.; Mirrashid, M. Shear capacity estimation of FRP-reinforced concrete beams using computational intelligence. Structures 2020, 28, 321–328.
  15. Shear Capacity of Reinforced Concrete Beams Using Neural Network. Int. J. Concr. Struct. Mater. 2007, 1, 63–73.
  16. Amani, J.; Moeini, R. Prediction of shear strength of reinforced concrete beams using adaptive neuro-fuzzy inference system and artificial neural network. Sci. Iran. 2012, 19, 242–248.
  17. Arslan, M.H. Predicting of torsional strength of RC beams by using different artificial neural network algorithms and building codes. Adv. Eng. Softw. 2010, 41, 946–955.
  18. Lv, Y.; Liu, T.; Ma, J.; Wei, S.; Gao, C. Study on settlement prediction model of deep foundation pit in sand and pebble strata based on grey theory and BP neural network. Arab. J. Geosci. 2020, 13, 1238.
  19. Wu, J.; Cheng, Y.; Liu, C.; Lee, I.; Huang, W. A BP Neural Network Based on GA for Optimizing Energy Consumption of Copper Electrowinning. Math. Probl. Eng. 2020, 2020, 1026128.
  20. Liu, C.Y.; Wang, Y.; Hu, X.M.; Han, Y.L.; Zhang, X.P.; Du, L.Z. Application of GA-BP Neural Network Optimized by Grey Verhulst Model around Settlement Prediction of Foundation Pit. Geofluids 2021, 2021, 5595277.
  21. Yu, Y.; Nguyen, T.N.; Li, J.; Sanchez, L.F.M.; Nguyen, A. Predicting elastic modulus degradation of alkali silica reaction affected concrete using soft computing techniques: A comparative study. Constr. Build. Mater. 2021, 274, 122024.
  22. Ilkhani, M.; Naderpour, H.; Kheyroddin, A. A proposed novel approach for torsional strength prediction of RC beams. J. Build. Eng. 2019, 25, 100810.
  23. Arslan, M.H.; Ceylan, M.; Kaltakcı, M.Y.; Ozbay, Y.; Gulay, G. Prediction of force reduction factor R of prefabricated industrial buildings using neural networks. Struct. Eng. Mech. 2007, 27, 117–134.
  24. Feng, Y.; Mohammadi, M.; Wang, L.; Rashidi, M.; Mehrabi, P. Application of artificial intelligence to evaluate the fresh properties of self-consolidating concrete. Materials 2021, 14, 4885.
  25. Liu, J.; Mohammadi, M.; Zhan, Y.; Zheng, P.; Rashidi, M.; Mehrabi, P. Utilizing artificial intelligence to predict the superplasticizer demand of self-consolidating concrete incorporating pumice, slag, and fly ash powders. Materials 2021, 14, 6792.
  26. Nguyen, T.N.; Yu, Y.; Li, J.; Gowripalan, N.; Sirivivatnanon, V. Elastic modulus of ASR-affected concrete: An evaluation using Artificial Neural Network. Comput. Concr. 2019, 24, 541–553.
  27. Chiu, H.-J.; Fang, I.-K.; Young, W.-T.; Shiau, J.-K. Behavior of reinforced concrete beams with minimum torsional reinforcement. Eng. Struct. 2006, 29, 2193–2205.
  28. Bernardo, L.; Lopes, S. Plastic analysis and twist capacity of high-strength concrete hollow beams under pure torsion. Eng. Struct. 2013, 49, 190–201.
  29. Alabdulhady, M.Y.; Sneed, L.H. Torsional strengthening of reinforced concrete beams with externally bonded composites: A state of the art review. Constr. Build. Mater. 2019, 205, 148–163.
  30. Yu, Y.; Li, W.; Li, J.; Nguyen, T.N. A novel optimised self-learning method for compressive strength prediction of high performance concrete. Constr. Build. Mater. 2018, 184, 229–247.
  31. Teixeira, M.; Bernardo, L. Ductility of RC beams under torsion. Eng. Struct. 2018, 168, 759–769.
  32. Campbell, N.A.; Atchley, W.R. The geometry of canonical variate analysis. Syst. Biol. 1981, 30, 268–280.
  33. Haykin, S. Neural Networks, a Comprehensive Foundation; Macmillan: New York, NY, USA, 1994.
  34. Richemond, P.H.; Guo, Y. Combining learning rate decay and weight decay with complexity gradient descent—Part I. arXiv 2019, arXiv:1902.02881.
  35. Ferreira, C. Gene expression programming in problem solving. In Proceedings of the 6th Online World Conference on Soft Computing in Industrial Applications, Online, 10–24 September 2001.
  36. Tsai, H.-C.; Liao, M.-C. Modeling Torsional Strength of Reinforced Concrete Beams using Genetic Programming Polynomials with Building Codes. KSCE J. Civ. Eng. 2019, 23, 3464–3475.
  37. Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press: Cambridge, MA, USA, 1992.
  38. Draper, N.R.; Smith, H. Applied Regression Analysis; Wiley-Interscience: Hoboken, NJ, USA, 1998; ISBN 978-0-471-17082-2.
  39. Chai, T.; Draxler, R.R. Root mean square error (RMSE) or mean absolute error (MAE)?—Arguments against avoiding RMSE in the literature. Geosci. Model Dev. 2014, 7, 1247–1250.
Figure 1. Historical distributions of parameters (a) RC beam section’s width ( b ); (b) RC beam section’s depth ( h ); (c) Closed stirrup width ( b ); (d) Closed stirrup depth ( h ); (e) Compressive strength ( f c ); (f) Longitudinal reinforcement ratio ( ρ l ); (g) Yield strength of the longitudinal reinforcement ( f y l ); (h) Transverse reinforcement ratio ( ρ t ); (i) Yield strength of transverse reinforcement ( f y t ); (j) Closed stirrup spacing (s); (k) Torsional strength ( T n ).
Figure 2. Significance of principal components.
Figure 3. The main process of K-fold cross-validation.
Figure 4. Flow chart of BP neural network optimized by K-fold cross-validation.
Figure 5. Selection for hidden layer neurons.
Figure 6. Back-propagation neural network (PC: principal component; h: hidden neuron; y: output).
Figure 7. BP neural network optimized by k-fold cross-validation.
Figure 8. Error distribution between different models. (a) Error comparison between BPNN and GA-BPNN. (b) Error comparison between BPNN and optimized BPNN. (c) Error comparison between optimized BPNN and GA-BPNN. (d) Error comparison between GA-BPNN and optimized GA-BPNN. (e) Error comparison between optimized BPNN and optimized GA-BPNN.
Figure 9. Prediction results of GA-BP neural network.
Figure 10. Prediction results of BP neural network.
Figure 11. GA-BP neural network optimized by k-fold cross-validation.
Figure 12. Prediction of torsional strength of ACI-318-14.
Figure 13. Prediction of torsional strength of BS-8110.
Figure 14. Prediction of torsional strength of TBC-500-2000.
Figure 15. Radar diagram of calculation results. (a) BP neural network; (b) GA-BP neural network; (c) Optimized BP neural network; (d) Optimized GA-BP neural network; (e) ACI-318-14; (f) BS-8110; (g) TBC-500-2000.
Table 1. The range of input and output parameters (σ: standard deviation).

Parameters | Input/Output | Unit | Minimum | Maximum | Average | σ
Section details | b | mm | 85 | 600 | 265.943 | 124.295
 | h | mm | 178 | 600 | 391.155 | 134.699
 | b (stirrup) | mm | 56.5 | 546 | 219.021 | 112.81
 | h (stirrup) | mm | 149.5 | 549 | 336.241 | 123.514
Concrete | f_c | MPa | 14.3 | 109.8 | 45.309 | 20.175
Longitudinal bar | f_yl | MPa | 310 | 724 | 437.871 | 121.795
 | ρ_l | % | 0.18 | 3.89 | 1.370 | 0.980
Transverse bar | f_yt | MPa | 265 | 715 | 430.422 | 130.735
 | ρ_t | % | 0.13 | 3.2 | 1.034 | 0.539
 | s | mm | 41 | 300 | 104.095 | 39.595
Test strength | T_u | kN·m | 2.18 | 239 | 265.943 | 124.295
Table 2. Results of principal component analysis.

Parameters | PC1 | PC2 | PC3 | PC4 | PC5 | PC6 | PC7
b | 0.4513 | −0.0733 | −0.1332 | −0.0737 | −0.2801 | 0.1528 | 0.3637
h | 0.4105 | −0.1879 | −0.1967 | 0.2998 | 0.2935 | −0.0503 | −0.1717
b (stirrup) | 0.4447 | −0.0665 | −0.1414 | −0.1248 | −0.2906 | 0.2082 | 0.3884
h (stirrup) | 0.4029 | −0.1520 | −0.2235 | 0.3162 | 0.3667 | −0.0109 | −0.2833
f_c | 0.1861 | 0.4078 | 0.1424 | −0.4710 | 0.6869 | 0.1614 | 0.2243
ρ_l | −0.1359 | 0.4595 | −0.0914 | 0.6491 | 0.1079 | −0.1887 | 0.5298
f_yl | 0.2955 | 0.4469 | 0.2218 | 0.0426 | −0.2801 | −0.1148 | −0.1564
ρ_t | −0.2355 | 0.2810 | −0.5069 | 0.0827 | −0.0536 | 0.7372 | −0.1903
f_yt | 0.2680 | 0.4601 | 0.2442 | 0.0754 | −0.2099 | 0.0434 | −0.4526
s | 0.0066 | −0.2531 | 0.6923 | 0.3522 | 0.0862 | 0.5572 | 0.1033
Table 3. Building standards expression of torsional strength.

Building Standard | Expression for Torsional Strength | Reference
ACI-318-14 | T_n = 2 A_o A_t f_yt cot θ / s | [1]
BS-8110 | T_n = 0.8 b h (0.87 f_yt) A_t / s | [4]
TBC-500-2000 | T_n = 2 A_o A_e f_yt / [2(b + h)] | [3]
where A_t is the area of one leg of a closed stirrup resisting torsion; A_o is the gross area enclosed by the shear flow path; f_yt is the characteristic strength of the links; A_e is the cross-sectional area of the surrounding stirrups; θ is the torsional angle.
Table 4. Results of 10-fold cross-validation in BP neural networks.

Evaluation Metric | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
MAE (kN·m) | 8.430 | 14.536 | 10.930 | 16.924 | 14.549 | 7.059 | 4.511 | 6.072 | 5.727 | 4.737
MSE (kN²·m²) | 126.362 | 333.467 | 230.512 | 1144.994 | 625.964 | 82.497 | 542.109 | 289.448 | 63.630 | 51.052
RMSE (kN·m) | 11.241 | 18.261 | 15.183 | 33.838 | 25.019 | 9.083 | 23.283 | 17.013 | 7.977 | 7.145
MAPE (%) | 33.530 | 33.431 | 38.752 | 30.469 | 19.198 | 27.160 | 16.307 | 17.697 | 45.789 | 18.477
R² | 0.945 | 0.896 | 0.756 | 0.840 | 0.776 | 0.968 | 0.979 | 0.952 | 0.952 | 0.979
Table 5. Calculation results of different models.

Models | MAE (kN·m) | MSE (kN²·m²) | RMSE (kN·m) | MAPE (%) | R²
BP neural network | 11.548 | 315.363 | 17.758 | 40.117 | 0.846
GA-BP neural network | 9.109 | 240.046 | 15.493 | 19.798 | 0.887
Optimized BP neural network | 7.063 | 103.100 | 10.154 | 18.957 | 0.943
Optimized GA-BP neural network | 6.742 | 103.988 | 10.197 | 16.251 | 0.950
ACI-318-14 | 17.832 | 320.016 | 17.889 | 20.205 | 0.867
BS-8110 | 19.700 | 344.436 | 18.559 | 34.507 | 0.856
TBC-500-2000 | 24.154 | 842.799 | 29.031 | 54.252 | 0.756
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
