Proceeding Paper

Multi-Layer Perceptron Neural Networks for Concrete Strength Prediction: Balancing Performance and Optimizing Mix Designs †

1 Georesources, Geoenvironment and Civil Engineering Laboratory (L3G), Cadi Ayyad University, 40000 Marrakech, Morocco
2 LAMIGEP Research Laboratory, Moroccan School of Engineering Sciences, 40000 Marrakech, Morocco
* Author to whom correspondence should be addressed.
Presented at the 7th edition of the International Conference on Advanced Technologies for Humanity (ICATH 2025), Kenitra, Morocco, 9–11 July 2025.
Eng. Proc. 2025, 112(1), 1; https://doi.org/10.3390/engproc2025112001
Published: 14 October 2025

Abstract

Optimizing concrete production requires balancing ingredient ratios and using local resources to produce an economical material with the desired consistency, strength, and durability. Compressive strength is crucial for structural design, yet predicting it accurately is challenging due to the complex interplay of various factors, including component types, water–cement ratio, and curing time. This study employs a Multi-layer Perceptron Neural Network (ANN_MLP) to model the relationship between input variables and the compressive strength of normal and high-performance concrete. A dataset of 1030 samples from the literature was used for training and evaluation. The optimized ANN_MLP configuration included 16 neurons in a single hidden layer, with the ‘tanh’ activation function and ‘sgd’ solver. It achieved an R2 of 0.892, an MAE of 3.648 MPa, and an RMSE of 5.13 MPa. The model was optimized using a univariate sensitivity analysis to measure the impact of each hyperparameter on performance and select optimal values to maximize the accuracy and robustness.

1. Introduction

Concrete, one of the fundamental materials across the entire lifecycle of civil engineering projects, demands a deep understanding of its mechanical properties, both in its fresh and hardened states, as well as of its durability [1,2]. A rigorous evaluation of these properties is crucial for optimizing time and cost management during projects [3]. In this context, machine learning approaches provide innovative solutions for predicting and ensuring concrete performance based on the analysis of historical data [4]. Several studies have employed machine learning algorithms to predict various properties of concrete. These algorithms have been used to predict fresh concrete properties, such as filling capacity, fluidity, and passing ability [5,6], as well as air content, density, compaction factor, and slump [7,8,9,10]. Machine learning has also enabled new models for predicting the mechanical properties of concrete, including the modulus of elasticity [11], tensile strength [12], shear strength [13], and compressive strength [14]. Producing high-performance concrete requires careful selection and control of the quality and proportions of the materials used in its formulation.
In practice, concrete mixes in Morocco are frequently determined using the Dreux–Gorisse method, which specifies the optimal dosage of the various ingredients to ensure workability and strength [15]. This approach first requires physical characterization of the granular skeleton and the other ingredients to verify that they satisfy the specified criteria. Developing an appropriate mix design is therefore a time-consuming and costly process. Concrete is a complex material: its hydration depends on several parameters, and the quality and proportions of its various ingredients yield concretes with varying properties [16,17]. Machine learning has emerged as a new approach for predicting concrete properties from historical data. Compressive strength is the characteristic most commonly used by engineers in designing concrete structures. Traditionally, it is determined through destructive testing of specimens, which is time-consuming and cost-intensive [18]. To address these challenges, researchers have explored machine learning methods that predict concrete strength directly from the mix design. The promise of machine learning resides in its non-destructive nature and in the possibility of bypassing laboratory work when historical data are available.
The literature has explored and reported the predictive accuracy of various machine learning algorithms for the compressive strength of different types of concrete. Algorithms such as artificial neural networks (ANNs), support vector machines (SVMs), decision trees (DTs), K-nearest neighbors (KNNs), ensemble methods, random forests (RFs), AdaBoost, and XGBoost have been utilized, among others [19,20,21,22,23,24]. The application of machine learning to strength prediction was first introduced in 1998 by Yeh [21], who compared linear regression with an ANN for predicting the strength of high-performance concrete from historical data. ANNs excel at capturing complex, non-linear relationships within data, enabling highly accurate predictions of the strength of new concrete formulations. Furthermore, extensive applications of ANN behavioral modeling of concrete structural elements have been reported in several studies [25,26,27,28,29,30,31]. In recent years, several research studies have explored the use of ANNs for predicting the compressive strength of various types of concrete. Abuodeh et al. [20] assessed the compressive strength of ultra-high-performance concrete (UHPC) using deep machine learning techniques, achieving an R2 value of 0.80. Akbari and Jafari Deligani [32] used an ANN to predict concrete compressive strength based on 207 mixtures, achieving an R2 value of 0.92. Several studies have utilized a dataset of 1030 samples for predicting compressive strength. Asteris et al. [33] developed a hybrid ensemble model, achieving an R2 value of 0.87. Shah et al. [34] used ANNs, reporting an R2 value of 0.873. Sah and Hong [26] conducted a performance comparison of various machine learning models and obtained an R value of 0.96. In contrast, Song et al. [35] worked with a smaller dataset of 98 samples, focusing on concrete with a fly ash admixture, and achieved an R2 value of 0.81. Khan et al. [24] optimized an ANN model with a larger dataset of 1637 samples, achieving an R value of 0.95.
The accuracy of ANN models largely depends on the architecture and parametric properties of the network. However, most previous research did not thoroughly investigate the time-consuming methodology needed to optimize the hyperparameters that govern network learning. This research therefore aims to develop a supervised ANN model for predicting the strength of concrete, trained on data from a previous study [21]. Different hyperparameters, such as the number of neurons in the hidden layer, the activation function, and the optimization function (solver), were investigated to improve model accuracy and overcome the challenges associated with this process. The performance of the best model was then evaluated using several methods. Additionally, the influence of the input variables on the output was examined through a sensitivity analysis.

2. Materials and Methods

2.1. Description of Dataset

The dataset employed in this study, sourced from the existing literature, comprises 1030 samples. Each sample in the database provides information on the compressive strength of a specific concrete formulation, measured at a particular age. There are eight input variables in total. Seven of these represent the quantities of the ingredients used in the concrete, namely cement, fly ash, blast furnace slag, water, superplasticizer, coarse aggregate, and fine aggregate. The eighth input variable, age, indicates the curing time on the day of strength testing. Therefore, a total of eight input variables (X1, X2, …, X8) and one output variable (Y) are considered in this study, as summarized in Table 1.
This dataset covers a sufficiently wide range of concrete compositions, including diverse combinations both with and without additives and admixtures, offering a rich variety of formulations (Figure 1). The water/binder (W/b) ratio ranges from 0.23 to 0.9, and the coarse aggregate/fine aggregate (CA/FA) ratio ranges from 0.86 to 1.87. This extensive range of concrete mix designs ensures that the developed model will be capable of adapting to various new concrete compositions.
The correlations across the whole dataset, including ingredient proportions, age, and the CS of concrete, are summarized by the heatmap in Figure 2. According to the correlation matrix, the ingredients with the most positive effect on compressive strength were cement, superplasticizer, and age, with coefficients of 0.5, 0.37, and 0.33, respectively. Compressive strength shows a negative correlation with water (−0.29).
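Pearson coefficients of this kind can be reproduced with a few lines of NumPy. The sketch below uses synthetic stand-in columns (the 1030-sample dataset itself is not reproduced here), so the variable names and resulting coefficients are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-in data: cement and age push strength up, water pushes it down.
cement = rng.uniform(102, 540, n)
water = rng.uniform(122, 247, n)
age = rng.uniform(1, 365, n)
strength = 0.1 * cement - 0.2 * water + 0.05 * age + rng.normal(0, 5, n)

# np.corrcoef expects variables as rows; it returns the full Pearson matrix,
# which is what a heatmap such as Figure 2 visualizes.
corr = np.corrcoef(np.vstack([cement, water, age, strength]))
print(np.round(corr[-1], 2))  # correlations of strength with each variable
```

The last row of the matrix gives the correlation of compressive strength with every other column, mirroring the bottom row of the heatmap.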

2.2. Methodology

Figure 3 shows the methodology used in this study to generate artificial neural network–multi-layer perceptron (ANN_MLP) regression models to predict the compressive strength of concrete. Model development involves several steps, starting with data preprocessing: checking for missing and outlier values, then normalizing the input data. The dataset used in this study was extracted from the literature and consists of 1030 samples, with 8 input variables (C, BF, Fly.A, W, S, FA, CA, and D) and one target variable (CS) [21]. The dataset is of high quality, with no missing values detected. A small number of outliers were identified through a boxplot analysis and subsequently treated. The input parameters span very different scales, as presented in Table 1, so rescaling is essential to keep the scale uniform across all parameters. Once the data are ready, the model is developed by randomly splitting the dataset, with around 80% of all data points used for training and the remaining 20% used for testing the model’s performance. Performance is judged by measuring the average error between predicted and actual values using three metrics: the coefficient of determination (R2), the root mean squared error (RMSE), and the mean absolute error (MAE).
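The three evaluation metrics are straightforward to compute directly. The sketch below implements them with NumPy; the function names are our own shorthand, not taken from the paper:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SSE/SST."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    sse = np.sum((y_true - y_pred) ** 2)
    sst = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - sse / sst

def mae(y_true, y_pred):
    """Mean absolute error, in the units of the target (MPa here)."""
    return np.mean(np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float)))

def rmse(y_true, y_pred):
    """Root mean squared error; penalizes large deviations more than MAE."""
    return np.sqrt(np.mean((np.asarray(y_true, float) - np.asarray(y_pred, float)) ** 2))

# Illustrative strengths in MPa
y_true = np.array([20.0, 35.0, 50.0])
y_pred = np.array([22.0, 33.0, 49.0])
print(r2_score(y_true, y_pred), mae(y_true, y_pred), rmse(y_true, y_pred))
```

Because RMSE squares the residuals before averaging, it is always at least as large as MAE, which is why the paper's RMSE (5.13 MPa) exceeds its MAE (3.648 MPa).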
The predictive accuracy and generalization capability of the ANN_MLP model are mainly affected by the selected architecture and its associated network parameters. The model used in this study was designed to predict the compressive strength (CS), with eight neurons in the input layer, including C, BF, Fly.A, FA, CA, W, S, and testing age (D); a single hidden layer; and one neuron in the output layer (CS). However, this study explored various network structures by modifying parameters within the forward- and backpropagation processes to identify the optimal configuration. The influence of the number of neurons and learning rate on the network performance was investigated through sequential increases in their values. A variety of activation functions and optimization methods (solvers) were considered for neuron output generation and weight/bias updates.
Optimizing the artificial neural network model is a crucial step in achieving good performance, and involves adjusting the hyperparameters to improve the model’s accuracy. Key hyperparameters influencing the performance of an ANN include the number of neurons in each layer, the choice of activation function, the optimization method, and the learning rate. A univariate sensitivity study of the hyperparameters (number of neurons, activation functions, and solvers) was conducted to understand how each influences the model’s performance, providing insight into the optimal range of each hyperparameter. A range of hyperparameters was defined based on this sensitivity study. A 5-fold cross-validation technique was then utilized to evaluate performance and fine-tune the optimal hyperparameters, which were used to configure every regression model before training.
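A pipeline of this kind maps naturally onto scikit-learn, whose MLPRegressor uses the same ‘tanh’/‘relu’ activation and ‘lbfgs’/‘sgd’/‘adam’ solver names quoted in this paper; that the authors used this particular library is our assumption. The sketch below runs a small 5-fold grid search on synthetic stand-in data, with the grid trimmed for brevity (the paper's full search also covered the ‘lbfgs’ and ‘sgd’ solvers and many more neuron counts):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (200, 8))            # stand-in for the 8 mix variables
y = X @ np.linspace(-1.0, 1.0, 8) + 0.1 * rng.normal(size=200)

# 80/20 random split, as described in the methodology
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Scaling inside the pipeline keeps the rescaling step out of the test fold
pipe = make_pipeline(StandardScaler(),
                     MLPRegressor(max_iter=1000, random_state=0))
grid = {
    "mlpregressor__hidden_layer_sizes": [(8,), (16,)],
    "mlpregressor__activation": ["tanh", "relu"],
    "mlpregressor__solver": ["adam"],
}
search = GridSearchCV(pipe, grid, cv=5, scoring="r2")
search.fit(X_tr, y_tr)
print(search.best_params_, round(search.score(X_te, y_te), 3))
```

GridSearchCV refits the best configuration on the full training split, so `search.score` reports the held-out test R2 of the tuned model.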

3. Results and Discussion

Various ANN architectural models were developed by varying the number of neurons in the hidden layer, the type of activation function, and the solver, using the default learning rate. ANN models utilizing the identity activation function exhibited extremely poor predictive capability (R2 remained nearly constant at 0.59, independent of both the number of neurons and the solver), as illustrated in Figure 4. The performance of the ANN_MLP model, expressed in terms of R2 values, was analyzed for the LBFGS, SGD, and Adam optimization functions over four activation functions (identity, logistic, tanh, and ReLU). These results, obtained by varying the number of neurons in the single hidden layer, are presented in Figure 4a, Figure 4b, and Figure 4c, respectively.
As evident from these graphs, independent of the solver employed, ANN models developed using tanh and ReLU as activation functions demonstrate a better predictive performance in terms of the R2 coefficient compared to models using the logistic function. Figure 4a shows the performance metrics obtained by analyzing the entire dataset using the LBFGS solver. Based on the determination coefficients, the logistic function had an R2 of 0.860 with 25 neurons, while ReLU and tanh performed the best, with R2 values of 0.896 and 0.893, respectively, achieved with 62 and 23 neurons. Although ReLU exhibits erratic variations, with significant fluctuations between successive neuron values, it still delivers a strong performance for complex ANN model architectures with a large number of neurons, as shown in the results. When using the SGD solver (Figure 4b), tanh shows the best performance, with an R2 of 0.908 achieved with 58 neurons, followed closely by ReLU with an R2 of 0.904 at 96 neurons. When the Adam solver was used (Figure 4c), tanh and ReLU continued to dominate in terms of performance, with tanh reaching an R2 of 0.8632 at 95 neurons and ReLU obtaining an R2 of 0.8800 also at 95 neurons. The results suggest that for neural network architectures with a large number of neurons, the ReLU activation function is the most suitable despite its variability, while tanh offers a stable and strong alternative, especially when using the SGD solver [36].
Figure 5 presents the model performance for a neural network trained with the tanh activation function, the SGD solver, and the default learning rate. With a very small number of neurons (5 or fewer), the network performs poorly at capturing the nonlinearity in the data. As demonstrated in Figure 5, the R2 metric increases from 0.832 to 0.908 as the number of neurons increases beyond five.
A neural network with the highest overall R2 may not always correspond to the lowest error rates (Figure 6). This discrepancy arises because models generally perform better on training data than in the test phase. Figure 5 illustrates that when the number of neurons in the hidden layer exceeds 20, the model error and the gap between training and test R2 both increase despite the apparent improvement in performance. Such models overfit: they fit the training data well but lack predictive power on new data. The model with 16 neurons demonstrates a better balance between training and testing performance than the model with 58 neurons. It shows lower overall errors (MAE of 4.182 and RMSE of 5.278 for testing) and a strong coefficient of determination (R2 of 0.892 for testing), while its training performance is only slightly lower than that of the 58 neuron model. The 58 neuron model exhibits signs of overfitting: it achieves lower errors and a higher R2 on the training set (MAE of 2.4487, RMSE of 3.3150, and R2 of 0.9613), but the gap between its training and test performance (test R2 = 0.908) is larger. In contrast, the differences between training and testing metrics are smaller for the 16 neuron model, indicating better generalization.
After careful examination, a hidden layer size of 16 neurons was selected due to its consistent delivery of dependable R2 values across the training and testing sets, as indicated in Figure 6. This suggests that a hidden layer of 16 neurons effectively minimized overfitting and produced reliable, consistent predictions. The optimal number of neurons appears to be approximately twice the number of inputs, which aligns with the findings reported by Khan et al. [24].
Figure 7 shows that the selected model with 16 hidden nodes provides an optimal balance between performance and generalization. While it does not have the absolute highest R2 value compared to models with more neurons, it presents a high degree of fitness, with R2 = 0.92 on the training data. It also achieves an R2 of 0.892 on the test data, indicating its robustness in predicting the compressive strength of concrete while effectively minimizing the risk of overfitting.
Figure 8 demonstrates the error in estimating concrete’s compressive strength using the ANN_MLP model for various test samples. The proposed ANN_MLP model can accurately predict the concrete compressive strength from the inputs, with an average error of around ±4.82 MPa. However, the error varies significantly across different strength ranges. The mean relative error for samples with strengths below 20 MPa is approximately 20%; for samples in the 20–40 MPa range, it is around 12.5%; and for strengths above 40 MPa, the error is approximately 13.5%.
To determine the characteristic strength of concrete at 28 days (Fc28), the BAEL standard requires a safety margin of 5 MPa, which represents approximately 20% of a typical 25 MPa concrete, to be subtracted from the measured average strength (fm): Fc28 = fm − 5 MPa. This margin accounts for the inherent variations in concrete production and application, guaranteeing that most samples meet the minimum required strength and that structural elements can safely bear the expected loads. Using a predictive model such as the ANN_MLP to estimate this average strength introduces an additional prediction error. If the model’s mean relative error is around 12%, the total error margin could lead to significant deviations from the target strength. For example, with a target Fc28 of 25 MPa, the combined uncertainty could reduce the effective Fc28 to around 17 MPa: Fc28 ≈ 25 MPa − (0.20 + 0.12) × 25 MPa = 17 MPa. This dual uncertainty may result in considerable underestimation or overestimation of strengths, making the model’s predictions less accurate for direct use without additional adjustment or experimental validation.
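The combined-margin arithmetic described above can be checked in a few lines; the fractions below simply restate the paragraph's illustrative figures (20% BAEL margin, 12% model error on a 25 MPa target):

```python
# Illustrative combined-uncertainty calculation, reproducing the text's example.
target = 25.0        # target Fc28 in MPa
bael_margin = 0.20   # 5 MPa BAEL safety margin as a fraction of 25 MPa
model_error = 0.12   # assumed mean relative error of the ANN_MLP prediction

effective = target - (bael_margin + model_error) * target
print(effective)  # 17.0 MPa, the value quoted in the text
```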

4. Conclusions

Concrete formulation traditionally consists of specifying the required compressive strength, testing ingredient quality, and determining the appropriate combination using methods such as Dreux–Gorisse. The strength of a test batch is evaluated after curing for 7 and 28 days, and adjustments are made until it reaches the target value. This process is both costly and time-consuming. To address these challenges, we applied a machine learning technique, an artificial neural network with a multi-layer perceptron (ANN_MLP), to predict the compressive strength of concrete. The aim of our study is to determine the best architecture for the ANN_MLP model and identify the most significant input parameters affecting compressive strength. The best-performing model includes 16 neurons in the hidden layer, a ‘tanh’ activation function, and an ‘SGD’ optimization algorithm. This configuration demonstrated the best balance between performance and risk of overfitting, with a coefficient of determination (R2) of 0.892 and a prediction error of ±5 MPa.
After achieving a satisfactory predictive performance with the developed ANN_MLP model, we could focus on leveraging it to explore various concrete formulation scenarios aimed at optimizing material efficiency and compressive strength outcomes by considering factors such as material availability, project-specific needs, and environmental constraints.

Author Contributions

Conceptualization, S.-E.C. and B.K.; methodology, Y.A.; software, Y.A. and S.-E.C.; validation, S.-E.C., B.K. and A.K.; formal analysis, Y.A.; investigation, Y.A. and Y.C.; resources, S.-E.C.; data curation, Y.A.; writing—original draft preparation, Y.A.; writing—review and editing, Y.C. and S.-E.C.; visualization, Y.A.; supervision, S.-E.C. and B.K.; project administration, A.K.; funding acquisition, S.-E.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ben Chaabene, W.; Flah, M.; Nehdi, M.L. Machine learning prediction of mechanical properties of concrete: Critical review. Constr. Build. Mater. 2020, 260, 119889.
2. Moein, M.M.; Saradar, A.; Rahmati, K.; Mousavinejad, S.H.G.; Bristow, J.; Aramali, V.; Karakouzian, M. Predictive models for concrete properties using machine learning and deep learning approaches: A review. J. Build. Eng. 2023, 63, 105444.
3. Gamil, Y. Machine learning in concrete technology: A review of current researches, trends, and applications. Front. Built Environ. 2023, 9, 1145591.
4. Ziolkowski, P.; Niedostatkiewicz, M. Machine Learning Techniques in Concrete Mix Design. Materials 2019, 12, 1256.
5. Azimi-Pour, M.; Eskandari-Naddaf, H.; Pakzad, A. Linear and non-linear SVM prediction for fresh properties and compressive strength of high volume fly ash self-compacting concrete. Constr. Build. Mater. 2020, 230, 117021.
6. Sonebi, M.; Cevik, A.; Grünewald, S.; Walraven, J. Modelling the fresh properties of self-compacting concrete using support vector machine approach. Constr. Build. Mater. 2016, 106, 55–64.
7. Sobuz, H.R.; Imran, A.; Datta, S.D.; Jabin, J.A.; Aditto, F.S.; Hasan, N.M.S.; Hasan, M.; Zaman, A.A.U. Assessing the influence of sugarcane bagasse ash for the production of eco-friendly concrete: Experimental and machine learning approaches. Case Stud. Constr. Mater. 2024, 20, e02839.
8. Hasan, N.M.S.; Sobuz, H.R.; Shaurdho, N.M.N.; Meraz, M.; Datta, S.D.; Aditto, F.S.; Kabbo, K.I.; Miah, J. Eco-friendly concrete incorporating palm oil fuel ash: Fresh and mechanical properties with machine learning prediction, and sustainability assessment. Heliyon 2023, 9, e22296.
9. Nadimalla, A.; Masjuki, S.; Saad, S.; Ali, M. Machine Learning Model to Predict Slump, VEBE and Compaction Factor of M Sand and Shredded Pet Bottles Concrete. IOP Conf. Ser. Mater. Sci. Eng. 2022, 1244, 012023.
10. Öztaş, A.; Pala, M.; Özbay, E.; Kanca, E.; Çaǧlar, N.; Bhatti, M.A. Predicting the compressive strength and slump of high strength concrete using neural network. Constr. Build. Mater. 2006, 20, 769–775.
11. Yan, K.; Shi, C. Prediction of elastic modulus of normal and high strength concrete by support vector machine. Constr. Build. Mater. 2010, 24, 1479–1485.
12. Bui, D.-K.; Nguyen, T.; Chou, J.-S.; Nguyen-Xuan, H.; Ngo, T.D. A modified firefly algorithm-artificial neural network expert system for predicting compressive and tensile strength of high-performance concrete. Constr. Build. Mater. 2018, 180, 320–333.
13. Bashir, R.; Ashour, A. Neural network modelling for shear strength of concrete members reinforced with FRP bars. Compos. Part B Eng. 2012, 43, 3198–3207.
14. Nithurshan, M.; Elakneswaran, Y. A systematic review and assessment of concrete strength prediction models. Case Stud. Constr. Mater. 2023, 18, e01830.
15. Hamza, C.; Bouchra, S.; Mostapha, B.; Mohamed, B. Formulation of ordinary concrete using the Dreux-Gorisse method. Procedia Struct. Integr. 2020, 28, 430–439.
16. Hattani, F.; Menu, B.; Allaoui, D.; Mouflih, M.; Zanzoun, H.; Hannache, H.; Manoun, B. Evaluating the Impact of Material Selections, Mixing Techniques, and On-site Practices on Performance of Concrete Mixtures. Civ. Eng. J. 2024, 10, 571–598.
17. Kovler, K.; Roussel, N. Properties of fresh and hardened concrete. Cem. Concr. Res. 2011, 41, 775–792.
18. Ivanchev, I. Investigation with Non-Destructive and Destructive Methods for Assessment of Concrete Compressive Strength. Appl. Sci. 2022, 12, 12172.
19. Chou, J.-S.; Tsai, C.-F.; Pham, A.-D.; Lu, Y.-H. Machine learning in concrete strength simulations: Multi-nation data analytics. Constr. Build. Mater. 2014, 73, 771–780.
20. Abuodeh, O.R.; Abdalla, J.A.; Hawileh, R.A. Assessment of compressive strength of Ultra-high Performance Concrete using deep machine learning techniques. Appl. Soft Comput. 2020, 95, 106552.
21. Yeh, I.-C. Modeling of strength of high-performance concrete using artificial neural networks. Cem. Concr. Res. 1998, 28, 1797–1808.
22. Choudhary, L.; Sahu, V.; Dongre, A.; Garg, A. Prediction of compressive strength of sustainable concrete using machine learning tools. Comput. Concr. 2024, 33, 137–145.
23. Kumar, A.; Arora, H.C.; Kapoor, N.R.; Mohammed, M.A.; Kumar, K.; Majumdar, A.; Thinnukool, O. Compressive Strength Prediction of Lightweight Concrete: Machine Learning Models. Sustainability 2022, 14, 2404.
24. Khan, A.Q.; Awan, H.A.; Rasul, M.; Siddiqi, Z.A.; Pimanmas, A. Optimized artificial neural network model for accurate prediction of compressive strength of normal and high strength concrete. Clean. Mater. 2023, 10, 100211.
25. Feng, D.-C.; Liu, Z.-T.; Wang, X.-D.; Chen, Y.; Chang, J.-Q.; Wei, D.-F.; Jiang, Z.-M. Machine learning-based compressive strength prediction for concrete: An adaptive boosting approach. Constr. Build. Mater. 2020, 230, 117000.
26. Sah, A.K.; Hong, Y.-M. Performance Comparison of Machine Learning Models for Concrete Compressive Strength Prediction. Materials 2024, 17, 2075.
27. Al-Shamiri, A.K.; Kim, J.H.; Yuan, T.-F.; Yoon, Y.S. Modeling the compressive strength of high-strength concrete: An extreme learning approach. Constr. Build. Mater. 2019, 208, 204–219.
28. Deshpande, N.; Londhe, S.; Kulkarni, S. Modeling compressive strength of recycled aggregate concrete by Artificial Neural Network, Model Tree and Non-linear Regression. Int. J. Sustain. Built Environ. 2014, 3, 187–198.
29. Ni, H.-G.; Wang, J.-Z. Prediction of compressive strength of concrete by neural networks. Cem. Concr. Res. 2000, 30, 1245–1250.
30. Prasad, B.R.; Eskandari, H.; Reddy, B.V. Prediction of compressive strength of SCC and HPC with high volume fly ash using ANN. Constr. Build. Mater. 2009, 23, 117–128.
31. Topçu, I.B.; Sarıdemir, M. Prediction of properties of waste AAC aggregate concrete using artificial neural network. Comput. Mater. Sci. 2007, 41, 117–125.
32. Akbari, M.; Deligani, V.J. Data driven models for compressive strength prediction of concrete at high temperatures. Front. Struct. Civ. Eng. 2020, 14, 311–321.
33. Asteris, P.G.; Skentou, A.D.; Bardhan, A.; Samui, P.; Pilakoutas, K. Predicting concrete compressive strength using hybrid ensembling of surrogate machine learning models. Cem. Concr. Res. 2021, 145, 106449.
34. Shah, S.A.R.; Azab, M.; ElDin, H.M.S.; Barakat, O.; Anwar, M.K.; Bashir, Y. Predicting Compressive Strength of Blast Furnace Slag and Fly Ash Based Sustainable Concrete Using Machine Learning Techniques: An Application of Advanced Decision-Making Approaches. Buildings 2022, 12, 914.
35. Song, H.; Ahmad, A.; Farooq, F.; Ostrowski, K.A.; Maślak, M.; Czarnecki, S.; Aslam, F. Predicting the compressive strength of concrete with fly ash admixture using machine learning algorithms. Constr. Build. Mater. 2021, 308, 125021.
36. Wang, G.; Giannakis, G.B.; Chen, J. Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization. IEEE Trans. Signal Process. 2019, 67, 2357–2370.
Figure 1. Distribution of samples by concrete type and strength range.
Figure 2. Correlation between different attributes.
Figure 3. Flowchart of research methodology.
Figure 4. Sensitivity analysis for different activation functions and solvers; (a) solver = LBFGS; (b) solver = SGD; (c) solver = Adam.
Figure 5. Performances of ANN_MLP trained with SGD and tanh function.
Figure 6. Performance metrics comparison for models with 16 and 58 neurons.
Figure 7. Model performance: scatter plot of predicted vs. actual values for training and test sets.
Figure 8. Error evaluation of the proposed ANN_MLP.
Table 1. Ranges of the input and output variables.
Variable | Sym | Unit | Category | Mean | Min | Max | Std
Cement | C | Kg/m3 | Input | 281.2 | 102.0 | 540.0 | 104.5
Blast Furnace Slag | BF | Kg/m3 | Input | 73.9 | 0.0 | 359.4 | 86.3
Fly Ash | Fly.A | Kg/m3 | Input | 54.2 | 0.0 | 200.1 | 64.0
Water | W | Kg/m3 | Input | 181.6 | 121.7 | 247.0 | 21.3
Superplasticizer | S | Kg/m3 | Input | 6.2 | 0.0 | 32.2 | 6.0
Coarse Aggregate | CA | Kg/m3 | Input | 972.9 | 801.0 | 1145.0 | 77.7
Fine Aggregate | FA | Kg/m3 | Input | 773.6 | 594.0 | 992.6 | 80.2
Age | D | Day | Input | 45.7 | 1.0 | 365.0 | 63.2
Compressive Strength | CS | MPa | Output | 35.8 | 2.3 | 82.6 | 16.7

Share and Cite

Alouan, Y.; Cherif, S.-E.; Kchakech, B.; Cherradi, Y.; Kchikach, A. Multi-Layer Perceptron Neural Networks for Concrete Strength Prediction: Balancing Performance and Optimizing Mix Designs. Eng. Proc. 2025, 112, 1. https://doi.org/10.3390/engproc2025112001
