Article

Deep Neural Network Models for the Prediction of the Aggregate Base Course Compaction Parameters

Kareem Othman 1,2
1 Civil Engineering Department, University of Toronto, 35 St. George, Toronto, ON M5S 1A4, Canada
2 Public Works Department, Faculty of Engineering, Cairo University, Giza 12613, Egypt
Designs 2021, 5(4), 78; https://doi.org/10.3390/designs5040078
Submission received: 15 September 2021 / Revised: 9 November 2021 / Accepted: 7 December 2021 / Published: 9 December 2021
(This article belongs to the Section Civil Engineering Design)

Abstract

Laboratory tests for the estimation of the compaction parameters, namely the maximum dry density (MDD) and the optimum moisture content (OMC), are time-consuming and costly. Thus, this paper employs the artificial neural network (ANN) technique for the prediction of the OMC and MDD of the aggregate base course from relatively easier index properties tests. The grain size distribution, plastic limit, and liquid limit are used as the inputs for the development of the ANNs. In this study, multiple ANNs (240 in total) are tested to choose the optimum ANN that produces the best predictions. The paper focuses on studying the impact of three hyperparameters on the predictions: the activation function, the number of hidden layers, and the number of neurons per hidden layer; heatmaps are generated to compare the performance of every ANN setting. Results show that the optimum ANN hyperparameters change depending on the predicted parameter. The hyperbolic tangent is the most efficient activation function, as it outperforms the other two activation functions, and the simplest ANN architectures result in the best predictions, as the performance of the ANNs deteriorates with the increase in the number of hidden layers or the number of neurons per hidden layer.

1. Introduction and Background

Flexible pavement is the most common pavement type in Egypt; almost the entire road network in Egypt consists of flexible pavement. Flexible pavement consists of different layers that transfer the traffic load to the soil, and the main objective of the structural design is to make sure that the transferred load does not exceed the soil strength, to avoid failure [1,2,3]. The wearing surface or the surface course is the layer in direct contact with traffic, and it provides characteristics such as friction, smoothness, noise control, rut resistance, and drainage. In addition, it prevents the entrance of surface water into the underlying base, subbase, and subgrade [4]. The base course is the layer immediately beneath the surface course. The main objective of the base course is to provide additional load distribution, and this layer is usually constructed out of crushed aggregate. The subbase layer then comes between the base and the subgrade or soil layer. In general, the subbase layer consists of lower quality materials than the base course but better than the subgrade soils. A subbase course is not always needed or used; this layer is rarely used in the construction of roads in Egypt. In general, field compaction is required in all pavement layers to achieve the required specifications. The main focus of this study is the base course layer. The aggregate base course is typically installed and compacted to a minimum of 95 percent relative compaction, thus providing the stable foundation needed to support either additional layers of aggregates or the placement of the wearing course, which is applied directly on top of the base course.
The process of compaction aims at packing soil particles closely and reducing the air voids. This process is conducted through the use of water as a lubricating medium [5]. The main target of this compaction process is to improve the layer characteristics so as to reduce undesirable settlement, permeability, and swelling, and to increase the stability of slopes and the shear strength of the pavement layers or, in other words, to increase the bearing capacity of the layer. In 1933, Proctor proposed a laboratory test that mimics field compaction. During this test, the pavement layer material is subjected to a specific compaction effort that is equivalent to the compaction effort the compaction equipment delivers in the field [6]. There are two versions of the Proctor test: the standard Proctor test, which is adopted in the case of normal traffic situations, and the modified Proctor test, which is adopted in the case of heavy applied loads such as in airfield pavements [7]. In general, the results obtained by the standard procedures or by the modified procedures [8] are presented graphically to find the maximum dry density (MDD) and the corresponding optimum moisture content (OMC) of the layer, as shown in Figure 1. The water content value at the peak is the OMC and the corresponding density is the MDD [9]. The OMC is essential for specifying the volume of water required in the field to reach the required density. Additionally, the MDD is essential for calculating the relative compaction to check whether the required field density is satisfied on site, using the following equation:
Relative Compaction = Field Dry Density (from the field) / Maximum Dry Density (from the laboratory)
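For example (illustrative numbers, not taken from the dataset), a field dry density of 2.20 t/m³ measured against a laboratory MDD of 2.30 t/m³ gives a relative compaction of 2.20/2.30 ≈ 95.7%, which would satisfy the 95 percent requirement mentioned above.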
However, the Proctor test procedures consume considerable time (almost 2–3 days) and require a large amount of aggregate (approximately 20 kg for one test). These issues can be avoided by developing prediction models that are capable of predicting the OMC and MDD from other properties that are easier and faster to estimate. As a result, multiple studies in the literature focus on developing prediction models that use the soil index parameters for estimating the compaction parameters. While the majority of these studies utilize the linear regression technique, a few studies utilize machine learning techniques. The study by Jumikis (1946) is one of the early studies that correlated the OMC with the plasticity index (PI) and liquid limit (LL) [10]. In 1958, Jumikis developed regression models that utilize the index parameters for predicting the OMC and MDD [11]. Later on, in 1962, Ring et al. used the multiple linear regression (MLR) technique to develop prediction models that utilize the soil index parameters, the approximate average particle diameter (D50), the content of particles finer than 0.001 mm (F0.001), and the fineness average (FA) for predicting the two compaction parameters [12]. Ramiah et al. (1970) developed regression models for estimating the OMC and MDD from Atterberg limits and sieve analysis for 16 samples taken from Bangalore [13]. In 1980, Hammond developed regression models that utilize Atterberg limits and the percentage of fine materials for predicting the OMC of three soil classifications [14]. Similarly, in 1984, Wang and Huang proposed a group of regression models that can be used for the prediction of the two compaction parameters [15], and this study was updated in 2008 by Sinha and Wang to employ ANNs instead of the regression models [16]. Results of previous studies show that ANNs outperform the traditional MLR approach and provide highly accurate results [16]. In 1993, Al-Khafaji proposed two MLR models that utilize the LL and plastic limit (PL) for the prediction of the compaction parameters [17]. In 1998, Blotz et al. used least squares regression for estimating the MDD and OMC from the LL and the compaction energy applied [18]. In 2004, Gurtug and Sridharan developed two regression models: the first model focuses on estimating the OMC from the applied compaction energy and the PL, while the second model focuses on estimating the MDD from the MDD at the plastic limit (PL) moisture content and the energy applied [19]. In 2005, Sridharan and Nagaraj investigated which index property correlates well with the compaction parameters using a dataset of 54 samples. Results show that the PL is highly correlated with both the OMC and MDD when compared with the PI and LL [20]. In 2009, two studies developed prediction models for the estimation of the compaction parameters. The first study, by Di Matteo et al., proposed MLR models that utilize Atterberg limits for the prediction of the two compaction parameters of fine-grained soils [21]. The second study, by Günaydın, proposed regression models that utilize the index parameters and sieve size distribution for the prediction of the two compaction parameters [22]. In 2011, Bera and Ghosh proposed two log-linear regression models that utilize the compaction energy, specific gravity, LL, and grain size for the prediction of the two compaction parameters [23]. Similarly, Farooq et al. (2016) proposed prediction models that utilize the LL, PI, and the compaction energy for estimating the two compaction parameters [24].
Ardakani and Kordnaeij (2017) used an ANN for the prediction of the OMC and MDD based on the results of a dataset of 212 soil samples, and a comparison with MLR was conducted. The results show that the ANN outperforms the previous empirical correlation approaches followed in the literature [25]. This was followed by two studies [26,27] that utilize MLR models for the prediction of the two compaction parameters. Özbeyaz and Söylemez (2020) used two approaches, regression analysis and support vector machines, for predicting the OMC and MDD using the grain size distribution, specific gravity, liquid limit, and plastic limit as inputs [28]. From the previous discussion, it is clear that machine learning approaches outperform the traditional regression models; however, few studies have employed this approach for estimating the two compaction parameters of the aggregate base course. Thus, this study utilizes ANNs to develop prediction models that can be used for estimating the compaction parameters of different types of aggregate base course samples in Egypt.

2. Methodology

2.1. Aggregate Base Types Selection

A total of 64 aggregate base samples were collected from multiple construction sites across Egypt in order to be tested and used in the development of the prediction models. According to the Egyptian code for urban and rural roads (ECP, 2008) part (4) [29], there are multiple aggregate types or grades used for the base course as shown in Table 1.
Between 2015 and 2016, 216 aggregate base samples were collected and tested from different locations all over Egypt under the supervision of the Highway and Research Laboratory, Cairo University, Egypt. Out of these 216 samples, 61 samples follow grade A, 81 follow grade B, 41 follow grade C, and 4 follow grade D. Figure 2 shows the number of samples that follow the different gradations. Thus, the dataset used in this study contains samples that follow these four grades (A, B, C, and D) in approximately the same proportions. The basic tests needed for this study were conducted according to the British Standard practice (BS 1377) [30]. These tests are the specific gravity, aggregate sieve size, Atterberg limits, and the standard Proctor test, during which a standard energy of 592 kJ/m³ is applied.

2.2. Artificial Neural Networks

Over the past few years, artificial neural networks have been used as a powerful tool for prediction problems, and the technique has been employed in the civil engineering field in general and in the pavement engineering field in particular. For example, Othman and Abdelwahab (2021) [3] utilized ANNs for predicting the optimum asphalt content of hot mix asphalt samples. Similarly, Othman [31] utilized ANNs to develop prediction models that can be used for predicting the characteristics of hot asphalt mixes. Additionally, ANNs have been used in a number of studies for the prediction of soil properties, such as the studies by Ardakani and Kordnaeij (2017) [25], Sinha and Wang [16], Özbeyaz and Söylemez (2020) [28], and Othman and Abdelwahab [32]. In general, an ANN is a system that tries to mimic the human brain or neural system. The basic architecture of an ANN consists of three layer types, each with its own function: the input layer is responsible for receiving the input information; the hidden layer receives signals from the input layer and manipulates this information; and the output layer generates the output values or signals [33]. The main unit of the ANN is the neuron, which receives the input values and modulates its response using an activation function that is responsible for transmitting the outgoing signals. Each neuron computes a weighted sum of the elements of the input vector (X) through the weights associated with the connections (W) [34] and uses the activation function to generate its output from the weighted sum of the input values, as follows:
H = W·X + B
Y = F(H)
where B is the bias vector, H is the vector of weighted sums, and Y is the output vector.
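As a minimal illustration of these two equations (a sketch written for this article, not code from the study; sizes and values are arbitrary), the forward pass of one fully connected layer can be expressed in Julia, the language used later in this study:

```julia
# Forward pass of one fully connected layer: H = W*X + B, Y = F(H).
# W and B are the trainable weights and biases; F is the activation function.
W = randn(3, 5)   # 3 neurons, 5 inputs (illustrative sizes)
B = randn(3)
F = tanh          # one of the activation functions studied below
X = rand(5)       # an input vector of index properties
H = W * X .+ B    # weighted sums
Y = F.(H)         # elementwise activation -> layer outputs
```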
In general, there are multiple types of ANNs, such as feedforward and feedback ANNs. Additionally, there are a variety of methods for training an ANN, such as supervised and unsupervised learning algorithms. In this study, the supervised backpropagation technique is used for solving the prediction problem, i.e., for the training of the ANN. The main objective of the training process, which is iterative, is to adjust the connection weights in a way that improves the performance of the ANN in the prediction process. The performance of the ANN is significantly influenced by its hyperparameters and architecture. As a result, in this study, multiple ANNs with different hyperparameters are tested in order to choose the optimum ANN configuration and settings that provide the best predictions. The main focus of this paper is the effect of the number of hidden layers, the number of neurons per hidden layer, and the activation function on the prediction performance of the ANNs.
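Concretely, each backpropagation iteration updates every connection weight by gradient descent on the prediction error E: w ← w − η·(∂E/∂w), where η is the learning rate (this is the standard formulation; the study does not report the learning rate used).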
In general, the activation function can take multiple forms. In this study, the main focus is on the following three activation functions (a short Julia sketch of all three appears after the list):
-
The rectified linear unit (ReLU) activation function, a piecewise-linear function:
f(x) = x if x > 0; 0 otherwise
-
The logistic activation function, also called the sigmoid function:
f(x) = 1 / (1 + exp(−x))
-
The hyperbolic tangent activation function, also called the tanh function:
f(x) = (exp(x) − exp(−x)) / (exp(x) + exp(−x))
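For concreteness, the three functions can be written directly in Julia (an illustrative sketch; `mytanh` is only named to avoid shadowing Julia's built-in `tanh`, which it reproduces):

```julia
# The three activation functions compared in this study.
relu(x)    = max(zero(x), x)                          # piecewise linear
sigmoid(x) = 1 / (1 + exp(-x))                        # logistic
mytanh(x)  = (exp(x) - exp(-x)) / (exp(x) + exp(-x))  # hyperbolic tangent

# Evaluate each at a few points:
for x in (-2.0, 0.0, 2.0)
    println((x, relu(x), sigmoid(x), mytanh(x)))
end
```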
For every activation function, 80 different ANN architectures are tested, with the number of hidden layers ranging from 1 to 4 and, for every depth, the number of neurons per hidden layer ranging from 1 to 20. Figure 3 shows the general ANN structure followed in this paper. The numbers of hidden layers and of neurons per hidden layer used in this study are based on the architectures of the ANNs tested in previous studies, as the maximum number of layers tested in the previous studies is four and the maximum number of neurons employed is 20 neurons per hidden layer [3,16,25,28,31]. Additionally, the weights of the connections are randomly assigned before launching the training process. Then, the ANNs are trained for one thousand iterations. The dataset used in this study was divided into three sets: a training set with 40 samples (62.5%), a validation set with 12 samples (18.75%), and a testing set with 12 samples (18.75%). The training dataset is used for the ANN learning (or training) process, in which the weight and bias vectors are adjusted to minimize the differences between the outputs and the targets. The validation dataset is used to monitor the convergence of the ANN learning process, and it is often used to avoid overfitting so that the ANN model remains applicable to new inputs beyond the ones used in training or validating the model. The testing dataset is used to check the performance of the trained ANN once training is completed. As discussed in the introduction, the index properties and the grain size distribution affect the compaction parameters of soils, and the tests that determine the index properties have fairly easy and inexpensive procedures compared with the compaction tests. Thus, in this study, the Julia programming language is utilized to construct the different ANNs using the PL, LL, and grain size distribution as the inputs. The ANNs were trained to minimize the mean squared error using the gradient descent optimization algorithm.
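The study reports only that Julia was used, so the following grid-search sketch, written with the Flux.jl package, is an assumption about how the 240 candidate networks (3 activation functions × 4 depths × 20 widths) could be enumerated; `n_in = 10` is likewise an assumed input dimension:

```julia
using Flux

# Build one candidate network: `n_layers` hidden layers of `n_neurons`
# each with activation `act`, plus a linear output layer for [MDD, OMC].
function build_ann(n_in, n_layers, n_neurons, act)
    layers = Any[Dense(n_in => n_neurons, act)]
    for _ in 2:n_layers
        push!(layers, Dense(n_neurons => n_neurons, act))
    end
    push!(layers, Dense(n_neurons => 2))
    return Chain(layers...)
end

# Enumerate the 240-model grid studied in this paper.
for act in (Flux.relu, Flux.σ, tanh), n_layers in 1:4, n_neurons in 1:20
    model = build_ann(10, n_layers, n_neurons, act)
    # ... train with gradient descent on the MSE loss, with early
    # stopping on the validation set (see the sketch in Section 2.3).
end
```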

2.3. Early Stopping to Avoid Overfitting

Overfitting during the training process degrades the ability of the ANN to generalize and, in turn, results in untrustworthy performance when the ANN is tested on a new dataset. Thus, the main objective of the methods that avoid overfitting is to find the optimum solution in the parameter space according to a predefined criterion. The most common method followed in the literature to avoid overfitting is the early stopping technique [35,36,37,38]. In this technique, the validation set is used to define the stopping criterion at which the training process should be halted. In the simplest condition, the training process should be halted when the validation set error increases during training (while the training set error keeps decreasing). However, the stochastic nature of the training process might result in a decrease in the validation set error at a later point; in other words, the first overfitting point is not always the best point to halt the training process. Consequently, it is recommended to continue the training process for a number of iterations even after overfitting occurs, while monitoring the validation set error: if the validation set error keeps increasing, the training process should be halted; on the other hand, if the validation set error declines again, the training process should continue [39]. In this study, the early stopping technique was used to avoid overfitting, with a window of 100 iterations used to monitor the validation set error. Within these 100 iterations, if the validation set error declines again the training process proceeds; otherwise, the training process is halted. Figure 4 shows one example of the validation set and training set errors during 1000 iterations of the training process. The figure shows how the validation set error fluctuates during the training process and how overtraining the ANN leads to overfitting, as the validation set error increases significantly while the training set error keeps declining. Additionally, the application of the early stopping technique for one example is shown in Figure 5; the figure shows how the validation set error fluctuates up and down within a small number of iterations. In this case, the training process is halted when the validation set error increases and does not decrease again for 100 iterations.
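A minimal sketch of this stopping rule (illustrative code written for this article; `step!` and `val_error` stand for caller-supplied functions that run one training update and return the current validation-set error):

```julia
# Early stopping: keep training; if the validation error has not
# improved for `patience` consecutive iterations, halt.
function train_with_early_stopping!(step!, val_error; maxiter = 1000, patience = 100)
    best, since_best = Inf, 0
    for it in 1:maxiter
        step!()                      # one gradient-descent update
        e = val_error()              # current validation-set error
        if e < best
            best, since_best = e, 0  # validation error declined again
        else
            since_best += 1
            since_best >= patience && return it   # halt training
        end
    end
    return maxiter
end
```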

2.4. ANN Performance Evaluation

The selection of the optimum ANN hyperparameters should be based on evaluating the ability (accuracy) of every ANN to predict the output value, which is achieved through statistical indicators. The most common statistical indicator is the coefficient of determination (R²); its value ranges from 0 to 1, where a value of 1 indicates that the model perfectly fits the data, and the closer the value is to 1, the better the predictions of the model. The coefficient of determination is calculated as follows:
R² = 1 − [Σᵢ₌₁ⁿ (h_i − t_i)²] / [Σᵢ₌₁ⁿ (h_i − h̄)²]
where h_i = the prediction of the ANN, t_i = the true (measured) value, and h̄ = the average of the predictions of the ANN.
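As implemented for this article (a small illustrative sketch; note that the denominator uses the mean of the predictions, following the definition above):

```julia
using Statistics

# Coefficient of determination as defined above.
# h = vector of ANN predictions, t = vector of true values.
r_squared(h, t) = 1 - sum(abs2, h .- t) / sum(abs2, h .- mean(h))

# Example with illustrative numbers:
h = [2.10, 2.25, 2.30, 2.18]
t = [2.12, 2.20, 2.33, 2.15]
println(r_squared(h, t))
```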

3. Results and Analysis

The coefficient of determination for every ANN when applied to the testing set is shown in Table 2 and Table 3. In order to visualize the values easily, the cells are highlighted from red, which indicates the lowest R² value, to green, which indicates the highest R² value.

3.1. MDD

Table 2 shows the predicting capability (performance) of every ANN in estimating the MDD. The table illustrates that the ReLU activation function provides the lowest level of accuracy when compared with the other two activation functions. Additionally, there is a general pattern that can be observed across the three activation functions: the prediction capability of the ANNs deteriorates with the increase in the number of hidden layers and with the number of neurons per hidden layer. In general, the simpler the ANN, the better the performance. However, the ANN that offers the best MDD predictions utilizes the tanh activation function and consists of 4 hidden layers with 11 neurons per hidden layer. This ANN is able to predict the MDD with an R² value of 0.936. On the other hand, this ANN performs moderately in the prediction of the OMC, with an R² value of 0.72, as shown in Table 3.

3.2. OMC

Table 3 shows the predicting capability (performance) of every ANN in estimating the OMC. The table illustrates that the ReLU activation function provides the lowest level of accuracy when compared with the other two activation functions. The same general pattern can be observed across the three activation functions: the prediction capability of the ANNs deteriorates with the increase in the number of hidden layers and with the number of neurons per hidden layer, so the simpler the ANN, the better the performance. The ANN that offers the best OMC predictions utilizes the tanh activation function and consists of 2 hidden layers with 12 neurons per hidden layer. This ANN is able to predict the OMC with an R² value of 0.931. On the other hand, this ANN performs moderately in the prediction of the MDD, with an R² value of 0.762, as shown in Table 2.

3.3. Optimum ANN Architecture for the Predictions of Both the OMC and MDD

The previous two subsections focused on finding the optimum ANN for predicting one of the two compaction parameters; however, each of these two ANNs predicts one variable with a high level of accuracy and performs moderately for the other variable. The main objective of this study is to find the optimum ANN that is able to estimate both compaction parameters with low error, especially as the ANN is trained to estimate the two variables together. Thus, the objective of this subsection is to search for the optimum ANN that estimates the two compaction parameters with a high level of accuracy, instead of finding the ANN that optimizes the performance for only one output. As a result, a new R² value is calculated for every ANN as follows:
R²_new = (R²_OMC + R²_MDD) / 2
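For example, the one-hidden-layer, one-neuron tanh ANN scores R² = 0.903 for the MDD (Table 2) and R² = 0.928 for the OMC (Table 3), giving R²_new = (0.903 + 0.928)/2 = 0.9155, the value reported for it in Table 4.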
The updated R² values are shown in Table 4. The results show that the ReLU activation function provides the lowest level of accuracy when compared with the other two activation functions (sigmoid and tanh). Additionally, the performance of the ANNs deteriorates with the increase in the number of hidden layers and with the number of neurons per hidden layer. Moreover, the optimum ANN for predicting the OMC and MDD together utilizes the tanh activation function and consists of one hidden layer with one neuron in this hidden layer. This ANN can predict the MDD with an R² value of 0.903 and the OMC with an R² value of 0.928. Additionally, there are multiple ANNs that can achieve similar results; these ANNs are simple, consist of one or two neurons per hidden layer, and employ either the logistic or the tanh activation function. A comparison between the predictions of the ANN that produces the best OMC, the ANN that produces the best MDD, and the ANN that produces the best predictions for both outputs is shown in Table 5. Results show that the differences in the R² values between the optimum ANN for both predictions and the optimum ANN for each prediction individually are 3.3% for the MDD and 0.3% for the OMC; these differences are minor.

3.4. Comparing Previous Studies with the Proposed ANN

This subsection focuses on validating the proposed optimum ANN by comparing its performance with that of the ANNs previously proposed in the literature. The comparison is based on the performance of these ANNs on the testing set database as reported in every study. Table 6 summarizes the performance of every ANN; the final row shows the coefficient of determination of the optimum ANN proposed in this study. Comparing the R² values in the table shows that the ANN proposed in the current study outperforms the ANNs proposed in the literature, as the previous studies did not carry out a comprehensive search to reach the optimum ANN, but considered a limited set of ANNs with specific assumptions. On the other hand, in this study, a comprehensive search was conducted to choose the optimum activation function and the optimum architecture that achieve the best performance.

4. Multiple Linear Regression (MLR)

In order to validate the results of the ANN, multiple linear regression models were developed and compared with the results of the ANN. During the model development process, the backward elimination technique was used to exclude the insignificant variables: all independent variables are entered into the model, and variables are then deleted one at a time if they do not contribute to the regression equation. The inputs used for developing the MLR models are the same as the inputs used for developing the ANN models, namely the sieve analysis and Atterberg limits. Table 7 and Table 8 summarize the details of the models developed during the backward elimination process for the MDD and OMC, respectively. Table 9 and Table 10 show the details of the final models at the end of the backward elimination process.
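The study performed backward elimination with standard statistical procedures; the following is an illustrative Julia sketch of the technique written for this article, using a fixed t-statistic threshold of about 2 in place of exact p-values:

```julia
using LinearAlgebra

# Backward elimination for an MLR model.
# X: n×k matrix of candidate predictors (sieve passings, LL, PL, ...),
# y: n-vector of the target (MDD or OMC). Returns indices of kept columns.
function backward_elimination(X, y; tcrit = 2.0)
    keep = collect(1:size(X, 2))
    while length(keep) > 1
        A = hcat(ones(size(X, 1)), X[:, keep])   # design matrix with intercept
        b = A \ y                                # least-squares coefficients
        dof = size(A, 1) - size(A, 2)
        s2 = sum(abs2, y - A * b) / dof          # residual variance
        se = sqrt.(s2 .* diag(inv(A' * A)))      # coefficient standard errors
        tmin, i = findmin(abs.(b ./ se)[2:end])  # weakest predictor (skip intercept)
        tmin >= tcrit && break                   # all remaining are significant
        deleteat!(keep, i)                       # drop it and refit
    end
    return keep
end
```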
The previous tables show that the MLR technique has low R² values of 0.395 and 0.393 for the prediction of the MDD and OMC, respectively. Thus, comparing the coefficients of determination of the ANN and MLR techniques shows that the ANN outperforms the traditional regression analysis technique. Table 11 summarizes the comparison between the ANN and the MLR techniques in predicting the OMC and MDD.

5. Conclusions

The present study focused on developing ANN models for estimating the compaction parameters of the aggregate base course used in constructing roads in Egypt, using the aggregate gradation and Atterberg limits as the inputs to the ANNs. A total of 240 different ANNs with different architectures and hyperparameters were tested in order to select the ANN that offers the lowest error, based on the results of 64 aggregate base samples tested using the standard Proctor test. The results of this study can be summarized as follows:
-
The optimum structure and hyperparameters of the ANN change depending on the desired output, as shown in Table 12.
-
In general, the tanh activation function is the most efficient, as it outperforms the other two activation functions. Additionally, the simpler the ANN architecture, the better the predictions, as the performance of the ANNs deteriorates with the increase in the number of hidden layers or the number of neurons per hidden layer.
-
The optimum ANN proposed can be used for estimating the OMC and MDD of the aggregate base course in Egypt with high accuracy (R² = 0.928 for the OMC and R² = 0.903 for the MDD). Thus, this ANN can be used as an alternative to the standard Proctor test and, in this case, it can save significant time, material, and effort.
-
The results show that the proposed ANN outperforms the MLR models and offers highly accurate predictions.
While this study focuses on the estimation of the OMC and MDD of the aggregate base course, it is highly recommended to replicate the analysis for the subgrade layer. Additionally, since the specifications for the aggregate base course vary from country to country, it is highly recommended to replicate the same analysis for aggregate base samples collected from other countries; the present analysis can be used as a benchmark for the new analysis.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

ANN: Artificial Neural Network
OMC: Optimum Moisture Content
MDD: Maximum Dry Density
LL: Liquid Limit
PL: Plastic Limit
PI: Plasticity Index
MLR: Multiple Linear Regression
%P(i): Percentage Passing Sieve (i)

References

  1. Rakaraddi, P.G.; Gomarsi, V. Establishing Relationship between CBR with Different Soil Properties. Int. J. Res. Eng. Technol. 2015, 4, 182–188. [Google Scholar]
  2. Mousa, K.M.; Abdelwahab, H.T.; Hozayen, H.A. Models for estimating optimum asphalt content from aggregate gradation. Proc. Inst. Civ. Eng.-Constr. Mater. 2018, 174, 69–74. [Google Scholar] [CrossRef]
  3. Othman, K.M.M.; Abdelwahab, H. Prediction of the optimum asphalt content using artificial neural networks. Met. Mater. Eng. 2021, 27. [Google Scholar] [CrossRef]
  4. HMA Pavement Mix Type Selection Guide; Information Series 128; National Asphalt Pavement Association: Lanham, MD, USA; Federal Highway Administration: Washington, DC, USA, 2001.
  5. Sridharan, A.; Nagaraj, H.B. Plastic limit and compaction characteristics of finegrained soils. Proc. Inst. Civ. Eng.-Ground Improv. 2005, 9, 17–22. [Google Scholar] [CrossRef]
  6. Proctor, R. Fundamental Principles of Soil Compaction. Eng. News-Rec. 1933, 111, 245–248. [Google Scholar]
  7. Viji, V.K.; Lissy, K.F.; Sobha, C.; Benny, M.A. Predictions on compaction characteristics of fly ashes using regression analysis and artificial neural network analysis. Int. J. Geotech. Eng. 2013, 7, 282–291. [Google Scholar] [CrossRef]
  8. ASTM International. D698: Standard Test Methods for Laboratory Compaction Characteristics of Soil Using Standard Effort (12,400 ft-lbf/ft³ (600 kN-m/m³)); ASTM International: West Conshohocken, PA, USA, 2012. [Google Scholar]
  9. Zainal, A.K.E. Quick Estimation of Maximum Dry Unit Weight and Optimum Moisture Content from Compaction Curve Using Peak Functions. Appl. Res. J. 2016, 2, 472–480. [Google Scholar]
  10. Jumikis, A.R. Geology of Soils of the Newark (NJ) Metropolitan Area. J. Soil Mech. Found. ASCE 1946, 93, 71–95. [Google Scholar]
  11. Jumikis, A.R. Geology of Soils of the Newark (NJ) Metropolitan Area. J. Soil Mech. Found. Div. 1958, 84, 1–41. [Google Scholar] [CrossRef]
  12. Ring, G.W.; Sallberg, J.R.; Collins, W.H. Correlation Of Compaction and Classification Test Data. Highw. Res. Board Bull. 1962, 325, 55–75. [Google Scholar]
  13. Ramiah, B.K.; Viswanath, V.; Krishnamurthy, H.V. Interrelationship of compaction and index properties. In Proceedings of the 2nd South East Asian Conference on Soil Engineering, Singapore, 11–15 June 1970; Volume 587. [Google Scholar]
  14. Hammond, A.A. Evolution of One Point Method for Determining The Laboratory Maximum Dry Density. Proc. ICC 1980, 1, 47–50. [Google Scholar]
  15. Wang, M.C.; Huang, C.C. Soil Compaction and Permeability Prediction Models. J. Environ. Eng. 1984, 110, 1063–1083. [Google Scholar] [CrossRef]
  16. Sinha, S.K.; Wang, M.C. Artificial Neural Network Prediction Models for Soil Compaction and Permeability. Geotech. Geol. Eng. 2007, 26, 47–64. [Google Scholar] [CrossRef]
  17. Al-Khafaji, A.N. Estimation of soil compaction parameters by means of Atterberg limits. Q. J. Eng. Geol. Hydrogeol. 1993, 26, 359–368. [Google Scholar] [CrossRef]
  18. Blotz, L.R.; Benson, C.H.; Boutwell, G.P. Estimating Optimum Water Content and Maximum Dry Unit Weight for Compacted Clays. J. Geotech. Geoenviron. Eng. 1998, 124, 907–912. [Google Scholar] [CrossRef]
  19. Gurtug, Y.; Sridharan, A. Compaction Behaviour and Prediction of its Characteristics of Fine Grained Soils with Particular Reference to Compaction Energy. Soils Found. 2004, 44, 27–36. [Google Scholar] [CrossRef] [Green Version]
  20. Suits, L.D.; Sheahan, T.; Sridharan, A.; Sivapullaiah, P. Mini Compaction Test Apparatus for Fine Grained Soils. Geotech. Test. J. 2005, 28, 240–246. [Google Scholar] [CrossRef]
  21. Di Matteo, L.; Bigotti, F.; Ricco, R. Best-Fit Models to Estimate Modified Proctor Properties of Compacted Soil. J. Geotech. Geoenviron. Eng. 2009, 135, 992–996. [Google Scholar] [CrossRef]
  22. Günaydın, O. Estimation of soil compaction parameters by using statistical analyses and artificial neural networks. Environ. Earth Sci. 2008, 57, 203–215. [Google Scholar] [CrossRef]
  23. Bera, A.; Ghosh, A. Regression model for prediction of optimum moisture content and maximum dry unit weight of fine grained soil. Int. J. Geotech. Eng. 2011, 5, 297–305. [Google Scholar] [CrossRef]
  24. Farooq, K.; Khalid, U.; Mujtaba, H. Prediction of Compaction Characteristics of Fine-Grained Soils Using Consistency Limits. Arab. J. Sci. Eng. 2015, 41, 1319–1328. [Google Scholar] [CrossRef]
  25. Ardakani, A.; Kordnaeij, A. Soil compaction parameters prediction using GMDH-type neural network and genetic algorithm. Eur. J. Environ. Civ. Eng. 2017, 23, 449–462. [Google Scholar] [CrossRef]
  26. Gurtug, Y.; Sridharan, A.; Ikizler, S.B. Simplified Method to Predict Compaction Curves and Characteristics of Soils. Iran. J. Sci. Technol. Trans. Civ. Eng. 2018, 42, 207–216. [Google Scholar] [CrossRef]
  27. Hussain, A.; Atalar, C. Estimation of compaction characteristics of soils using Atterberg limits. IOP Conf. Series Mater. Sci. Eng. 2020, 800, 012024. [Google Scholar] [CrossRef]
  28. Özbeyaz, A.; Söylemez, M. Modeling compaction parameters using support vector and decision tree regression algorithms. Turk. J. Electr. Eng. Comput. Sci. 2020, 28, 3079–3093. [Google Scholar] [CrossRef]
  29. ECP (Egyptian Code Provisions). ECP(104/4): Egyptian Code for Urban and Rural Roads; Part (4): Road Material and Its Tests; Housing and Building National Research Center: Cairo, Egypt, 2008. [Google Scholar]
  30. British Standard Institution. BS1377 Methods of Test for Soils for Civil Engineering Purposes; British Standards Institution: London, UK, 1990. [Google Scholar]
  31. Othman, K.; Abdelwahab, H. Using Deep Neural Networks for the Prediction of the Optimum Asphalt Content and the Asphalt mix Properties. 2021; in preparation. [Google Scholar]
  32. Othman, K.; Abdelwahab, H. Prediction of the Soil Compaction Parameters Using Deep Neural Networks. Transp. Infrastruct. Geotechnol. 2021. [Google Scholar] [CrossRef]
  33. Haykin, S. Neural Networks, a Comprehensive Foundation; Prentice Hall: Hoboken, NJ, USA, 1994. [Google Scholar]
  34. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biol. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  35. Liu, Y.; Starzyk, J.; Zhu, Z. Optimized Approximation Algorithm in Neural Networks without Overfitting. IEEE Trans. Neural Netw. 2008, 19, 983–995. [Google Scholar] [CrossRef]
  36. Piotrowski, A.P.; Napiorkowski, J.J. A comparison of methods to avoid overfitting in neural networks training in the case of catchment runoff modelling. J. Hydrol. 2013, 476, 97–111. [Google Scholar] [CrossRef]
  37. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning (Adaptive Computation and Machine Learning Series); MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  38. Bishop, C.M. Pattern Recognition and Machine Learning (Information Science and Statistics); Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  39. Prechelt, L. Neural Networks: Tricks of the Trade; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1998; Volume 1524, pp. 53–67. [Google Scholar]
  40. Alavi, A.H.; Gandomi, A.H.; Mollahassani, A.; Heshmati, A.A.; Rashed, A. Modeling of maximum dry density and optimum moisture content of stabilized soil using artificial neural networks. J. Plant Nutr. Soil Sci. 2010, 173, 368–379. [Google Scholar] [CrossRef]
  41. Kurnaz, T.F.; Kaya, Y. The performance comparison of the soft computing methods on the prediction of soil compaction parameters. Arab. J. Geosci. 2020, 13, 159. [Google Scholar] [CrossRef]
Figure 1. OMC, MDD general curve.
Figure 2. Number of samples for different aggregate gradations.
Figure 3. The general ANN structure followed.
Figure 4. The training set and validation set errors during the training process for an ANN with 3 hidden layers and 3 neurons per hidden layer that employs the tanh activation function.
Figure 5. The validation and training set errors during the training process with the application of the early stopping technique, for an ANN with 3 hidden layers and 3 neurons per hidden layer that employs the tanh activation function.
Table 1. Aggregate base gradations used in Egypt [29] (percentage passing, Min–Max, for grades A–F; "-" indicates no specified limit).

Sieve Size | A | B | C | D | E | F
2 in | 100 | 100 | 100 | 100 | 100 | 100
1.5 in | 70–100 | - | 100 | 100 | - | -
1 in | 55–85 | 75–95 | 70–100 | 100 | 100 | 100
3/4 in | 50–80 | - | 60–90 | - | 70–100 | -
3/8 in | 30–65 | 40–70 | 40–75 | 45–75 | 50–85 | 50–80
No. 4 | 25–55 | 30–60 | 30–60 | 30–60 | 35–65 | 35–65
No. 10 | 15–40 | 20–50 | 20–45 | 20–50 | 25–50 | 25–50
No. 40 | 8–20 | 10–30 | 15–30 | 10–30 | 15–30 | 15–30
No. 200 | 2–8 | 5–15 | 5–20 | 5–15 | 5–15 | 5–15
Table 2. R² values of the testing set for every ANN in the prediction of the MDD (rows: number of neurons per hidden layer; columns: activation function and number of hidden layers).

Neurons | ReLU 1 | ReLU 2 | ReLU 3 | ReLU 4 | Sigmoid 1 | Sigmoid 2 | Sigmoid 3 | Sigmoid 4 | Tanh 1 | Tanh 2 | Tanh 3 | Tanh 4
1 | 0.807 | 0.768 | 0.789 | 0.773 | 0.89 | 0.894 | 0.727 | 0.777 | 0.903 | 0.659 | 0.614 | 0.892
2 | 0.698 | 0.798 | 0.771 | 0.806 | 0.832 | 0.867 | 0.796 | 0.779 | 0.877 | 0.675 | 0.715 | 0.648
3 | 0.75 | 0.759 | 0.752 | 0.729 | 0.843 | 0.809 | 0.687 | 0.744 | 0.708 | 0.792 | 0.559 | 0.844
4 | 0.814 | 0.718 | 0.737 | 0.711 | 0.863 | 0.876 | 0.714 | 0.793 | 0.665 | 0.668 | 0.748 | 0.48
5 | 0.677 | 0.777 | 0.816 | 0.793 | 0.79 | 0.649 | 0.829 | 0.747 | 0.652 | 0.882 | 0.471 | 0.754
6 | 0.696 | 0.817 | 0.772 | 0.758 | 0.75 | 0.608 | 0.752 | 0.548 | 0.585 | 0.784 | 0.756 | 0.683
7 | 0.66 | 0.656 | 0.777 | 0.8 | 0.768 | 0.741 | 0.756 | 0.761 | 0.778 | 0.709 | 0.781 | 0.42
8 | 0.766 | 0.791 | 0.775 | 0.757 | 0.785 | 0.835 | 0.711 | 0.743 | 0.636 | 0.772 | 0.868 | 0.478
9 | 0.831 | 0.761 | 0.697 | 0.683 | 0.863 | 0.86 | 0.805 | 0.675 | 0.473 | 0.544 | 0.294 | 0.773
10 | 0.644 | 0.71 | 0.73 | 0.652 | 0.739 | 0.852 | 0.877 | 0.648 | 0.841 | 0.779 | 0.489 | 0.498
11 | 0.651 | 0.715 | 0.742 | 0.675 | 0.811 | 0.684 | 0.843 | 0.725 | 0.621 | 0.599 | 0.455 | 0.936
12 | 0.728 | 0.648 | 0.657 | 0.63 | 0.724 | 0.631 | 0.721 | 0.76 | 0.451 | 0.762 | 0.664 | 0.796
13 | 0.692 | 0.653 | 0.657 | 0.657 | 0.707 | 0.765 | 0.549 | 0.604 | 0.538 | 0.756 | 0.797 | 0.392
14 | 0.773 | 0.718 | 0.64 | 0.653 | 0.632 | 0.498 | 0.739 | 0.644 | 0.667 | 0.624 | 0.885 | 0.493
15 | 0.73 | 0.663 | 0.649 | 0.644 | 0.748 | 0.529 | 0.81 | 0.69 | 0.51 | 0.656 | 0.711 | 0.349
16 | 0.754 | 0.658 | 0.653 | 0.657 | 0.765 | 0.827 | 0.532 | 0.651 | 0.447 | 0.904 | 0.52 | 0.228
17 | 0.656 | 0.653 | 0.727 | 0.75 | 0.799 | 0.768 | 0.742 | 0.651 | 0.587 | 0.574 | 0.749 | 0.292
18 | 0.642 | 0.655 | 0.553 | 0.685 | 0.621 | 0.411 | 0.572 | 0.669 | 0.497 | 0.766 | 0.315 | 0.596
19 | 0.656 | 0.614 | 0.675 | 0.657 | 0.654 | 0.746 | 0.752 | 0.563 | 0.694 | 0.491 | 0.782 | 0.231
20 | 0.644 | 0.635 | 0.638 | 0.649 | 0.894 | 0.629 | 0.579 | 0.668 | 0.532 | 0.712 | 0.687 | 0.318
Table 3. R² values of the testing set for every ANN in the prediction of the OMC (rows: number of neurons per hidden layer; columns: activation function and number of hidden layers).

Neurons | ReLU 1 | ReLU 2 | ReLU 3 | ReLU 4 | Sigmoid 1 | Sigmoid 2 | Sigmoid 3 | Sigmoid 4 | Tanh 1 | Tanh 2 | Tanh 3 | Tanh 4
1 | 0.79 | 0.766 | 0.781 | 0.783 | 0.848 | 0.92 | 0.814 | 0.832 | 0.928 | 0.821 | 0.772 | 0.93
2 | 0.724 | 0.812 | 0.75 | 0.8 | 0.878 | 0.929 | 0.817 | 0.836 | 0.794 | 0.69 | 0.575 | 0.458
3 | 0.767 | 0.803 | 0.784 | 0.707 | 0.826 | 0.846 | 0.771 | 0.802 | 0.814 | 0.777 | 0.655 | 0.887
4 | 0.756 | 0.668 | 0.671 | 0.725 | 0.798 | 0.846 | 0.717 | 0.785 | 0.625 | 0.52 | 0.818 | 0.538
5 | 0.785 | 0.867 | 0.754 | 0.729 | 0.874 | 0.63 | 0.896 | 0.687 | 0.547 | 0.558 | 0.486 | 0.683
6 | 0.793 | 0.754 | 0.675 | 0.755 | 0.75 | 0.61 | 0.82 | 0.527 | 0.662 | 0.568 | 0.83 | 0.635
7 | 0.773 | 0.717 | 0.711 | 0.65 | 0.638 | 0.642 | 0.771 | 0.774 | 0.77 | 0.617 | 0.452 | 0.772
8 | 0.796 | 0.794 | 0.769 | 0.734 | 0.731 | 0.714 | 0.684 | 0.752 | 0.636 | 0.775 | 0.273 | 0.335
9 | 0.704 | 0.644 | 0.779 | 0.679 | 0.862 | 0.792 | 0.699 | 0.627 | 0.685 | 0.573 | 0.744 | 0.591
10 | 0.87 | 0.579 | 0.604 | 0.613 | 0.683 | 0.535 | 0.732 | 0.724 | 0.739 | 0.693 | 0.364 | 0.215
11 | 0.629 | 0.672 | 0.669 | 0.603 | 0.558 | 0.753 | 0.676 | 0.571 | 0.595 | 0.478 | 0.37 | 0.72
12 | 0.662 | 0.596 | 0.604 | 0.68 | 0.726 | 0.498 | 0.719 | 0.728 | 0.426 | 0.931 | 0.291 | 0.478
13 | 0.615 | 0.6 | 0.611 | 0.733 | 0.705 | 0.742 | 0.801 | 0.66 | 0.312 | 0.795 | 0.469 | 0.563
14 | 0.852 | 0.679 | 0.654 | 0.598 | 0.683 | 0.523 | 0.732 | 0.801 | 0.527 | 0.845 | 0.636 | 0.28
15 | 0.644 | 0.698 | 0.608 | 0.591 | 0.566 | 0.726 | 0.684 | 0.703 | 0.533 | 0.573 | 0.783 | 0.223
16 | 0.825 | 0.634 | 0.585 | 0.663 | 0.79 | 0.714 | 0.681 | 0.625 | 0.629 | 0.357 | 0.263 | 0.28
17 | 0.629 | 0.622 | 0.556 | 0.577 | 0.843 | 0.546 | 0.812 | 0.428 | 0.464 | 0.462 | 0.59 | 0.308
18 | 0.612 | 0.617 | 0.76 | 0.665 | 0.696 | 0.734 | 0.533 | 0.621 | 0.305 | 0.613 | 0.23 | 0.435
19 | 0.615 | 0.599 | 0.591 | 0.595 | 0.606 | 0.595 | 0.79 | 0.403 | 0.507 | 0.469 | 0.424 | 0.15
20 | 0.644 | 0.633 | 0.622 | 0.617 | 0.76 | 0.467 | 0.555 | 0.719 | 0.545 | 0.525 | 0.608 | 0.744
Table 4. R² values of the testing set for every ANN in the prediction of both the OMC and MDD (the average of the corresponding R² values in Tables 2 and 3).

Neurons | ReLU 1 | ReLU 2 | ReLU 3 | ReLU 4 | Sigmoid 1 | Sigmoid 2 | Sigmoid 3 | Sigmoid 4 | Tanh 1 | Tanh 2 | Tanh 3 | Tanh 4
1 | 0.7985 | 0.767 | 0.785 | 0.778 | 0.869 | 0.907 | 0.7705 | 0.8045 | 0.9155 | 0.74 | 0.693 | 0.911
2 | 0.711 | 0.805 | 0.7605 | 0.803 | 0.855 | 0.898 | 0.8065 | 0.8075 | 0.8355 | 0.6825 | 0.645 | 0.553
3 | 0.7585 | 0.781 | 0.768 | 0.718 | 0.8345 | 0.8275 | 0.729 | 0.773 | 0.761 | 0.7845 | 0.607 | 0.8655
4 | 0.785 | 0.693 | 0.704 | 0.718 | 0.8305 | 0.861 | 0.7155 | 0.789 | 0.645 | 0.594 | 0.783 | 0.509
5 | 0.731 | 0.822 | 0.785 | 0.761 | 0.832 | 0.6395 | 0.8625 | 0.717 | 0.5995 | 0.72 | 0.4785 | 0.7185
6 | 0.7445 | 0.7855 | 0.7235 | 0.7565 | 0.75 | 0.609 | 0.786 | 0.5375 | 0.6235 | 0.676 | 0.793 | 0.659
7 | 0.7165 | 0.6865 | 0.744 | 0.725 | 0.703 | 0.6915 | 0.7635 | 0.7675 | 0.774 | 0.663 | 0.6165 | 0.596
8 | 0.781 | 0.7925 | 0.772 | 0.7455 | 0.758 | 0.7745 | 0.6975 | 0.7475 | 0.636 | 0.7735 | 0.5705 | 0.4065
9 | 0.7675 | 0.7025 | 0.738 | 0.681 | 0.8625 | 0.826 | 0.752 | 0.651 | 0.579 | 0.5585 | 0.519 | 0.682
10 | 0.757 | 0.6445 | 0.667 | 0.6325 | 0.711 | 0.6935 | 0.8045 | 0.686 | 0.79 | 0.736 | 0.4265 | 0.3565
11 | 0.64 | 0.6935 | 0.7055 | 0.639 | 0.6845 | 0.7185 | 0.7595 | 0.648 | 0.608 | 0.5385 | 0.4125 | 0.828
12 | 0.695 | 0.622 | 0.6305 | 0.655 | 0.725 | 0.5645 | 0.72 | 0.744 | 0.4385 | 0.8465 | 0.4775 | 0.637
13 | 0.6535 | 0.6265 | 0.634 | 0.695 | 0.706 | 0.7535 | 0.675 | 0.632 | 0.425 | 0.7755 | 0.633 | 0.4775
14 | 0.8125 | 0.6985 | 0.647 | 0.6255 | 0.6575 | 0.5105 | 0.7355 | 0.7225 | 0.597 | 0.7345 | 0.7605 | 0.3865
15 | 0.687 | 0.6805 | 0.6285 | 0.6175 | 0.657 | 0.6275 | 0.747 | 0.6965 | 0.5215 | 0.6145 | 0.747 | 0.286
16 | 0.7895 | 0.646 | 0.619 | 0.66 | 0.7775 | 0.7705 | 0.6065 | 0.638 | 0.538 | 0.6305 | 0.3915 | 0.254
17 | 0.6425 | 0.6375 | 0.6415 | 0.6635 | 0.821 | 0.657 | 0.777 | 0.5395 | 0.5255 | 0.518 | 0.6695 | 0.3
18 | 0.627 | 0.636 | 0.6565 | 0.675 | 0.6585 | 0.5725 | 0.5525 | 0.645 | 0.401 | 0.6895 | 0.2725 | 0.5155
19 | 0.6355 | 0.6065 | 0.633 | 0.626 | 0.63 | 0.6705 | 0.771 | 0.483 | 0.6005 | 0.48 | 0.603 | 0.1905
20 | 0.644 | 0.634 | 0.63 | 0.633 | 0.827 | 0.548 | 0.567 | 0.6935 | 0.5385 | 0.6185 | 0.6475 | 0.531
Table 5. Comparison between the optimum ANN for every output and the optimum ANN for the two outputs (HL = hidden layers; N/L = neurons per layer).

Output | Optimum ANN for this output (HL / N/L / Activation) | R² (MDD) | R² (OMC) | Optimum ANN for both (HL / N/L / Activation) | R² | R² Difference
MDD | 4 / 11 / Tanh | 0.936 | 0.72 | 1 / 1 / Tanh | 0.903 | 0.033
OMC | 2 / 12 / Tanh | 0.762 | 0.931 | 1 / 1 / Tanh | 0.928 | 0.003
Table 6. The performance of the ANNs proposed in the literature and the ANN proposed in this study.

Study | R² for the OMC | R² for the MDD
Günaydın (2009) [22] | 0.837 | 0.793
Alavi et al. (2010) [40] | 0.89 | 0.91
Kurnaz and Kaya (2020) [41] | 0.85 | 0.86
Özbeyaz and Söylemez (2020) [28] | 0.83 | 0.71
Proposed ANN | 0.928 | 0.903
Table 7. Details of the MLR models developed for the MDD during the backward elimination process.

Model | Excluded Variables | R | R² | Adjusted R² | Std. Error of the Estimate
1 | none | 0.646 | 0.418 | 0.281 | 0.070509
2 | %P(1.5 in) | 0.646 | 0.418 | 0.295 | 0.069828
3 | %P(1.5 in), %P(3/8 in) | 0.646 | 0.417 | 0.307 | 0.069194
4 | %P(1.5 in), %P(3/8 in), %P(2 in) | 0.646 | 0.417 | 0.32 | 0.068589
5 | %P(1.5 in), %P(3/8 in), %P(2 in), LL | 0.643 | 0.414 | 0.329 | 0.068118
6 | %P(1.5 in), %P(3/8 in), %P(2 in), LL, %P(0.5 in) | 0.635 | 0.403 | 0.329 | 0.068119
7 | %P(1.5 in), %P(3/8 in), %P(2 in), LL, %P(0.5 in), %P(3/4 in) | 0.630 | 0.396 | 0.333 | 0.067909
8 | %P(1.5 in), %P(3/8 in), %P(2 in), LL, %P(0.5 in), %P(1 in) | 0.628 | 0.395 | 0.343 | 0.067409
Where %P(i) = percentage of the passing from sieve (i).
Table 8. Details of the MLR models developed for the OMC during the backward elimination process.

Model | Excluded Variables | R | R² | Adjusted R² | Std. Error of the Estimate
1 | none | 0.673 | 0.453 | 0.325 | 1.4857
2 | %P(#4) | 0.673 | 0.453 | 0.338 | 1.4714
3 | %P(#4), %P(0.5 in) | 0.673 | 0.452 | 0.349 | 1.4586
4 | %P(#4), %P(0.5 in), %P(2 in) | 0.672 | 0.451 | 0.36 | 1.4467
5 | %P(#4), %P(0.5 in), %P(2 in), %P(1.5 in) | 0.671 | 0.45 | 0.37 | 1.4349
6 | %P(#4), %P(0.5 in), %P(2 in), %P(1.5 in), LL | 0.670 | 0.448 | 0.379 | 1.4244
7 | %P(#4), %P(0.5 in), %P(2 in), %P(1.5 in), LL, %P(3/8 in) | 0.658 | 0.433 | 0.373 | 1.4313
8 | %P(#4), %P(0.5 in), %P(2 in), %P(1.5 in), LL, %P(3/8 in), %P(#10) | 0.650 | 0.423 | 0.373 | 1.4312
9 | %P(#4), %P(0.5 in), %P(2 in), %P(1.5 in), LL, %P(3/8 in), %P(#10), %P(#40) | 0.644 | 0.415 | 0.375 | 1.4288
10 | %P(#4), %P(0.5 in), %P(2 in), %P(1.5 in), LL, %P(3/8 in), %P(#10), %P(#40), %P(3/4 in) | 0.638 | 0.407 | 0.377 | 1.4271
11 | %P(#4), %P(0.5 in), %P(2 in), %P(1.5 in), LL, %P(3/8 in), %P(#10), %P(#40), %P(3/4 in), %P(1 in) | 0.627 | 0.393 | 0.374 | 1.4311
Where %P(i) = percentage of the passing from sieve (i).
Table 9. MDD prediction model (Model 8).

Variable | B (Unstandardized) | Std. Error | Beta (Standardized) | t | Sig.
(Constant) | 2.289 | 0.069 | - | 32.942 | 0
Passing Sieve No. 4 | 0.009 | 0.004 | 0.649 | 2.176 | 0.034
Passing Sieve No. 10 | −0.015 | 0.007 | −0.984 | −2.114 | 0.039
Passing Sieve No. 40 | 0.01 | 0.006 | 0.664 | 1.852 | 0.045
Passing Sieve No. 200 | −0.015 | 0.003 | −0.771 | −4.252 | 0
Plastic Limit | −0.005 | 0.003 | −0.191 | −1.815 | 0.048
Table 10. OMC prediction model (Model 11).

Variable | B (Unstandardized) | Std. Error | Beta (Standardized) | t | Sig.
(Constant) | 2.957 | 1.092 | - | 2.708 | 0.009
Passing Sieve No. 200 | 0.239 | 0.041 | 0.577 | 5.763 | 0
Plastic Limit | 0.105 | 0.053 | 0.198 | 1.974 | 0.05
Table 11. R² values of the different ANNs proposed and the MLR technique.

Model | R² (MDD) | R² (OMC)
MLR | 0.395 | 0.393
Optimum ANN for the MDD | 0.936 | 0.72
Optimum ANN for the OMC | 0.762 | 0.931
Optimum ANN for both OMC and MDD | 0.903 | 0.928
Table 12. The optimum ANN for every output.

Output | Hidden Layers | Neurons/Layer | Activation Function
MDD | 4 | 11 | Tanh
OMC | 2 | 12 | Tanh
Both (OMC and MDD) | 1 | 1 | Tanh