# Soil Liquefaction Assessment Using Soft Computing Approaches Based on Capacity Energy Concept


## Abstract


## 1. Introduction

## 2. Machine Learning Methodology

#### 2.1. Ridge Regression Algorithm

When the input variables are highly correlated, the matrix X^{T}X is singular or nearly singular. The eigenvalues of the matrix are small, which results in very large diagonal elements in the inverse of X^{T}X and degrades the predictability of the model: a tiny variation in the input X in Equation (2) will lead to great changes in the output Y.

Ridge regression adds a penalty term λI to X^{T}X. This has the effect of increasing the eigenvalues of the matrix, converting the singular matrix into a non-singular matrix to some extent, and improving the stability of the parameter estimation. The regression coefficient β is solved as:

$$\widehat{\beta} = {\left( X^{T}X + \lambda I \right)}^{-1}X^{T}Y$$

Ridge regression is thus an L_{2}-norm regularized linear model.
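The closed-form ridge estimate β = (X^{T}X + λI)^{−1}X^{T}Y can be sketched in a few lines of NumPy. The nearly collinear toy design matrix and the penalty value λ = 1.0 below are illustrative assumptions, not data from the paper; they only demonstrate how the λI term stabilizes the solve when X^{T}X is close to singular.

```python
import numpy as np

def ridge_coefficients(X, Y, lam):
    """Solve beta = (X^T X + lam*I)^(-1) X^T Y, the ridge estimate."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)

# Two nearly identical columns make X^T X close to singular,
# the situation ridge regression is designed to handle.
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
X = np.column_stack([x1, x1 + 1e-6 * rng.normal(size=50)])
Y = X @ np.array([1.0, 2.0]) + 0.01 * rng.normal(size=50)

# Ridge splits the shared effect stably between the collinear columns;
# the plain OLS inverse would instead amplify the tiny perturbation.
beta_ridge = ridge_coefficients(X, Y, lam=1.0)
```

Because the two columns are almost identical, ridge cannot recover the individual coefficients (1.0 and 2.0), but their sum — the only well-identified quantity — is estimated reliably.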

#### 2.2. Lasso and LassoCV Algorithm

The lasso solves the following L_{1}-regularized optimization problem:

$$\widehat{\beta} = \underset{\beta}{\arg\min}\left\{ \frac{1}{2n}{\left\| Y - X\beta \right\|}_{2}^{2} + \lambda{\left\| \beta \right\|}_{1} \right\}$$

The L_{1} method can find suitable models with sparse coefficients [49,50,51]. The work of Meinshausen and Bühlmann [52] gave the conditions for the stability of variable selection with the lasso.

The LassoCV algorithm selects the optimal λ by n-fold cross-validation:

1. Split the data into n folds and define a set of candidate values of λ.
2. For the i^{th} fold, use the other n−1 folds as a training set to fit each candidate model.
3. Use the i^{th} fold as a testing set to evaluate the performance index of each candidate model fitted in step 2.
4. Repeat steps 2–3 over all folds and select the λ with the best average performance.

The lasso is an L_{1}-norm based sparse linear model, and input importance can be evaluated from its coefficients or using the approach known as analysis of variance (ANOVA) decomposition.
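The cross-validation loop described above is what scikit-learn's `LassoCV` automates: it fits the lasso over a grid of λ values (called `alpha` in scikit-learn) and keeps the one with the lowest average validation error. A minimal sketch on synthetic data — the feature count, coefficients, and noise level are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))
# Only the first three features carry signal; lasso should shrink the rest to ~0.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] + 0.1 * rng.normal(size=200)

# 5-fold cross-validation over an automatic grid of regularization strengths.
model = LassoCV(cv=5).fit(X, y)
print(model.alpha_)   # the lambda chosen by cross-validation
print(model.coef_)    # sparse coefficient vector
```

The zeroed coefficients illustrate the variable-selection behavior the paper exploits: in the liquefaction database, FC was dropped in exactly this way by receiving a zero regression coefficient.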

#### 2.3. Random Forest Regression Algorithm

At each node ν_{n}, samples are passed to the right node (ν_{R}) or the left node (ν_{L}) after comparison with some criterion or threshold value. This partitioning is repeated until a terminal node is reached and a classification label is obtained (classes A or B in this case). For classification problems, the ensemble prediction is obtained as a combination of the results of the individual trees by the majority voting rule [56].

Each tree is grown on a bootstrap sample drawn with replacement from the n observations of the training set; therefore, the probability of each observation being picked in a single draw is 1/n.
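For the regression task used in this paper, the trees' outputs are averaged rather than combined by majority voting. A minimal sketch with scikit-learn's `RandomForestRegressor` (bootstrap sampling over the n observations is its default behavior; the synthetic data below are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 3))
# Feature 0 carries most of the signal; feature 2 is pure noise.
y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=300)

# Each tree is fit on a bootstrap sample; the ensemble prediction is
# the average of the individual tree predictions.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(rf.score(X, y))              # training-set R^2
print(rf.feature_importances_)     # impurity-based relative importance
```

The `feature_importances_` vector is the same kind of relative importance ranking reported for the RF model in Section 3.4.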

#### 2.4. XGBoost Algorithm

The prediction of the ensemble is the sum of the scores of K regression trees:

$${\widehat{\mathrm{y}}}_{i} = \sum_{k = 1}^{K}f_{k}\left( \mathbf{x}_{i} \right),\quad f_{k} \in \mathbf{F}$$

where f_{k} denotes the k^{th} tree, **x**_{i} is the input variable corresponding to sample i, ${\widehat{\mathrm{y}}}_{i}$ corresponds to the predicted score from the ensemble, and **F** is the space of regression trees.

The objective function combines a training loss with a regularization term, where Ω(f_{k}) is the regularization function that penalizes the complexity of the model to prevent overfitting:

$$Obj = \sum_{i}l\left( y_{i},{\widehat{\mathrm{y}}}_{i} \right) + \sum_{k}\Omega\left( f_{k} \right),\quad\Omega\left( f \right) = \gamma T + \frac{1}{2}\lambda{\left\| w \right\|}^{2}$$

in which T is the number of leaves and w is the vector of leaf weights. The trees are built additively, so the objective for training the t^{th} tree can be expressed as:

$$Obj^{(t)} = \sum_{i}l\left( y_{i},{\widehat{\mathrm{y}}}_{i}^{(t - 1)} + f_{t}\left( \mathbf{x}_{i} \right) \right) + \Omega\left( f_{t} \right)$$

where ${\widehat{\mathrm{y}}}_{i}^{(t - 1)}$ is the prediction of the i^{th} instance at the (t−1)^{th} iteration. Therefore, using a second-order Taylor expansion of the loss, the objective function can be expressed as:

$$Obj^{(t)} \approx \sum_{i}\left\lbrack g_{i}f_{t}\left( \mathbf{x}_{i} \right) + \frac{1}{2}h_{i}f_{t}^{2}\left( \mathbf{x}_{i} \right) \right\rbrack + \Omega\left( f_{t} \right)$$

with g_{i} and h_{i} the first- and second-order derivatives of the loss with respect to the previous prediction. The optimal t^{th} tree concerning the model parameters and predictions can be determined by optimizing this objective. In the same way, the successor trees are built until the whole XGBoost model is constructed and the predefined stopping criteria are satisfied [58]. The input feature importance of an XGBoost model can be evaluated through the gain criterion: the gain collects each feature's contribution from every tree in the model, and a feature with a higher value makes a greater relative contribution and is more important.
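The additive scheme described above — each new tree fit to reduce the objective of the current ensemble — can be illustrated with scikit-learn's `GradientBoostingRegressor`, used here as a stand-in because it follows the same boosting principle (XGBoost adds the explicit Ω regularizer and the second-order terms). The synthetic data and hyperparameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 4))
# Feature 0 dominates the target; features 2 and 3 are noise.
y = 3.0 * X[:, 0] ** 2 + X[:, 1] + 0.05 * rng.normal(size=300)

# n_estimators plays the same role as in the paper: the number of trees
# added sequentially, each correcting the previous ensemble's residuals.
gbr = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1,
                                max_depth=3, random_state=0).fit(X, y)
print(gbr.score(X, y))             # training-set R^2
print(gbr.feature_importances_)    # relative importance ranking
```

As with XGBoost's gain criterion, the importance values accumulate each feature's contribution over all trees, which is how the rankings in Section 3.5 are produced.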

#### 2.5. MARS Methodology

Let X = (X_{1}, …, X_{P}) be a matrix of P input variables. In the case of a continuous response, the MARS model would be:

$$f\left( X \right) = \beta_{0} + \sum_{m = 1}^{M}\beta_{m}\lambda_{m}\left( X \right)$$

where λ_{m}(X) is a basis function. It can be a spline function, or the product of two or more spline functions already contained in the model. The coefficients β are constants estimated using the least-squares method.
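The spline basis functions MARS uses are hinge functions of the form max(0, x − t) and max(0, t − x), the same building blocks that appear in the BF expressions tabulated later in the paper. A NumPy sketch of a toy one-variable MARS-style model — the knot t = 0.5 and the coefficients are illustrative assumptions, not fitted values:

```python
import numpy as np

def hinge(x, t, direction=+1):
    """MARS basis function: max(0, x - t) if direction=+1, else max(0, t - x)."""
    return np.maximum(0.0, direction * (x - t))

# Toy model: f(x) = 1 + 2*max(0, x - 0.5) - 3*max(0, 0.5 - x)
# i.e. a piecewise-linear function with a kink at the knot t = 0.5.
x = np.linspace(0.0, 1.0, 5)
f = 1.0 + 2.0 * hinge(x, 0.5, +1) - 3.0 * hinge(x, 0.5, -1)
print(f)  # -> [-0.5, 0.25, 1.0, 1.5, 2.0]
```

Products of such hinges (e.g. max(0, x₁ − t₁) × max(0, t₂ − x₂)) give the interaction basis functions, which is exactly the structure of terms such as BF8 in the fitted model.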

#### 2.6. Performance Measures

The coefficient of determination R^{2} measures the deviation of a group of data and characterizes the goodness of the regression fit [67]:

$$R^{2} = 1 - \frac{\sum_{i}{\left( y_{i} - {\widehat{y}}_{i} \right)}^{2}}{\sum_{i}{\left( y_{i} - \bar{y} \right)}^{2}}$$
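The three performance measures used throughout the paper — RMSE, MAE, and R^{2} — follow their standard definitions and can be computed directly in NumPy. The toy arrays below are illustrative:

```python
import numpy as np

def rmse(y, yhat):
    """Root mean square error."""
    return np.sqrt(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    """Mean absolute error."""
    return np.mean(np.abs(y - yhat))

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([3.0, 2.0, 4.0, 5.0, 6.0])
yhat = np.array([2.5, 2.0, 4.5, 5.0, 6.0])
print(rmse(y, yhat), mae(y, yhat), r2(y, yhat))  # -> 0.316... 0.2 0.95
```

Lower RMSE/MAE and an R^{2} closer to 1 indicate a better fit, which is the sense in which the models are compared in Sections 3 and 4.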

## 3. Models for Soil Liquefaction Assessment Based on Capacity Energy Concept

#### 3.1. The Database

#### 3.2. Ridge Modeling Results

For the optimal Ridge model, the RMSE, MAE, and R^{2} scores were 0.271, 0.221, and 0.582 for the training set, with corresponding scores of 0.258, 0.201, and 0.382 for the testing set. The results show that this model could not provide satisfactory predictions, especially for the testing set.

#### 3.3. Lasso and LassoCV Analysis

The RMSE, MAE, and R^{2} curves became constant when λ was smaller than 0.01, and the LassoCV algorithm automatically picked 0.001053 as the optimal value of λ. The true values and the LassoCV-predicted values of Log(W) are compared in Figure 14. Most of the estimations of Log(W) were within ±10% of the target values, but quite a few data points were still located outside the limit lines. The relative errors for the training and testing patterns are plotted in Figure 15; some relative errors were above 20%.

For the optimal LassoCV model, the RMSE, MAE, and R^{2} scores were 0.275, 0.225, and 0.578 for the training set, with corresponding scores of 0.255, 0.201, and 0.581 for the testing set. The results show that this model still could not produce reasonable predictions. Its RMSE and MAE were almost the same as those of the Ridge model, although its testing-set R^{2} score was 51.3% higher than that of the Ridge model. This again proves the insufficiency of this linear-regression-type algorithm for modeling the complex problem of soil liquefaction.

Among the influential variables identified by the coefficient magnitudes were D_{50} and ${{\sigma}^{\prime}}_{mean}$. FC was automatically removed by the LassoCV algorithm, receiving a zero regression coefficient, and Cu had a negative importance index of −2.2%.

#### 3.4. RF Modeling Result

The hyperparameter n_{estimators} determines the number of decision trees in the RF model. Figure 17 and Figure 18 give the RMSE, MAE, and R^{2} change curves of the RF model with the parameter n_{estimators}. It can be seen that when n_{estimators} was greater than 100, the curves of the performance measures showed only minor fluctuations.

The R^{2} score reaches its maximum value when min_samples_split equals 2 and min_samples_leaf equals 1, as shown in Figure 19.

For the optimal RF model, the RMSE, MAE, and R^{2} scores were 0.075, 0.052, and 0.97 for the training set, with corresponding scores of 0.175, 0.124, and 0.82 for the testing set. The lower RMSE/MAE and larger R^{2} scores indicate a higher fitting performance than the Ridge and Lasso models.

The RF relative importance analysis identified D_{50} and FC among the most influential input variables.

#### 3.5. XGBoost Modeling Result

The key hyperparameter is n_{estimators}, which controls the number of decision trees. Figure 23 and Figure 24 give the RMSE, MAE, and R^{2} change curves of the XGBoost model with the parameter n_{estimators}. It can be seen that for the training set, as n_{estimators} increased, RMSE and MAE decreased and the R^{2} score increased. However, for the testing set, when n_{estimators} was greater than 200, the curves of the performance measures showed only minor fluctuations. The optimal n_{estimators} was chosen as 352.

For the optimal XGBoost model, the RMSE, MAE, and R^{2} scores were 0.055, 0.050, and 0.977 for the training set, with corresponding scores of 0.164, 0.132, and 0.832 for the testing set. The lower RMSE/MAE and larger R^{2} scores indicate a higher fitting performance than the Ridge and Lasso models.

In the XGBoost relative importance analysis, ${{\sigma}^{\prime}}_{mean}$, D_{50}, and FC had relative importances of 21.6%, 20.9%, and 17.4%, respectively; all were quite small compared to that of Dr.

#### 3.6. MARS Modeling Result

The key hyperparameter is Max_{terms}, which controls the maximum number of terms generated by the forward pass. Figure 28 and Figure 29 give the RMSE, MAE, and R^{2} change curves of the MARS model with the parameter Max_{terms}. It can be seen that RMSE and MAE decreased and the R^{2} score increased with increasing Max_{terms} up to a value of 50, after which the curves remained almost unchanged. The optimal Max_{terms} was chosen as 60.

For the optimal MARS model, the RMSE, MAE, and R^{2} scores were 0.148, 0.072, and 0.879 for the training set, with corresponding scores of 0.165, 0.134, and 0.84 for the testing set. The results prove the satisfactory predictive capacity of the MARS model.

In the MARS relative importance analysis, Dr was followed by D_{50} (65.2%), ${{\sigma}^{\prime}}_{mean}$ (60.9%), FC (34.8%), and Cu (4.34%).

## 4. Comparison of the Results from Different Models

For the Ridge model, the R^{2} scores were 0.582 and 0.382 for the training and testing sets, respectively. The Ridge model had the highest RMSE/MAE and the lowest R^{2} scores among the five models, which reflects the shortcoming of its linear regression nature.

For the LassoCV model, the R^{2} scores were 0.578 and 0.581 for the training and testing sets, respectively. The RF, XGBoost, and MARS models achieved better results: their RMSE and MAE values were smaller than 0.16 and 0.14, respectively, and their R^{2} scores were all above 0.8.

The training-set R^{2} scores of the RF and XGBoost models were higher than that of the MARS model (0.97 and 0.977 versus 0.879), which means that the RF and XGBoost models fit the training set better than the MARS model. However, the testing-set R^{2} score of the MARS model was 0.84, the same as that of the XGBoost model and even higher than that of the RF model (0.825).

## 5. Conclusions

The testing-set R^{2} score of the MARS model was the same as that of the XGBoost model, and even higher than that of the RF model. The relative importance values obtained by the MARS model were quite close to the averages of the five models. In addition, compared to the black-box RF and XGBoost models, the MARS model has the advantage of being able to output explicit expressions.

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

- Lee, K.L.; Fitton, J.A. Factors affecting the cyclic loading strength of soil. In Vibration Effects of Earthquakes on Soils and Foundation; American Society for Testing and Materials (ASTM): West Conshohocken, PA, USA, 1969; pp. 71–95. [Google Scholar]
- Seed, H.B.; Idriss, I.M. Simplified procedure for evaluating soil liquefaction potential. Soil Mech. Found. Eng.
**1971**, 97, 1249–1273. [Google Scholar] - Seed, H.B.; Idriss, I.M.; Makdisi, F.; Banerjee, N. Representation of Irregular Stress Time Histories by Equivalent Uniform Stress Series in Liquefaction Analyses; Report No. UCB/EERC-75/29; Earthquake Engineering Research Centre, University of California: Berkeley, CA, USA, 1975. [Google Scholar]
- Lee, Y.F.; Chi, Y.Y.; Lee, D.H.; Juang, C.H.; Wu, J.H. Simplified models for assessing annual liquefaction probability—A case study of the Yuanlin area, Taiwan. Eng. Geol.
**2007**, 90, 71–88. [Google Scholar] [CrossRef] - Whitman, R.V. Resistance of soil to liquefaction and settlement. Soils Found.
**1971**, 11, 59–68. [Google Scholar] [CrossRef][Green Version] - Ishihara, K.; Yasuda, S. Sand liquefaction in hollow cylinder torsion under irregular excitation. Soils Found.
**1975**, 15, 45–59. [Google Scholar] [CrossRef][Green Version] - Juang, C.H.; Rosowsky, D.V.; Tang, W.H. Reliability-based method for assessing liquefaction potential of soils. J. Geotech. Geoenviron. Eng.
**1999**, 125, 684–689. [Google Scholar] [CrossRef] - Juang, C.H.; Chen, C.J.; Jiang, T. Probabilistic framework for liquefaction potential by shear wave velocity. J. Geotech. Geoenviron. Eng.
**2001**, 127, 670–678. [Google Scholar] [CrossRef] - Juang, C.H.; Ching, J.; Luo, Z.; Ku, C.S. New models for probability of liquefaction using standard penetration tests based on an updated database of case histories. Eng. Geol.
**2012**, 133, 85–93. [Google Scholar] [CrossRef] - Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Der Kiureghian, A.; Cetin, K.O. CPT based probabilistic and deterministic assessment of in situ seismic soil liquefaction potential. J. Geotech. Geoenviron. Eng.
**2006**, 132, 1032–1051. [Google Scholar] [CrossRef][Green Version] - Boulanger, R.W.; Idriss, I.M. Probabilistic standard penetration test-based liquefaction-triggering procedure. J. Geotech. Geoenviron. Eng.
**2012**, 138, 1185–1195. [Google Scholar] [CrossRef] - Green, R.A. Energy-based Evaluation and Remediation of Liquefiable Soils. Ph.D. Thesis, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA, 2001. [Google Scholar]
- Baziar, M.H.; Jafarian, Y. Assessment of liquefaction triggering using strain energy concept and ANN model: Capacity energy. Soil Dyn. Earthq. Eng.
**2007**, 27, 1056–1072. [Google Scholar] [CrossRef] - Dobry, R.; Ladd, R.S.; Yokel, F.Y.; Chung, R.M.; Powell, D. Prediction of Pore Water Pressure Build-up and Liquefaction of Sands During Earthquakes by the Cyclic Strain Method; National Institute of Standards and Technology (NIST): Gaithersburg, MD, USA, 1982.
- Seed, H.B. Closure to soil liquefaction and cyclic mobility evaluation for level ground during earthquakes. J. Geotech. Geoenviron. Eng.
**1980**, 106, 724. [Google Scholar] - Davis, R.O.; Berrill, J.B. Energy dissipation and seismic liquefaction in sands. Earthq. Eng. Struct. Dyn.
**1982**, 10, 59–68. [Google Scholar] - Nemat-Nasser, S.; Shokooh, A. A unified approach to densification and liquefaction of cohesionless sand in cyclic shearing. Can. Geotech. J.
**1979**, 16, 659–678. [Google Scholar] [CrossRef] - Ostadan, F.; Deng, N.; Arango, I. Energy-based Method for Liquefaction Potential Evaluation, Phase I. Feasibility Study; National Institute of Standards and Technology (NIST): Gaithersburg, MD, USA, 1996.
- Baziar, M.H.; Jafarian, Y.; Shahnazari, H.; Movahed, V.; Tutunchian, M.A. Prediction of strain energy-based liquefaction resistance of sand–silt mixtures: An evolutionary approach. Comput. Geosci.
**2011**, 37, 1883–1893. [Google Scholar] [CrossRef] - Figueroa, J.L.; Saada, A.S.; Liang, L.; Dahisaria, M.N. Evaluation of soil liquefaction by energy principles. J. Geotech. Geoenviron. Eng.
**1994**, 20, 1554–1569. [Google Scholar] [CrossRef] - Liang, L. Development of an Energy Method for Evaluating the Liquefaction Potential of a Soil Deposit. Ph.D. Thesis, Department of Civil Engineering, Case Western Reserve University, Cleveland, OH, USA, 1995. [Google Scholar]
- Dief, H.M.; Figueroa, J.L. Liquefaction assessment by the energy method through centrifuge modeling. In NSF International Workshop on Earthquake Simulation in Geotechnical Engineering; Zeng, X.W., Ed.; CWRU: Cleveland, OH, USA, 2001. [Google Scholar]
- Chen, Y.R.; Hsieh, S.C.; Chen, J.W.; Shih, C.C. Energy-based probabilistic evaluation of soil liquefaction. Soil Dyn. Earthq. Eng.
**2005**, 25, 55–68. [Google Scholar] [CrossRef] - Alavi, A.H.; Gandomi, A.H. Energy-based numerical models for assessment of soil liquefaction. Geosci. Front.
**2012**, 3, 541–555. [Google Scholar] [CrossRef][Green Version] - Cabalar, A.F.; Cevik, A.; Gokceoglu, C. Some applications of Adaptive Neuro-Fuzzy Inference System (ANFIS) in geotechnical engineering. Comput. Geotech.
**2012**, 40, 14–33. [Google Scholar] [CrossRef] - Goh, A.T.C.; Zhang, W.; Zhang, Y.; Xiao, Y.; Xiang, Y. Determination of EPB tunnel-related maximum surface settlement: A Multivariate adaptive regression splines approach. Bull. Eng. Geol. Environ.
**2018**, 77, 489–500. [Google Scholar] [CrossRef] - Zhang, W.G.; Goh, A.T.C. Multivariate adaptive regression splines for analysis of geotechnical engineering systems. Comput. Geotech.
**2013**, 48, 82–95. [Google Scholar] [CrossRef] - Zhang, W.G.; Goh, A.T.C.; Xuan, F. A simple prediction model for wall deflection caused by braced excavation in clays. Comput. Geotech.
**2015**, 63, 67–72. [Google Scholar] [CrossRef] - Zhang, W.G.; Goh, A.T.C.; Zhang, Y.M.; Chen, Y.M.; Xiao, Y. Assessment of soil liquefaction based on capacity energy concept and multivariate adaptive regression splines. Eng. Geol.
**2015**, 188, 29–37. [Google Scholar] [CrossRef] - Zhang, W.G.; Goh, A.T.C.; Zhang, Y.M. Multivariate adaptive regression splines application for multivariate geotechnical problems with big data. Geotech. Geol. Eng.
**2016**, 34, 193–204. [Google Scholar] [CrossRef] - Zhang, W.G.; Goh, A.T.C. Evaluating seismic liquefaction potential using multivariate adaptive regression splines and logistic regression. Geomech. Eng.
**2016**, 10, 269–284. [Google Scholar] [CrossRef] - Zhang, W.G.; Goh, A.T.C. Multivariate adaptive regression splines and neural network models for prediction of pile drivability. Geosci. Front.
**2016**, 7, 45–52. [Google Scholar] [CrossRef][Green Version] - Zhang, W.G.; Zhang, R.H.; Goh, A.T.C. Multivariate adaptive regression splines approach to estimate lateral wall deflection profiles caused by braced excavations in clays. Geotech. Geol. Eng.
**2017**. [Google Scholar] [CrossRef] - Zhang, W.G.; Zhang, Y.M.; Goh, A.T.C. Multivariate adaptive regression splines for inverse analysis of soil and wall properties in braced excavation. Tunnel. Undergr. Space Technol.
**2017**, 64, 24–33. [Google Scholar] [CrossRef] - Zhang, W.G.; Hou, Z.J.; Goh, A.T.C.; Zhang, R.H. Estimation of strut forces for braced excavation in granular soils from numerical analysis and case histories. Comput. Geotech.
**2018**, 106, 286–295. [Google Scholar] [CrossRef] - Zhang, R.H.; Zhang, W.G.; Goh, A.T.C. Numerical investigation of pile responses caused by adjacent braced excavation in soft clays. Int. J. Geotech. Eng.
**2018**. [Google Scholar] [CrossRef] - Zhang, W.G.; Wang, W.; Zhou, D.; Goh, A.T.C.; Zhang, R. Influence of groundwater drawdown on excavation responses – a case history in Bukit Timah granitic residual soils. J. Rock Mech. Geotech. Eng.
**2018**. [Google Scholar] [CrossRef] - Zhang, W.G.; Goh, A.T.C.; Goh, K.H.; Chew, O.Y.S.; Zhou, D.; Zhang, R. Performance of braced excavation in residual soil with groundwater drawdown. Undergr. Space
**2018**, 3, 150–165. [Google Scholar] [CrossRef] - Zhang, W.; Zhang, R.; Wang, W.; Zhang, F.; Goh, A.T.C. A Multivariate Adaptive Regression Splines model for determining horizontal wall deflection envelope for braced excavations in clays. Tunnel. Undergr. Space Technol.
**2019**, 84, 461–471. [Google Scholar] [CrossRef] - Zhang, W.; Wu, C.; Zhong, H.; Li, Y.; Wang, L. Prediction of undrained shear strength using extreme gradient boosting and random forest based on Bayesian optimization. Geosci. Front.
**2020**. [Google Scholar] [CrossRef] - Zhang, W.; Li, Y.; Wu, C.; Li, H.; Goh, A.T.C.; Lin, H. Prediction of lining response for twin tunnels construction in anisotropic clays using machine learning techniques. Undergr. Space
**2020**. [Google Scholar] [CrossRef] - Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics
**1970**, 12, 55–67. [Google Scholar] [CrossRef] - Buonaccorsi, J.P. A modified estimating equation approach to correcting for measurement error in regression. Biometrika
**1996**, 83, 433–440. [Google Scholar] [CrossRef] - Stephen, G.W.; Christopher, J.P. Generalized ridge regression and a generalization of the Cp statistic. J. Appl. Stat.
**2001**, 28, 911–922. [Google Scholar] - Hoerl, A.E.; Kennard, R.W.; Baldwin, K.F. Ridge regression: Some simulations. Commun. Stat.
**1975**, 4, 105–123. [Google Scholar] [CrossRef] - Tibshirani, R. The lasso method for variable selection in the Cox model. Statist. Med.
**1997**, 16, 385–395. [Google Scholar] [CrossRef][Green Version] - Osborne, M.R.; Presnell, B.; Turlach, B.A. A new approach to variable selection in least squares problems. IMA J. Numer. Anal.
**2000**, 20, 389–403. [Google Scholar] [CrossRef][Green Version] - Donoho, D.; Johnstone, I.; Kerkyacharian, G.; Picard, D. Wavelet Shrinkage: Asymptopia? (with discussion). J. R. Stat. Soc. Ser. B
**1995**, 57, 301–337. [Google Scholar] - Donoho, D.; Huo, X. Uncertainty Principles and Ideal Atomic Decompositions. IEEE Trans. Inf. Theory
**2002**, 47, 2845–2863. [Google Scholar] [CrossRef][Green Version] - Donoho, D.; Elad, M. Optimally Sparse Representation in General (Nonorthogonal) Dictionaries via L1-Norm Minimizations. Proc. Natl. Acad. Sci. USA
**2003**, 100, 2197–2202. [Google Scholar] [CrossRef] [PubMed][Green Version] - Donoho, D. For Most Large Underdetermined Systems of Equations, the Minimal L1-Norm Solution is the Sparsest Solution. Commun. Pure Appl. Math.
**2006**, 59, 907–934. [Google Scholar] [CrossRef] - Meinshausen, N.; Bühlmann, P. Variable Selection and High-Dimensional Graphs With the Lasso. Ann. Stat.
**2006**, 34, 1436–1462. [Google Scholar] [CrossRef][Green Version] - Breiman, L. Random forests. Mach. Learn.
**2001**, 45, 5–32. [Google Scholar] [CrossRef][Green Version] - Kontschieder, P.; Bulò, S.R.; Bischof, H.; Pelillo, M. Structured class-labels in random forests for semantic image labelling. In 2011 International Conference on Computer Vision; IEEE: Piscataway, NJ, USA, 2011; pp. 2190–2197. [Google Scholar]
- Breiman, L.; Friedman, J.; Stone, C.J.; Olshen, R.A. Classification and Regression Trees; CRC Press: Boca Raton, FL, USA, 1984. [Google Scholar]
- Criminisi, A.; Shotton, J.; Konukoglu, E. Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Found. Trends
^{®}Comput. Graph. Vis.**2012**, 7, 81–227. [Google Scholar] [CrossRef] - Therneau, T.M.; Atkinson, E.J. An Introduction to Recursive Partitioning Using the RPART Routines; Mayo Clinic: Rochester, NY, USA, 1997. [Google Scholar]
- Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd Acm Sigkdd International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 785–794. [Google Scholar]
- Friedman, J.H. Multivariate adaptive regression splines. Ann. Stat.
**1991**, 19, 1–67. [Google Scholar] [CrossRef] - Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference and Prediction, 2nd ed.; Springer: New York, NY, USA, 2009. [Google Scholar]
- Attoh-Okine, N.O.; Cooger, K.; Mensah, S. Multivariate adaptive regression spline (MARS) and hinged hyper planes (HHP) for doweled pavement performance modeling. Constr. Build. Mater.
**2009**, 23, 3020–3023. [Google Scholar] [CrossRef] - Zarnani, S.; El-Emam, M.; Bathurst, R.J. Comparison, of numerical and analytical solutions for reinforced soil wall shaking table tests. Geomech. Eng.
**2011**, 3, 291–321. [Google Scholar] [CrossRef] - Samui, P.; Karup, P. Multivariate adaptive regression spline and least square support vector machine for prediction of undrained shear strength of clay. Int. J. Appl. Metaheuristic Comput.
**2011**, 3, 33–42. [Google Scholar] [CrossRef] - Lashkari, A. Prediction of the shaft resistance of non-displacement piles in sand. Int. J. Numer. Anal. Methods Geomech.
**2012**, 37, 904–931. [Google Scholar] [CrossRef] - Khoshnevisan, S.; Juang, H.; Zhou, Y.G.; Gong, W. Probabilistic assessment of liquefaction-induced lateral spreads using CPT-Focusing on the 2010–2011 Canterbury earthquake sequence. Eng. Geol.
**2015**, 192, 113–128. [Google Scholar] [CrossRef] - Kisi, O.; Shiri, J.; Tombul, M. Modeling rainfall-runoff process using soft computing techniques. Comput. Geosci.
**2013**, 51, 108–117. [Google Scholar] [CrossRef] - Nagelkerke, N.J.D. A note on a general definition of the coefficient of determination. Biometrika
**1991**, 78, 691–692. [Google Scholar] [CrossRef]

**Figure 1.**Typical stress–strain relationship during undrained cyclic triaxial test [18].

**Figure 7.**Root mean square error (RMSE) and mean absolute error (MAE) versus parameter $\mathsf{\alpha}$ for the Ridge model.

**Figure 19.** R^{2} change curve of the Random Forest (RF) model with parameters min_samples_split and min_samples_leaf.

**Table 1.** Statistics of parameters used for Multivariate Adaptive Regression Splines (MARS) model development.

| Variable | Variable Description | Minimum | Maximum | Mean | Standard Deviation |
|---|---|---|---|---|---|
| **Inputs** | | | | | |
| ${{\sigma}^{\prime}}_{mean}$ (kPa) | Initial effective mean confining pressure | 40 | 400 | 103.2 | 50.8 |
| ${D}_{r}$ (%) | Initial relative density after consolidation | −44.5 | 105.1 | 51.6 | 29.8 |
| FC (%) | Percentage of fines content | 0 | 100 | 18.8 | 24.2 |
| C_{u} | Coefficient of uniformity | 1.52 | 28.1 | 4.14 | 6.09 |
| ${D}_{50}$ (mm) | Mean grain size | 0.03 | 0.46 | 0.21 | 0.12 |
| **Output** | | | | | |
| $Log\left(W\right)$ (J/m^{3}) | Logarithm of capacity energy | 2.48 | 4.54 | 3.27 | 0.42 |

| BF | Expression | BF | Expression |
|---|---|---|---|
| BF1 | D_{50} | BF11 | max(0, 56.9 − D_{r}) × D_{50} × D_{50} |
| BF2 | max(0, ${\mathsf{\sigma}}_{\mathrm{mean}}^{\prime}$ − 97.2) × D_{50} | BF12 | max(0, 67 − D_{r}) × ${\mathsf{\sigma}}_{\mathrm{mean}}^{\prime}$ × max(0, 97.21 − ${\mathsf{\sigma}}_{\mathrm{mean}}^{\prime}$) |
| BF3 | max(0, ${\mathsf{\sigma}}_{\mathrm{mean}}^{\prime}$ − 97.2) | BF13 | max(0, ${\mathsf{\sigma}}_{\mathrm{mean}}^{\prime}$ − 82.74) |
| BF4 | D_{r} × D_{50} | BF14 | FC × max(0, D_{r} − 68) × D_{50} |
| BF5 | FC × D_{r} × D_{50} | BF15 | max(0, 160 − ${\mathsf{\sigma}}_{\mathrm{mean}}^{\prime}$) × max(0, 90.18 − D_{r}) |
| BF6 | D_{50} × D_{50} | BF16 | ${\mathsf{\sigma}}_{\mathrm{mean}}^{\prime}$ × max(0, 82.74 − ${\mathsf{\sigma}}_{\mathrm{mean}}^{\prime}$) |
| BF7 | FC × D_{50} × D_{50} | BF17 | ${\mathsf{\sigma}}_{\mathrm{mean}}^{\prime}$ × FC |
| BF8 | max(0, D_{r} − 72.2) × max(0, 90.18 − D_{r}) | BF18 | max(0, D_{r} − 75.1) × max(0, D_{r} − 72.2) × max(0, 90.18 − D_{r}) |
| BF9 | ${\mathsf{\sigma}}_{\mathrm{mean}}^{\prime}$ × max(0, 97.21 − ${\mathsf{\sigma}}_{\mathrm{mean}}^{\prime}$) | BF19 | max(0, 75.1 − D_{r}) × max(0, D_{r} − 72.2) × max(0, 90.18 − D_{r}) |
| BF10 | max(0, D_{r} − 56.9) × D_{50} × D_{50} | BF20 | FC × max(0, 72.2 − D_{r}) × max(0, 90.18 − D_{r}) |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Chen, Z.; Li, H.; Goh, A.T.C.; Wu, C.; Zhang, W. Soil Liquefaction Assessment Using Soft Computing Approaches Based on Capacity Energy Concept. *Geosciences* **2020**, *10*, 330.
https://doi.org/10.3390/geosciences10090330
