# Application of Machine Learning Models to Bridge Afflux Estimation


## Abstract


## 1. Introduction

## 2. Materials and Methods

#### 2.1. Bridge Backwater Explicit Formulas

where D_{1} and D_{3} denote the normal flow depth at sections 1 and 3, respectively. Previous studies [1,4,8,10,12,14] have identified four parameters that have the most significant impacts on the bridge afflux. These parameters can be utilized to evaluate dh for arched bridge constrictions in rivers. They include (i) the normal downstream depth (D_{3}), (ii) the Froude number at section 3 (Fr_{3}), (iii) the ratio of the blockage area of the bridge to the flow area at section 1 (J_{1}), and (iv) the ratio of the blockage area of the bridge to the flow area at section 3 (J_{3}). Therefore, by adopting a dimensional analysis, the bridge backwater is determined as follows:

Based on Equation (2), an increase of Fr_{3} leads to an increase in the bridge afflux, while decreasing J_{3} causes a reduction in the bridge backwater. According to Equation (1), it is suggested that J_{1} should also be included, which is the case in Equations (3)–(6). In other words, Equations (3)–(6) incorporate all three independent parameters affecting the bridge afflux. Among the mentioned empirical equations, Equation (3) is the only linear one, whereas the rest have a nonlinear relationship.

When Fr_{3} < 1.179, J_{3} and Fr_{3} have a positive correlation with dh/D_{3}. However, when Fr_{3} ≥ 1.179, J_{3} and Fr_{3} have a negative correlation with dh/D_{3}. In contrast, J_{1} always has a positive correlation with dh/D_{3}. Furthermore, Equation (6) demonstrates that J_{3} has two distinct impacts on the bridge afflux. Nevertheless, J_{1} and Fr_{3} always show a positive correlation.

#### 2.2. Datasets

Figure 2 shows the discrepancy of dh/D_{3} with respect to J_{1}, J_{3}, and Fr_{3}. As shown, most data points have similar values for J_{1} and J_{3}, while Fr_{3} values are generally lower than 0.75. Additionally, most data points have a dimensionless bridge afflux (i.e., dh/D_{3}) lower than 0.78. This database has been utilized in previous studies [4,12,13,14], indicating its technical reliability for the implementation of ML models in the proposed study.
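As a minimal illustration of how such a database can be split into training and testing subsets and then summarized as in Table 1, the sketch below uses synthetic stand-in values (the experimental afflux database itself is not reproduced here, so the numbers are illustrative only):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the experimental afflux database; the column
# names follow the dimensionless parameters used in the paper.
rng = np.random.default_rng(42)
data = pd.DataFrame({
    "J1": rng.uniform(0.06, 0.80, 200),
    "J3": rng.uniform(0.05, 0.74, 200),
    "Fr3": rng.uniform(0.01, 1.80, 200),
    "dh_D3": rng.uniform(0.002, 1.80, 200),
})

# Hold out a testing subset, then summarize each split as in Table 1.
train, test = train_test_split(data, test_size=0.2, random_state=0)
summary = train.agg(["min", "mean", "max", "std"]).T
print(summary)
```

The same `agg` call on the `test` frame yields the testing-dataset columns of the table.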

#### 2.3. ML Models

#### 2.3.1. Support Vector Regression

#### 2.3.2. Decision Tree Regressor

#### 2.3.3. Random Forest Regressor

#### 2.3.4. AdaBoost Regressor

#### 2.3.5. Gradient Boost Regressor

#### 2.3.6. XGBoost for Regression

#### 2.3.7. K-Nearest Neighbors

#### 2.3.8. Gaussian Process Regression
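As a sketch of how the regressors listed above can be instantiated with scikit-learn, the snippet below builds one dictionary of models. The hyperparameter values are illustrative defaults taken from Table 2's descriptions, not the tuned values used in the study, and XGBR (which comes from the separate xgboost package) is noted but omitted:

```python
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neighbors import KNeighborsRegressor

# Illustrative instantiations; xgboost.XGBRegressor would be added
# analogously for the XGBR model.
models = {
    "SVR": SVR(kernel="rbf", C=1.0, gamma="scale"),
    "DTR": DecisionTreeRegressor(criterion="squared_error", max_depth=None),
    "RFR": RandomForestRegressor(n_estimators=100),
    "ABR": AdaBoostRegressor(n_estimators=100, loss="linear", learning_rate=0.1),
    "GBR": GradientBoostingRegressor(n_estimators=100, loss="squared_error"),
    "KNN": KNeighborsRegressor(n_neighbors=5, weights="distance", p=2),
    "GPR": GaussianProcessRegressor(alpha=1e-10),
}
```

Each model exposes the same `fit`/`predict` interface, which is what makes the uniform comparison across methods in Section 3 straightforward to implement.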

#### 2.4. Feature Importance Analysis

#### 2.5. Performance Criteria

The performance criteria include the root mean square error (RMSE), the mean absolute error (MAE), the mean absolute relative error (MARE), the maximum absolute relative error (MXARE), the Nash–Sutcliffe efficiency (NSE), and the coefficient of determination (R^{2}) [30]. These metrics are presented in the following equations:

A better performance corresponds to higher values of R^{2} and NSE, as well as lower values of RMSE, MAE, MARE, and MXARE.
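Since the metric equations themselves are not reproduced above, the following sketch implements the six criteria from their standard definitions (an assumption about the exact forms used in the paper):

```python
import numpy as np

def metrics(obs, pred):
    """Six performance criteria, following their standard definitions."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rel = np.abs(err) / np.abs(obs)          # relative errors (requires obs != 0)
    return {
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAE": np.mean(np.abs(err)),
        "MARE": np.mean(rel),
        "MXARE": np.max(rel),
        "NSE": 1 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2),
        "R2": np.corrcoef(obs, pred)[0, 1] ** 2,
    }
```

For a perfect prediction, RMSE, MAE, MARE, and MXARE are all zero while NSE and R^{2} equal one, which matches the "better performance" direction stated above.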

#### 2.6. Reliability Analysis

## 3. Results

#### 3.1. Results of Correlation

#### 3.2. Results of Performance Metrics

In terms of R^{2}, a few ML models, such as DTR and ABR, regardless of their performance on the training data, did not perform well in estimating bridge afflux for the testing data (with testing R^{2} values equal to 0.63). Other ML models, such as KNN, XGBR, and GPR, displayed a considerable difference between the metric results for the training data (i.e., R^{2} = 1) and the testing data (i.e., R^{2} of 0.91 or less), indicating a variance between the two datasets. Lastly, the NSE results are similar to those of R^{2}, with the GPR model outperforming the other methods with a testing NSE of 0.91.

#### 3.3. Results of Ranking Analysis

For each dataset, each performance criterion (i.e., RMSE, MAE, MARE, MXARE, NSE, and R^{2}) was compared and ranked from the best to the worst using integers 1 through 15. After calculating the rank of each method for all metrics, the algebraic summation of the ranks was obtained for each dataset. The resulting values were then ranked again from the lowest to the highest, yielding a total rank for each method considering all metrics. Finally, Table 4 displays the ranking results obtained for each method.
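The ranking procedure can be sketched as follows; the three methods and the metric values shown are a small illustrative subset, not the full fifteen-method computation behind Table 4:

```python
import pandas as pd

# Hypothetical metric values for three methods on one dataset;
# lower is better for RMSE/MAE, higher is better for NSE/R2.
scores = pd.DataFrame(
    {"RMSE": [0.06, 0.08, 0.12], "MAE": [0.04, 0.06, 0.09],
     "NSE": [0.91, 0.82, 0.61], "R2": [0.91, 0.82, 0.63]},
    index=["GPR", "RFR", "DTR"],
)

# Rank each metric from best (1) to worst, sum the ranks per method,
# then rank the sums to obtain the total rank.
ranks = pd.concat(
    [scores[["RMSE", "MAE"]].rank(ascending=True),
     scores[["NSE", "R2"]].rank(ascending=False)], axis=1)
total_rank = ranks.sum(axis=1).rank(method="min").astype(int)
print(total_rank)  # GPR 1, RFR 2, DTR 3
```

Note that error metrics are ranked ascending and goodness-of-fit metrics descending, so rank 1 always means best.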

#### 3.4. Results of Reliability Analysis

#### 3.5. Results of Feature Importance Analysis

Fr_{3} is the most critical feature with an importance score of 0.61, followed by J_{3} and J_{1} with importance scores of 0.3 and 0.08, respectively.
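The paper evaluates importance with XGBoost [29]; purely as an illustrative sketch (with synthetic stand-in data, not the study's database), a scikit-learn gradient-boosting analogue of the same idea looks like this:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in data in which the target depends most strongly on
# the third column, mimicking the dominance of Fr3 reported above.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 3))            # columns: J1, J3, Fr3
y = 0.1 * X[:, 0] + 0.3 * X[:, 1] + 0.6 * X[:, 2]

# Gain-based importance scores sum to 1 across features.
model = GradientBoostingRegressor(random_state=0).fit(X, y)
for name, score in zip(["J1", "J3", "Fr3"], model.feature_importances_):
    print(f"{name}: {score:.2f}")
```

The printed scores are specific to this synthetic target; only the mechanism (tree-ensemble, gain-based importance) mirrors the analysis in the paper.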

## 4. Discussion

#### 4.1. Discussion of Correlation Results

#### 4.2. Discussion of Performance Metrics Results

Most of the explicit equations achieved R^{2} values less than 0.8, as shown in Table 3. Furthermore, the MNLR method demonstrated the weakest performance with a training R^{2} value of 0.37, while the MHBMO-GRG and GA methods performed the best with R^{2} values of 0.78 and 0.79 for the testing data, respectively. In addition, the GP and ANN models utilized in previous studies significantly outperformed the other explicit equations with testing R^{2} values of 0.84 and 0.89, respectively. Nonetheless, some of the new ML models considered in the present study improved on the performances of previously suggested models. For example, the GPR model outperformed all methods with a testing R^{2} value of 0.91 and a training R^{2} value of almost 1, unlike that of the GP model, which was 0.92. On the other hand, while MLR demonstrated a testing R^{2} value of 0.61, the DTR and ABR models performed poorly with R^{2} values equal to 0.63. Compared to the explicit equations, ML models such as GPR, KNN, GBR, RFR, and XGBR demonstrated better R^{2} values, with improvements of at least 15%, 11%, 6%, 4%, and 3%, respectively.

The NSE results are similar to those of R^{2}, where the MNLR and MLR methods had the weakest performances with a training NSE value of 0.02 and a testing NSE equal to 0.51, respectively. For the training data, the KNN, XGBR, and GPR models demonstrated the best performances with NSE values of 1, while the GPR model achieved the best testing NSE of 0.91. Although the ML hyperparameters were tuned to mitigate overfitting, a significant variance between the training and testing results remains for a few ML models, such as KNN. To be more specific, despite attempts to adjust the hyperparameter values and running the algorithms several times, KNN, XGBR, and GPR exhibited a tendency to fit the training data more closely, which may suggest overfitting. Nonetheless, the metric results for the testing data are also satisfactory; for instance, the KNN performance on the testing dataset remains acceptable (testing R^{2} = 0.88 and NSE = 0.86). Therefore, the overfitting tendency is a shortcoming of a few ML models, as they are sensitive to their hyperparameters. Finally, hyperparameter tuning becomes more effective in improving ML predictions when a larger dataset is available.
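One common way to tune hyperparameters against overfitting is a cross-validated grid search; the sketch below searches over the KNN hyperparameters listed in Table 2 (the data and the parameter grid are illustrative, not those of the study):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsRegressor

# Synthetic regression data standing in for the afflux database.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 3))
y = X.sum(axis=1) + rng.normal(0, 0.05, 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 5-fold cross-validated search over the KNN hyperparameters of Table 2,
# scored on R2 to reduce the train/test performance gap.
grid = GridSearchCV(
    KNeighborsRegressor(),
    {"n_neighbors": [3, 5, 10], "weights": ["uniform", "distance"], "p": [1, 2]},
    cv=5, scoring="r2",
).fit(X_tr, y_tr)
print(grid.best_params_, round(grid.score(X_te, y_te), 2))
```

Because the selection is cross-validated on the training split only, the held-out score reported at the end is an honest estimate rather than a training-set fit.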

#### 4.3. Discussion of Ranking Analysis

#### 4.4. Discussion of Reliability Analysis

#### 4.5. Discussion of Feature Importance Analysis

The feature importance analysis reveals that Fr_{3} has the most significant impact on the bridge afflux, indicating its potential significance in predicting the outcome. Nevertheless, a feature with a low importance value could still be crucial to the overall performance of an estimation model. Therefore, these results should not be interpreted to mean that a feature with a low score, such as J_{1}, is insignificant. Lastly, since the feature importance analysis showed the relative importance of Fr_{3}, it is recommended that future studies further explore the relationship between Fr_{3} and the afflux depth to gain a deeper understanding of this influence.

## 5. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References

1. Cobaner, M.; Seckin, G.; Kisi, O. Initial assessment of bridge backwater using an artificial neural network approach. *Can. J. Civ. Eng.* **2008**, *35*, 500–510.
2. Hunt, J.; Brunner, G.W.; Larock, B.E. Flow Transitions in Bridge Backwater Analysis. *J. Hydraul. Eng.* **1999**, *125*, 981–983.
3. Biery, P.F.; Delleur, J.W. Hydraulics of Single Span Arch Bridge Construction. *J. Hydraul. Div.* **1962**, *88*, 75–108.
4. Mamak, M.; Seckin, G.; Cobaner, M.; Kisi, O. Bridge afflux analysis through arched bridge constrictions using artificial intelligence methods. *Civ. Eng. Environ. Syst.* **2009**, *26*, 279–293.
5. Pinar, E.; Paydas, K.; Seckin, G.; Akilli, H.; Sahin, B.; Cobaner, M.; Kocaman, S.; Akar, M.A. Artificial neural network approaches for prediction of backwater through arched bridge constrictions. *Adv. Eng. Softw.* **2010**, *41*, 627–635.
6. Biglari, B.; Sturm, T.W. Numerical Modeling of Flow around Bridge Abutments in Compound Channel. *J. Hydraul. Eng.* **1998**, *124*, 156–164.
7. Seckin, G.; Haktanir, T.; Knight, D. A simple method for estimating flood flow around bridges. In *Proceedings of the Institution of Civil Engineers-Water Management*; Thomas Telford Ltd.: London, UK, 2007.
8. Seckin, G.; Atabay, S. Experimental backwater analysis around bridge waterways. *Can. J. Civ. Eng.* **2005**, *32*, 1015–1029.
9. Seckin, G.; Yurtal, R.; Haktanir, T. Contraction and Expansion Losses through Bridge Constrictions. *J. Hydraul. Eng.* **1998**, *124*, 546–549.
10. Seckin, G.; Akoz, M.S.; Cobaner, M.; Haktanir, T. Application of ANN techniques for estimating backwater through bridge constrictions in Mississippi River basin. *Adv. Eng. Softw.* **2009**, *40*, 1039–1046.
11. Seckin, G.; Cobaner, M.; Ozmen-Cagatay, H.; Atabay, S.; Erduran, K.S. Bridge afflux estimation using artificial intelligence systems. In *Proceedings of the Institution of Civil Engineers-Water Management*; Thomas Telford Ltd.: London, UK, 2011.
12. Niazkar, M.; Talebbeydokhti, N.; Afzali, S.-H. Bridge backwater estimation: A comparison between artificial intelligence models and explicit equations. *Sci. Iran.* **2020**, *28*, 573–585.
13. Brown, P. Afflux at Arch Bridges; Report SR 182; HR Wallingford, 1988. Available online: https://eprints.hrwallingford.com/219/ (accessed on 6 June 2023).
14. Pinar, E.; Seckin, G.; Sahin, B.; Akilli, H.; Cobaner, M.; Canpolat, C.; Atabay, S.; Kocaman, S. ANN approaches for the prediction of bridge backwater using both field and experimental data. *Int. J. River Basin Manag.* **2011**, *9*, 53–62.
15. Niazkar, M. Assessment of artificial intelligence models for calculating optimum properties of lined channels. *J. Hydroinformat.* **2020**, *22*, 1410–1423.
16. Bisong, E. *Building Machine Learning and Deep Learning Models on Google Cloud Platform*; Springer: Berlin/Heidelberg, Germany, 2019.
17. Hou, W.; Yin, G.; Gu, J.; Ma, N. Estimation of Spring Maize Evapotranspiration in Semi-Arid Regions of Northeast China Using Machine Learning: An Improved SVR Model Based on PSO and RF Algorithms. *Water* **2023**, *15*, 1503.
18. Leong, W.C.; Bahadori, A.; Zhang, J.; Ahmad, Z. Prediction of water quality index (WQI) using support vector machine (SVM) and least square-support vector machine (LS-SVM). *Int. J. River Basin Manag.* **2019**, *19*, 149–156.
19. Lu, H.; Ma, X. Hybrid decision tree-based machine learning models for short-term water quality prediction. *Chemosphere* **2020**, *249*, 126169.
20. Schapire, R.E. Explaining AdaBoost. In *Empirical Inference: Festschrift in Honor of Vladimir N. Vapnik*; Springer: Berlin/Heidelberg, Germany, 2013; pp. 37–52.
21. Bandara, A.; Hettiarachchi, Y.; Hettiarachchi, K.; Munasinghe, S.; Wijesinghe, I.; Thayasivam, U.A. A generalized ensemble machine learning approach for landslide susceptibility modeling. In *Data Management, Analytics and Innovation: Proceedings of ICDMAI 2019*; Springer: Berlin/Heidelberg, Germany, 2020; Volume 2, pp. 71–93.
22. Katipoğlu, O.M.; Sarıgöl, M. Prediction of flood routing results in the Central Anatolian region of Türkiye with various machine learning models. *Stoch. Environ. Res. Risk Assess.* **2023**, *37*, 2205–2224.
23. Han, Y.; Wu, J.; Zhai, B.; Pan, Y.; Huang, G.; Wu, L.; Zeng, W. Coupling a Bat Algorithm with XGBoost to Estimate Reference Evapotranspiration in the Arid and Semiarid Regions of China. *Adv. Meteorol.* **2019**, *2019*, 9575782.
24. Müller, A.C.; Guido, S. *Introduction to Machine Learning with Python: A Guide for Data Scientists*; O'Reilly Media, Inc.: Sebastopol, CA, USA, 2016.
25. Nugrahaeni, R.A.; Mutijarsa, K. Comparative analysis of machine learning KNN, SVM, and random forests algorithm for facial expression classification. In Proceedings of the 2016 International Seminar on Application for Technology of Information and Communication (ISemantic), Semarang, Indonesia, 5–6 August 2016.
26. Schulz, E.; Speekenbrink, M.; Krause, A. A tutorial on Gaussian process regression: Modelling, exploring, and exploiting functions. *J. Math. Psychol.* **2018**, *85*, 1–16.
27. Roushangar, K.; Shahnazi, S. Prediction of sediment transport rates in gravel-bed rivers using Gaussian process regression. *J. Hydroinformat.* **2019**, *22*, 249–262.
28. Fathabadi, A.; Seyedian, S.M.; Malekian, A. Comparison of Bayesian, k-Nearest Neighbor and Gaussian process regression methods for quantifying uncertainty of suspended sediment concentration prediction. *Sci. Total Environ.* **2021**, *818*, 151760.
29. Zheng, H.; Yuan, J.; Chen, L. Short-Term Load Forecasting Using EMD-LSTM Neural Networks with a XGBoost Algorithm for Feature Importance Evaluation. *Energies* **2017**, *10*, 1168.
30. Niazkar, M.; Zakwan, M. Developing ensemble models for estimating sediment loads for different times scales. *Environ. Dev. Sustain.* **2023**, 1–19.
31. Zakwan, M.; Niazkar, M. A Comparative Analysis of Data-Driven Empirical and Artificial Intelligence Models for Estimating Infiltration Rates. *Complexity* **2021**, *2021*, 9945218.

**Figure 2.** Discrepancy of dh/D_{3} with respect to (**a**) J_{1} and J_{3}, (**b**) J_{1} and Fr_{3}, and (**c**) J_{3} and Fr_{3}.

**Figure 3.** Correlation results for the training dataset obtained by methods used (**a**) in previous studies and (**b**) in this study.

**Figure 4.** Correlation results for the testing dataset obtained by methods used (**a**) in previous studies and (**b**) in this study.

| Parameter | Min (Training) | Mean (Training) | Max (Training) | Std Dev (Training) | Min (Testing) | Mean (Testing) | Max (Testing) | Std Dev (Testing) |
|---|---|---|---|---|---|---|---|---|
| J_{1} | 0.064 | 0.455 | 0.803 | 0.167 | 0.099 | 0.44 | 0.746 | 0.157 |
| J_{3} | 0.047 | 0.388 | 0.742 | 0.152 | 0.097 | 0.374 | 0.678 | 0.144 |
| Fr_{3} | 0.008 | 0.374 | 1.809 | 0.269 | 0.053 | 0.34 | 1.021 | 0.189 |
| dh/D_{3} | 0.002 | 0.261 | 1.805 | 0.324 | 0.008 | 0.223 | 0.685 | 0.190 |

| Hyperparameter | Model(s) | Description |
|---|---|---|
| n_estimators | RFR, ABR, GBR, XGBR | Total number of trees. |
| criterion | DTR, RFR | Loss function; one of squared_error, absolute_error, poisson, or friedman_mse. |
| max_depth | DTR, RFR, GBR, XGBR | Maximum depth allowed for each tree; a positive integer or None. |
| min_samples_split | DTR, RFR, GBR | Minimum number of instances required to split a node. |
| loss | ABR | Loss function; one of linear, square, or exponential. |
| loss | GBR | Loss function; one of squared_error, absolute_error, huber, or quantile. |
| p | KNN | Power of the distance function: p = 1 is Manhattan, p = 2 is Euclidean, and any other value corresponds to the Minkowski distance. |
| n_neighbors | KNN | Total number of neighbors. |
| weights | KNN | Weight of each neighbor: distance for distance-based weighting, uniform for equal weights, or a user-defined function. |
| algorithm | KNN | Algorithm used to compute the nearest neighbors; one of auto, ball_tree, kd_tree, or brute. |
| kernel | SVR | Kernel function; one of linear, poly, rbf, or sigmoid. |
| degree | SVR | Non-negative degree of the poly kernel. |
| gamma | SVR | Coefficient for the rbf, poly, and sigmoid kernels; scale, auto, or any non-negative value. |
| C | SVR | Positive regularization parameter. |
| kernel | GPR | Kernel specifying the covariance function; can be any user-defined kernel. |
| alpha | GPR | Value added to the diagonal of the kernel matrix during fitting. |
| learning_rate | ABR, GBR, XGBR | Weight assigned to each tree at each iteration; increasing it increases the contribution of each tree. Range: [0, 1]. |
| min_split_loss | XGBR | Minimum loss reduction (gamma) required to split a child node. Range: [0, ∞). |
| reg_alpha | XGBR | L1 weight regularization term. |
| reg_lambda | XGBR | L2 weight regularization term. |
| min_child_weight | XGBR | Minimum sum of instance weights required in a child node; partitioning stops below this threshold. |

| Method | Dataset | RMSE | MAE | MARE | MXARE | NSE | R^{2} |
|---|---|---|---|---|---|---|---|
| **Previous studies [3,4,12]** | | | | | | | |
| Biery and Delleur | Training | 0.28 | 0.10 | 0.57 | 14.09 | 0.23 | 0.56 |
| | Testing | 0.12 | 0.08 | 0.40 | 0.98 | 0.61 | 0.68 |
| MLR | Training | 0.24 | 0.16 | 1.86 | 24.24 | 0.47 | 0.49 |
| | Testing | 0.13 | 0.12 | 1.23 | 5.81 | 0.51 | 0.61 |
| MNLR | Training | 0.32 | 0.12 | 0.67 | 13.24 | 0.02 | 0.37 |
| | Testing | 0.12 | 0.08 | 0.39 | 1.43 | 0.62 | 0.67 |
| GA | Training | 0.11 | 0.07 | 0.63 | 15.34 | 0.88 | 0.88 |
| | Testing | 0.09 | 0.06 | 0.41 | 1.63 | 0.79 | 0.79 |
| MHBMO-GRG | Training | 0.19 | 0.08 | 0.53 | 9.48 | 0.66 | 0.72 |
| | Testing | 0.09 | 0.06 | 0.46 | 2.20 | 0.77 | 0.78 |
| ANN | Training | 0.09 | 0.04 | 0.52 | 16.17 | 0.92 | 0.92 |
| | Testing | 0.07 | 0.05 | 0.33 | 1.39 | 0.88 | 0.89 |
| GP | Training | 0.07 | 0.03 | 0.31 | 9.01 | 0.95 | 0.96 |
| | Testing | 0.08 | 0.05 | 0.31 | 0.98 | 0.82 | 0.84 |
| **This study** | | | | | | | |
| SVR | Training | 0.15 | 0.13 | 3.05 | 66.10 | 0.78 | 0.89 |
| | Testing | 0.13 | 0.11 | 2.04 | 20.32 | 0.56 | 0.80 |
| DTR | Training | 0.12 | 0.06 | 0.39 | 3.71 | 0.86 | 0.86 |
| | Testing | 0.12 | 0.09 | 0.78 | 11.69 | 0.61 | 0.63 |
| RFR | Training | 0.06 | 0.03 | 0.24 | 5.37 | 0.97 | 0.97 |
| | Testing | 0.08 | 0.06 | 0.53 | 4.97 | 0.82 | 0.82 |
| ABR | Training | 0.11 | 0.09 | 1.70 | 49.38 | 0.89 | 0.92 |
| | Testing | 0.12 | 0.10 | 1.23 | 8.98 | 0.59 | 0.63 |
| GBR | Training | 0.04 | 0.01 | 0.17 | 2.62 | 0.99 | 0.99 |
| | Testing | 0.08 | 0.05 | 0.39 | 2.05 | 0.83 | 0.84 |
| XGBR | Training | 0.001 | 0.001 | 0.01 | 0.65 | 1.00 | 1.00 |
| | Testing | 0.08 | 0.06 | 0.42 | 3.74 | 0.81 | 0.82 |
| GPR | Training | 0.02 | 0.01 | 0.12 | 1.62 | 1.00 | 1.00 |
| | Testing | 0.06 | 0.04 | 0.26 | 1.32 | 0.91 | 0.91 |
| KNN | Training | 8.3 × 10^{−18} | 1.2 × 10^{−18} | 1.1 × 10^{−17} | 2.1 × 10^{−16} | 1.00 | 1.00 |
| | Testing | 0.07 | 0.05 | 0.33 | 3.16 | 0.86 | 0.88 |

| Method | Dataset | RMSE | MAE | MARE | MXARE | NSE | R^{2} | Subset Rank | Total Rank |
|---|---|---|---|---|---|---|---|---|---|
| GPR (this study) | Training | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 1 |
| | Testing | 1 | 1 | 1 | 3 | 1 | 1 | 1 | |
| KNN (this study) | Training | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 |
| | Testing | 3 | 3 | 4 | 9 | 3 | 3 | 4 | |
| XGBR (this study) | Training | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 3 |
| | Testing | 7 | 6 | 9 | 10 | 7 | 7 | 7 | |
| GBR (this study) | Training | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 3 |
| | Testing | 4 | 5 | 6 | 7 | 4 | 5 | 5 | |
| GP [12] | Training | 6 | 6 | 6 | 7 | 6 | 6 | 6 | 3 |
| | Testing | 6 | 4 | 2 | 2 | 6 | 4 | 3 | |
| ANN [12] | Training | 7 | 7 | 8 | 12 | 7 | 7 | 7 | 3 |
| | Testing | 2 | 2 | 3 | 4 | 2 | 2 | 2 | |
| RFR (this study) | Training | 5 | 5 | 5 | 6 | 5 | 5 | 5 | 7 |
| | Testing | 5 | 7 | 11 | 11 | 5 | 6 | 6 | |
| GA [12] | Training | 9 | 9 | 11 | 11 | 9 | 10 | 9 | 8 |
| | Testing | 8 | 8 | 8 | 6 | 8 | 9 | 8 | |
| DTR (this study) | Training | 10 | 8 | 7 | 5 | 10 | 11 | 8 | 9 |
| | Testing | 12 | 12 | 12 | 14 | 12 | 14 | 12 | |
| Biery and Delleur [3] | Training | 14 | 12 | 10 | 10 | 14 | 13 | 12 | 10 |
| | Testing | 11 | 10 | 7 | 1 | 11 | 11 | 9 | |
| MHBMO-GRG [12] | Training | 12 | 10 | 9 | 8 | 12 | 12 | 11 | 11 |
| | Testing | 9 | 9 | 10 | 8 | 9 | 10 | 11 | |
| ABR (this study) | Training | 8 | 11 | 13 | 14 | 8 | 8 | 10 | 12 |
| | Testing | 13 | 13 | 14 | 13 | 13 | 13 | 13 | |
| MNLR [4] | Training | 15 | 13 | 12 | 9 | 15 | 15 | 14 | 13 |
| | Testing | 10 | 11 | 5 | 5 | 10 | 12 | 10 | |
| SVR (this study) | Training | 11 | 14 | 15 | 15 | 11 | 9 | 13 | 14 |
| | Testing | 14 | 14 | 15 | 15 | 14 | 8 | 14 | |
| MLR [4] | Training | 13 | 15 | 14 | 13 | 13 | 14 | 15 | 15 |
| | Testing | 15 | 15 | 13 | 12 | 15 | 15 | 15 | |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Piraei, R.; Niazkar, M.; Afzali, S.H.; Menapace, A.
Application of Machine Learning Models to Bridge Afflux Estimation. *Water* **2023**, *15*, 2187.
https://doi.org/10.3390/w15122187
