This study aimed to develop a reliable model for assessing reservoir turbidity from Landsat-8 satellite imagery. Models were established using multiple linear regression (MLR) and gene-expression programming (GEP) algorithms. In total, 55 and 18 measured turbidity records from the Tseng-Wen and Nan-Hwa reservoirs, respectively, were paired and screened with satellite imagery. Finally, MLR and GEP were applied to simulate 13 turbid-water records for critical turbidity assessment. The coefficient of determination (R²), root mean squared error (RMSE), and relative RMSE (R-RMSE) were calculated to evaluate model performance. The results show that MLR and GEP performed similarly in model development. In model testing, however, MLR and GEP yielded R² values of 0.7277 and 0.8278, RMSE values of 0.7248 NTU and 0.5815 NTU, and R-RMSE values of 22.26% and 17.86%, respectively. The accuracy assessment shows that GEP is more reasonable than MLR, and remains more convincing even under critical turbidity conditions. In the model performance evaluation, both MLR and GEP reach the "normal" level, while under the critical turbidity condition GEP even reaches the "outstanding" level. These results indicate that GEP is rational and has relatively good applicability for turbidity simulation. From this study, one can conclude that GEP is suitable for turbidity modeling and is sufficiently accurate for reservoir turbidity estimation.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.