# Assessment of Hellwig Method for Predictors’ Selection in Groundwater Level Time Series Forecasting


## Abstract

Models with r^{2} of 0.7–0.9 were considered as high quality. Moreover, they showed good prediction ability for both high and low groundwater values. Additionally, the proposed method is simple, and its implementation requires only access to groundwater level measurement data. It may be useful in groundwater management and planning in the context of ongoing climate change and the threat of water deficits.

## 1. Introduction

- The study of the performance of Hellwig’s method for selection of the predictors for groundwater level modelling;
- Daily groundwater level time series reconstruction using support vector regression models;
- Groundwater level prediction in wetlands after wastewater treatment exploitation;
- Supplementing the missing groundwater level time series.

## 2. Materials and Methods

#### 2.1. Research Area

#### 2.2. Hellwig Method

In total, (2^{8} − 1) = 255 possible combinations for each well were obtained, including single, double, and triple combinations of predictors, and so on, until all eight were used. Creating 255 models for each piezometer would be very work-intensive and time-consuming, which is why the Hellwig method was used to create a ranking of neighboring well combinations that served as predictors for the forecasting models. The concept is to use explanatory variables that are strongly dependent on the explained variable and, at the same time, weakly correlated with each other. However, this is not a strict criterion for variable selection; there is also a numerical criterion, the so-called integral capacity of the combination of information carriers. In this case, the information carriers are all of the explanatory variables.
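For illustration, the full set of candidate predictor combinations can be enumerated in a few lines of Python (well names taken from Table 3; a sketch, not part of the original workflow):

```python
from itertools import combinations

# The eight neighboring wells used as candidate predictors.
wells = ["N-2", "N-4", "N-6", "N-7", "P-19", "P-22", "P-30", "P-36"]

# All non-empty subsets: singles, pairs, triples, ..., up to all eight.
subsets = [c for k in range(1, len(wells) + 1)
           for c in combinations(wells, k)]

print(len(subsets))  # 2^8 - 1 = 255
```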

The individual capacity of an information carrier X_{j} in a combination Q is defined as

$$h_j = \frac{r_j^2}{\sum_{i \in Q} \left| r_{ij} \right|}$$

where r_{j} is the correlation coefficient of the potential explanatory variable number j with the explained variable (an element of the vector R_{0} of linear correlation coefficients between the explanatory variables and the explained variable), and r_{ij} is the correlation coefficient between the ith and jth potential explanatory variables (an element of the correlation coefficient matrix R between potential explanatory variables).

The integral capacity of the combination is the sum of the individual capacities:

$$H_Q = \sum_{j \in Q} h_j$$

H_{Q} measures the amount of information that a variable X_{j} adds about the variable Y within the combination: it increases as r_{j} increases, whereas it decreases the more X_{j} is correlated with the other explanatory variables. Here, h_{j} is the individual capacity of the information carriers. The combination selected is the one with the maximal value of H_{Q} [18].
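The selection procedure above can be sketched in Python (hypothetical helper names; `R0` is the vector of predictor–response correlations and `R` the predictor correlation matrix, as defined above):

```python
from itertools import combinations

def integral_capacity(R0, R, combo):
    """Hellwig's integral capacity H_Q: the sum of individual capacities
    h_j = r_j^2 / sum_{i in Q} |r_ij| over the combination Q."""
    return sum(R0[j] ** 2 / sum(abs(R[i][j]) for i in combo) for j in combo)

def best_combination(R0, R):
    """Rank every non-empty predictor subset by H_Q and return the best."""
    n = len(R0)
    combos = (c for k in range(1, n + 1) for c in combinations(range(n), k))
    return max(combos, key=lambda c: integral_capacity(R0, R, c))

# Toy example: predictors 0 and 1 are strongly inter-correlated,
# predictor 2 is weakly informative but independent.
R0 = [0.9, 0.8, 0.1]
R = [[1.0, 0.9, 0.0],
     [0.9, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
```

In this toy case `best_combination(R0, R)` picks predictors 0 and 2: adding the redundant predictor 1 would lower H_Q, which is exactly the trade-off the method encodes.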

#### 2.3. SVR Modelling

where x_{i} is a set of data belonging to two classes defined by the y_{i} variables, y_{i} is the actual value corresponding to the x_{i} input vector, and $\mathsf{\epsilon}$ is a non-optimized method parameter that defines the acceptable level of error, i.e., of the difference between the predicted values and those that exist in the learning data.
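The role of $\mathsf{\epsilon}$ can be made concrete with the $\epsilon$-insensitive loss that SVR minimizes; a minimal pure-Python sketch (illustrative helper name, not the authors' implementation):

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """SVR's epsilon-insensitive loss: errors smaller than eps cost nothing;
    larger errors are penalized linearly by the amount they exceed eps."""
    return [max(0.0, abs(yt - yp) - eps) for yt, yp in zip(y_true, y_pred)]

# A prediction inside the eps tube incurs zero loss:
print(eps_insensitive_loss([1.00], [1.05], eps=0.1))  # [0.0]
```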

#### 2.4. Models’ Quality Metrics

We calculated the coefficient of determination (r^{2}) for the actual and prognosed values, the root mean squared error (RMSE), and the mean absolute error (MAE) of the models according to the following formulas:

$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}, \qquad MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$$

where $\hat{y}_i$ are the predicted data, y_{i} are the observed data, and n is the number of observations. Modelling was carried out using the MATLAB and Statistica programs.
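The three metrics follow directly from their definitions; a self-contained sketch (here r^{2} is taken as the squared Pearson correlation between observed and predicted series, matching how the paper reports it):

```python
import math

def rmse(obs, pred):
    """Root mean squared error between observed and predicted series."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def r_squared(obs, pred):
    """Squared Pearson correlation between observed and predicted series."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    var_o = sum((o - mo) ** 2 for o in obs)
    var_p = sum((p - mp) ** 2 for p in pred)
    return cov ** 2 / (var_o * var_p)
```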

## 3. Results and Discussion

The results tables present the correlation (r^{2}) between the observed and predicted time series and the size of the models’ errors (RMSE, MAE). The columns contain the results of modelling in the learning sample, the testing sample, and both samples together for the two best combinations of predictors selected by the Hellwig method, for the combination considered the least informative, and for the model based on all eight predictors.

For the N-8 well, r^{2} in the testing subset equaled 0.710–0.774, whereas RMSE was 0.103–0.117 m. For N-8, the best combinations of predictors were the two-element combinations N-4, P-22 and N-4, P-36. Results indicated that the least informative combination was the single piezometer N-6, which was also justified by the analysis of the correlation between groundwater levels in N-8 and in the individual wells (Table 3). N-8 was correlated the least with N-6 and N-2 (r equaled 0.4817 and 0.5919, respectively), and the most with N-4, although r was only 0.7472. That was the well with the largest response to the firefighting action in August 2015, and therefore its measurement results differed the most from the other piezometers, which was also confirmed by the forecasting results.

For the P-10 well, the r^{2} correlations in these cases were 0.910–0.977 for the testing sets. Groundwater level forecasting in this well using only the single N-4 time series provided the SVR model with the lowest quality (RMSE = 0.292 and r^{2} = 0.142). Indeed, the measurement results in P-10 were the least correlated with N-4 (r = 0.3637) and the most with N-6 (r = 0.8710) (Table 3).

The quality of the P-34 models, expressed by r^{2} and RMSE, was the highest among all those analyzed (0.980–0.986 and 0.024–0.029 m, respectively). The correlation between GWL in P-34 and the time series from the individual wells was also relatively high: r ranged from 0.7502 to 0.9332 (Table 3). The lowest value was obtained in the case of N-4 and the highest for P-22, which was explained by the different distances between piezometers (Figure 1a).

For the P-43 well, model performance reached 0.875–0.936 (r^{2}) and 0.056–0.087 m (RMSE) in the testing subsets for the combination based on all eight predictors and for the best combinations according to the Hellwig method: P-22, P-30, P-36 and P-22, P-36. The groundwater level time series from N-4 added the least information to the prediction model; the correlation coefficient between N-4 and P-43 was 0.3608, while for P-36 and P-43 it was 0.9099 (Table 3). The performance of the model created on the basis of the time series from the single N-4 well was very weak: r^{2} = 0.084 and RMSE = 0.214 m.

Comparable studies report models with r^{2} near 0.8 and RMSE of 0.2–0.6. Zhao et al. [15] demonstrated that, in the case of a CART model, MAE was 0.28 (in the present research, MAE was 0.017–0.213). Ibrahem Ahmed Osman et al. [17] predicted GWL with a performance described by an RMSE from 0.1 to 0.8, while Iqbal et al. [16] reported 0.05. Sahoo et al. [8] obtained hybrid models with a performance of 0.32–0.65 (r^{2}) and 0.52–1.77 (RMSE), and Sharafati et al. [12] acquired r^{2} values of 0.66–0.94. In the present study, the r^{2} results for the MLP models were 0.778–0.997 (Table 4).

## 4. Conclusions

The SVR models, characterized by r^{2} of 0.7–0.9, might be subjectively considered acceptable in the field of regional hydrology. Moreover, the created models worked well and showed good prediction ability for both high and low daily groundwater values. Nevertheless, the MLP predictions proved to be even more accurate and led to the creation of models with better quality.

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Liu, D.; Li, G.; Fu, Q.; Li, M.; Liu, C.; Faiz, M.A.; Khan, M.I.; Li, T.; Cui, S. Application of Particle Swarm Optimization and Extreme Learning Machine Forecasting Models for Regional Groundwater Depth Using Nonlinear Prediction Models as Preprocessor. J. Hydrol. Eng.
**2018**, 23, 04018052. [Google Scholar] [CrossRef] - Xu, T.; Valocchi, A.J.; Choi, J.; Amir, E. Use of Machine Learning Methods to Reduce Predictive Error of Groundwater Models. Ground Water
**2013**, 52, 448–460. [Google Scholar] [CrossRef] - Yadav, B.; Ch, S.; Mathur, S.; Adamowski, J. Assessing the suitability of extreme learning machines (ELM) for groundwater level prediction. J. Water Land Dev.
**2017**, 32, 103–112. [Google Scholar] [CrossRef] - Alsumaiei, A.A. A Nonlinear Autoregressive Modeling Approach for Forecasting Groundwater Level Fluctuation in Urban Aquifers. Water
**2020**, 12, 820. [Google Scholar] [CrossRef][Green Version] - Smarra, F.; Jain, A.; Mangharam, R.; D’Innocenzo, A. Data-driven Switched Affine Modeling for Model Predictive Control. IFAC-PapersOnLine
**2018**, 51, 199–204. [Google Scholar] [CrossRef] - Sivapragasam, C.; Kannabiran, K.; Karthik, G.; Raja, S.N. Assessing Suitability of GP Modeling for Groundwater Level. Aquat. Procedia
**2015**, 4, 693–699. [Google Scholar] [CrossRef] - Quilty, J.; Adamowski, J.; Khalil, B.; Rathinasamy, M. Bootstrap rank-ordered conditional mutual information (broCMI): A nonlinear input variable selection method for water resources modeling. Water Resour. Res.
**2016**, 52, 2299–2326. [Google Scholar] [CrossRef][Green Version] - Sahoo, S.; Russo, T.A.; Elliott, J.; Foster, I. Machine learning algorithms for modeling groundwater level changes in agricultural regions of the U.S. Water Resour. Res.
**2017**, 53, 3878–3895. [Google Scholar] [CrossRef] - Jalalkamali, A.; Sedghi, H.; Manshouri, M. Monthly groundwater level prediction using ANN and neuro-fuzzy models: A case study on Kerman plain, Iran. J. Hydroinform.
**2010**, 13, 867–876. [Google Scholar] [CrossRef][Green Version] - Vu, M.; Jardani, A.; Massei, N.; Fournier, M. Reconstruction of missing groundwater level data by using Long Short-Term Memory (LSTM) deep neural network. J. Hydrol.
**2020**, 125776. [Google Scholar] [CrossRef] - Shiri, J.; Kişi, Ö. Comparison of genetic programming with neuro-fuzzy systems for predicting short-term water table depth fluctuations. Comput. Geosci.
**2011**, 37, 1692–1701. [Google Scholar] [CrossRef] - Sharafati, A.; Asadollah, S.B.H.S.; Neshat, A. A new artificial intelligence strategy for predicting the groundwater level over the Rafsanjan aquifer in Iran. J. Hydrol.
**2020**, 591, 125468. [Google Scholar] [CrossRef] - Wu, M.; Feng, Q.; Wen, X.; Yin, Z.; Yang, L.; Sheng, D. Deterministic Analysis and Uncertainty Analysis of Ensemble Forecasting Model Based on Variational Mode Decomposition for Estimation of Monthly Groundwater Level. Water
**2021**, 13, 139. [Google Scholar] [CrossRef] - Rahman, A.S.; Hosono, T.; Quilty, J.M.; Das, J.; Basak, A. Multiscale groundwater level forecasting: Coupling new machine learning approaches with wavelet transforms. Adv. Water Resour.
**2020**, 141, 103595. [Google Scholar] [CrossRef] - Zhao, Y.; Li, Y.; Zhang, L.; Wang, Q. Groundwater level prediction of landslide based on classification and regression tree. Geodesy Geodyn.
**2016**, 7, 348–355. [Google Scholar] [CrossRef][Green Version] - Iqbal, M.; Naeem, U.A.; Ahmad, A.; Rehman, H.-U.; Ghani, U.; Farid, T. Relating groundwater levels with meteorological parameters using ANN technique. Measurement
**2020**, 166, 108163. [Google Scholar] [CrossRef] - Osman, A.I.A.; Ahmed, A.N.; Chow, M.F.; Huang, Y.F.; El-Shafie, A. Extreme gradient boosting (Xgboost) model to predict the groundwater levels in Selangor Malaysia. Ain Shams Eng. J.
**2021**. [Google Scholar] [CrossRef] - Hellwig, Z. On the optimal choice of predictors. In Toward a System of Quantitative Indicators of Components of Human Resources Development; Gostkowski, Z., Ed.; UNESCO: Paris, France, 1968. [Google Scholar]
- Omiotek, Z.; Stepanchenko, O.; Wójcik, W.; Legieć, W.; Szatkowska, M. The use of the Hellwig’s method for feature selection in the detection of myeloma bone destruction based on radiographic images. Biocybern. Biomed. Eng.
**2019**, 39, 328–338. [Google Scholar] [CrossRef] - Szmidt, E.; Kacprzyk, J.; Bujnowski, P. Attribute Selection via Hellwig’s Algorithm for Atanassov’s Intuitionistic Fuzzy Sets. In Computational Intelligence and Mathematics for Tackling Complex Problems. Studies in Computational Intelligence; Kóczy, L., Medina-Moreno, J., Ramírez-Poussa, E., Šostak, A., Eds.; Springer: Cham, Switzerland, 2020; Volume 819. [Google Scholar] [CrossRef]
- Wójcik-Leń, J.; Leń, P.; Mika, M.; Kryszk, H.; Kotlarz, P. Studies regarding correct selection of statistical methods for the needs of increasing the efficiency of identification of land for consolidation—A case study in Poland. Land Use Policy
**2019**, 87, 104064. [Google Scholar] [CrossRef] - Łyczko, W.M. Osobowice irrigation fields—History and present time. Inżynieria Ekologiczna
**2018**, 19, 37–43. [Google Scholar] [CrossRef] - Analysis of the variability of groundwater level in the Irrigation Fields in Wrocław, Wrocław University of Environmental and Life Sciences on behalf of the Municipal Water and Sewage Company in Wrocław; summary report, typescript; Wrocław University of Environmental and Life Sciences: Wrocław, Poland, 2015.
- Suryanarayana, C.; Sudheer, C.; Mahammood, V.; Panigrahi, B. An integrated wavelet-support vector machine for groundwater level prediction in Visakhapatnam, India. Neurocomputing
**2014**, 145, 324–335. [Google Scholar] [CrossRef] - Aftab, S.; Ahmad, M.; Hameed, N.; Salman, M.; Ali, I.; Nawaz, Z. Rainfall Prediction in Lahore City using Data Mining Techniques. Int. J. Adv. Comput. Sci. Appl.
**2018**, 9, 9. [Google Scholar] [CrossRef] - Chu, H.; Wei, J.; Li, T.; Jia, K. Application of Support Vector Regression for Mid- and Long-term Runoff Forecasting in “Yellow River Headwater” Region. Procedia Eng.
**2016**, 154, 1251–1257. [Google Scholar] [CrossRef][Green Version] - Mosavi, A.; Ozturk, P.; Chau, K.-W. Flood Prediction Using Machine Learning Models: Literature Review. Water
**2018**, 10, 1536. [Google Scholar] [CrossRef][Green Version] - Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn.
**1995**, 20, 273–297. [Google Scholar] [CrossRef] - Vert, J.-P.; Tsuda, K.; Schölkopf, B. (Eds.) A Primer on Kernel Methods in Computational Biology. In Kernel Methods in Computational Biology; The MIT Press: Cambridge, MA, USA; pp. 35–70. [CrossRef][Green Version]
- Dell Inc. Dell Statistica (Data Analysis Software System). 2016, 13. Available online: software.dell.com (accessed on 11 March 2021).
- Emamgholizadeh, S.; Moslemi, K.; Karami, G. Prediction the Groundwater Level of Bastam Plain (Iran) by Artificial Neural Network (ANN) and Adaptive Neuro-Fuzzy Inference System (ANFIS). Water Resour. Manag.
**2014**, 28, 5433–5446. [Google Scholar] [CrossRef] - Wang, X.; Liu, T.; Zheng, X.; Peng, H.; Xin, J.; Zhang, B. Short-term prediction of groundwater level using improved random forest regression with a combination of random features. Appl. Water Sci.
**2018**, 8, 125. [Google Scholar] [CrossRef][Green Version] - Wunsch, A.; Liesch, T.; Broda, S. Forecasting groundwater levels using nonlinear autoregressive networks with exogenous input (NARX). J. Hydrol.
**2018**, 567, 743–758. [Google Scholar] [CrossRef]

**Figure 2.** Observed and predicted groundwater levels (GWLs) (m) with SVR time series in the N-8 well in the test subset for the best (**a**) and the worst (**b**) combination of predictors.

**Figure 3.** Relationship between observed and predicted GWLs (m) with SVR time series in the N-8 well in the test subset for the best (**a**) and the worst (**b**) combination of predictors.

**Figure 4.** Residual histograms of predicted GWLs (m) with SVR time series in the N-8 well in the test subset for the best (**a**) and the worst (**b**) combination of predictors.

| N-8 | No. of input neurons | No. of hidden neurons | No. of output neurons | No. of learning epochs | Hidden-layer activation | Output-layer activation |
|---|---|---|---|---|---|---|
| N-4, P-22 | 2 | 3 | 1 | 89 | tanh | logistic |
| N-4, P-36 | 2 | 5 | 1 | 118 | tanh | exponential |
| N-6 | 1 | 6 | 1 | 54 | tanh | tanh |
| all | 8 | 9 | 1 | 9999 | exponential | logistic |

| P-10 | No. of input neurons | No. of hidden neurons | No. of output neurons | No. of learning epochs | Hidden-layer activation | Output-layer activation |
|---|---|---|---|---|---|---|
| N-2, N-6 | 2 | 9 | 1 | 268 | tanh | tanh |
| N-2, N-6, P-30 | 3 | 10 | 1 | 212 | tanh | tanh |
| N-4 | 1 | 8 | 1 | 174 | tanh | tanh |
| all | 8 | 11 | 1 | 286 | logistic | exponential |

| P-34 | No. of input neurons | No. of hidden neurons | No. of output neurons | No. of learning epochs | Hidden-layer activation | Output-layer activation |
|---|---|---|---|---|---|---|
| N-4, P-22, P-36 | 3 | 7 | 1 | 159 | tanh | tanh |
| N-4, P-22, P-30, P-36 | 4 | 10 | 1 | 233 | tanh | exponential |
| N-4 | 1 | 6 | 1 | 294 | logistic | linear |
| all | 8 | 11 | 1 | 195 | tanh | logistic |

| P-43 | No. of input neurons | No. of hidden neurons | No. of output neurons | No. of learning epochs | Hidden-layer activation | Output-layer activation |
|---|---|---|---|---|---|---|
| P-22, P-30, P-36 | 3 | 6 | 1 | 122 | exponential | logistic |
| P-22, P-36 | 2 | 8 | 1 | 161 | logistic | tanh |
| N-4 | 1 | 2 | 1 | 68 | logistic | linear |
| all | 8 | 5 | 1 | 280 | logistic | exponential |

| N-8 | RMSE (learning / testing / both) | MAE (learning / testing / both) | r^{2} (learning / testing / both) |
|---|---|---|---|
| N-4, P-22 | 0.112 / 0.114 / 0.113 | 0.061 / 0.060 / 0.061 | 0.732 / 0.710 / 0.716 |
| N-4, P-36 | 0.119 / 0.117 / 0.119 | 0.065 / 0.065 / 0.065 | 0.691 / 0.722 / 0.699 |
| N-6 | 0.180 / 0.186 / 0.182 | 0.138 / 0.145 / 0.140 | 0.310 / 0.313 / 0.311 |
| all | 0.104 / 0.103 / 0.104 | 0.047 / 0.051 / 0.048 | 0.758 / 0.774 / 0.763 |

| P-10 | RMSE (learning / testing / both) | MAE (learning / testing / both) | r^{2} (learning / testing / both) |
|---|---|---|---|
| N-2, N-6 | 0.092 / 0.099 / 0.094 | 0.073 / 0.077 / 0.074 | 0.919 / 0.910 / 0.917 |
| N-2, N-6, P-30 | 0.088 / 0.099 / 0.090 | 0.067 / 0.073 / 0.068 | 0.929 / 0.913 / 0.925 |
| N-4 | 0.283 / 0.292 / 0.285 | 0.210 / 0.213 / 0.210 | 0.139 / 0.142 / 0.140 |
| all | 0.047 / 0.049 / 0.047 | 0.037 / 0.038 / 0.037 | 0.978 / 0.977 / 0.977 |

| P-34 | RMSE (learning / testing / both) | MAE (learning / testing / both) | r^{2} (learning / testing / both) |
|---|---|---|---|
| N-4, P-22, P-36 | 0.028 / 0.029 / 0.028 | 0.022 / 0.022 / 0.022 | 0.980 / 0.980 / 0.980 |
| N-4, P-22, P-30, P-36 | 0.026 / 0.028 / 0.026 | 0.021 / 0.022 / 0.021 | 0.981 / 0.980 / 0.981 |
| N-4 | 0.134 / 0.138 / 0.135 | 0.116 / 0.120 / 0.117 | 0.562 / 0.565 / 0.564 |
| all | 0.022 / 0.024 / 0.023 | 0.017 / 0.017 / 0.017 | 0.986 / 0.986 / 0.986 |

| P-43 | RMSE (learning / testing / both) | MAE (learning / testing / both) | r^{2} (learning / testing / both) |
|---|---|---|---|
| P-22, P-30, P-36 | 0.072 / 0.080 / 0.074 | 0.058 / 0.064 / 0.060 | 0.890 / 0.877 / 0.886 |
| P-22, P-36 | 0.080 / 0.087 / 0.081 | 0.064 / 0.070 / 0.066 | 0.890 / 0.875 / 0.886 |
| N-4 | 0.200 / 0.214 / 0.204 | 0.163 / 0.175 / 0.166 | 0.110 / 0.084 / 0.103 |
| all | 0.051 / 0.056 / 0.053 | 0.041 / 0.044 / 0.042 | 0.939 / 0.936 / 0.938 |

| Predictors \ Response variables | N-8 | P-10 | P-34 | P-43 |
|---|---|---|---|---|
| N-2 | 0.5919 | 0.8420 | 0.7750 | 0.6061 |
| N-4 | 0.7472 | 0.3637 | 0.7502 | 0.3608 |
| N-6 | 0.4817 | 0.8710 | 0.7877 | 0.8220 |
| N-7 | 0.6089 | 0.8447 | 0.8805 | 0.8913 |
| P-19 | 0.6747 | 0.8052 | 0.9171 | 0.8056 |
| P-22 | 0.6868 | 0.7697 | 0.9332 | 0.9135 |
| P-30 | 0.6338 | 0.8418 | 0.9088 | 0.9158 |
| P-36 | 0.6989 | 0.7148 | 0.9200 | 0.9099 |

| N-8 | RMSE (learning / testing / both) | MAE (learning / testing / both) | r^{2} (learning / testing / both) |
|---|---|---|---|
| N-4, P-22 (MLP 2-3-1) | 0.093 / 0.061 / 0.078 | 0.052 / 0.047 / 0.050 | 0.821 / 0.910 / 0.866 |
| N-4, P-36 (MLP 2-5-1) | 0.103 / 0.066 / 0.087 | 0.054 / 0.047 / 0.051 | 0.778 / 0.884 / 0.831 |
| N-6 (MLP 1-6-1) | 0.170 / 0.144 / 0.158 | 0.121 / 0.118 / 0.120 | 0.394 / 0.439 / 0.417 |
| all (MLP 8-9-1) | 0.030 / 0.034 / 0.032 | 0.018 / 0.020 / 0.019 | 0.981 / 0.970 / 0.976 |

| P-10 | RMSE (learning / testing / both) | MAE (learning / testing / both) | r^{2} (learning / testing / both) |
|---|---|---|---|
| N-2, N-6 (MLP 2-9-1) | 0.057 / 0.056 / 0.057 | 0.042 / 0.042 / 0.042 | 0.965 / 0.962 / 0.964 |
| N-2, N-6, P-30 (MLP 3-10-1) | 0.045 / 0.046 / 0.045 | 0.034 / 0.034 / 0.034 | 0.978 / 0.976 / 0.977 |
| N-4 (MLP 1-8-1) | 0.277 / 0.267 / 0.272 | 0.213 / 0.207 / 0.210 | 0.180 / 0.161 / 0.170 |
| all (MLP 8-11-1) | 0.021 / 0.021 / 0.021 | 0.015 / 0.015 / 0.015 | 0.995 / 0.995 / 0.995 |

| P-34 | RMSE (learning / testing / both) | MAE (learning / testing / both) | r^{2} (learning / testing / both) |
|---|---|---|---|
| N-4, P-22, P-36 (MLP 3-7-1) | 0.022 / 0.021 / 0.022 | 0.016 / 0.016 / 0.016 | 0.987 / 0.987 / 0.987 |
| N-4, P-22, P-30, P-36 (MLP 4-10-1) | 0.021 / 0.019 / 0.020 | 0.015 / 0.014 / 0.014 | 0.988 / 0.990 / 0.989 |
| N-4 (MLP 1-6-1) | 0.123 / 0.116 / 0.120 | 0.108 / 0.101 / 0.104 | 0.583 / 0.601 / 0.592 |
| all (MLP 8-11-1) | 0.012 / 0.010 / 0.011 | 0.007 / 0.007 / 0.007 | 0.996 / 0.997 / 0.997 |

| P-43 | RMSE (learning / testing / both) | MAE (learning / testing / both) | r^{2} (learning / testing / both) |
|---|---|---|---|
| P-22, P-30, P-36 (MLP 3-6-1) | 0.059 / 0.060 / 0.060 | 0.047 / 0.049 / 0.048 | 0.924 / 0.913 / 0.918 |
| P-22, P-36 (MLP 2-8-1) | 0.065 / 0.068 / 0.067 | 0.051 / 0.052 / 0.051 | 0.906 / 0.888 / 0.897 |
| N-4 (MLP 1-2-1) | 0.193 / 0.188 / 0.191 | 0.161 / 0.154 / 0.157 | 0.182 / 0.150 / 0.166 |
| all (MLP 8-5-1) | 0.036 / 0.034 / 0.035 | 0.026 / 0.028 / 0.027 | 0.972 / 0.971 / 0.972 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Kajewska-Szkudlarek, J.; Łyczko, W. Assessment of Hellwig Method for Predictors’ Selection in Groundwater Level Time Series Forecasting. *Water* **2021**, *13*, 778.
https://doi.org/10.3390/w13060778
