Article

A Generalized Method for Modeling the Adsorption of Heavy Metals with Machine Learning Algorithms

1
Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, P.O. Box 400, Al-Ahsa 31982, Saudi Arabia
2
Department of Chemical Engineering, College of Engineering, King Faisal University, P.O. Box 380, Al-Ahsa 31982, Saudi Arabia
3
Department of Civil Engineering, College of Engineering, King Faisal University, P.O. Box 380, Al-Ahsa 31982, Saudi Arabia
*
Author to whom correspondence should be addressed.
Water 2020, 12(12), 3490; https://doi.org/10.3390/w12123490
Received: 16 November 2020 / Revised: 6 December 2020 / Accepted: 9 December 2020 / Published: 11 December 2020
(This article belongs to the Section Wastewater Treatment and Reuse)

Abstract

Applications of machine learning algorithms (MLAs) to modeling the adsorption efficiencies of different heavy metals have been limited by the adsorbate–adsorbent pair and the selection of specific MLAs. In the current study, the adsorption efficiencies of fourteen heavy metal–adsorbent (HM-AD) pairs were modeled with a variety of ML models, such as support vector regression with polynomial and radial basis function kernels, random forest (RF), stochastic gradient boosting, and Bayesian additive regression tree (BART). The wet-experiment-based actual measurements were supplemented with synthetic data samples. The first batch of dry experiments was performed to model the removal efficiency of an HM with a specific AD. The ML modeling was then implemented on the whole dataset to develop a generalized model. A ten-fold cross-validation method was used for the model selection, while the comparative performance of the MLAs was evaluated with statistical metrics comprising Spearman's rank correlation coefficient, the coefficient of determination (R2), mean absolute error, and root mean squared error. The regression tree methods, BART and RF, demonstrated the most robust and optimum performance, with 0.96 ≤ R2 ≤ 0.99. The current study provides a generalized methodology to implement ML in modeling the efficiency of not only a specific adsorption process but also a group of comparable processes involving multiple HM-AD pairs.
Keywords: artificial intelligence; regression; statistical analysis; ten-fold-cross-validation; adsorbent; removal efficiency

1. Introduction

Heavy metals (HMs) are stable inorganic pollutants with a low level of biodegradability [1,2,3,4,5,6,7] and thus tend to accumulate in living organisms [8,9,10,11]. Unlike some other pollutants, HMs can cause severe complications even at low concentrations. The US Environmental Protection Agency (EPA) listed lead (Pb), arsenic (As), nickel (Ni), chromium (Cr), copper (Cu), zinc (Zn), cadmium (Cd), and mercury (Hg) among the most serious water pollutants [12]. The permissible limits of these HMs in industrial wastewater suggested by the US EPA are 0.1, 0.01, 0.2, 0.1, 0.25, 1, 0.01, and 0.05 mg/L, respectively. The presence of such toxic metals in wastewater produced by industrial and agricultural activities can result in severe health and environmental issues due to their toxicity and environmental persistence [13]. Researchers around the globe are working on developing feasible solutions to maintain the HM concentrations in natural water bodies and wastewater below the standard limits.
Various chemical and physical treatments have been evaluated for removing HMs from water. These methods include, but are not limited to, membrane separation, filtration, ion exchange, precipitation, coagulation, reverse osmosis, and adsorption [13]. The cost and efficiency of a technique should be evaluated from an engineering perspective before selecting it. The adsorption method is often preferred over the others owing to its advantages, including low cost, reusability of adsorbents (ADs), environmental friendliness, and ease of operation [14]. Various ADs, including clays [15,16], zeolites [17], activated carbons [18,19], carbon nanotubes (CNTs) [20], nano-composites [21,22,23], graphene [24], chemical composites [25], and bio-sorbents [26,27,28,29], have been used to remove HMs from contaminated aqueous solutions. Usually, the success of an AD is mainly attributed to its morphology (porous structure), its functional groups, or the inorganic minerals it contains [30].
Extensive experimental works on removing HMs using different ADs have been reported in the literature. In general, the research scope of the previous studies was to find the maximum adsorption capacity for a single or multiple HM(s). Experimental conditions including pH, time, initial concentration, adsorbent dosage, and temperature were optimized initially. Then the adsorption process was modeled to describe its nature quantitatively. The measured values of the independent parameters were considered as the inputs (IPs) for the model, while the output (OP) was calculated based on the measurements of the initial and final concentrations of the respective HM. In most cases, the OP was the removal efficiency (%):
$\text{Removal efficiency } (\%) = \dfrac{\text{Initial conc. of HM} - \text{Final conc. of HM}}{\text{Initial conc. of HM}} \times 100$  (1)
The traditional way of correlating the OP to the IPs is by identifying the most suitable adsorption isotherm, which demonstrates the adsorption capacity (qe, mg/g) as a function of the adsorbate concentration (Ce) in equilibrium condition.
$q_e = \dfrac{(C_o - C_e)\,V}{m}$  (2)
In Equation (2), Co (mg/L) is the initial concentration of adsorbate, V (L) is the total volume of the fluids, and m (g) is the mass of AD. A few examples of the isotherms used in the previous studies are as follows:
Langmuir Isotherm (LI): $\dfrac{1}{q_e} = \dfrac{1}{q_{max}} + \dfrac{1}{q_{max}\,k_L} \cdot \dfrac{1}{C_e}$  (3)
Freundlich Isotherm (FI): $\log(q_e) = \log(k_f) + \dfrac{1}{n}\log(C_e)$  (4)
Temkin Isotherm (TI): $q_e = \beta_T \ln(K_T) + \beta_T \ln(C_e)$  (5)
Dubinin–Radushkevich Isotherm (DRI): $\ln(q_e) = \ln(q_{max}) - B_D\,\varepsilon_D^2$  (6)
In Equations (3)–(6), qmax (mg/g) is the maximum adsorption capacity; kL (L/mg) is the Langmuir constant; kf ((mg/g)/(mg/L)n) is the Freundlich constant; n (-) represents the non-linearity of the correlation; KT (L/mg) and βT (mg/g) are the TI-specific constants; BD (mol2/kJ2) is the activity coefficient; and εD (kJ/mol) is the Polanyi potential. The standard practice for identifying the best isotherm for an adsorption process is to estimate the appropriate values of the isotherm-specific constants with a trial-and-error procedure. As analyzing the complex relative impacts of the IPs on the OP is difficult with a traditional isotherm model, different statistical methodologies have also been employed to model adsorption processes. The most commonly used statistical tool is the response surface method (RSM). The data required to apply the RSM were generated by conducting wet experiments. Such an experiment can be considered a simple batch adsorption process: an AD was added to the sample containing the HM after adjusting all IPs, and the concentration of the HM in the sample was measured before and after the experiment to appraise the OP. The values of the IPs considered to significantly affect the OP for a specific HM-AD pair were varied, while the other IPs were maintained at fixed values. Usually, a quadratic correlation (Equation (7)) of the OP to the variable IPs was developed by minimizing the difference between the predicted OP and its actual values.
$OP = \beta_0 + \sum_{i=1}^{k} \beta_i\,IP_i + \sum_{i=1}^{k} \beta_{ii}\,IP_i^2 + \sum_{i=1}^{k}\sum_{j=1}^{k} \beta_{ij}\,IP_i\,IP_j + \varepsilon$  (7)
In Equation (7), the β terms and ε are constants. An automated trial-and-error procedure was followed to determine the optimum values of these constants. Even though the RSM yielded acceptable predictions in most cases, it could not address the non-linearity of the correlation appropriately.
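As an illustration of the quadratic RSM form, Equation (7) can be reproduced by expanding the IPs into second-order terms and fitting the β coefficients by least squares. The sketch below uses Python's scikit-learn with fully synthetic data (the inputs, coefficients, and noise levels are illustrative only, not values from the cited studies):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Synthetic example: two inputs (e.g., pH and dosage) and a removal-efficiency output.
rng = np.random.default_rng(0)
IP = rng.uniform([2.0, 10.0], [10.0, 100.0], size=(30, 2))  # 30 hypothetical experiments
OP = 50 + 4 * IP[:, 0] + 0.3 * IP[:, 1] - 0.2 * IP[:, 0] ** 2 + rng.normal(0, 1, 30)

# degree=2 expands the inputs to [1, IP1, IP2, IP1^2, IP1*IP2, IP2^2],
# which is exactly the term structure of Equation (7).
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(IP, OP)
print(rsm.score(IP, OP))  # R^2 of the quadratic fit
```

Because the synthetic response is itself quadratic, the fit is nearly exact; real adsorption data would of course fit less cleanly.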
At present, artificial intelligence (AI) has been identified as a promising technique for modeling an adsorption process [16,19,22,23,25,27,29,31,32,33,34]. Compared to the traditional isotherms and statistical models, it has the advantage of directly predicting the impact of the IPs and AD-HM interaction on the adsorption process. Many AI-based machine learning algorithms (MLAs) have been employed to date [35]. The majority of these applications involved a specific algorithm, the artificial neural network (ANN). This correlates the IPs to the OP(s) using “neurons” or nodes arranged in hidden layers. As an example, a fully connected ANN architecture (6-4-1) with one input layer with six inputs, one hidden layer with four neurons, and one output layer with a single output is shown in Figure 1. Every node of each layer is connected with a weight to the nodes in the following layer. The arrangement is similar to the neurons in the animal brain. A non-linear activation function is activated for every neuron in the hidden layer to map the weighted inputs to the outputs of the neurons. The function used to predict the actual OP with an ANN can be expressed as follows:
$\hat{y} = \sum_{i=1}^{N} w_i\,\varphi_i(x) + b$  (8)
In Equation (8), N is the number of neurons in the hidden layer, φi(x) is the non-linear activation function, wi is the weighting coefficient, and b is the bias. Even though the non-linearity of a correlation can be addressed better by an ANN than by an RSM or isotherm, its application usually suffers from several drawbacks [36]. It may experience over-fitting if sufficient data are not used to train the model, and most of the previous studies on modeling HM adsorption with ANN involved comparatively small datasets. It should also be noted that this particular algorithm is usually applied using expensive commercial software, namely MATLAB.
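For clarity, the 6-4-1 feed-forward pass of Figure 1 and Equation (8) can be sketched in a few lines of NumPy. The weights and inputs below are random placeholders, not a trained network:

```python
import numpy as np

def ann_forward(x, W1, b1, w2, b2):
    """One hidden layer, as in Equation (8): phi is tanh here; the
    output is a weighted sum of hidden activations plus a bias."""
    h = np.tanh(W1 @ x + b1)   # hidden-layer activations, shape (4,)
    return w2 @ h + b2         # single scalar output

rng = np.random.default_rng(1)
x = rng.random(6)                            # six inputs (e.g., T, pH, conc., time, dosage, rpm)
W1, b1 = rng.random((4, 6)), rng.random(4)   # input-to-hidden weights (6-4)
w2, b2 = rng.random(4), 0.1                  # hidden-to-output weights (4-1)
y_hat = ann_forward(x, W1, b1, w2, b2)
print(float(y_hat))
```

Training such a network amounts to adjusting W1, b1, w2, and b2 to minimize the prediction error, which is what MATLAB's (or any other) backpropagation routine does.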
Apart from ANN, other advanced MLAs, such as support vector regression (SVR), genetic algorithm (GA), genetic programming (GP), multiple linear regression (MLR), adaptive neuro-fuzzy inference system (ANFIS), random forest (RF), stochastic gradient boosting (SGB), and Bayesian additive regression tree (BART), have also been used to model various adsorption processes [35]. Instead of depending on specific commercial software, most of these algorithms can be applied using open-source statistical and data mining software, such as R. Earlier, Hafsa et al. [37] investigated the predictive performance of the non-ANN algorithms in modeling the adsorption efficiency of As in the oxidation state As3+. In the current study, the scope of the application is expanded further by investigating the regression performance of a set of similar models (SVR with polynomial and RBF kernels, RF, BART, and SGB) in predicting the adsorption efficiencies of five toxic metals (Pb, Hg, Cd, Cr, and As) in different oxidation states (Pb2+, Hg2+, Cd2+, Cr6+, and As3+). The data required for the investigation were extracted from the literature. In addition to developing HM-AD-specific individual models, attempts were made to advance a generalized model that can predict the adsorption efficiency of multiple HM-AD combinations based on a single learning framework.

2. Materials and Methods

2.1. Regression Algorithms

A diverse variety of regression algorithms, including parametric, non-parametric, and Bayesian models were selected for the current study to model the removal efficiency of the toxic heavy metals. The list of the algorithms is as follows:
(i) support vector regression with radial basis function (SVR-RBF) kernel
(ii) support vector regression with polynomial (SVR-poly) kernel
(iii) random forest (RF) regression
(iv) stochastic gradient boosting (SGB) regression
(v) Bayesian additive regression tree (BART)
All of these ML models are briefly discussed below, underscoring the respective statistical formulations that correlate the response with the inputs [38,39,40,41,42,43,44,45].

2.1.1. SVR-RBF

The objective of SVR is to devise a hypothesis function, f(x), that is as flat as possible while remaining insensitive to deviations of up to ε from the measured responses in the training data. For a training dataset, (X, Y) = (x1, y1), (x2, y2), …, (xN, yN), the predicted response (ŷ) or the output of f(x) can be expressed as follows:
$\hat{y} = \langle w, \varphi(x) \rangle + b = \sum_{i=1}^{N} \alpha_i\,k(x_i, x) + b$  (9)
where ⟨w, φ(x)⟩ is the dot product of the weight vector w and the feature vector mapped with a non-linear transformation function φ(x), the αi coefficients are the support-vector weights, k(xi, x) is a suitable kernel function for non-linear feature mapping, and b is a constant term. The RBF kernel K(x, x′) for a pair of feature vectors (x, x′) can be presented with the following equation:
$K(x, x') = \exp\left(-\dfrac{\lVert x - x' \rVert^2}{2\sigma^2}\right)$  (10)
where σ is the radius or width of the RBF kernel. It is a tuning parameter that controls the influence of (x, x′) on the model. Equation (10) measures the similarity between x and x′ as a function that decays with the squared distance between them; a larger value of the kernel indicates a higher similarity between the features. In addition to σ, there is another important tuning parameter, C: the regularization parameter of the objective loss function, defined on the difference between the measured and predicted values. The ultimate goal of this ML model is to minimize the objective loss function for the training data.
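A minimal SVR-RBF sketch using scikit-learn on toy data (note that scikit-learn's gamma corresponds to 1/(2σ²) in the notation of Equation (10); the data and parameter values are illustrative, not from the study, which used R):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(200, 2))   # two toy input features
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]     # a smooth non-linear response

# C is the regularization weight; gamma = 1/(2*sigma^2); epsilon sets the
# insensitive tube around the measured responses.
model = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.01)
model.fit(X, y)
print(model.score(X, y))  # training R^2
```

In practice C, gamma, and epsilon would be tuned by cross-validation rather than fixed as here.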

2.1.2. SVR-Poly

A polynomial kernel function is used in the SVR-polynomial algorithms to learn non-linear feature interactions. It compares two column vectors under the objective function framework using a degree-d polynomial equation:
$K(x, x') = (\gamma\,x^{T} x' + c)^{d}$  (11)
where x and x′ are the two feature (column) vectors, γ is a scalar parameter, c is a constant, and d is the kernel degree. Combining Equation (11) with Equation (9), the following expression can be obtained for the response (ŷ):
$\hat{y} = \sum_{i=1}^{N} \alpha_i\,(\gamma\,x^{T} x_i + c)^{d} + b$  (12)
The tuning parameters for SVR-Polynomial are d, γ , and C.
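A corresponding SVR-poly sketch, again with scikit-learn, where degree, gamma, and coef0 play the roles of d, γ, and c in Equation (11) (the toy target is chosen to be exactly representable by a degree-2 kernel; all values are illustrative):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(100, 2))
y = (X[:, 0] + X[:, 1]) ** 2           # an exact degree-2 interaction

# degree=d, gamma and coef0 correspond to the gamma and c of Equation (11).
model = SVR(kernel="poly", degree=2, gamma=1.0, coef0=1.0, C=10.0, epsilon=0.001)
model.fit(X, y)
print(model.score(X, y))
```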

2.1.3. RF Regression

RF is an ensemble learning method that uses a multitude of regression trees during the training period, with the variable split at each node selected at random. Using bootstrap sampling, the algorithm initiates the learning by dividing the training data into M subsets. Next, an individual regression tree (Ti) is grown for every subset by utilizing a randomly selected subset of features. This process of splitting the nodes results in a forest of M regression trees. After fitting the model to the entire training set, the response (ŷ) is predicted for a test data point (x) by averaging the individual predictions as follows:
$\hat{y} = \dfrac{1}{M} \sum_{i=1}^{M} T_i(x)$  (13)
where M is the total number of regression trees and Ti(x) is the output of an individual regression tree. The hyper-parameter used for RF optimization is mtry, which specifies the number of predictor variables randomly selected as split candidates at each node of a tree.
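A brief RF sketch with scikit-learn, in which max_features serves as the analogue of mtry (the six toy features mirror the six experimental inputs used in the study, but the data and settings are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(300, 6))          # six inputs, as in the paper's feature set
y = X[:, 0] * X[:, 1] + np.sin(3 * X[:, 2])   # a non-linear toy target

# max_features is the mtry analogue: predictors tried at each split.
rf = RandomForestRegressor(n_estimators=200, max_features=2, random_state=0)
rf.fit(X, y)
print(rf.score(X, y))  # prediction is the tree average, as in Equation (13)
```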

2.1.4. SGB Regression

An additive regression model is developed with the SGB algorithm by successively fitting a simple parameterized function, acting as a weak base learner, to the gradient of the loss between the measured values and the model response. A random subsample of the complete dataset is used at each iteration, which reduces the residual error stochastically.
In the gradient boosting algorithm, a simple regression tree can be used as a weak prediction model, and the final prediction model can then be produced in the form of an ensemble of such weak learners. The functional form of the gradient boosting-based approximation of the predicted response, y ^ , for each data point, x can be described as follows:
$\hat{y} = \sum_{m=0}^{M} \beta_m\,f(x; a_m)$  (14)
where the functions f(x; am) represent the weak learners, simple functions of x involving the weighting parameters a = {a1, a2, …, am} and the expansion coefficients β = {β1, β2, …, βm}. Both a and β are jointly fit to the training data; these parameters also define the split points for the base regression trees [20]. In SGB, the tuning parameters are the number of regression trees (m) and the number of splits performed at each node, i.e., the maximum number of nodes for each tree.
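A short SGB sketch with scikit-learn's GradientBoostingRegressor, where subsample < 1 supplies the stochastic component described above (toy data and illustrative settings, not those of the study):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(300, 6))
y = X[:, 0] ** 2 + X[:, 1] * X[:, 2]

# subsample < 1 makes the boosting *stochastic*: each tree is fit on a random
# fraction of the data. n_estimators and max_depth are the tuning parameters
# corresponding to the number of trees and the splits per tree noted above.
sgb = GradientBoostingRegressor(n_estimators=300, max_depth=3, subsample=0.5,
                                learning_rate=0.05, random_state=0)
sgb.fit(X, y)
print(sgb.score(X, y))
```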

2.1.5. BART

The BART is a Bayesian approach. It uses a sum-of-trees model to approximate the hypothesis function [30]. The BART algorithmic concept builds on enhancing the additive trees model by introducing a prior regularization term that attempts to fit the model by moderating the effect of the individual regression tree. Consequently, each regression tree in the BART becomes a weak prediction model, explaining only a smaller portion of the training data. BART conveniently uses the additive representation of multiple regression trees to produce the final prediction model instead of constructing a single dominant large tree. The predicted response for a set of feature variables (x1,x2,…xn) associated with a single data point, x, could be formulated according to the sum-of-trees model, which is shown as follows:
$\hat{y} = \sum_{j=1}^{m} g(x, T_j)$  (15)
where Tj is a single binary regression tree and m is the total number of regression trees. Each tree Tj consists of a set of interior-node decision rules and a set of prior-regularized terminal nodes. For BART modeling, the tuning parameter is m, the number of trees used in the sum-of-trees model.

2.2. Evaluation Metrics

The evaluation metrics used to appraise the performance of the regression models consisted of Spearman’s rank correlation coefficient (SPCC), the coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE). The statistical parameters are briefly described as follows.

2.2.1. Spearman's Rank Correlation Coefficient (SPCC)

The SPCC is a non-parametric measure that summarizes the strength and the direction (positive or negative) of the monotonic relationship between the actual (y) and predicted (ŷ) response variables. The formula to calculate the SPCC adopts the following mathematical notation:
$SPCC = 1 - \dfrac{6 \sum d^2}{n^3 - n}$
where d is the difference in the ranks between the two variable sets and n is the number of samples. The SPCC values range between −1 and +1; the closer the value is to +1 or −1, the stronger the probable correlation, with +1 indicating a perfect positive correlation and −1 a perfect negative correlation.

2.2.2. Coefficient of Determination (R2)

The R2 is a statistical measurement of how well the results of a regression model fit the actual measurements. It quantifies the fraction of the variation in outputs explained by the model. The equation for R2 can be described as follows:
$R^2 = \dfrac{\text{Explained variation}}{\text{Total variation}}$
The maximum and minimum values of R2 are 1 and 0, respectively. Generally, a higher value of R2 indicates a better model.

2.2.3. Mean Absolute Error (MAE)

The MAE is defined as an average of absolute differences between the model outputs and the actual measurements. The MAE can be calculated using the following formula:
$MAE = \dfrac{1}{N} \sum_{i=1}^{N} \lvert \hat{y}_i - y_i \rvert$
where ŷ and y are the predicted and actual responses, respectively, and N is the total number of samples.

2.2.4. Root Mean Squared Error (RMSE)

The RMSE is a statistical measure of the dispersion of the prediction errors. It is a popular parameter to present the overall error of a model, as it can provide a relative perception of how concentrated the model outputs are around the best fit curve. The formula for RMSE can be written as follows:
$RMSE = \sqrt{\dfrac{1}{N} \sum_{i=1}^{N} (\hat{y}_i - y_i)^2}$
where y and y ^ are actual and predicted responses, respectively, and N is the total number of samples.

2.3. Dataset

The experimental data used for the current study were collected from eight independent sources, as presented in Table 1 and Table 2. All of the experiments were conducted to test the efficiencies of different adsorbents in removing toxic heavy metals, namely Cr, Pb, Hg, Cd, and As, from polluted water. The ADs used for the experimental studies were as follows:
  • AD1: Superheated steam-activated granular carbon
  • AD2: Ragi husk powder (bio-sorbent)
  • AD3: Antep pistachio or Pistacia vera L. (bio-sorbent)
  • AD4: Red mud
  • AD5: Synthesized functional [email protected]3O4 nanocomposite ([email protected]3O4)
  • AD6: Eucalyptus leaves (bio-sorbent)
  • AD7: Spirulina (Arthrospira) maxima (bio-sorbent)
  • AD8: Spirulina (Arthrospira) indica (bio-sorbent)
  • AD9: Spirulina (Arthrospira) platensis (bio-sorbent)
  • AD10: Reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) composites
  • AD11: Cupric oxide nanoparticles (CuONPs) prepared with Tamarindus indica pulp extract
  • AD12: Cerium hydroxylamine hydrochloride (Ce-HAHCl)
The following input and output parameters were analyzed in the course of the experiments:
  • IP1: Operating temperature, T (°C)
  • IP2: Initial pH (-)
  • IP3: Initial concentration (mg/L)
  • IP4: Contact time (min)
  • IP5: Adsorbent dosage (mg)
  • IP6: Agitator speed (rpm)
  • OP: Removal efficiency (%)
The IPs were measured, while the OP was calculated based on the measured values of initial and final concentrations of the respective HM.

2.4. MLA Modeling

As mentioned earlier, the predictive power of MLA was assessed in the current study by training and testing five regression models to estimate the removal efficiency of five heavy metals using six experimental parameters. The MLA modeling involved interpolating the experimental parameters to produce synthetic data and developing models for the datasets individually as well as comprehensively. For this purpose, the following steps were executed in successive order: (i) interpolation of the experimental data; (ii) parameter optimization and model selection for individual heavy metals; (iii) parameter optimization and model selection for the comprehensive dataset.

2.4.1. Data Interpolation

As can be observed from Table 1, the ten datasets used in the present study consist of a relatively small number of actual measurements. A training set composed of only the actual measurements is therefore not necessarily large enough, as the MLAs are data-driven and demand a reasonable quantity of data for optimizing the parameters and training the models reliably. To resolve this issue, a data augmentation technique is necessary to increase the number of data points in the training set. Earlier, Podder et al. [47] used a cubic spline function to generate interpolated data points for their ANN-based modeling of As adsorption efficiency. Cubic spline, or piecewise cubic, interpolation can be categorized as an exact-point interpolation method in the family of spatial interpolation techniques. Unlike its global polynomial analogs, this piecewise polynomial method maintains a continuous second derivative at all data points, minimizing the interpolation errors and producing a smoother distribution of interpolated data points within a given range [48,49]. Cubic spline interpolation also mitigates the distortion issues in boundary regions observed with least-squares interpolation [47]. Considering these advantages, a piecewise cubic interpolation method was adopted to interpolate the original data points in the current study. The process was initiated by interpolating the output column based on the first predictor and was then extended to the predictor columns using the previously interpolated output values. On average, 250 data points were interpolated for each dataset. All interpolations were performed using the 'spline' function of the 'stats' package in R [50].
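The interpolation step can be sketched with SciPy's CubicSpline, a Python analogue of the R 'spline' function used in the study (the measured points below are invented for illustration):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Toy "measured" points: removal efficiency (%) vs. contact time (min).
time = np.array([5.0, 15.0, 30.0, 60.0, 120.0])
eff = np.array([20.0, 45.0, 70.0, 85.0, 92.0])

cs = CubicSpline(time, eff)          # piecewise cubic with a continuous 2nd derivative
t_new = np.linspace(5, 120, 250)     # ~250 synthetic points, as in the study
eff_new = cs(t_new)

# An exact-point interpolant passes through every measured knot,
# so cs(5.0) = 20 and cs(60.0) = 85 exactly.
print(eff_new[0], float(cs(60.0)))
```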

2.4.2. Parameter Optimization and Model Selection

Individual Metal

The ML modeling to predict the removal efficiency involved parameter optimization on a training set and final model selection based on validation with independent test data points. As a first step, the original experimental datasets extracted from the literature were interpolated using a natural cubic spline technique and merged. Next, the merged dataset was split into a training (80%) and a test (20%) subset using random sampling. The training of the five ML models was carried out using the training data points (80%). The test dataset (20%), which can be considered an independent test set, was withheld for model verification. The optimized parameters for the ML models, as well as the best-performing model, were selected based on a repeated k-fold cross-validation (CV) technique integrated with a grid search [51]. The hyper-parameters of the ML models, as presented in Table 3, were optimized by minimizing the average prediction error on the training data.
At every fold of the CV, (k − 1) subsets of the data were used to train a model. The held-out subset was then used to validate the trained model by calculating the RMSE. This process was repeated k times over the desired number of iterations to identify the best-performing model with the optimal values of the associated parameters, i.e., those that minimized the average RMSE across the repeated CV. From a mathematical perspective, the following objective was minimized:
$\min_{params} \left( \sum_{i=1}^{n} \sum_{j=1}^{k} RMSE_{i,j} \right)$
where n indicates the number of iterations, k is the number of folds in the CV, and RMSEi,j is the root mean square deviation of the predicted response from the actual response for the j-th fold in the i-th iteration of the repeated CV. The algorithm used for model selection and parameter optimization is presented in Figure 2. Both the number of folds (k) used in the CV and the number of repetitions (n) were set to 10 in the present study.
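The repeated k-fold CV with a grid search can be sketched with scikit-learn (a Python analogue of the R workflow used in the study; the data are synthetic, and for speed this sketch uses 2 repeats instead of the 10 used in the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RepeatedKFold

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(200, 6))                 # six toy experimental inputs
y = X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.05, 200)

# k = 10 folds, as in the study; n_repeats reduced to 2 here for speed.
cv = RepeatedKFold(n_splits=10, n_repeats=2, random_state=0)
grid = GridSearchCV(
    RandomForestRegressor(n_estimators=50, random_state=0),
    param_grid={"max_features": [1, 2, 3]},          # the mtry analogue
    scoring="neg_root_mean_squared_error",           # selects the lowest average RMSE
    cv=cv,
)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)          # chosen setting and its average RMSE
```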

Comprehensive Dataset

We performed MLA modeling for a comprehensive dataset that includes the data related to all metals. The objective was to produce a generalized model for the five toxic metals (As, Cd, Cr, Hg, and Pb) and 10 adsorbents (see Table 1 and Table 2) used in this study. The dataset contained all the merged (original and interpolated) data points associated with five toxic metals. The statistics for this comprehensive dataset are presented in Table 4. There were two modifications in the feature description in this combined dataset. As shown in Table 1, six experimental parameters were available for any dataset including the variable and fixed inputs such as operating temperature, pH, initial concentration, contact time, sorbent dosage, and agitator speed. In the combined dataset, all six of these experimental parameters were included in the feature description. In addition to that, the types of metal and the specific adsorbent used in the experiment were also incorporated as features. Therefore, a total of eight (8) feature variables, as opposed to six (6) parameters in individual modeling, were used for the MLA modeling of removal efficiency on a total of 2476 (80%) training data points in the combined dataset. All five MLAs previously used for individual metal removal efficiency modeling were utilized to develop these generalized models. A similar strategy of repeated 10-fold cross-validation was performed to optimize the different parameters of the MLA models on this combined dataset. Finally, the parameter optimization and model selection steps gave us five optimal models (from using five different ML algorithms) chosen based on the lowest average RMSE observed on the validation data points during the cross-validated training. These models were then used to predict the removal efficiency on 619 (20%) independent test data points from the combined dataset.
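The feature construction for the generalized model can be sketched as follows. The study used eight feature variables, treating the metal and the adsorbent as single categorical features; one-hot encoding, shown here, is one common way of presenting such categories to algorithms that require numeric inputs. The data frame, column names, and removal efficiencies below are invented placeholders; only the overall shape of the procedure follows the study:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical combined frame: six experimental inputs plus metal and adsorbent labels.
rng = np.random.default_rng(7)
n = 400
df = pd.DataFrame({
    "temp": rng.uniform(20, 60, n),
    "pH": rng.uniform(2, 10, n),
    "conc": rng.uniform(1, 100, n),
    "time": rng.uniform(5, 120, n),
    "dosage": rng.uniform(10, 500, n),
    "rpm": rng.uniform(100, 400, n),
    "metal": rng.choice(["As", "Cd", "Cr", "Hg", "Pb"], n),
    "adsorbent": rng.choice([f"AD{i}" for i in range(1, 13)], n),
})
y = rng.uniform(40, 100, n)  # placeholder removal efficiencies (%)

# One-hot encode the two categorical features; the six numeric inputs pass through.
X = pd.get_dummies(df, columns=["metal", "adsorbent"])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(X.shape[1], model.predict(X[:1])[0])
```

With 5 metals and 12 adsorbents, the encoded frame has 6 + 5 + 12 = 23 columns; a label-encoded frame would keep the study's eight-column description instead.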

2.5. Computing Framework

A computer with an 8th-generation Intel Core i7 processor (2 GHz) and 8 GB of RAM was used to perform the dry experiments and statistical analysis. The hardware was operated with a 64-bit Windows 10 operating system. The open-source statistical computing framework R (4.0.2) was used for conducting all ML experiments and performing the statistical analysis. We chose R because it runs on virtually any platform and provides all the necessary packages and library functions for ML model training, hyper-parameter tuning, data preprocessing and visualization, post-prediction statistical analyses, and plotting graphs and trends of the results. R is considered a standard industry choice for exploratory data analysis. The prediction time for a typical data matrix of 200 × 6 dimensions was approximately 10 s.

3. Results

3.1. ML Model Evaluation for Individual Dataset

The performances of the five MLAs in predicting the efficiencies of adsorbing five toxic metals (As, Cd, Cr, Hg, and Pb) by different adsorbents are shown in Table 5, Table 6, Table 7, Table 8 and Table 9. For each metal, the prediction results are reported for two separate adsorbents. The outcomes are evaluated with statistical metrics comprising MAE, RMSE, SPCC, and R2.

3.2. ML Model Evaluation for Combined Dataset

The results of the performance evaluations of the five generalized ML models on the independent test dataset are described in Table 10. The performance of the best model (RF) is presented in Figure 3, with a graph depicting the predicted values of the removal efficiencies as the function of the measured values for different HMs. In addition, the residual percentile error plot for the RF model is depicted in Figure 4 for independent test data.

4. Discussion

The current study presents a comprehensive approach to modeling adsorption efficiency. A wide range of ML models was applied to model the experimental adsorption of five toxic heavy metals with ten different adsorbents. As the modeling of an adsorption process involves non-linear feature interactions, the utility of non-linear regression models, namely SVR with polynomial and RBF kernels, RF, and SGB, together with a Bayesian regression approach called BART, was examined in the current study. The RF and SGB were selected as the bagging and boosting algorithms, respectively. Both the RBF and polynomial kernels in the SVR algorithm map the input space to a higher-dimensional feature space in which the data points become linearly separable. Similarly, the three variations of regression trees used in the current study are suitable for non-linear regression tasks. For each toxic metal, two datasets using two different adsorbents were considered, resulting in a total of 10 datasets for the ML experiments. Note that each of these datasets consists of both original and interpolated data points, which were split in an 80:20 ratio of training and test data, respectively. Table 5, Table 6, Table 7, Table 8 and Table 9 report the results of ML modeling of the selected regression models on the 20% independent test data points for each of these 10 datasets. Interestingly, no single learning algorithm stood out across all ten datasets when evaluated with the independent test points (see Table 5, Table 6, Table 7, Table 8 and Table 9). However, the BART algorithm showed the best overall performance, with an average R2 value of 96%. The other two regression tree approaches, SGB and RF, demonstrated the next best performances with average R2 values of 94% and 93%, respectively.
In the case of SVR, the models with the RBF kernel demonstrated slightly better performance (R2 = 93%) than its polynomial counterpart (R2 = 91%). However, an extensive comparative analysis (e.g., finding min, max, and standard deviation) of the performance of these 10 individual models may not be appropriate here, as the 10 datasets used were collected under different experimental setups using 12 different adsorbents and five different metals.
Since a generalized ML model applicable to different adsorption processes does not exist in the literature, we performed the modeling with a strategy that combines diverse datasets into a single learning framework to which different ML algorithms can be applied. This effort provided insight into the generalized predictive power of the ML algorithms for estimating adsorption efficiency irrespective of the HM-AD combination, and into the reliability of the predictions made by the generalized models for the different toxic metals. It also made the comparative analysis of the ML algorithms more meaningful, as all variations in experimental setup, metal, and adsorbent type were brought under a single framework of model development, and all five algorithms were trained on the same set of data points.
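The combining strategy can be sketched as follows. The exact preprocessing used in the study is not detailed in this section, so the row stacking and min-max scaling below are assumptions made for illustration only:

```python
import numpy as np

# Sketch of the pooling strategy: rows from every HM-AD dataset are stacked
# into one design matrix, and min-max scaling puts the heterogeneous process
# parameters on a common [0, 1] range before a single model is trained.
def pool_and_scale(datasets):
    """datasets: list of (X, y) pairs sharing the same IP columns."""
    X = np.vstack([d[0] for d in datasets])
    y = np.concatenate([d[1] for d in datasets])
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    return (X - lo) / span, y

# Two tiny hypothetical datasets (temperature, pH -> removal efficiency).
d1 = (np.array([[20.0, 4.0], [60.0, 7.0]]), np.array([50.0, 80.0]))
d2 = (np.array([[25.0, 2.0], [30.0, 12.0]]), np.array([67.0, 72.0]))
X_all, y_all = pool_and_scale([d1, d2])
```

The error magnitudes in Table 10 (MAE on the order of 0.03) are consistent with a response scaled to a [0, 1] range, which is why the sketch normalizes the pooled matrix.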
The evaluation of the generalized models, presented in Table 10, shows that all of them demonstrate consistent and comparable performances on the training and test datasets. The SVR model with the polynomial kernel performs almost identically to its RBF counterpart. Among these methods, the RF model yielded the best scores on all evaluation metrics (SPCC = 0.989, R2 = 0.988, MAE = 0.007, and RMSE = 0.033). It is important to observe that both the bagging-based (RF) and boosting-based (SGB) regression tree algorithms benefited from their stochastic components, selecting the best possible random subset of predictors (RF) or observations (SGB) for splitting at each node of the regression tree over several iterations of parameter optimization. Both regression tree models captured the non-linearity of the data accurately in estimating the response variable. BART achieved one of the best correlations (SPCC = 0.983 and R2 = 0.969) by imposing regularization on each tree so that it fits only a small portion of the signal, which reduces the bias of the prediction when many such trees are combined over the complete set of training samples. The measured removal efficiencies for the test dataset are plotted against the values predicted by the best-performing RF model in Figure 3. As the metal-specific predictions in Figure 3 show, the RF model is accurate in predicting the removal efficiencies for all the different metals, irrespective of the adsorbent used in the adsorption experiments. The residual error analysis of the RF model is presented in Figure 4, with the range of errors expressed at the percentage level. More than 98% of the test data lie within a ±10% error limit.
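The four evaluation metrics reported in Table 10, together with the ±10% residual band, can be computed as follows (the measured/predicted values below are hypothetical and serve only to exercise the formulas):

```python
import numpy as np
from scipy.stats import spearmanr

# The four metrics used to evaluate the generalized models (Table 10).
def evaluate(y_true, y_pred):
    err = y_true - y_pred
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    r2 = 1.0 - float(np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2))
    spcc, _ = spearmanr(y_true, y_pred)  # rank correlation
    return {"MAE": mae, "RMSE": rmse, "SPCC": float(spcc), "R2": r2}

y_true = np.array([0.40, 0.55, 0.70, 0.85, 0.95])
y_pred = np.array([0.42, 0.53, 0.71, 0.83, 0.96])
metrics = evaluate(y_true, y_pred)

# Share of points within the +/-10% residual band reported for the RF model.
within_10pct = float(np.mean(np.abs(y_pred - y_true) / y_true <= 0.10))
```

Note that RMSE is always at least as large as MAE, which is a useful sanity check on tabulated results.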
A methodology to implement the best-performing RF model is outlined with a block diagram in Figure 5. The model in its current form is directly applicable to predicting the adsorption efficiency for a given set of process conditions. It requires only the design or operating parameters (IPs) as inputs from the user; these parameters serve as the predictor variables from which the model outputs the adsorption efficiency. With the current database, the predictions are limited to the twelve HM-AD pairs used in this study. However, the database can be enriched by adding new experimental measurements for different HM-AD pairs, which will extend the predictive scope of the current model. The AI-based automated methodology is expected to replace the traditional modeling approach, which requires numerous iterations to identify an appropriate model and optimize the values of its coefficients. It will be significantly beneficial to general users, including design and operating engineers as well as management and research personnel.
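The workflow of Figure 5 (the user supplies the design/operating parameters, the trained model returns the predicted efficiency) reduces to a small function. Everything below, data included, is a hypothetical sketch with a scikit-learn RF standing in for the study's R model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Column order of the predictor variables (hypothetical naming).
FEATURES = ["IP1", "IP2", "IP3", "IP4", "IP5", "IP6"]

def train_model(X_db, y_db):
    # Train the RF on the adsorption database (here a toy stand-in).
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_db, y_db)
    return model

def predict_efficiency(model, ip_values):
    """ip_values: dict mapping each IP name to its numeric value."""
    x = np.array([[ip_values[f] for f in FEATURES]])
    return float(model.predict(x)[0])

# Toy database standing in for the study's HM-AD measurements.
rng = np.random.default_rng(3)
X_db = rng.uniform(0.0, 1.0, size=(200, 6))
y_db = 0.5 + 0.4 * X_db[:, 1] - 0.2 * X_db[:, 3]  # fractional efficiency
model = train_model(X_db, y_db)
eff = predict_efficiency(model, dict(zip(FEATURES, [0.5] * 6)))
```

Enriching the database, as suggested above, amounts to appending new rows to `X_db`/`y_db` and retraining; no change to the prediction interface is needed.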

5. Conclusions

State-of-the-art ML algorithms were applied to model the sorption efficiencies of different adsorbents in removing toxic heavy metals (As, Cd, Hg, Cr, and Pb). Specifically, the predictive power of non-ANN approaches was analyzed in an open-source statistical computing framework, R. Probably the most significant contribution of the current study is a generalized ML model that can predict the removal efficiency of five toxic metals with twelve different adsorbents. All ML models were developed using original data and synthetic data produced by interpolating the original measurements with a cubic spline interpolation algorithm. Model assessments using standard evaluation metrics show excellent agreement between the actual and predicted removal efficiencies for both the individual (R2 = 96%) and generalized (R2 = 98.8%) predictive models. The present work provides important insights into the predictive power of non-ANN ML approaches, both for metal- and adsorbent-specific individual learning models and for models in which all data are combined into a single learning framework. With the superior performance and beneficial attributes of the generalized models, the proposed system has high potential to be deployed in industrial production systems. Although the current approach was successfully tested on a set of adsorption systems comprising five toxic heavy metals and twelve varieties of adsorbents, it should be applied to larger datasets to develop a universal model.
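The cubic-spline augmentation mentioned above can be illustrated as follows. The dose-efficiency numbers are invented for illustration and are not the study's measurements:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical single-factor run: measured removal efficiency vs. adsorbent
# dose (values invented for this sketch).
dose = np.array([0.5, 1.0, 1.5, 2.0, 2.5])          # adsorbent dose, mg
removal = np.array([42.0, 61.0, 74.0, 82.0, 86.0])  # measured efficiency, %

# Fit a cubic spline through the measurements and sample it densely to
# generate synthetic points between the measured ones.
spline = CubicSpline(dose, removal)
dose_dense = np.linspace(0.5, 2.5, 21)
removal_dense = spline(dose_dense)
```

The spline passes exactly through the original measurements, so the synthetic points only fill the gaps between them rather than altering the measured data.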

Author Contributions

Conceptualization, N.H. and S.R.; methodology, N.H.; software, N.H.; validation, N.H. and S.R.; formal analysis, N.H. and S.R.; investigation, N.H. and S.R.; resources, N.H. and S.R.; data curation, S.R.; writing—original draft preparation, N.H., S.R., and M.A.-Y.; writing—review and editing, N.H., S.R., M.A.-Y., and M.R.; visualization, N.H. and S.R.; supervision, N.H. and S.R.; project administration, N.H. and S.R.; funding acquisition, N.H., S.R., M.A.-Y., and M.R. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number IFT20063. The authors also acknowledge the Deanship of Scientific Research at King Faisal University for their kind assistance.

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research, College of Computer Science and Information Technology, and College of Engineering at King Faisal University, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Hegazi, H.A. Removal of heavy metals from wastewater using agricultural and industrial wastes as adsorbents. HBRC J. 2013, 9, 276–282. [Google Scholar] [CrossRef]
  2. Gupta, V.K.; Gupta, M.; Sharma, S. Process development for the removal of lead and chromium from aqueous solutions using red mud—An aluminium industry waste. Water Res. 2001, 35, 1125–1134. [Google Scholar] [CrossRef]
  3. Kumar, P.S.; Saravanan, A. Sustainable wastewater treatments in textile sector. In Sustainable Fibres and Textiles; Muthu, S.S., Ed.; Woodhead Publishing: Cambridge, UK, 2017; pp. 323–346. [Google Scholar] [CrossRef]
  4. Peng, B.; Fang, S.; Tang, L.; Ouyang, X.; Zeng, G. Nanohybrid Materials Based Biosensors for Heavy Metal Detection. In Micro and Nano Technologies, Nanohybrid and Nanoporous Materials for Aquatic Pollution Control; Tang, L., Deng, Y., Wang, J., Wang, J., Zeng, G., Eds.; Elsevier: Amsterdam, The Netherlands, 2019; pp. 233–264. [Google Scholar] [CrossRef]
  5. Tasharrofi, S.; Hassani, S.S.; Taghdisian, H.; Sobat, Z. Environmentally friendly stabilized nZVI-composite for removal of heavy metals. In New Polymer Nanocomposites for Environmental Remediation; Hussain, C.M., Mishra, A.K., Eds.; Elsevier: Amsterdam, The Netherlands, 2018; pp. 623–642. [Google Scholar] [CrossRef]
  6. Rhouati, A.; Marty, J.L.; Vasilescu, A. Metal Nanomaterial-Assisted Aptasensors for Emerging Pollutants Detection. In Advanced Nanomaterials; Nikolelis, D.P., Nikoleli, G.P., Eds.; Elsevier: Amsterdam, The Netherlands, 2018; pp. 193–231. [Google Scholar] [CrossRef]
  7. Atieh, M.A.; Ji, Y.; Kochkodan, V. Metals in the Environment: Toxic Metals Removal. Bioinorg. Chem. Appl. 2017, 2017, 4309198. [Google Scholar] [CrossRef] [PubMed]
  8. Jin, L.; Zhang, G.; Tian, H. Current state of sewage treatment in China. Water Res. 2014, 66, 85–98. [Google Scholar] [CrossRef] [PubMed]
  9. Lau, Y.J.; Khan, F.S.A.; Mubarak, N.M.; Lau, S.Y.; Chua, H.B.; Khalid, M.; Abdullah, E.C. Functionalized carbon nanomaterials for wastewater treatment. In Micro and Nano Technologies, Industrial Applications of Nanomaterials; Thomas, S., Grohens, Y., Pottathara, Y.B., Eds.; Elsevier: Amsterdam, The Netherlands, 2019; pp. 283–311. [Google Scholar] [CrossRef]
  10. Järup, L. Hazards of heavy metal contamination. Br. Med. Bull. 2003, 68, 167–182. [Google Scholar] [CrossRef] [PubMed]
  11. Khan, S.; Cao, Q.; Zheng, Y.M.; Huang, Y.Z.; Zhu, Y.G. Health risks of heavy metals in contaminated soils and food crops irrigated with wastewater in Beijing, China. Environ. Pollut. 2008, 152, 686–692. [Google Scholar] [CrossRef] [PubMed]
  12. Schmidt, S.A.; Gukelberger, E.; Hermann, M.; Fiedler, F.; Großmann, B.; Hoinkis, J.; Ghosh, A.; Chatterjee, D.; Bundschuh, J. Pilot study on arsenic removal from groundwater using a small-scale reverse osmosis system towards sustainable drinking water production. J. Hazard. Mater. 2016, 318, 671–678. [Google Scholar] [CrossRef]
  13. Fu, F.; Wang, Q. Removal of heavy metal ions from wastewaters: A review. J. Environ. Manage. 2011, 92, 407–418. [Google Scholar] [CrossRef]
  14. Saleh, T.A.; Sarı, A.; Tuzen, M. Optimization of parameters with experimental design for the adsorption of mercury using polyethylenimine modified activated carbon. J. Environ. Chem. Eng. 2017, 5, 1079–1088. [Google Scholar] [CrossRef]
  15. Benhammou, A.; Yaacoubi, A.; Nibou, L.; Tanouti, B. Adsorption of metal ions onto Moroccan stevensite: Kinetic and isotherm studies. J. Colloid Interface Sci. 2005, 282, 320–326. [Google Scholar] [CrossRef]
  16. Geyikçi, F.; Kılıç, E.; Çoruh, S.; Elevli, S. Modelling of lead adsorption from industrial sludge leachate on red mud by using RSM and ANN. Chem. Eng. J. 2012, 183, 53–59. [Google Scholar] [CrossRef]
  17. Wang, S.; Peng, Y. Natural zeolites as effective adsorbents in water and wastewater treatment. Chem. Eng. J. 2010, 156, 11–24. [Google Scholar] [CrossRef]
  18. Perrich, J.R. Activated Carbon Adsorption for Wastewater Treatment; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar] [CrossRef]
  19. Halder, G.; Dhawane, S.; Barai, P.K.; Das, A. Optimizing chromium (VI) adsorption onto superheated steam activated granular carbon through response surface methodology and artificial neural network. Environ. Prog. Sustain. 2015, 34, 638–647. [Google Scholar] [CrossRef]
  20. Abbas, A.; Al-Amer, A.M.; Laoui, T.; Al-Marri, M.J.; Nasser, M.S.; Khraisheh, M.; Atieh, M.A. Heavy metal removal from aqueous solution by advanced carbon nanotubes: Critical review of adsorption applications. Sep. Purif. Technol. 2016, 157, 141–161. [Google Scholar] [CrossRef]
  21. Davodi, B.; Ghorbani, M.; Jahangiri, M. Adsorption of mercury from aqueous solution on synthetic polydopamine nanocomposite based on magnetic nanoparticles using Box–Behnken design. J. Taiwan Inst. Chem. Engrs. 2017, 80, 363–378. [Google Scholar] [CrossRef]
  22. Fan, M.; Li, T.; Hu, J.; Cao, R.; Wei, X.; Shi, X.; Ruan, W. Artificial neural network modeling and genetic algorithm optimization for cadmium removal from aqueous solutions by reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) composites. Materials 2017, 10, 544. [Google Scholar] [CrossRef]
  23. Singh, D.K.; Verma, D.K.; Singh, Y.; Hasan, S.H. Preparation of CuO nanoparticles using Tamarindus indica pulp extract for removal of As (III): Optimization of adsorption process by ANN-GA. J. Environ. Chem. Eng. 2017, 5, 1302–1318. [Google Scholar] [CrossRef]
  24. Peng, W.; Li, H.; Liu, Y.; Song, S. A review on heavy metal ions adsorption from water by graphene oxide and its composites. J. Mol. Liq. 2017, 230, 496–504. [Google Scholar] [CrossRef]
  25. Mandal, S.; Mahapatra, S.S.; Sahu, M.K.; Patel, R.K. Artificial neural network modelling of As (III) removal from water by novel hybrid material. Process Saf. Environ. Prot. 2015, 93, 249–264. [Google Scholar] [CrossRef]
  26. Minamisawa, M.; Minamisawa, H.; Yoshida, S.; Takai, N. Adsorption behavior of heavy metals on biomaterials. J. Agric. Food Chem. 2004, 52, 5606–5611. [Google Scholar] [CrossRef]
  27. Krishna, D.; Sree, R.P. Artificial neural network and response surface methodology approach for modeling and optimization of chromium (VI) adsorption from waste water using Ragi husk powder. Indian Chem. Eng. 2013, 55, 200–222. [Google Scholar] [CrossRef]
  28. Alimohammadi, M.; Saeedi, Z.; Akbarpour, B.; Rasoulzadeh, H.; Yetilmezsoy, K.; Al-Ghouti, M.A.; Khraisheh, M.; McKay, G. Adsorptive removal of arsenic and mercury from aqueous solutions by eucalyptus leaves. Water Air Soil Pollut. 2017, 228, 429. [Google Scholar] [CrossRef]
  29. Kiran, R.S.; Madhu, G.M.; Satyanarayana, S.V.; Kalpana, P.; Rangaiah, G.S. Applications of Box–Behnken experimental design coupled with artificial neural networks for biosorption of low concentrations of cadmium using Spirulina (Arthrospira) spp. Resour. Effic. Technol. 2017, 3, 113–123. [Google Scholar] [CrossRef]
  30. Inyang, M.I.; Gao, B.; Yao, Y.; Xue, Y.; Zimmerman, A.; Mosa, A.; Pullammanappallil, P.; Ok, Y.S.; Cao, X. A review of biochar as a low-cost adsorbent for aqueous heavy metal removal. Crit. Rev. Environ. Sci. Technol. 2016, 46, 406–433. [Google Scholar] [CrossRef]
  31. Zhu, X.; Wang, X.; Ok, Y.S. The application of machine learning methods for prediction of metal sorption onto biochars. J. Hazard. Mater. 2019, 378, 120727. [Google Scholar] [CrossRef] [PubMed]
  32. Emigdio, Z.; Abatal, M.; Bassam, A.; Trujillo, L.; Juarez-Smith, P.; El Hamzaoui, Y. Modeling the adsorption of phenols and nitrophenols by activated carbon using genetic programming. J. Clean. Prod. 2017, 161, 860–870. [Google Scholar] [CrossRef]
  33. Febrianto, J.; Kosasih, A.N.; Sunarso, J.; Ju, Y.; Indraswati, N.; Ismadji, S. Equilibrium and kinetic studies in adsorption of heavy metals using biosorbent: A summary of recent studies. J. Hazard. Mater. 2009, 162, 616–645. [Google Scholar] [CrossRef]
  34. Vithanage, M.; Rajapaksha, A.U.; Dou, X.; Bolan, N.S.; Yang, J.E.; Ok, Y.S. Surface complexation modeling and spectroscopic evidence of antimony adsorption on ironoxide-rich red earth soils. J. Colloid Interface Sci. 2013, 406, 217–224. [Google Scholar] [CrossRef]
  35. Bhagat, S.K.; Tung, T.M.; Yaseen, Z.M. Development of artificial intelligence for modeling wastewater heavy metal removal: State of the art, application assessment and possible future research. J. Clean. Prod. 2020, 250, 119473. [Google Scholar] [CrossRef]
  36. Sakizadeh, M. Artificial intelligence for the prediction of water quality index in groundwater systems. Model. Earth Syst. Environ. 2016, 2, 8. [Google Scholar] [CrossRef]
  37. Hafsa, N.; Al-Yaari, M.; Rushd, S. Prediction of arsenic removal in aqueous solutions with non-neural network algorithms. Can. J. Chem. Eng. 2020, in press. [Google Scholar] [CrossRef]
  38. Ahmadi, M.; Chen, Z. Machine learning models to predict bottom hole pressure in multi-phase flow in vertical oil production wells. Can. J. Chem. Eng. 2019, 97, 2928–2940. [Google Scholar] [CrossRef]
  39. Guo, Y.; Bartlett, P.; Shawe-Taylor, J.; Williamson, R. Covering numbers for support vector machines. IEEE Trans. Inf. Theory 2002, 48, 239–250. [Google Scholar] [CrossRef]
  40. Durbha, S.; King, R.; Younan, N. Support vector machines regression for retrieval of leaf area index from multiangle imaging spectroradiometer. Remote Sens. Environ. 2007, 107, 348–361. [Google Scholar] [CrossRef]
  41. Omer, G.; Mutanga, O.; Abdel-Rahman, E.; Adam, E. Performance of support vector machines and artificial neural network for mapping endangered tree species using WorldView-2 data in Dukuduku Forest, South Africa. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2015, 8, 4825–4884. [Google Scholar] [CrossRef]
  42. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  43. Čeh, M.; Kilibarda, M.; Lisec, A.; Bajat, B. Estimating the performance of random forest versus multiple regression for predicting prices of the apartments. ISPRS Int. J. Geo-Inf. 2018, 7, 168. [Google Scholar] [CrossRef]
  44. Wei, L.; Yuan, Z.; Zhong, Y.; Yang, L.; Hu, X.; Zhang, Y. An improved gradient boosting regression tree estimation model for soil heavy metal (arsenic) pollution monitoring using hyperspectral remote sensing. Appl. Sci. 2019, 9, 1943. [Google Scholar] [CrossRef]
  45. Cha, Y.; Kim, Y.; Choi, J.; Sthiannopkao, S.; Cho, K. Bayesian modeling approach for characterizing groundwater arsenic contamination in the Mekong River basin. Chemosphere 2016, 143, 50–56. [Google Scholar] [CrossRef]
  46. Yetilmezsoy, K.; Demirel, S.; Vanderbei, R.J. Response surface modeling of Pb (II) removal from aqueous solution by Pistacia vera L.: Box–Behnken experimental design. J. Hazard. Mater. 2009, 171, 551–562. [Google Scholar] [CrossRef]
  47. Podder, M.S.; Majumder, C.B. The use of artificial neural network for modelling of phycoremediation of toxic elements As (III) and As (V) from wastewater using Botryococcus braunii. Spectrochim. Acta A 2016, 155, 130–145. [Google Scholar] [CrossRef]
  48. Won, W.; Lee, K. Adaptive predictive collocation with a cubic spline interpolation function for convection-dominant fixed-bed processes: Application to a fixed-bed adsorption process. Chem. Eng. J. 2011, 166, 240–248. [Google Scholar] [CrossRef]
  49. Aguilera, A.; Morillo, A. Comparative study of different B-spline approaches for functional data. Math. Comput. Model. 2013, 58, 1568–1579. [Google Scholar] [CrossRef]
  50. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2020; Available online: https://www.R-project.org/ (accessed on 20 September 2020).
  51. Rodriguez, J.D.; Perez, A.; Lozano, J.A. Sensitivity analysis of k-fold cross validation in prediction error estimation. IEEE PAMI 2009, 32, 569–575. [Google Scholar] [CrossRef]
Figure 1. An example of artificial neural network (ANN) architecture (6-4-1).
Figure 2. A pseudocode representation of the repeated 10-fold cross-validation (CV) process used for parameter optimization and model selection of ML algorithms.
Figure 3. The predictions of removal efficiencies for the independent test data using the generalized RF model for different metals.
Figure 4. The percentage residual error plot for the generalized RF model estimated removal efficiency for the independent test data.
Figure 5. Presentation of the automated prediction methodology by using the current ML model.
Table 1. Summary of the experimental studies.
| Reference | HM | AD | Variable Inputs | Fixed Inputs | Output | Data Points | Modeling Methodology |
|---|---|---|---|---|---|---|---|
| [19] | Cr(VI) | AD1 | IP1, IP2, IP3, IP4, IP5 | IP6 | OP | 36 | RSM: R2 = 0.9986; ANN: R2 = 0.9911 |
| [27] | Cr(VI) | AD2 | IP2, IP3, IP5 | IP1, IP4, IP6 | OP | 16 | ANN: R2 = 0.996; RSM: R2 = 0.993 |
| [46] | Pb(II) | AD3 | IP2, IP3, IP4 | IP1, IP5, IP6 | OP | 17 | RSM: R2 = 0.98383 |
| [16] | Pb(II) | AD4 | IP2, IP4, IP5 | IP1, IP3, IP6 | OP | 15 | ANN: R2 = 0.898; RSM: R2 = 0.672 |
| [21] | Hg(II) | AD5 | IP2, IP3, IP4 | IP1, IP5, IP6 | OP | 15 | LI: R2 = 0.991; FI: R2 = 0.989; RSM: R2 = 0.9871 |
| [28] | Hg(II) | AD6 | IP2, IP3, IP4, IP5 | IP1, IP6 | OP | 30 | RSM: R2 = 0.984; FI: R2 = 0.9849; LI: R2 = 0.9802; DRI: R2 = 0.9293; TI: R2 = 0.8769 |
| [29] | Cd(II) | AD7 | IP2, IP3, IP5, IP6 | IP1, IP4 | OP | 27 | FI: R2 = 0.998; LI: R2 = 0.969; ANN: R2 = 0.965; RSM: R2 = 0.760 |
| [29] | Cd(II) | AD8 | IP2, IP3, IP5, IP6 | IP1, IP4 | OP | 27 | FI: R2 = 0.994; ANN: R2 = 0.967; RSM: R2 = 0.962; LI: R2 = 0.953 |
| [29] | Cd(II) | AD9 | IP2, IP3, IP5, IP6 | IP1, IP4 | OP | 27 | ANN: R2 = 0.9955; FI: R2 = 0.979; RSM: R2 = 0.974; LI: R2 = 0.967 |
| [22] | Cd(II) | AD10 | IP1, IP2, IP3, IP4 | IP5, IP6 | OP | 29 | ANN: R2 = 0.9999; LI: R2 = 0.9909; FI: R2 = 0.9852; RSM: R2 = 0.9826; DRI: R2 = 0.8226 |
| [23] | As(III) | AD11 | IP1, IP2, IP3, IP5 | IP4, IP6 | OP | 31 | ANN: R2 = 0.9994; LI: R2 = 0.997; FI: R2 = 0.805 |
| [25] | As(III) | AD12 | IP1, IP2, IP3, IP4, IP5, IP6 | - | OP | 105 | ANN: R2 = 0.975 |
Table 2. Statistical presentation of the data.
| HM-AD | Parameter (Unit) | Average | Maximum | Minimum | Standard Deviation |
|---|---|---|---|---|---|
| Cr(VI)-AD1 | IP1 (°C) | 25.0 | 48.8 | 1.2 | 9.4 |
| Cr(VI)-AD1 | IP2 (-) | 6.0 | 10.8 | 1.2 | 1.9 |
| Cr(VI)-AD1 | IP3 (mg/L) | 150.0 | 268.9 | 31.1 | 47.0 |
| Cr(VI)-AD1 | IP4 (min) | 50.0 | 73.8 | 26.2 | 9.4 |
| Cr(VI)-AD1 | IP5 (mg) | 1.2 | 2.2 | 0.3 | 0.4 |
| Cr(VI)-AD1 | IP6 (rpm) | 150.0 | 150.0 | 150.0 | 0.0 |
| Cr(VI)-AD1 | OP (%) | 71.2 | 96.3 | 39.7 | 10.6 |
| Cr(VI)-AD2 | IP1 (°C) | 25.0 | 25.0 | 25.0 | 0.0 |
| Cr(VI)-AD2 | IP2 (-) | 2.0 | 3.0 | 1.0 | 0.8 |
| Cr(VI)-AD2 | IP3 (mg/L) | 19.3 | 25.0 | 2.0 | 4.9 |
| Cr(VI)-AD2 | IP4 (min) | 120.0 | 120.0 | 120.0 | 0.0 |
| Cr(VI)-AD2 | IP5 (mg) | 3.9 | 60.9 | 1.6 | 10.6 |
| Cr(VI)-AD2 | IP6 (rpm) | 180.0 | 180.0 | 180.0 | 0.0 |
| Cr(VI)-AD2 | OP (%) | 67.0 | 72.7 | 59.2 | 4.0 |
| Pb(II)-AD3 | IP1 (°C) | 30.0 | 30.0 | 30.0 | 0.0 |
| Pb(II)-AD3 | IP2 (-) | 3.8 | 5.5 | 2.0 | 1.2 |
| Pb(II)-AD3 | IP3 (mg/L) | 27.5 | 50.0 | 5.0 | 17.8 |
| Pb(II)-AD3 | IP4 (min) | 62.5 | 120.0 | 5.0 | 45.5 |
| Pb(II)-AD3 | IP5 (mg) | 1000.0 | 1000.0 | 1000.0 | 0.0 |
| Pb(II)-AD3 | IP6 (rpm) | 250.0 | 250.0 | 250.0 | 0.0 |
| Pb(II)-AD3 | OP (%) | 76.0 | 97.3 | 26.5 | 22.5 |
| Pb(II)-AD4 | IP1 (°C) | 23.0 | 23.0 | 23.0 | 0.0 |
| Pb(II)-AD4 | IP2 (-) | 5.0 | 7.0 | 3.0 | 1.5 |
| Pb(II)-AD4 | IP3 (mg/L) | 32.1 | 32.1 | 32.1 | 0.0 |
| Pb(II)-AD4 | IP4 (min) | 32.5 | 60.0 | 5.0 | 20.8 |
| Pb(II)-AD4 | IP5 (mg) | 5.6 | 10.0 | 1.3 | 3.3 |
| Pb(II)-AD4 | IP6 (rpm) | 150.0 | 150.0 | 150.0 | 0.0 |
| Pb(II)-AD4 | OP (%) | 80.6 | 96.8 | 38.8 | 20.9 |
| Hg(II)-AD5 | IP1 (°C) | 20.0 | 20.0 | 20.0 | 0.0 |
| Hg(II)-AD5 | IP2 (-) | 4.0 | 7.0 | 1.0 | 2.3 |
| Hg(II)-AD5 | IP3 (mg/L) | 60.0 | 100.0 | 20.0 | 30.2 |
| Hg(II)-AD5 | IP4 (min) | 240.0 | 420.0 | 60.0 | 136.1 |
| Hg(II)-AD5 | IP5 (mg) | 10.0 | 10.0 | 10.0 | 0.0 |
| Hg(II)-AD5 | IP6 (rpm) | 400.0 | 400.0 | 400.0 | 0.0 |
| Hg(II)-AD5 | OP (%) | 32.7 | 41.0 | 20.5 | 6.3 |
| Hg(II)-AD6 | IP1 (°C) | 25.0 | 25.0 | 25.0 | 0.0 |
| Hg(II)-AD6 | IP2 (-) | 6.0 | 9.0 | 3.0 | 1.1 |
| Hg(II)-AD6 | IP3 (mg/L) | 2.7 | 3.9 | 0.5 | 0.5 |
| Hg(II)-AD6 | IP4 (min) | 47.5 | 90.0 | 5.0 | 15.8 |
| Hg(II)-AD6 | IP5 (mg) | 1.5 | 2.5 | 0.5 | 0.3 |
| Hg(II)-AD6 | IP6 (rpm) | 120.0 | 120.0 | 120.0 | 0.0 |
| Hg(II)-AD6 | OP (%) | 92.6 | 94.7 | 78.5 | 4.2 |
| Cd(II)-AD7 | IP1 (°C) | 25.0 | 25.0 | 25.0 | 0.0 |
| Cd(II)-AD7 | IP2 (-) | 7.0 | 8.0 | 6.0 | 0.7 |
| Cd(II)-AD7 | IP3 (mg/L) | 0.0 | 0.0 | 0.0 | 0.0 |
| Cd(II)-AD7 | IP4 (min) | 6.0 | 6.0 | 6.0 | 0.0 |
| Cd(II)-AD7 | IP5 (mg) | 0.2 | 0.2 | 0.1 | 0.0 |
| Cd(II)-AD7 | IP6 (rpm) | 14.0 | 16.0 | 12.0 | 1.4 |
| Cd(II)-AD7 | OP (%) | 62.3 | 73.3 | 56.6 | 3.8 |
| Cd(II)-AD8 | IP1 (°C) | 25.0 | 25.0 | 25.0 | 0.0 |
| Cd(II)-AD8 | IP2 (-) | 7.0 | 8.0 | 6.0 | 0.7 |
| Cd(II)-AD8 | IP3 (mg/L) | 0.0 | 0.0 | 0.0 | 0.0 |
| Cd(II)-AD8 | IP4 (min) | 6.0 | 6.0 | 6.0 | 0.0 |
| Cd(II)-AD8 | IP5 (mg) | 0.2 | 0.2 | 0.1 | 0.0 |
| Cd(II)-AD8 | IP6 (rpm) | 14.0 | 16.0 | 12.0 | 1.4 |
| Cd(II)-AD8 | OP (%) | 66.2 | 79.2 | 58.2 | 5.7 |
| Cd(II)-AD9 | IP1 (°C) | 25.0 | 25.0 | 25.0 | 0.0 |
| Cd(II)-AD9 | IP2 (-) | 7.0 | 8.0 | 6.0 | 0.7 |
| Cd(II)-AD9 | IP3 (mg/L) | 0.0 | 0.0 | 0.0 | 0.0 |
| Cd(II)-AD9 | IP4 (min) | 6.0 | 6.0 | 6.0 | 0.0 |
| Cd(II)-AD9 | IP5 (mg) | 0.2 | 0.2 | 0.1 | 0.0 |
| Cd(II)-AD9 | IP6 (rpm) | 14.0 | 16.0 | 12.0 | 1.4 |
| Cd(II)-AD9 | OP (%) | 69.9 | 82.5 | 61.8 | 5.6 |
| Cd(II)-AD10 | IP1 (°C) | 30.0 | 40.0 | 20.0 | 6.5 |
| Cd(II)-AD10 | IP2 (-) | 6.0 | 7.0 | 5.0 | 0.7 |
| Cd(II)-AD10 | IP3 (mg/L) | 30.0 | 40.0 | 20.0 | 6.5 |
| Cd(II)-AD10 | IP4 (min) | 20.0 | 30.0 | 10.0 | 6.5 |
| Cd(II)-AD10 | IP5 (mg) | 30.0 | 30.0 | 30.0 | 0.0 |
| Cd(II)-AD10 | IP6 (rpm) | 200.0 | 200.0 | 200.0 | 0.0 |
| Cd(II)-AD10 | OP (%) | 60.1 | 77.3 | 44.3 | 8.7 |
| As(III)-AD11 | IP1 (°C) | 40.0 | 60.0 | 20.0 | 8.9 |
| As(III)-AD11 | IP2 (-) | 7.0 | 12.0 | 2.0 | 2.2 |
| As(III)-AD11 | IP3 (mg/L) | 1000.0 | 1900.0 | 100.0 | 402.5 |
| As(III)-AD11 | IP4 (min) | 270.0 | 270.0 | 270.0 | 0.0 |
| As(III)-AD11 | IP5 (mg) | 75.0 | 135.0 | 15.0 | 26.8 |
| As(III)-AD11 | IP6 (rpm) | 100.0 | 100.0 | 100.0 | 0.0 |
| As(III)-AD11 | OP (%) | 76.2 | 92.7 | 48.2 | 12.3 |
| As(III)-AD12 | IP1 (°C) | 38.5 | 60.0 | 20.0 | 16.3 |
| As(III)-AD12 | IP2 (-) | 7.5 | 10.0 | 4.0 | 2.4 |
| As(III)-AD12 | IP3 (mg/L) | 23.2 | 50.0 | 10.0 | 15.7 |
| As(III)-AD12 | IP4 (min) | 62.3 | 90.0 | 30.0 | 23.4 |
| As(III)-AD12 | IP5 (mg) | 7733.3 | 10,000.0 | 6000.0 | 1761.0 |
| As(III)-AD12 | IP6 (rpm) | 162.1 | 180.0 | 120.0 | 23.8 |
| As(III)-AD12 | OP (%) | 76.6 | 98.9 | 50.0 | 13.9 |
| Overall (all 12 HM-AD pairs) | IP1 (°C) | 30.0 | 60.0 | 1.2 | 11.9 |
| Overall (all 12 HM-AD pairs) | IP2 (-) | 6.0 | 12.0 | 1.0 | 2.3 |
| Overall (all 12 HM-AD pairs) | IP3 (mg/L) | 102.6 | 1900.0 | 0.0 | 261.1 |
| Overall (all 12 HM-AD pairs) | IP4 (min) | 78.7 | 420.0 | 5.0 | 78.9 |
| Overall (all 12 HM-AD pairs) | IP5 (mg) | 1737.0 | 10,000.0 | 0.0 | 3281.4 |
| Overall (all 12 HM-AD pairs) | IP6 (rpm) | 178.7 | 800.0 | 12.0 | 178.1 |
| Overall (all 12 HM-AD pairs) | OP (%) | 68.1 | 98.9 | 0.9 | 21.3 |
Table 3. The machine learning (ML) models, hyperparameter names, and corresponding optimized values after cross-validated training (the last column includes R packages used for different ML models).
| Model | Hyperparameter Names | R Package |
|---|---|---|
| Random Forest | [mtry] | randomForest |
| SVR-RBF Kernel | [sigma, C] | kernlab |
| SVR-Polynomial Kernel | [degree, scale, C] | kernlab |
| Stochastic Gradient Boosting | [n.trees, interaction.depth] | gbm |
| Bayesian Additive Regression Tree | [num_trees] | bartMachine |
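For reference, the repeated 10-fold cross-validated tuning of Table 3's hyperparameters can be sketched with scikit-learn, where RF's [mtry] corresponds to `max_features`, the number of predictors tried at each split. The data and hyperparameter grid below are synthetic assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RepeatedKFold

# Synthetic regression data standing in for the combined adsorption dataset.
rng = np.random.default_rng(7)
X = rng.uniform(0.0, 1.0, size=(120, 6))
y = X[:, 0] + 2.0 * X[:, 4] + rng.normal(0.0, 0.05, size=120)

# Repeated 10-fold CV over a grid of max_features values (mtry analog).
cv = RepeatedKFold(n_splits=10, n_repeats=2, random_state=7)
search = GridSearchCV(
    RandomForestRegressor(n_estimators=50, random_state=7),
    param_grid={"max_features": [1, 2, 3, 4, 5, 6]},
    cv=cv,
    scoring="r2",
)
search.fit(X, y)
best_mtry = search.best_params_["max_features"]
```

The study's R workflow with `caret`-style repeated CV follows the same pattern: one grid per model, one resampling scheme, and the best configuration selected by mean held-out score.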
Table 4. Data set statistics (Combined dataset).
| Combined Dataset (Five Metals) | Percentage | No. Data Points |
|---|---|---|
| Training | 80% | 2476 |
| Test | 20% | 619 |
| Total | 100% | 3095 |
Table 5. The performances of five (5) machine learning algorithms (MLA) models on independent test data (20%) of As (III) datasets.
| Metal | Algorithm | MAE | RMSE | SPcorr | R2 |
|---|---|---|---|---|---|
| As(III) 1 | SVR-Poly | 2.42 | 5.43 | 0.91 | 0.84 |
| As(III) 1 | Stochastic Gradient Boosting | 1.51 | 3.13 | 0.97 | 0.93 |
| As(III) 1 | SVR-RBF | 2.41 | 5.30 | 0.92 | 0.84 |
| As(III) 1 | Random Forest | 1.36 | 3.53 | 0.96 | 0.93 |
| As(III) 1 | Bayesian Additive Regression Tree | 1.33 | 4.18 | 0.98 | 0.97 |
| As(III) 2 | SVR-Poly | 3.32 | 6.08 | 0.89 | 0.80 |
| As(III) 2 | Stochastic Gradient Boosting | 2.71 | 5.67 | 0.90 | 0.81 |
| As(III) 2 | SVR-RBF | 3.38 | 5.89 | 0.89 | 0.80 |
| As(III) 2 | Random Forest | 2.72 | 5.92 | 0.89 | 0.80 |
| As(III) 2 | Bayesian Additive Regression Tree | 2.57 | 5.83 | 0.89 | 0.79 |
Table 6. The performances of five (5) MLA models on independent test data (20%) of Cr(VI) datasets.
| Metal | Algorithm | MAE | RMSE | SPcorr | R2 |
|---|---|---|---|---|---|
| Cr(VI) 1 | SVR-Poly | 0.38 | 1.08 | 0.94 | 0.89 |
| Cr(VI) 1 | Stochastic Gradient Boosting | 1.51 | 3.13 | 0.97 | 0.93 |
| Cr(VI) 1 | SVR-RBF | 0.49 | 1.14 | 0.94 | 0.89 |
| Cr(VI) 1 | Random Forest | 1.36 | 3.53 | 0.96 | 0.93 |
| Cr(VI) 1 | Bayesian Additive Regression Tree | 0.10 | 0.15 | 0.99 | 0.99 |
| Cr(VI) 2 | SVR-Poly | 2.16 | 3.84 | 0.97 | 0.95 |
| Cr(VI) 2 | Stochastic Gradient Boosting | 2.04 | 4.80 | 0.96 | 0.92 |
| Cr(VI) 2 | SVR-RBF | 1.62 | 3.04 | 0.98 | 0.96 |
| Cr(VI) 2 | Random Forest | 1.60 | 4.65 | 0.96 | 0.92 |
| Cr(VI) 2 | Bayesian Additive Regression Tree | 1.21 | 4.0 | 0.97 | 0.94 |
Table 7. The performances of five (5) MLA models on independent test data (20%) of Cd(II) datasets.
| Metal | Algorithm | MAE | RMSE | SPcorr | R2 |
|---|---|---|---|---|---|
| Cd(II) 1 | SVR-Poly | 1.06 | 1.77 | 0.97 | 0.95 |
| Cd(II) 1 | Stochastic Gradient Boosting | 0.58 | 1.32 | 0.98 | 0.97 |
| Cd(II) 1 | SVR-RBF | 0.95 | 1.39 | 0.98 | 0.97 |
| Cd(II) 1 | Random Forest | 0.66 | 2.00 | 0.96 | 0.92 |
| Cd(II) 1 | Bayesian Additive Regression Tree | 0.65 | 1.60 | 0.99 | 0.98 |
| Cd(II) 2 | SVR-Poly | 2.44 | 5.42 | 0.96 | 0.92 |
| Cd(II) 2 | Stochastic Gradient Boosting | 2.05 | 5.07 | 0.96 | 0.93 |
| Cd(II) 2 | SVR-RBF | 2.0 | 3.59 | 0.98 | 0.97 |
| Cd(II) 2 | Random Forest | 1.63 | 5.18 | 0.96 | 0.92 |
| Cd(II) 2 | Bayesian Additive Regression Tree | 1.16 | 3.22 | 0.98 | 0.97 |
Table 8. The performances of five (5) MLA models on independent test data (20%) of Hg(II) datasets.
| Metal | Algorithm | MAE | RMSE | SPcorr | R2 |
|---|---|---|---|---|---|
| Hg(II) 1 | SVR-Poly | 0.54 | 0.95 | 0.97 | 0.95 |
| Hg(II) 1 | Stochastic Gradient Boosting | 0.29 | 0.61 | 0.99 | 0.98 |
| Hg(II) 1 | SVR-RBF | 0.42 | 0.90 | 0.98 | 0.96 |
| Hg(II) 1 | Random Forest | 0.11 | 0.38 | 0.99 | 0.99 |
| Hg(II) 1 | Bayesian Additive Regression Tree | 0.24 | 0.78 | 0.99 | 0.98 |
| Hg(II) 2 | SVR-Poly | 0.61 | 1.67 | 0.94 | 0.88 |
| Hg(II) 2 | Stochastic Gradient Boosting | 0.26 | 0.75 | 0.98 | 0.97 |
| Hg(II) 2 | SVR-RBF | 1.13 | 1.99 | 0.95 | 0.91 |
| Hg(II) 2 | Random Forest | 0.23 | 0.85 | 0.95 | 0.97 |
| Hg(II) 2 | Bayesian Additive Regression Tree | 0.14 | 0.30 | 0.99 | 0.99 |
Table 9. The performances of five (5) MLA models on independent test data (20%) of Pb(II) datasets.
| Metal | Algorithm | MAE | RMSE | SPcorr | R2 |
|---|---|---|---|---|---|
| Pb(II) 1 | SVR-Poly | 2.29 | 3.47 | 0.98 | 0.97 |
| Pb(II) 1 | Stochastic Gradient Boosting | 1.46 | 1.37 | 0.98 | 0.96 |
| Pb(II) 1 | SVR-RBF | 1.96 | 3.59 | 0.98 | 0.97 |
| Pb(II) 1 | Random Forest | 0.92 | 3.14 | 0.98 | 0.96 |
| Pb(II) 1 | Bayesian Additive Regression Tree | 0.61 | 1.37 | 0.99 | 0.99 |
| Pb(II) 2 | SVR-Poly | 1.13 | 1.99 | 1.0 | 1.0 |
| Pb(II) 2 | Stochastic Gradient Boosting | 0.90 | 2.21 | 0.99 | 0.99 |
| Pb(II) 2 | SVR-RBF | 2.29 | 3.47 | 1.0 | 1.0 |
| Pb(II) 2 | Random Forest | 0.18 | 0.42 | 0.99 | 0.99 |
| Pb(II) 2 | Bayesian Additive Regression Tree | 0.69 | 2.78 | 0.99 | 0.99 |
Table 10. Training and independent test performances of five generalized models.
| Model | MAE (Train) | RMSE (Train) | SPCC (Train) | R2 (Train) | MAE (Test) | RMSE (Test) | SPCC (Test) | R2 (Test) |
|---|---|---|---|---|---|---|---|---|
| SVR-Poly | 0.0276 | 0.046 | 0.977 | 0.976 | 0.0278 | 0.052 | 0.972 | 0.970 |
| SGB | 0.0247 | 0.043 | 0.981 | 0.979 | 0.0249 | 0.047 | 0.979 | 0.976 |
| SVR-RBF | 0.0267 | 0.043 | 0.981 | 0.978 | 0.0273 | 0.050 | 0.976 | 0.973 |
| RF | 0.004 | 0.015 | 0.997 | 0.997 | 0.007 | 0.033 | 0.989 | 0.988 |
| BART | 0.023 | 0.048 | 0.990 | 0.974 | 0.025 | 0.054 | 0.983 | 0.969 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.