Article

Application of Artificial Neural Networks for Producing an Estimation of High-Density Polyethylene

by Akbar Maleki 1,2, Mostafa Safdari Shadloo 3,4,5,* and Amin Rahmat 6

1 Department for Management of Science and Technology Development, Ton Duc Thang University, Ho Chi Minh City, Vietnam
2 Faculty of Applied Sciences, Ton Duc Thang University, Ho Chi Minh City, Vietnam
3 CORIA-UMR 6614, CNRS-University & INSA of Rouen, Normandie University, 76000 Rouen, France
4 Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
5 Faculty of Electrical—Electronic Engineering, Duy Tan University, Da Nang 550000, Vietnam
6 School of Chemical Engineering, University of Birmingham, Birmingham B15 2TT, UK
* Author to whom correspondence should be addressed.
Polymers 2020, 12(10), 2319; https://doi.org/10.3390/polym12102319
Submission received: 8 September 2020 / Revised: 30 September 2020 / Accepted: 6 October 2020 / Published: 10 October 2020
(This article belongs to the Special Issue High-Performance Polyethylene)

Abstract

Polyethylene is among the most widely used thermoplastics across a broad range of applications, and it is produced by several commercially available technologies. Since Ziegler–Natta catalysts generate polyolefins with broad molecular weight and copolymer composition distributions, this type of catalyst system was considered in simulating the polymerization process. The ethylene index (EIX) is the critical controlling variable that indicates product characteristics. Because the EIX is difficult to measure, its estimation poses one of the greatest challenges in production practice. To resolve this problem, artificial neural networks (ANNs) are utilized in the present paper to predict the EIX from simply measured variables of the system. Specifically, the EIX is modeled as a function of pressure, ethylene flow, hydrogen flow, 1-butene flow, catalyst flow, and TEA (triethylaluminium) flow. The estimation was accomplished via Multi-Layer Perceptron, Radial Basis, Cascade Feed-forward, and Generalized Regression Neural Networks. According to the results, the Multi-Layer Perceptron model clearly outperformed the other ANN models. Based on our findings, this model can predict production levels with R2 (regression coefficient), MSE (mean square error), AARD% (average absolute relative deviation percent), and RMSE (root mean square error) of 0.89413, 0.02217, 0.4213, and 0.1489, respectively.


1. Introduction

Polyethylene is among the most widely used thermoplastics across a broad range of applications. Although producing polyethylene requires considerable financial investment, the end consumer receives a very inexpensive, often disposable product. Improvements in polymer fabrication procedures aimed at reducing manufacturing costs therefore remain an active topic of research, development, and process expansion [1].
High-quality, low-cost petrochemical products are increasingly in demand for many applications, including modern HDPE (high-density polyethylene) markets [2]. Polyethylene is manufactured using common process technologies including high-pressure autoclave, high-pressure tubular, slurry (suspension), gas phase, and solution processes. The gas phase process in particular is one of the most frequently used technologies in the production of HDPE and other polyolefins. Various reactors are applied in ethylene polymerization, ranging from simple autoclaves and steel piping to CSTRs (continuously stirred tank reactors) and vertical fluidized beds [3]. The main incentives for this innovation are eliminating the need to remove the catalyst after the reaction and producing the polymer in a form suitable for handling and storage. In gas phase reactors, polymerization occurs at the surface of the catalyst and within the polymer matrix, which is swollen with monomers throughout the polymerization [4].
Although catalysts for polymerizing ethylene are often heterogeneous, homogeneous catalysts are also used in some processes. Currently, four kinds of catalysts exist for polymerizing ethylene: Ziegler–Natta, Phillips, metallocene, and late transition metal catalysts [2]. Only the first three types have commercial applications; the last is still under examination and development. Ziegler–Natta catalysts vary greatly but are usually TiCl4 supported on MgCl2 and are the catalysts most commonly employed for industrial polyethylene production [5].
In HDPE production practices, the ethylene index (EIX) is the critical controlling variable that indicates product characteristics. To fulfill the diverse and strict requirements for HDPE products, uniform properties, including the EIX, must be preserved throughout grade-change operations.
Several approaches to polyethylene production estimation and correlation exist in the literature. Khare et al. [6] introduced steady-state and dynamic models of the industrial HDPE slurry polymerization process. The authors utilized data from parallel reactors to model the reactors in series. They fitted the kinetic parameters by first considering a single-site catalyst and then using the outcome to optimize the kinetic parameters for a multi-site catalyst. Neto et al. [7] designed a dynamic model of the LLDPE polymerization process based on a multi-site catalyst, developed for a single slurry reactor only.
Among the many studies aimed at improving polyethylene productivity, modeling approaches stand superior to experiments because of several key factors, including time and running costs, test feasibility, and safety concerns [8,9,10,11,12]. As noted in previous publications, modeling approaches comprise regression models, mathematical models, and artificial intelligence models. Nevertheless, the accuracy of mathematical models is dubious, particularly when they must deal with the large uncertainties of polyethylene manufacture. Regression models, by contrast, can still predict the monthly average behavior of production systems reasonably well. Because the productive status of such systems must be modeled accurately, artificial neural network (ANN) models have been used to achieve this objective, owing to their ability to handle highly uncertain data. ANNs integrate industrial data to predict the rate of production, but only a few investigations have successfully applied ANN models to such systems, using a diverse range of techniques such as the MLPNN (multi-layer perceptron), CFNN (cascade forward), RBFNN (radial basis function), and GRNN (general regression) neural networks [13,14], among others.
The MLP is the neural model with the widest prediction applicability, consisting of multiple layers, namely input, hidden layer(s), and output; the hidden layer(s) can be described more precisely as layers of nodes [15]. The MLPNN is the earliest and simplest ANN topology, and its weight links are updated during the learning step.
The RBFNN is a common feedforward neural network that has been demonstrated to be capable of global estimation without the local minima problem. It also possesses a simple structure and a rapid learning algorithm in comparison with other neural networks [16]. Although multiple activation functions are available for radial basis neurons, the Gaussian function is by far the most popular. An RBF network must be trained before use, which is commonly done in two stages. The first stage selects centers from among the applied data; the second uses ordinary least squares for the linear estimation of a single weighting vector. Self-organized center selection is the most widespread learning approach for choosing the RBF centers.
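To make the two-stage training concrete, the following is a minimal sketch in Python/NumPy, with random sampling standing in for the self-organized center selection; the function names and the toy configuration are illustrative, not from the paper.

```python
import numpy as np

def gaussian_rbf(X, centers, spread):
    # Gaussian activations from pairwise squared distances to the centers
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spread ** 2))

def train_rbf(X, y, n_centers=3, spread=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Stage 1: choose centers among the training data (random sampling here,
    # approximating the self-organized selection described above)
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    # Stage 2: ordinary least squares for the linear output weights
    Phi = np.hstack([gaussian_rbf(X, centers, spread), np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w

def predict_rbf(X, centers, w, spread=1.0):
    Phi = np.hstack([gaussian_rbf(X, centers, spread), np.ones((len(X), 1))])
    return Phi @ w
```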
Construction of the CFNN topology begins with the input and output neurons. Because the output neurons are present in the network from the start, new neurons are added to the network one at a time; the network, in turn, attempts to increase the correlation between outputs and inputs by comparing the network residual with the newly measured error. This procedure continues until the network error becomes sufficiently small, which explains why it is labeled a cascade [17]. The CFNN generally comprises three major layers, namely the input, hidden, and output layers. The variables in the hidden layers are multiplied by the weights (computed in the creation phase to reduce the prediction error) and, together with the bias (1.0), are added to the sum entering the neuron. The resulting value then passes through a transfer function to produce the output value.
The GRNN is a type of supervised network that works on the basis of a probabilistic model and is able to produce continuous outputs. It is a robust instrument for non-linear regression analysis based on the approximation of probability density functions using the Parzen window technique [18]. The GRNN architecture essentially needs no iterative training process such as the back-propagation learning algorithm. A GRNN can estimate arbitrary functions between output and input datasets directly from the training data.
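Since the GRNN output is a Parzen-window weighted average of the training targets, it can be sketched in a few lines; this is a generic formulation, and the default spread of 4.81 merely echoes the value later found for the GR model in Table 7.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread=4.81):
    # Squared distances from each query point to every training point
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2.0 * spread ** 2))   # Gaussian Parzen window
    # Weighted average of training targets; no iterative training is needed
    return (K @ y_train) / K.sum(axis=1)
```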
The RBF and GRNN topologies are mainly used with rather small input datasets. In contrast to the common neural network topology, every neuron in the CFNN depends on all neurons of the preceding layers. Additionally, the CFNN can scale to a broad extent provided that sufficient memory capacity is available for the input data.
As revealed by the literature review, ANN models have been applied to predicting production-rate performance; nonetheless, only the MLP model had previously been utilized to forecast the production rate [19]. The novelty of the present study is to utilize and compare the performance of several models, including the GRNN and RBF models, which were not previously used for predicting the production rate. Accordingly, the current research mainly aims to introduce and assess a model for predicting the rate of polyethylene fabrication. The novelty of our model lies in its capability to predict system productivity while taking uncertainty into account.

2. Methods

2.1. HDPE Process

The HDPE plant comprises two process lines, in which the polymerization reaction is carried out. Figure 1 illustrates the representative gas phase polymerization procedure for producing HDPE considered in this research. Each process line involves two polymerization reactors. The polymerization reaction is highly exothermic, with a heat of reaction of around 1000 kcal/kg of ethylene.
There is, therefore, a need for suitable cooling systems that remove nearly 80% of the heat of the polymerization process. Co-monomer (1-butene or a higher alpha-olefin), ethylene, an activator, hydrogen, hexane, and a catalyst, as well as continually recycled mother liquor, are supplied to the reactors as reactants. Generally, the slurry phase occupies almost 90–95% of the reactor volume. As the reaction pressure builds up, the polyethylene slurry is transferred to the next process apparatus and the reactor level is kept within an allowable range. Separating the reaction slurry in a centrifugal separator yields a cake containing diluent, which is then removed with hot nitrogen gas in a dryer. Thereafter, suitable additives are added depending on the final use. After underwater pelletizing, the pellets are dried, homogenized, and cooled.

2.2. Artificial Neural Networks

The ANN is an AI (artificial intelligence) method, defined as an information processing model inspired by the way the human nervous system processes information [21,22,23]. ANNs are capable of identifying patterns and learning from their interplay with the environment [24,25,26,27,28]. An ANN is constructed from three major parts, the input, output, and hidden layer(s), all of which comprise parallel units named neurons [29]. The neurons are coupled by numerous weight links, allowing information to be transferred between the layers. The ANN model essentially depends on two key steps for predicting the response of different systems: the training phase and the testing phase. In the training phase, the neurons receive the inputs over their incoming connections and combine them through a specific operation with the output to find the best values for the weight links. Records of the association between output and input variables are thus obtained for forecasting new data. In the testing phase, the system performance is tested on a held-out portion of the input data, and the predicted data are compared with the real data. The principal benefit of the ANN model is its ability to solve sophisticated problems that cannot easily be solved by traditional models; it can also solve problems that lack an algorithmic solution or whose algorithmic solutions are too sophisticated to define.
Independent variables from external sources are processed in the input layer by several mathematical operators, and the resulting values are sent to the hidden layers. The output neuron(s), in turn, determine the dependent variables. The output value of any ANN structure can be defined as follows:
$$\mathrm{Output} = f\left(\sum_{i=1}^{n} w_i x_i + b\right) \tag{1}$$
where $f$ is the activation function, $b$ is the bias value, and $w_i$ is the weight of input $x_i$. Figure 2 presents the schematic diagram of the proposed neural network for simulating the EIX.
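As a minimal illustration of Equation (1), the snippet below evaluates a single neuron with a logistic (logsig) activation; the weights and inputs are arbitrary illustrative numbers.

```python
import numpy as np

def neuron_output(x, w, b, f=lambda s: 1.0 / (1.0 + np.exp(-s))):
    # Output = f(sum_i w_i * x_i + b), here with a logistic (logsig) activation
    return f(np.dot(w, x) + b)

# Example with three inputs and arbitrary illustrative weights
print(neuron_output(np.array([0.2, 0.5, 0.1]), np.array([1.0, -2.0, 0.5]), 0.3))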

2.3. Accuracy Assessment of AI Models

The current study designed various AI-based approaches with diverse topologies in order to select the best model based on predictive accuracy. This serves as a selection paradigm for the network configuration: the number of hidden layers and neurons, the spread factor, and the training algorithm. The dependence of the predictions on the number of neurons in the hidden layer was also examined. The performance of the ANN models is quantified by R2 (regression coefficient), MSE (mean square error), AARD% (average absolute relative deviation percent), and RMSE (root mean square error), calculated respectively as follows:
$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(Y_{i,\mathrm{act}} - Y_{i,\mathrm{pred}}\right)^{2} \tag{2}$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(Y_{i,\mathrm{act}} - Y_{i,\mathrm{pred}}\right)^{2}} \tag{3}$$

$$\mathrm{AARD\%} = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{Y_{i,\mathrm{act}} - Y_{i,\mathrm{pred}}}{Y_{i,\mathrm{act}}}\right| \times 100 \tag{4}$$

$$R^{2} = \frac{\sum_{i=1}^{N}\left(Y_{i,\mathrm{act}} - \bar{Y}_{\mathrm{act}}\right)^{2} - \sum_{i=1}^{N}\left(Y_{i,\mathrm{act}} - Y_{i,\mathrm{pred}}\right)^{2}}{\sum_{i=1}^{N}\left(Y_{i,\mathrm{act}} - \bar{Y}_{\mathrm{act}}\right)^{2}} \tag{5}$$
where $Y_{i,\mathrm{act}}$ is the actual value and $Y_{i,\mathrm{pred}}$ the predicted value; $N$ and $\bar{Y}_{\mathrm{act}}$ denote the number of data points and the mean of the actual values, respectively.
The predicted data are also subjected to statistical analysis to evaluate these indices for the various parameters. The MSE measures the squared deviation between predicted and actual values, while the sign of the individual residuals indicates overestimation (positive) or underestimation (negative) of a parameter. The RMSE likewise expresses model efficiency in terms of the difference between predicted and real data: a large RMSE indicates a high deviation between the predicted and real data, and vice versa. The R2 index quantifies how closely the predicted values track the actual data points.
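The four indices of Equations (2)–(5) translate directly into code; the following NumPy sketch mirrors those definitions.

```python
import numpy as np

def mse(y_act, y_pred):
    return np.mean((y_act - y_pred) ** 2)

def rmse(y_act, y_pred):
    return np.sqrt(mse(y_act, y_pred))

def aard_percent(y_act, y_pred):
    # Average absolute relative deviation, in percent
    return np.mean(np.abs((y_act - y_pred) / y_act)) * 100.0

def r2(y_act, y_pred):
    ss_tot = np.sum((y_act - np.mean(y_act)) ** 2)
    ss_res = np.sum((y_act - y_pred) ** 2)
    return (ss_tot - ss_res) / ss_tot
```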

3. Results and Discussion

This section first summarizes the actual databank gathered from an industrial polyethylene petrochemical company and identifies the significant independent variables using the Pearson correlation matrix. It then determines the best structure of each model and compares the precision of the different models. The section concludes by selecting the best model and analyzing the results.

3.1. Industrial Database

To investigate the EIX, eleven independent input variables, namely temperature, operating pressure, level, loop flow, ethylene flow, hydrogen flow, 1-butene flow, hydrogen concentration, 1-butene concentration, catalyst flow, and TEA (triethylaluminium) flow (inputs 1 to 11, denoted by X1 to X11, respectively), and the EIX response (denoted by Y) were collected. Table 1 presents a summary of the industrial data used in this work.
According to the industrial databank, 93 data points were gathered at steady-state conditions. The trained neural network requires validation to determine the precision of the introduced model. The network performance can be analyzed by cross-validation on an unseen dataset: a fraction of the dataset (e.g., 15%) is preserved for validation and the rest is used for training. After the training phase, a correlation analysis is performed between the data forecasted by the ANN topology and the measured data.
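A simple hold-out split consistent with this description (and with the 85:15 division reported later) might look as follows; the function name and the seed are illustrative.

```python
import numpy as np

def split_train_test(X, y, test_fraction=0.15, seed=0):
    # Hold out a fraction of the 93 steady-state points for testing,
    # and train on the remainder (85:15 division, as in the paper)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(round(test_fraction * len(X)))
    test, train = idx[:n_test], idx[n_test:]
    return X[train], y[train], X[test], y[test]
```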
All ANN models were developed in MATLAB® with the Levenberg–Marquardt optimization algorithm. The choice of training algorithm and neuron transfer function also contributes substantially to model precision. As researchers have shown, the Levenberg–Marquardt (LM) algorithm produces quicker responses for regression-type problems across most facets of neural networks [31,32]. The LM training algorithm has most often been reported to offer significantly higher efficiency, faster convergence, and better accuracy than other training algorithms.

3.2. Scaling the Data

To enhance the rate of convergence in the training step and to avoid parameter saturation in the intended ANNs, all actual data were mapped onto the interval [0.01, 0.99]. Data were normalized by Equation (6):
$$V_{\mathrm{normal}} = 0.01 + \frac{V - V_{\min}}{V_{\max} - V_{\min}} \times (0.99 - 0.01) \tag{6}$$
where $V$ denotes an independent or dependent variable, $V_{\mathrm{normal}}$ its normalized value, and $V_{\max}$ and $V_{\min}$ the maximum and minimum values of that variable.
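Equation (6) and its inverse are straightforward to implement; the sketch below assumes per-variable minima and maxima such as those listed in Table 1.

```python
def normalize(v, v_min, v_max):
    # Map a variable onto [0.01, 0.99], as in Equation (6)
    return 0.01 + (v - v_min) / (v_max - v_min) * (0.99 - 0.01)

def denormalize(v_norm, v_min, v_max):
    # Inverse mapping back to the physical range
    return v_min + (v_norm - 0.01) / (0.99 - 0.01) * (v_max - v_min)
```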

3.3. Independent Variable Selection

The dependence between two variables can be investigated mathematically through the correlation matrix, whose coefficients usually range from −1 to +1. The signs of these coefficients indicate whether two variables are correlated directly or inversely, whereas the magnitude defines the strength of their association. Our research employed a multivariate AI-based method with the Pearson correlation test to estimate the degree of relationship between each pair of variables [23]. Figure 3 displays the correlation coefficient values for all possible pairs of variables.
The Pearson correlation coefficient is used as a variable-ranking criterion for choosing appropriate inputs for the neural network [33,34]. The values delivered by the Pearson method reveal the type and intensity of the association between every variable pair: values of −1 and +1 represent the strongest inverse and the strongest direct association, respectively, and the coefficient takes a value of zero when the given variables have no association. The independent variables take non-zero correlation coefficients, which justifies their selection. The greatest consideration is given to the absolute values, because these indicate the strongest associations. The values of the Pearson correlation coefficient for each input are presented in Table 2.
Accordingly, this examination confirmed that Input2, Input5, Input6, Input7, Input10, and Input11 showed the strongest dependencies, while the remaining inputs presented only weak associations. Hence, it is possible to model the polyethylene EIX as a function of pressure, ethylene flow, hydrogen flow, 1-butene flow, catalyst flow, and TEA flow. We therefore seek a smart model for the following relation:
$$\mathrm{Output} = \mathrm{function}\left(\mathrm{Input}\,2,\ \mathrm{Input}\,5,\ \mathrm{Input}\,6,\ \mathrm{Input}\,7,\ \mathrm{Input}\,10,\ \mathrm{Input}\,11\right) \tag{7}$$
Consequently, if a specific transformation of the dependent variable maximizes the AAPC (average of absolute Pearson's coefficient), that transformation can be inferred to yield the most reliable association between the dependent and independent variables. The input variables of each model were selected using the Pearson correlation coefficient; the transformations considered and their coefficients are presented in Table 3.
From this table, it can be concluded that it is better to model the output raised to the power of 12 than the output itself. In the end, the dependent variable is recovered by inverse transformation for comparison with the actual values.
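A sketch of this transformation search is given below: it computes the Pearson coefficient of each selected input against a candidate power of the output and picks the exponent with the largest AAPC, as in Table 3. The helper names are illustrative.

```python
import numpy as np

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def aapc(X_sel, y):
    # Average of the absolute Pearson coefficients over the selected inputs
    return np.mean([abs(pearson(X_sel[:, j], y)) for j in range(X_sel.shape[1])])

def best_power_transform(X_sel, y, powers):
    # Pick the exponent k maximizing AAPC between the inputs and y**k.
    # Fractional/negative powers are valid here because the EIX is positive.
    scores = {k: aapc(X_sel, y ** k) for k in powers}
    return max(scores, key=scores.get), scores
```

With a grid of powers such as 15, 14, ..., −15 (and exp), this procedure would reproduce the selection of the power-12 transformation reported in Table 3, assuming the same data.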

3.4. Configuration Selection for Different ANN Approaches

In this work, the EIX is predicted using a suitable ANN model obtained by a logical procedure. As mentioned earlier, the number of hidden neurons has a major influence on network performance. Most related investigations obtain the number of neurons by trial and error. Large training and generalization errors may occur when the number of hidden neurons is below the optimum, whereas too many hidden neurons may result in over-fitting and considerable variance. It is therefore necessary to determine the optimum number of hidden neurons to achieve the best network performance.
Subsequently, the ANN approaches were developed, and the MLP network was compared in terms of performance with the CF, RBF, and GR neural networks. The numerical validation is based on the AARD%, R2, MSE, and RMSE observed between actual and estimated data. According to the literature, the capability of an MLP network with one hidden layer has been proven [35]; as such, an MLP network with a single hidden layer is used in the analysis.
It is noted that the number of training data points should be at least twice the number of biases and weights. As a result, for an MLP with one dependent and six independent variables, the number of hidden neurons $N$ is bounded as:
$$2 \times (8N + 1) \leq 64\ \text{(training data points)} \;\Rightarrow\; N \leq 4 \tag{8}$$
Therefore, this number can range from 1 to 4 (the highest acceptable number) in this network, and 50 networks were trained for each number of hidden neurons. The best configuration of the hidden neurons in the MLP model is presented in Table 4.
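The neuron-count sweep can be reproduced in outline with scikit-learn, noting that its MLPRegressor does not offer Levenberg–Marquardt, so L-BFGS is substituted here; the 50 random restarts per candidate follow the paper's procedure, while the function name is illustrative.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def select_hidden_neurons(X_train, y_train, X_test, y_test,
                          candidates=(1, 2, 3, 4), restarts=50):
    best = {}
    for n in candidates:
        mses = []
        for seed in range(restarts):  # 50 independent trainings per size
            net = MLPRegressor(hidden_layer_sizes=(n,), activation='logistic',
                               solver='lbfgs', max_iter=2000, random_state=seed)
            net.fit(X_train, y_train)
            mses.append(mean_squared_error(y_test, net.predict(X_test)))
        best[n] = min(mses)
    # Return the neuron count with the lowest test MSE, plus all scores
    return min(best, key=best.get), best
```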
The MLP network with three hidden neurons and a 6-3-1 structure was determined to be the most appropriate model. The MSE values of the MLP network with various numbers of hidden layer neurons are presented in Figure 4. The data reveal that three neurons are optimal, owing to the highest value of R2 (0.89413) and the lowest value of MSE (0.02217). According to Figure 4 and Table 4, the total MSE of the MLP model is 0.07184 with a single neuron in the hidden layer and reaches its minimum (0.02217) with three neurons. Moreover, Figure 5 presents a comparison between the industrial datasets and the values predicted by the optimum MLP network. The fit performance of every trained MLP with minimum MSE was determined on the basis of its R2 value.

3.5. Other Types of ANN

To find an appropriate model for evaluating the EIX, different artificial neural network topologies must be compared on the basis of their performance. Therefore, the MLP approach developed with the optimum configuration was evaluated against the other ANN models (GR, CF, and RBF) in terms of predictive accuracy. The sensitivity results for selecting the best number of hidden neurons are presented in Table 5, Table 6 and Table 7. The number of hidden neurons in the other ANNs was determined in the same way as for the MLP model.
In the GR network, the hidden neurons are not a design variable; instead, the spread value must be tuned. The spread value was therefore varied from 0.1 to 10 in steps of 0.1, and 50 different GR networks were compared using the statistical indices. The MSE of the GR model is minimal (0.10808) at a spread value of 4.81.
For the present task, the best model is the one with the lowest MSE and AARD%. Table 8 clearly reveals that the MLP model predicts the EIX more accurately than the other ANN models: its MSE (0.02217) is less than the MSE obtained with the CF (0.03914), GR (0.10808), and RBF (0.09255) models. These findings confirm the superiority of the MLP model for predicting the EIX compared to the other ANN models.
The selected MLP model, trained by the Levenberg–Marquardt algorithm with a 6-3-1 structure, uses the logsig transfer function in both the hidden and output layers. This model was chosen from 600 models (200 MLPNN, 150 CFNN, 50 GRNN, and 200 RBFNN models) on the basis of four statistical indices: AARD%, MSE, RMSE, and R2.
Table 9 summarizes the weight and bias values of the proposed MLP model. The MLP was trained on the training dataset by adjusting the biases and weights, and the validity of the trained MLP was assessed on the training and testing (independent) datasets. The optimal ratio for segregating the data is 85:15.

4. Procedure for Simple Usage of the MLP Model

The proposed MLP model can be applied step by step as follows (a Python sketch of the full procedure is given after this list):

1. Normalize all independent variables onto the interval [0.01, 0.99] using Equation (9) and arrange them as a 6×1 vector:
$$V_{\mathrm{normal}} = 0.01 + \frac{V - V_{\min}}{V_{\max} - V_{\min}} \times (0.99 - 0.01) \tag{9}$$
2. Multiply the first six columns of Table 9 by the vector obtained in step 1.
3. Add the 7th column of Table 9 to the values obtained in step 2.
4. Substitute each element of step 3 into the following equation to calculate NOHL: NOHL = 1/[1 + exp(−values obtained in step 3)].
5. Multiply the transpose of the 8th column of Table 9 by the values obtained in step 4.
6. Add the value of the last column of Table 9, i.e., 0.69286, to the value obtained in step 5.
7. Substitute the value obtained in step 6 into the following equation to calculate NOOL: NOOL = 1/[1 + exp(−value obtained in step 6)].
8. Apply the inverse transformation NOOL^(1/12).
9. Map the value from the previous step onto the actual range of the dependent variable, i.e., [24.3, 26.9], using: Predicted variable = (step 8 value − 0.01) × 2.6531 + 24.3.
10. The value obtained in step 9 is the estimate of the dependent variable produced by the proposed MLP approach.
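The ten steps above can be collected into a single function. Below is a sketch, assuming the weight and bias values parse from the flattened Table 9 as shown and using the input ranges of Table 1; the example input vector is purely illustrative.

```python
import numpy as np

# Weight/bias values as parsed from Table 9 (rows = the 3 hidden neurons,
# columns = Input2, Input5, Input6, Input7, Input10, Input11); this parsing
# of the flattened table is an assumption.
W1 = np.array([[  37.5223,   25.0769, -1007.47,    -3.6761, -330.513, -168.349],
               [ 272.4066,  -70.6262,   -94.8238,   4.1889, -150.163,    2.1714],
               [-1095.55,  1458.0187,     0.1148,  370.1613, -539.429,  -12.1413]])
b1 = np.array([176.6308, -16.8291, 9.7747])   # hidden-layer biases
W2 = np.array([-0.94888, 2.7494, -2.276])     # hidden-to-output weights
b2 = 0.69286                                  # output-layer bias

# [min, max] ranges of the six selected inputs, from Table 1
RANGES = np.array([[21.8, 22.6],    # Input2: pressure (bar)
                   [1.8, 15.6],     # Input5: ethylene flow (kg/h)
                   [6.1, 25.0],     # Input6: hydrogen flow (kg/h)
                   [17.0, 970.0],   # Input7: 1-butene flow (kg/h)
                   [2.0, 3.7],      # Input10: catalyst flow (kg/h)
                   [1.8, 3.7]])     # Input11: TEA flow (kg/h)

def logsig(s):
    return 1.0 / (1.0 + np.exp(-s))

def predict_eix(raw_inputs):
    # Step 1: normalize each input onto [0.01, 0.99] (Equation (9))
    v = np.asarray(raw_inputs, dtype=float)
    x = 0.01 + (v - RANGES[:, 0]) / (RANGES[:, 1] - RANGES[:, 0]) * 0.98
    # Steps 2-4: hidden-layer outputs (NOHL)
    nohl = logsig(W1 @ x + b1)
    # Steps 5-7: output-layer value (NOOL)
    nool = logsig(W2 @ nohl + b2)
    # Step 8: inverse of the power-12 transformation
    y = nool ** (1.0 / 12.0)
    # Steps 9-10: map back to the actual EIX range [24.3, 26.9]
    return (y - 0.01) * 2.6531 + 24.3

print(predict_eix([22.1, 8.0, 12.0, 400.0, 2.8, 2.5]))  # illustrative inputs
```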
Finally, the proposed MLP model was checked statistically for reliability using the leverage approach. Figure 6 shows the outlier detection for the MLP model using the Williams plot method. It is clear from Figure 6 that only 4 data points (red dots) of the 75 available are difficult to model; in fact, about 95% of the data lie in the valid range (blue squares).
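For reference, a common way to compute the quantities behind a Williams plot (leverage from the hat matrix, standardized residuals, and the warning leverage h* = 3(p + 1)/n) is sketched below; the paper does not spell out its exact formulation, so this is a generic version.

```python
import numpy as np

def williams_plot_data(X, residuals):
    # Leverage values are the diagonal of the hat matrix H = X (X'X)^-1 X'.
    H = X @ np.linalg.pinv(X.T @ X) @ X.T
    leverage = np.diag(H)
    std_res = residuals / residuals.std()
    # Points with leverage > h* or |standardized residual| > 3 are
    # commonly flagged as outliers in the Williams plot.
    h_star = 3.0 * (X.shape[1] + 1) / X.shape[0]
    return leverage, std_res, h_star
```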

5. Conclusions

In HDPE processes, the EIX is the critical controlling variable indicative of product quality. A large number of approaches are available for the estimation and correlation of the EIX, since it is difficult to measure directly and behaves nonlinearly. The current paper applied several prediction schemes, including MLPNN (multi-layer perceptron), CFNN (cascade forward), RBFNN (radial basis function), and GRNN (general regression) neural networks, and compared their findings to identify the best performance. The superior performance of the present MLP model over the other models was demonstrated on the same case-study dataset for predicting the EIX. The results clearly suggest that three hidden neurons is the best number of neurons for the proposed MLP model; for this model, the MSE and R2 values over the total dataset are 0.02217 and 0.89413, respectively. The main advantages of using ANNs for the EIX are the ability to predict the production rate quickly and to relate the characteristics of high-density polyethylene to the network inputs. Although these models use complex computational algorithms, fast convergence together with accuracy is not guaranteed in every case.

Nomenclature

b  Bias
N  Number of actual data points
Xi  ith input variable
Y  Response
w  Weight

Abbreviations

AARD%  Average absolute relative deviation percent
AI  Artificial intelligence
ANN  Artificial neural network
CFNN  Cascade feedforward neural network
CSTR  Continuously stirred tank reactor
EIX  Ethylene index
HDPE  High-density polyethylene
MLP  Multi-layer perceptron
MLPNN  Multi-layer perceptron neural network
MSE  Mean squared error
RBF  Radial basis function neural network
RMSE  Root mean square error
R2  Regression coefficient
TEA  Triethylaluminium

Subscripts/Superscripts

pred  Predicted value
act  Actual value
max  Maximum value
min  Minimum value
normal  Normalized value

Author Contributions

Data curation, A.R.; Formal analysis, A.M.; Investigation, A.M.; Methodology, M.S.S.; Supervision, M.S.S.; Writing—original draft, A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nowlin, T.E. Business and Technology of the Global Polyethylene Industry: An In-depth Look at the History, Technology, Catalysts, and Modern Commercial Manufacture of Polyethylene and Its Products; John Wiley & Sons: Hoboken, NJ, USA, 2014.
  2. Malpass, D.B. Industrial Metal Alkyls and Their Use in Polyolefin Catalysts. In Handbook of Transition Metal Polymerization Catalysts; John Wiley & Sons: Hoboken, NJ, USA, 2018; pp. 1–30.
  3. Spalding, M.A.; Chatterjee, A. Handbook of Industrial Polyethylene and Technology: Definitive Guide to Manufacturing, Properties, Processing, Applications and Markets Set; John Wiley & Sons: Hoboken, NJ, USA, 2017.
  4. Patel, R.M. Polyethylene. In Multilayer Flexible Packaging; Elsevier: Amsterdam, The Netherlands, 2016; pp. 17–34.
  5. Kissin, Y.V.; Nowlin, T.E.; Mink, R.I. Supported Titanium/Magnesium Ziegler Catalysts for the Production of Polyethylene. In Handbook of Transition Metal Polymerization Catalysts; John Wiley & Sons: Hoboken, NJ, USA, 2018; pp. 189–227.
  6. Khare, N.P.; Lucas, B.; Seavey, K.C.; Liu, Y.A.; Sirohi, A.; Ramanathan, S.; Lingard, S.; Song, Y.; Chen, C.-C. Steady-State and Dynamic Modeling of Gas-Phase Polypropylene Processes Using Stirred-Bed Reactors. Ind. Eng. Chem. Res. 2004, 43, 884–900.
  7. Neto, A.G.M.; Freitas, M.F.; Nele, M.; Pinto, J.C. Modeling Ethylene/1-Butene Copolymerizations in Industrial Slurry Reactors. Ind. Eng. Chem. Res. 2005, 44, 2697–2715.
  8. Shehzad, N.; Zeeshan, A.; Ellahi, R.; Vafai, K. Convective heat transfer of nanofluid in a wavy channel: Buongiorno's mathematical model. J. Mol. Liq. 2016, 222, 446–455.
  9. Ellahi, R.; Rahman, S.U.; Gulzar, M.M.; Nadeem, S.; Vafai, K. A Mathematical Study of Non-Newtonian Micropolar Fluid in Arterial Blood Flow through Composite Stenosis. Appl. Math. Inf. Sci. 2014, 8, 1567–1573.
  10. Nadeem, S.; Riaz, A.; Ellahi, R.; Akbar, N.S. Mathematical model for the peristaltic flow of Jeffrey fluid with nanoparticles phenomenon through a rectangular duct. Appl. Nanosci. 2013, 4, 613–624.
  11. Hussain, F.; Ellahi, R.; Zeeshan, A. Mathematical Models of Electro-Magnetohydrodynamic Multiphase Flows Synthesis with Nano-Sized Hafnium Particles. Appl. Sci. 2018, 8, 275.
  12. Bhatti, M.; Zeeshan, A.; Ellahi, R.; Shit, G. Mathematical modeling of heat and mass transfer effects on MHD peristaltic propulsion of two-phase flow through a Darcy-Brinkman-Forchheimer porous medium. Adv. Powder Technol. 2018, 29, 1189–1197.
  13. Liu, W.; Shadloo, M.S.; Tlili, I.; Maleki, A.; Bach, Q.-V. The effect of alcohol–gasoline fuel blends on the engines' performances and emissions. Fuel 2020, 276, 117977.
  14. Zheng, Y.; Shadloo, M.S.; Nasiri, H.; Maleki, A.; Karimipour, A.; Tlili, I. Prediction of viscosity of biodiesel blends using various artificial model and comparison with empirical correlations. Renew. Energy 2020, 153, 1296–1306.
  15. Messikh, N.; Bousba, S.; Bougdah, N. The use of a multilayer perceptron (MLP) for modelling the phenol removal by emulsion liquid membrane. J. Environ. Chem. Eng. 2017, 5, 3483–3489.
  16. Zhao, Z.; Lou, Y.; Chen, Y.; Lin, H.; Li, R.; Yu, G. Prediction of interfacial interactions related with membrane fouling in a membrane bioreactor based on radial basis function artificial neural network (ANN). Bioresour. Technol. 2019, 282, 262–268.
  17. Lashkarbolooki, M.; Vaferi, B.; Shariati, A.; Hezave, A.Z. Investigating vapor–liquid equilibria of binary mixtures containing supercritical or near-critical carbon dioxide and a cyclic compound using cascade neural network. Fluid Phase Equilibria 2013, 343, 24–29.
  18. Ghritlahre, H.K.; Prasad, R.K. Exergetic performance prediction of solar air heater using MLP, GRNN and RBF models of artificial neural network technique. J. Environ. Manag. 2018, 223, 566–575.
  19. Gonzaga, J.; Meleiro, L.A.C.; Kiang, C.; Filho, R.M. ANN-based soft-sensor for real-time process monitoring and control of an industrial polymerization process. Comput. Chem. Eng. 2009, 33, 43–49.
  20. Allemeersch, P. Polymerisation of Ethylene; Walter de Gruyter GmbH: Berlin, Germany, 2015.
  21. Ghaffarian, N.; Eslamloueyan, R.; Vaferi, B. Model identification for gas condensate reservoirs by using ANN method based on well test data. J. Pet. Sci. Eng. 2014, 123, 20–29.
  22. Aghel, B.; Rezaei, A.; Mohadesi, M. Modeling and prediction of water quality parameters using a hybrid particle swarm optimization–neural fuzzy approach. Int. J. Environ. Sci. Technol. 2018, 16, 4823–4832.
  23. Maleki, A.; Elahi, M.; Assad, M.E.H.; Nazari, M.A.; Shadloo, M.S.; Nabipour, N. Thermal conductivity modeling of nanofluids with ZnO particles by using approaches based on artificial neural network and MARS. J. Therm. Anal. Calorim. 2020, 1–12.
  24. Amini, Y.; Gerdroodbary, M.B.; Pishvaie, M.R.; Moradi, R.; Monfared, S.M. Optimal control of batch cooling crystallizers by using genetic algorithm. Case Stud. Therm. Eng. 2016, 8, 300–310.
  25. Esfe, M.H.; Afrand, M.; Yan, W.-M.; Akbari, M. Applicability of artificial neural network and nonlinear regression to predict thermal conductivity modeling of Al2O3–water nanofluids using experimental data. Int. Commun. Heat Mass Transf. 2015, 66, 246–249.
  26. Moayedi, H.; Aghel, B.; Vaferi, B.; Foong, L.K.; Bui, D.T. The feasibility of Levenberg–Marquardt algorithm combined with imperialist competitive computational method predicting drag reduction in crude oil pipelines. J. Pet. Sci. Eng. 2020, 185, 106634.
  27. Shadloo, M.S.; Rahmat, A.; Karimipour, A.; Wongwises, S. Estimation of Pressure Drop of Two-Phase Flow in Horizontal Long Pipes Using Artificial Neural Networks. J. Energy Resour. Technol. 2020, 142, 1–21.
  28. Komeilibirjandi, A.; Raffiee, A.H.; Maleki, A.; Nazari, M.A.; Shadloo, M.S. Thermal conductivity prediction of nanofluids containing CuO nanoparticles by using correlation and artificial neural network. J. Therm. Anal. Calorim. 2019, 139, 2679–2689.
  29. Davoudi, E.; Vaferi, B. Applying artificial neural networks for systematic estimation of degree of fouling in heat exchangers. Chem. Eng. Res. Des. 2018, 130, 138–153.
  30. Da Silva, I.N.; Spatti, D.H.; Flauzino, R.A.; Liboni, L.; Alves, S.F.D.R. Artificial Neural Network Architectures and Training Processes. In Artificial Neural Networks; Springer Science and Business Media: Berlin, Germany, 2016; pp. 21–28.
  31. Kayri, M. Predictive Abilities of Bayesian Regularization and Levenberg–Marquardt Algorithms in Artificial Neural Networks: A Comparative Empirical Study on Social Data. Math. Comput. Appl. 2016, 21, 20.
  32. Zayani, R.; Bouallegue, R.; Roviras, D. Levenberg-Marquardt learning neural network for adaptive predistortion for time-varying HPA with memory in OFDM systems. In Proceedings of the 16th European Signal Processing Conference, Lausanne, Switzerland, 25–29 August 2008; pp. 1–5.
  33. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182.
  34. Wilby, R.; Abrahart, R.J.; Dawson, C. Detection of conceptual model rainfall—Runoff processes inside an artificial neural network. Hydrol. Sci. J. 2003, 48, 163–181.
  35. Kamadinata, J.O.; Tan, L.K.; Suwa, T. Sky image-based solar irradiance prediction methodologies using artificial neural networks. Renew. Energy 2019, 134, 837–845.
Figure 1. A diagram of slurry polymerization for HDPE production [20].
Figure 2. Schematic representation of the used model for neural networks [30].
Figure 3. Pearson correlation coefficient values among independent and dependent variables.
Figure 4. The MSE for the multi-layer perceptron (MLP) network with 1 to 4 hidden neurons (50 networks per neuron) during the training stage.
Figure 5. The efficiency of the optimum MLP model for prediction of all the datasets.
Figure 6. Williams plot for the MLP model.
Table 1. Ranges of the industrial inputs.

Independent Variable | Name | Minimum | Maximum
Temperature (°C) | Input 1 | 77 | 97.8
Pressure (bar) | Input 2 | 21.8 | 22.6
Level (%) | Input 3 | 68 | 77
Loop flow (kg/h) | Input 4 | 646 | 724
Ethylene flow (kg/h) | Input 5 | 1.8 | 15.6
Hydrogen flow (kg/h) | Input 6 | 6.1 | 25
1-butene flow (kg/h) | Input 7 | 17 | 970
Hydrogen concentration (mol%) | Input 8 | 16.02 | 19.03
1-butene concentration (mol%) | Input 9 | 0.49 | 1.38
Catalyst flow (kg/h) | Input 10 | 2 | 3.7
TEA flow (kg/h) | Input 11 | 1.8 | 3.7
Table 2. The Pearson correlation coefficient for each input.

Independent Variable | Pearson Correlation Coefficient
Input1 | −0.0430
Input2 | 0.2452
Input3 | 0.0581
Input4 | 0.0252
Input5 | 0.1606
Input6 | 0.0956
Input7 | −0.1160
Input8 | 0.0052
Input9 | −0.0296
Input10 | 0.0935
Input11 | 0.1075
Table 3. Pearson coefficient values calculated for different transformations between dependent and independent variables.

Transformation | Input11 | Input10 | Input7 | Input6 | Input5 | Input2 | AAPC
Output^15 | 0.09837 | 0.08478 | −0.12881 | 0.0877 | 0.17071 | 0.28188 | 0.14204
Output^14 | 0.09943 | 0.08576 | −0.12845 | 0.08885 | 0.17077 | 0.27997 | 0.1422
Output^13 | 0.10045 | 0.0867 | −0.12801 | 0.08992 | 0.17073 | 0.27795 | 0.14229
Output^12 | 0.10143 | 0.0876 | −0.12750 | 0.09092 | 0.17058 | 0.27584 | 0.14231
Output^11 | 0.10235 | 0.08845 | −0.12690 | 0.09184 | 0.17033 | 0.27361 | 0.14225
Output^10 | 0.10321 | 0.08925 | −0.12623 | 0.09268 | 0.16996 | 0.27128 | 0.1421
Output^2 | 0.1074 | 0.09331 | −0.11754 | 0.09572 | 0.16225 | 0.24851 | 0.13745
Output | 0.10751 | 0.09346 | −0.11603 | 0.09556 | 0.16063 | 0.24516 | 0.13639
Output^0.75 | 0.10753 | 0.09349 | −0.11563 | 0.0955 | 0.1602 | 0.24431 | 0.13611
Output^0.5 | 0.10753 | 0.0935 | −0.11523 | 0.09543 | 0.15976 | 0.24344 | 0.13582
Output^0.25 | 0.10753 | 0.09351 | −0.11483 | 0.09535 | 0.15932 | 0.24258 | 0.13552
Output^0.1 | 0.10753 | 0.09352 | −0.11458 | 0.0953 | 0.15904 | 0.24205 | 0.13534
Output^−0.1 | −0.10752 | −0.09352 | 0.11425 | −0.09523 | −0.15867 | −0.24135 | 0.13509
Output^−0.25 | −0.10751 | −0.09352 | 0.114 | −0.09517 | −0.15839 | −0.24082 | 0.1349
Output^−0.5 | −0.10749 | −0.09351 | 0.11357 | −0.09507 | −0.15791 | −0.23994 | 0.13458
Output^−0.75 | −0.10747 | −0.09350 | 0.11314 | −0.09496 | −0.15743 | −0.23904 | 0.13426
Output^−1 | −0.10743 | −0.09348 | 0.11271 | −0.09484 | −0.15693 | −0.23814 | 0.13392
Output^−2 | −0.10723 | −0.09335 | 0.11091 | −0.09427 | −0.15485 | −0.23449 | 0.13252
Output^−10 | −0.10154 | −0.08867 | 0.09353 | −0.08469 | −0.13296 | −0.20259 | 0.11733
Output^−11 | −0.10034 | −0.08765 | 0.09106 | −0.08289 | −0.12965 | −0.19840 | 0.115
Output^−12 | −0.09903 | −0.08653 | 0.08855 | −0.08097 | −0.12624 | −0.19420 | 0.11259
Output^−13 | −0.09763 | −0.08533 | 0.08599 | −0.07894 | −0.12273 | −0.19000 | 0.11011
Output^−14 | −0.09614 | −0.08405 | 0.08341 | −0.07680 | −0.11914 | −0.18581 | 0.10756
Output^−15 | −0.09456 | −0.08269 | 0.0808 | −0.07457 | −0.11548 | −0.18165 | 0.10496
exp(Output) | 0.08336 | 0.07114 | −0.12809 | 0.07017 | 0.16382 | 0.29757 | 0.13569
Table 4. The procedure for detecting the best configuration for the MLP model.

Hidden Neurons | Dataset | AARD% | MSE | RMSE | R2
1 | Train | 0.6566 | 0.07274 | 0.2697 | 0.6043
1 | Test | 0.7578 | 0.06665 | 0.2582 | 0.35034
1 | Total | 0.6715 | 0.07184 | 0.268 | 0.57942
2 | Train | 0.568 | 0.06603 | 0.257 | 0.66405
2 | Test | 0.7314 | 0.05583 | 0.2363 | 0.65288
2 | Total | 0.592 | 0.06454 | 0.254 | 0.644
3 | Train | 0.3863 | 0.01982 | 0.1408 | 0.91723
3 | Test | 0.6251 | 0.03581 | 0.1892 | 0.53962
3 | Total | 0.4213 | 0.02217 | 0.1489 | 0.89413
4 | Train | 0.3397 | 0.02037 | 0.1427 | 0.90878
4 | Test | 0.8084 | 0.07617 | 0.276 | 0.44131
4 | Total | 0.4085 | 0.02856 | 0.169 | 0.86377
Table 5. Determination of the best hidden neurons for the cascade forward (CF) network.

Hidden Neurons | Dataset | AARD% | MSE | RMSE | R2
1 | Train | 0.5604 | 0.07049 | 0.2655 | 0.62338
1 | Test | 0.6377 | 0.05431 | 0.233 | 0.58962
1 | Total | 0.5717 | 0.06812 | 0.261 | 0.60867
2 | Train | 0.4961 | 0.05227 | 0.2286 | 0.74815
2 | Test | 0.6904 | 0.06022 | 0.2454 | 0.36408
2 | Total | 0.5246 | 0.05344 | 0.2312 | 0.71356
3 | Train | 0.488 | 0.03638 | 0.1907 | 0.83777
3 | Test | 0.5729 | 0.0552 | 0.235 | 0.48687
3 | Total | 0.5004 | 0.03914 | 0.1979 | 0.79892
Table 6. Determination of the best hidden neurons for the radial basis function (RBF) network.

Hidden Neurons | Spread | Dataset | AARD% | MSE | RMSE | R2
1 | 0.41 | Train | 0.762 | 0.10257 | 0.3203 | 0.30383
1 | 0.41 | Test | 0.8574 | 0.06878 | 0.2623 | 0.28658
1 | 0.41 | Total | 0.776 | 0.09762 | 0.3124 | 0.312
2 | 0.81 | Train | 0.7563 | 0.09887 | 0.3144 | 0.42263
2 | 0.81 | Test | 0.762 | 0.05584 | 0.2363 | −0.10195
2 | 0.81 | Total | 0.7571 | 0.09255 | 0.3042 | 0.38919
3 | 1.01 | Train | 0.7311 | 0.09259 | 0.3043 | 0.45482
3 | 1.01 | Test | 0.774 | 0.05724 | 0.2392 | 0.33916
3 | 1.01 | Total | 0.7374 | 0.08741 | 0.2956 | 0.43965
4 | 0.21 | Train | 0.6708 | 0.08324 | 0.2885 | 0.51622
4 | 0.21 | Test | 0.7579 | 0.08244 | 0.2871 | 0.4649
4 | 0.21 | Total | 0.6836 | 0.08312 | 0.2883 | 0.48421
Table 7. Sensitivity analysis on the spread parameter for the general regression (GR) network.

Spread | Dataset | AARD% | MSE | RMSE | R2
4.81 | Train | 0.8361 | 0.10982 | 0.3314 | 0.32248
4.81 | Test | 0.9242 | 0.098 | 0.313 | −0.20907
4.81 | Total | 0.849 | 0.10808 | 0.3288 | 0.22754
Table 8. A performance comparison of ANN models.

Model | Dataset | AARD% | MSE | RMSE | R2
MLP | Train | 0.3863 | 0.01982 | 0.1408 | 0.91723
MLP | Test | 0.6251 | 0.03581 | 0.1892 | 0.53962
MLP | Total | 0.4213 | 0.02217 | 0.1489 | 0.89413
CF | Train | 0.488 | 0.03638 | 0.1907 | 0.83777
CF | Test | 0.5729 | 0.0552 | 0.235 | 0.48687
CF | Total | 0.5004 | 0.03914 | 0.1979 | 0.79892
GR | Train | 0.8361 | 0.10982 | 0.3314 | 0.32248
GR | Test | 0.9242 | 0.098 | 0.313 | −0.20907
GR | Total | 0.849 | 0.10808 | 0.3288 | 0.22754
RBF | Train | 0.7563 | 0.09887 | 0.3144 | 0.42263
RBF | Test | 0.762 | 0.05584 | 0.2363 | −0.10195
RBF | Total | 0.7571 | 0.09255 | 0.3042 | 0.38919
Table 9. Weight and bias coefficients for the proposed MLP model.

Hidden Neuron | Weight (Input2) | Weight (Input5) | Weight (Input6) | Weight (Input7) | Weight (Input10) | Weight (Input11) | Hidden Bias | Hidden-to-Output Weight | Output Bias
1 | 37.5223 | 25.0769 | −1007.47 | −3.6761 | −330.513 | −168.349 | 176.6308 | −0.94888 | 0.69286
2 | 272.4066 | −70.6262 | −94.8238 | 4.1889 | −150.163 | 2.1714 | −16.8291 | 2.7494 | 
3 | −1095.55 | 1458.0187 | 0.1148 | 370.1613 | −539.429 | −12.1413 | 9.7747 | −2.276 | 
