Article

Gaussian Mixture Model-Based Ensemble Kalman Filtering for State and Parameter Estimation for a PMMA Process

Department of Chemical and Materials Engineering, University of Alberta, 12th Floor—Donadeo Innovation Centre for Engineering (ICE), 9211—116 Street, Edmonton, AB T6G 1H9, Canada
*
Author to whom correspondence should be addressed.
Processes 2016, 4(2), 9; https://doi.org/10.3390/pr4020009
Submission received: 24 November 2015 / Accepted: 21 March 2016 / Published: 30 March 2016
(This article belongs to the Special Issue Polymer Modeling, Control and Monitoring)

Abstract

Polymer processes often contain state variables whose distributions are multimodal; in addition, the models for these processes are often complex and nonlinear with uncertain parameters. This presents a challenge for Kalman-based state estimators such as the ensemble Kalman filter. We develop an estimator based on a Gaussian mixture model (GMM) coupled with the ensemble Kalman filter (EnKF) specifically for estimation with multimodal state distributions. The expectation maximization algorithm is used for clustering in the Gaussian mixture model. The performance of the GMM-based EnKF is compared to that of the EnKF and the particle filter (PF) through simulations of a polymethyl methacrylate process, and it is seen that it clearly outperforms the other estimators both in state and parameter estimation. While the PF is also able to handle nonlinearity and multimodality, its lack of robustness to model-plant mismatch affects its performance significantly.

1. Introduction

Polymerization reactors offer unique challenges for process modeling, monitoring, and control. The production of polymers of different grades means that the process conditions are changed relatively often. Product quality specifications (usually expressed in terms of constraints on the properties of the molecular weight distribution) and dynamic operation lead to the need for on-line monitoring and control, which in turn require accurate process models and real-time estimation of the states and parameters of the system. Over the years, the most popular estimator used in nonlinear chemical processes, both in general and for polymerization reactors in particular, has been the extended Kalman filter (EKF) (e.g., [1,2,3,4,5,6,7,8]). However, this estimator involves linearization of the original model at each step, and can be inaccurate for highly nonlinear systems. Our focus in this work is on particle-based estimators, which are derivative-free estimators that use different sampling methods to generate an ensemble of particles representing the distributions of the dynamic states of the system.
The most commonly used estimators based on the use of an ensemble of particles are the ensemble Kalman filter (EnKF) [9], the unscented Kalman filter (UKF) [10,11] and the particle filter (PF) [12]. While the EnKF and the UKF provide only the mean and variance of the posterior distribution of the states (since they use a Gaussian assumption for the distributions), the PF, which works on Bayesian principles, can provide estimates for the full distribution of the states even in situations where the distribution is not Gaussian (which occurs in nonlinear systems) by using a set of particles associated with different weights. In practice, the application of the PF to chemical processes is very recent. Chen et al. [13] compared the performance of the auxiliary particle filter with an EKF for a batch polymethyl methacrylate process to show that it outperformed the EKF in terms of the root mean squared error for state and parameter estimation. Shenoy et al. [14] compared the UKF, EKF, and PF in a case study on a polyethylene reactor simulation to demonstrate that the PF provided more accurate estimation results, but was less robust to plant-model mismatch. Shao et al. [15] compared the performance of the PF, EKF, UKF, and moving horizon estimation for constrained state estimation and showed that the constrained PF provides more accurate estimation results compared to other methods.
An important issue with the PF relates to its performance for high dimensional systems. The ensemble Kalman filter (EnKF), on the other hand, has the advantage of being scalable to high-dimensional systems without a prohibitive increase in the size of the ensemble required; however, as stated earlier, the algorithm is based on the assumption that both the prior and posterior distribution of the states can be approximated by the Gaussian distribution, and it may be unreliable when this assumption is not valid.
Polymerization processes can be of high dimension when they are described using population balance models [16,17], and a multimodal distribution of properties, such as particle size and molecular weight, may be desirable [18,19,20]. This, especially in the presence of model-plant mismatch, creates challenges for both the EnKF and the PF. In addition, the nonlinearity of these systems may lead to multimodality in the state distributions.
Recently, the Gaussian mixture model (GMM) has been combined with the ensemble Kalman filter to create a new category of estimators: Gaussian mixture filters. Bengtsson et al. [21] proposed the GMM to approximate the prior distribution of the states, but the means and variances of the GMM were approximated directly from the ensemble. In [22], Smith proposed the expectation maximization (EM) algorithm to learn the parameters of the prior distribution modeled by the GMM. In the update step, the idea of Kalman-based filtering was extended to the multimodal scenario; however, the posterior distribution is constrained to be a Gaussian distribution. Dovera and Della Rossa [23] used a different update technique and retained the posterior distribution as a GMM.
In this work, we propose an estimator that belongs to the category of Gaussian mixture filters and provides a full state distribution at each time step that is approximated by the GMM. We extend the idea of the EnKF to priors with multimodal features that are described by the GMM. We present results on the application of this estimator to a polymethyl methacrylate (PMMA) process and compare its performance to that of the EnKF and the PF.

2. State Estimation Techniques for Nonlinear Systems

Consider a dynamic nonlinear system represented by:
x_n = f(x_{n-1}, u_{n-1}, θ) + v_n
y_n = H x_n + e_n    (1)
where x_n denotes the hidden states, u_n and y_n are the inputs and outputs of the system, θ represents the parameters in the model, and v_n and e_n are the process noise and measurement noise, respectively.
In this section, we will introduce the particle filter and the ensemble Kalman filter for these systems, and then describe the GMM-based ensemble Kalman filter that we propose to employ. The performance of the three estimators will be compared for the PMMA system in later sections.

2.1. Particle Filter (PF)

The PF employs a sequential Monte Carlo method that uses a set of sampling techniques to generate samples from a sequence of probability distribution functions.
The particle filter approximates the posterior probability p(x_n | y_{1:n}) with a set of N_s particles {x_n^{(i)}}. Each particle is assigned a weight w_n^{(i)}, and the weights sum to unity. Since the probability distribution of the states conditioned on the measurements of the outputs, p(x_n | y_{1:n}), is usually unknown, the particles are drawn from an importance distribution q(x_n | y_{1:n}). The posterior distribution is given by:
p(x_n | y_{1:n}) = Σ_{i=1}^{N_s} w_n^{(i)} δ(x_n - x_n^{(i)})    (2)
where the recursive update of the weights w_n^{(i)} is given by:
w_n^{(i)} = w_{n-1}^{(i)} [p(x_n^{(i)} | x_{n-1}^{(i)}) / q(x_n^{(i)} | x_{n-1}^{(i)}, y_n)] p(y_n | x_n^{(i)})    (3)
In the sequential importance resampling (SIR) version of the PF, we choose q(x_n^{(i)} | x_{n-1}^{(i)}, y_n) = p(x_n^{(i)} | x_{n-1}^{(i)}), so that w_n^{(i)} = w_{n-1}^{(i)} p(y_n | x_n^{(i)}); i.e., we draw particles directly from the prior distribution at time instant n.
The N_s particles at time step n-1 are propagated through the state transition equation x_{n|n-1}^{(i)} = f(x_{n-1|n-1}^{(i)}, u_{n-1}, v_{n-1}^{(i)}) to obtain a new set of particles {x_{n|n-1}^{(i)}}_{i=1}^{N_s} that approximates the prior density p(x_n | y_{1:n-1}) at time instant n. The weight w_n^{(i)} associated with each particle is calculated using Equation (3). A resampling step is then performed on the prior particles {x_{n|n-1}^{(i)}}_{i=1}^{N_s} based on their weights w_n^{(i)} to generate the posterior particles {x_{n|n}^{(i)}}_{i=1}^{N_s}, with the weights of all posterior particles reset to equal values. The full state distribution and its properties can be calculated from the posterior particles.
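The SIR step described above can be sketched as follows. This is a minimal Python sketch, assuming a generic transition function f, a linear measurement matrix H, and Gaussian process and measurement noise; all variable names are illustrative, not the paper's implementation.

```python
import numpy as np

def sir_particle_filter_step(particles, u, y_obs, f, H, Q, R, rng):
    """One SIR step (Equations (2)-(3) with q = p): propagate the particles
    through the transition model, weight them by the measurement likelihood,
    then resample so that all posterior weights become equal.

    particles: (Ns, nx) posterior particles from time n-1; f(x, u) is the
    deterministic part of the transition; Q, R are assumed noise covariances.
    """
    Ns, nx = particles.shape
    # Prediction: draw from p(x_n | x_{n-1}) by adding process noise
    pred = np.array([f(x, u) for x in particles])
    pred = pred + rng.multivariate_normal(np.zeros(nx), Q, size=Ns)
    # Weights proportional to the Gaussian likelihood p(y_n | x_n^(i))
    innov = y_obs - pred @ H.T
    Rinv = np.linalg.inv(R)
    logw = -0.5 * np.einsum('ij,jk,ik->i', innov, Rinv, innov)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Multinomial resampling resets all weights to 1/Ns
    return pred[rng.choice(Ns, size=Ns, p=w)]
```

Because the weights are computed in log space and shifted by their maximum before exponentiation, the step remains numerically stable even when all particles sit far in the tail of the likelihood.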

2.2. Ensemble Kalman Filter (EnKF)

The EnKF was first proposed as a data assimilation technique for highly nonlinear ocean models by Evensen [9], and is a Monte Carlo sampling-based variant of the Kalman filter. Like the PF, it uses an ensemble of particles from which the statistical information of the distribution of the states can be calculated, but it applies the Kalman update. In order to have an explicit analytical expression for the Kalman gain, both the prior and posterior distributions are approximated by Gaussian distributions. The framework of this algorithm is as follows:
At time step n, N_e particles are drawn from the prior distribution to form the prior ensemble {x_{n-1|n-1}^i}_{i=1,…,N_e}. In the prediction step, each member of the ensemble x_{n-1|n-1}^i is propagated through the state transition equation x_{n|n-1}^i = f(x_{n-1|n-1}^i, u_{n-1}, v_{n-1}^i) to obtain its predicted value, thus forming the predicted ensemble {x_{n|n-1}^i}_{i=1,…,N_e}. Corresponding to each member of the ensemble, a predicted observation value is obtained by perturbing the measured output with random measurement error; let {ŷ_{n|n-1}^i}_{i=1,…,N_e} denote the predicted observations.
In the update step, two error matrices are calculated. The error matrix of the predicted state ensemble is defined as:
e_{n|n-1}^i = x_{n|n-1}^i - μ_{n|n-1}^x    (4)
where μ_{n|n-1}^x = (1/N_e) Σ_{i=1}^{N_e} x_{n|n-1}^i.
The error matrix of the predicted measurement ensemble is defined as:
ε_{n|n-1}^i = ŷ_{n|n-1}^i - μ_{n|n-1}^y    (5)
where μ_{n|n-1}^y = (1/N_e) Σ_{i=1}^{N_e} ŷ_{n|n-1}^i.
The cross-covariance between the state prediction ensemble and the measurement ensemble is given in Equation (6), and the covariance matrix of the measurement ensemble is given in Equation (7):
P_{n|n-1}^{e,ε} = (1/(N_e - 1)) Σ_{i=1}^{N_e} e_{n|n-1}^i (ε_{n|n-1}^i)^T    (6)
P_{n|n-1}^{ε,ε} = (1/(N_e - 1)) Σ_{i=1}^{N_e} ε_{n|n-1}^i (ε_{n|n-1}^i)^T    (7)
With these two covariance matrices, the Kalman gain is calculated as:
K = P_{n|n-1}^{e,ε} (P_{n|n-1}^{ε,ε} + R)^{-1}    (8)
where R is the covariance of the measurement noise.
Each member of the ensemble is updated as:
x_{n|n}^i = x_{n|n-1}^i + K(y_n^{obs} - ŷ_{n|n-1}^i)    (9)
where y_n^{obs} is the measured output value at time step n.
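The forecast-and-update cycle of Equations (4)-(9) can be sketched as follows. This is a minimal Python sketch of the update step only, assuming a linear measurement matrix H and Gaussian measurement noise; the names are illustrative.

```python
import numpy as np

def enkf_update(X_pred, y_obs, H, R, rng):
    """Stochastic EnKF update (Equations (4)-(9)): the Kalman gain is built
    from ensemble sample covariances, and each member is nudged toward the
    observation using its own perturbed predicted measurement.

    X_pred: (Ne, nx) predicted ensemble; H: (ny, nx); R: (ny, ny).
    """
    Ne = X_pred.shape[0]
    # Predicted observations, perturbed with random measurement error
    Y_pred = X_pred @ H.T + rng.multivariate_normal(np.zeros(R.shape[0]), R, size=Ne)
    # Error (anomaly) matrices around the ensemble means, Equations (4)-(5)
    E = X_pred - X_pred.mean(axis=0)
    Eps = Y_pred - Y_pred.mean(axis=0)
    # Sample cross- and measurement covariances, Equations (6)-(7)
    P_xy = E.T @ Eps / (Ne - 1)
    P_yy = Eps.T @ Eps / (Ne - 1)
    # Kalman gain (Equation (8)) and member-wise update (Equation (9))
    K = P_xy @ np.linalg.inv(P_yy + R)
    return X_pred + (y_obs - Y_pred) @ K.T
```

Note that only sample covariances of the ensemble are needed, which is what makes the EnKF scalable: the full state covariance is never formed explicitly for the gain beyond these low-rank products.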

2.3. Gaussian Mixture Model-Based Ensemble Kalman Filter (EnKF-GMM)

2.3.1. Expectation Maximization (EM) for Clustering of the Gaussian Mixture Model

The probability distribution function of a random vector x following a finite Gaussian mixture distribution is given by:
p_X(x) = Σ_{j=1}^M π_j N(x; μ_j, P_j)    (10)
subject to the constraints π_j ≥ 0 and Σ_{j=1}^M π_j = 1, where π_j, μ_j, and P_j are the prior probability, mean, and covariance of mode j, and N(x; μ_j, P_j) = (2π)^{-n/2} |P_j|^{-1/2} exp(-(1/2)(x - μ_j)^T P_j^{-1} (x - μ_j)).
Given a set of data {x_i}_{i=1,…,N} randomly generated by a GMM, the expectation maximization (EM) algorithm is used to estimate the parameters of the GMM, θ = {π_1,…,π_M, μ_1,…,μ_M, P_1,…,P_M} [24]. EM is a variant of maximum likelihood estimation for problems with hidden variables or missing data. In this case, the mode identity of each data point is the missing (hidden) variable. Let (c_i)_j be a binary indicator representing the identity of the component that generated x_i. Its value is given by:
(c_i)_j = 1 if data point x_i is generated by component j, and 0 otherwise    (11)
In the EM algorithm, an E-step is performed first to compute the Q function, the expectation of the log-likelihood of the complete data set, by computing the probability of each data point x_i belonging to each component j given the current parameters θ^k estimated in the previous iteration. Specifically, Q(θ | θ^k) = E[L(p(z | θ)) | {x}, θ^k], where {x} is the observed data set; {z} = {c_1, x_1, …, c_N, x_N} is the complete data set consisting of both observed and missing data; c_i is the membership of each data point; and θ^k is the estimate from the last iteration. This becomes:
Q(θ | θ^k) = Σ_{i=1}^N Σ_{j=1}^M p[(c_i)_j | {x}, θ^k] log(π_j N(x_i; μ_j, P_j))    (12)
w_{ij} = p[(c_i)_j | {x}, θ^k] = π_j^k N(x_i; μ_j^k, P_j^k) / Σ_{m=1}^M π_m^k N(x_i; μ_m^k, P_m^k)    (13)
Next, the M-step is performed to maximize the Q function and calculate the corresponding θ^{k+1}:
π_j^{k+1} = N_j / N    (14)
μ_j^{k+1} = (1/N_j) Σ_{i=1}^N w_{ij} x_i    (15)
P_j^{k+1} = (1/N_j) Σ_{i=1}^N w_{ij} (x_i - μ_j^{k+1})(x_i - μ_j^{k+1})^T    (16)
where N_j = Σ_{i=1}^N w_{ij}.
The E-step and the M-step are performed iteratively until the estimates converge. During this process, the problem of singularity may arise when one of the components collapses onto one data point. This usually happens due to over-fitting in the maximum likelihood estimation (MLE). To avoid this problem, one approach is to adopt a Bayesian regularization method [25] to replace the MLE with the maximum a posteriori (MAP) estimate. Based on this method, the update of the covariance is modified to become
P_j^{k+1} = (Σ_{i=1}^N w_{ij} (x_i - μ_j^{k+1})(x_i - μ_j^{k+1})^T + λ I_d) / (N_j + 1)    (17)
where I_d is the n-dimensional identity matrix and λ is a regularization constant determined using validation data [26]. An alternative (ad hoc) method of dealing with singularity is to detect when it occurs, reset the means of all components randomly, and reset the covariances to larger values.
The pseudo-code for the EM algorithm is provided below.
Algorithm 1: Expectation maximization algorithm. Inputs: data set {x_i}_{i=1,…,N}, number of components M, and initial values θ^0 of {π_j}_{j=1,…,M}, {μ_j}_{j=1,…,M}, {P_j}_{j=1,…,M}; set θ^k = θ^0.
EM[{x}, M, θ^k]
while ε ≥ 1e-6
  // E-step
  for i = 1:N
    for j = 1:M
      p[(c_i)_j | x_i, θ^k] = p(x_i | (c_i)_j, θ^k) p((c_i)_j | θ^k) / p(x_i)
    end for
  end for
  // M-step
  for j = 1:M
    π_j^{k+1} = Σ_{i=1}^N p[(c_i)_j | x_i, θ^k] / N
    μ_j^{k+1} = Σ_{i=1}^N p[(c_i)_j | x_i, θ^k] x_i / Σ_{i=1}^N p[(c_i)_j | x_i, θ^k]
    P_j^{k+1} = (Σ_{i=1}^N p[(c_i)_j | x_i, θ^k] (x_i - μ_j^{k+1})(x_i - μ_j^{k+1})^T + λ I_d) / (Σ_{i=1}^N p[(c_i)_j | x_i, θ^k] + 1)
  end for
  ε = ||μ^{k+1} - μ^k||
end while
return θ^{k+1}
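A compact implementation of Algorithm 1, including the regularization term λI_d of Equation (17), might look as follows. This is a Python sketch; the tolerance, λ, and iteration cap are assumed values, not the paper's settings.

```python
import numpy as np

def em_gmm(X, M, means0, covs0, pis0, lam=1e-6, tol=1e-6, max_iter=200):
    """EM for a Gaussian mixture (Algorithm 1), with the MAP-style
    regularization lam * I in the covariance update to avoid singularities.

    X: (N, d) data; M: number of components; means0/covs0/pis0: initial
    parameter values (the theta^0 of Algorithm 1).
    """
    N, d = X.shape
    means, covs, pis = means0.copy(), covs0.copy(), pis0.copy()
    for _ in range(max_iter):
        # E-step: responsibilities w_ij, Equation (13)
        W = np.empty((N, M))
        for j in range(M):
            diff = X - means[j]
            Pinv = np.linalg.inv(covs[j])
            quad = np.einsum('ij,jk,ik->i', diff, Pinv, diff)
            norm = np.sqrt(((2 * np.pi) ** d) * np.linalg.det(covs[j]))
            W[:, j] = pis[j] * np.exp(-0.5 * quad) / norm
        W /= W.sum(axis=1, keepdims=True)
        # M-step: weights, means, regularized covariances, Eqs. (14)-(15), (17)
        Nj = W.sum(axis=0)
        new_means = (W.T @ X) / Nj[:, None]
        pis = Nj / N
        for j in range(M):
            diff = X - new_means[j]
            covs[j] = ((W[:, j, None] * diff).T @ diff + lam * np.eye(d)) / (Nj[j] + 1)
        eps = np.linalg.norm(new_means - means)
        means = new_means
        if eps < tol:
            break
    return pis, means, covs
```

The λI term in the covariance update is what keeps a component from collapsing onto a single data point, at the cost of a small bias toward larger covariances.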

2.3.2. EnKF-GMM Algorithm

In this section, a GMM-based EnKF (EnKF-GMM) filter is proposed to obtain estimates of the full state distribution. As with the particle filter, it also uses a set of particles to represent the posterior probability distribution function (PDF) of the states. The difference is that the PDF is constrained to be a GMM at every time step.
At each time step, the EnKF-GMM has two steps: forecast and update. The forecast step is identical to that of the EnKF. An ensemble of size N, {x_i}_{i=1,…,N}, is drawn from the prior distribution of the states and propagated through the model to obtain a predicted ensemble for the next time step. The EM algorithm is then performed on the predicted ensemble to obtain the estimates of the GMM with M components. Next, the Kalman update is performed based on each component in the GMM to get an ensemble of size N × M. Finally, these ensemble members are combined based on their weights and reduced to a size of N. The details of the algorithmic sequence are as follows:
Forecast:
  • The first portion of the forecast step is to determine the number of components M in the multimodal distribution. M can be determined using the Bayesian or other information criteria [27,28], or using prior knowledge. For example, in reservoir models, petrophysical properties (such as porosity or permeability) are typically related to geological units (facies), and variables inside the facies are characterized by underlying multimodal distributions which are known beforehand [9]. In our work, this information can be considered as prior knowledge if we know the distribution of the process noise.
  • With the knowledge of the process model and the number of components M , the prior ensemble { x i } i = 1 , , N is propagated through the model to get the predicted values of the ensemble { x i f } i = 1 , , N . These are the realizations of the predicted state space x f .
Assuming the predicted state x^f at the forecast step follows a GMM,
p(x^f) = Σ_{j=1}^M τ_j^f p_j(x^f) = Σ_{j=1}^M τ_j^f N(x^f; μ_j^f, P_j^f)    (18)
the EM algorithm is applied to {x_i^f}_{i=1,…,N} to obtain the parameters (τ_j^f, μ_j^f, and P_j^f) of each component j of the prior distribution.
Update:
3. For each component j of the distribution, the Kalman gain matrix is computed by utilizing the membership probability matrix W:
P_{[j]}^f H^T = Σ_{i=1}^N w_{i,j} (x_i^f - μ_j)(H x_i^f - H μ_j)^T / n_j    (19)
H P_{[j]}^f H^T = Σ_{i=1}^N w_{i,j} (H x_i^f - H μ_j)(H x_i^f - H μ_j)^T / n_j    (20)
K_{[j]} = P_{[j]}^f H^T (H P_{[j]}^f H^T + R)^{-1}    (21)
where w_{i,j} = π_j N(x_i; μ_j, P_j) / Σ_{m=1}^M π_m N(x_i; μ_m, P_m), n_j = Σ_{i=1}^N w_{i,j}, and H is the linearized measurement function.
4. In the update step, assuming one Gaussian component j claims ownership of all the ensemble members, the Kalman update can be performed for every ensemble member under component j. This gives an ensemble of size N × M:
x_i^{a,j} = x_i^f + K_{[j]}(d - H x_i^f - e_i)    (22)
5. The N × M ensemble members can be combined into N members by using the membership probability matrix. This gives the final posterior ensemble {x_i^a}_{i=1,…,N}:
x_i^a = Σ_{j=1}^M w_{i,j} x_i^{a,j}    (23)
The mean and covariance of each component of the posterior can be computed as:
μ_j^a = Σ_{i=1}^N w_{i,j} x_i^{a,j} / n_j    (24)
P_{[j]}^a = Σ_{i=1}^N w_{i,j} (x_i^{a,j} - μ_j^a)(x_i^{a,j} - μ_j^a)^T / n_j    (25)
6. The posterior weight of each component of the distribution can be computed based on the observed data d, which contains the measurements y:
τ_j^a = p(μ_j, P_{[j]}^f, R | d) = p(d | μ_j, P_{[j]}^f, R) n_j / Σ_{j=1}^M p(d | μ_j, P_{[j]}^f, R) n_j    (26)
p(d | μ_j, P_{[j]}^f, R) = exp[-(1/2)(d - H μ_j)^T (H P_{[j]}^f H^T + R)^{-1} (d - H μ_j)] / sqrt((2π)^m |H P_{[j]}^f H^T + R|)    (27)
7. The point estimate is given by:
x^a = Σ_{j=1}^M τ_j^a μ_j^a    (28)
The pseudo-code for the EnKF-GMM algorithm is provided below.
Algorithm 2: EnKF-GMM algorithm. Inputs: the initial distribution of x, the total number of particles N, the number of components M, and the number of time steps T; the inputs and observations at each time step are u_n and d_n.
[{x_i^a}_{i=1}^N, {μ_j^a, P_j^a, τ_j^a}_{j=1}^M] = EnKF-GMM[{x_i}_{i=1}^N, d_n]
for n = 1:T
  for i = 1:N
    Draw x_i^f ~ f(x_i, u_{n-1}, v_{n-1}^i)
    Calculate y_i = H x_i^f + e_n^i
  end for
  Apply the EM algorithm on {x_i^f}_{i=1,…,N} using Algorithm 1:
    {τ_j^f, μ_j^f, P_j^f}_{j=1}^M = EM[{x_i^f}_{i=1,…,N}, M, θ^k]
  for j = 1:M
    Calculate the Kalman gain K_{[j]} of component j using Equation (21)
    for i = 1:N
      Calculate the updated particles {x_i^{a,j}}_{i=1}^N for component j using Equation (22)
    end for
    Calculate the parameters of the posterior distribution μ_j^a, P_j^a, τ_j^a using Equations (24)–(26)
  end for
  Combine {x_i^{a,j}}_{i=1}^N to obtain the posterior particles {x_i^a}_{i=1}^N using Equation (23)
  Calculate the point estimate x^a using Equation (28)
end for
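The update portion of Algorithm 2 (Equations (19)-(28)) can be sketched as follows. This is a Python sketch assuming the EM responsibilities W and the component means have already been computed in the forecast step; the Gaussian perturbations e_i and all variable names are illustrative choices, not the paper's implementation.

```python
import numpy as np

def enkf_gmm_update(Xf, W, mus, d, H, R, rng):
    """EnKF-GMM update (Equations (19)-(28)): per-component Kalman gains
    from membership-weighted sample covariances, member-wise updates, and
    a weighted combination into the posterior ensemble and point estimate.

    Xf: (N, nx) predicted ensemble; W: (N, M) EM responsibilities;
    mus: (M, nx) component means; d: observed measurement vector.
    """
    N, nx = Xf.shape
    M = W.shape[1]
    ny = H.shape[0]
    nj = W.sum(axis=0)
    Xa_j = np.empty((M, N, nx))
    tau = np.empty(M)
    for j in range(M):
        dx = Xf - mus[j]
        dy = dx @ H.T
        # Membership-weighted covariances (Eqs. (19)-(20)) and gain (Eq. (21))
        PHt = (W[:, j, None] * dx).T @ dy / nj[j]
        HPHt = (W[:, j, None] * dy).T @ dy / nj[j]
        K = PHt @ np.linalg.inv(HPHt + R)
        # Member-wise Kalman update under component j (Eq. (22))
        e = rng.multivariate_normal(np.zeros(ny), R, size=N)
        Xa_j[j] = Xf + (d - Xf @ H.T - e) @ K.T
        # Unnormalized posterior component weight (Eqs. (26)-(27))
        innov = d - H @ mus[j]
        S = HPHt + R
        tau[j] = nj[j] * np.exp(-0.5 * innov @ np.linalg.solve(S, innov)) \
                 / np.sqrt((2.0 * np.pi) ** ny * np.linalg.det(S))
    tau /= tau.sum()
    # Combine the N*M updated members back into N members (Eq. (23))
    Xa = np.einsum('im,mik->ik', W, Xa_j)
    # Posterior component means and the point estimate (Eqs. (24), (28))
    mu_a = np.array([(W[:, j, None] * Xa_j[j]).sum(axis=0) / nj[j] for j in range(M)])
    return Xa, tau @ mu_a
```

The combination step is where the multimodality is preserved: each particle's posterior value is a responsibility-weighted blend of its M component-wise updates, rather than a single global Kalman correction.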
While the PF and the EnKF-GMM both can, in principle, account for multimodality, the use of the Gaussian mixture model provides the EnKF-GMM with greater flexibility in capturing a wide variety of distributions under varying levels of model-plant mismatch, as will be shown in the results.

3. Results and Discussion

3.1. Mathematical Model of the Methyl Methacrylate (MMA) Polymerization Process

Simulations of a free-radical methyl methacrylate (MMA) polymerization process are used to demonstrate the performance of the estimation method proposed in this paper. The process is assumed to take place in a continuous stirred tank reactor (CSTR), and uses AIBN as the initiator and toluene as the solvent. The mathematical model of this process is described below in Equations (29)–(35), and further details can be found in [29,30]. Parameter values are provided in Table 1. The six states to be estimated include the monomer concentration C M , the initiator concentration C I , the reactor temperature T , the moments of the polymer distribution, D 0 and D 1 , and the jacket temperature T j . Only the temperatures are measured. The number average molecular weight (NAMW), which is the primary quality variable for the process, is defined as the ratio D 1 / D 0 .
dC_m/dt = -(k_p + k_{fm}) C_m P_0 + F(C_{m,in} - C_m)/V    (29)
dC_I/dt = -k_I C_I + (F_I C_{I,in} - F C_I)/V    (30)
dT/dt = (-ΔH) k_p C_m P_0 / (ρ C_p) - (U A / (ρ C_p V))(T - T_j) + F(T_{in} - T)/V    (31)
dD_0/dt = (0.5 k_{tc} + k_{td}) P_0^2 + k_{fm} C_m P_0 - F D_0 / V    (32)
dD_1/dt = M_m (k_p + k_{fm}) C_m P_0 - F D_1 / V    (33)
dT_j/dt = F_{cw}(T_{w0} - T_j)/V_0 + (U A / (ρ_w C_{pw} V_0))(T - T_j)    (34)
P_0 = sqrt(2 f* C_I k_I / (k_{td} + k_{tc}))    (35)
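For reference, the right-hand side of the model equations can be coded as follows. This is a Python sketch only: the parameter dictionary stands in for Table 1, which is not reproduced here, and the Arrhenius form of the rate constants and all placeholder values in the usage example are assumptions, not the paper's values.

```python
import numpy as np

R_GAS = 8.314  # kJ/(kgmol*K), assuming activation energies in kJ/kgmol

def pmma_rhs(x, p):
    """Right-hand side of the MMA CSTR model (Equations (29)-(35)).
    p is a dictionary of model parameters; all key names are illustrative
    placeholders for the entries of Table 1.
    """
    Cm, CI, T, D0, D1, Tj = x
    # Arrhenius rate constants, assumed form k = A * exp(-E / (R * T))
    k = lambda A, E: A * np.exp(-E / (R_GAS * T))
    kp, kfm = k(p['Ap'], p['Ep']), k(p['Afm'], p['Efm'])
    kI = k(p['AI'], p['EI'])
    ktc, ktd = k(p['Atc'], p['Etc']), k(p['Atd'], p['Etd'])
    # Quasi-steady-state live-polymer concentration, Equation (35)
    P0 = np.sqrt(2.0 * p['fstar'] * CI * kI / (ktd + ktc))
    dCm = -(kp + kfm) * Cm * P0 + p['F'] * (p['Cm_in'] - Cm) / p['V']
    dCI = -kI * CI + (p['FI'] * p['CI_in'] - p['F'] * CI) / p['V']
    dT = ((-p['dH']) * kp * Cm * P0 / (p['rho'] * p['Cp'])
          - p['UA'] / (p['rho'] * p['Cp'] * p['V']) * (T - Tj)
          + p['F'] * (p['Tin'] - T) / p['V'])
    dD0 = (0.5 * ktc + ktd) * P0 ** 2 + kfm * Cm * P0 - p['F'] * D0 / p['V']
    dD1 = p['Mm'] * (kp + kfm) * Cm * P0 - p['F'] * D1 / p['V']
    dTj = (p['Fcw'] * (p['Tw0'] - Tj) / p['V0']
           + p['UA'] / (p['rho_w'] * p['Cpw'] * p['V0']) * (T - Tj))
    return np.array([dCm, dCI, dT, dD0, dD1, dTj])
```

In the estimators, this right-hand side plays the role of f in Equation (1): each particle or ensemble member is integrated forward over one sampling interval with its own realization of the process noise.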
In all the simulations whose results are described in the following sections, the number of particles used for each estimator, N, is 100, and the number of components, M, is set to 2. The parameters of the bimodal noise in all simulations are μ = [0.1, 0.8], P = diag(0.1, 0.1) for the states C_m, C_I, and D_0; μ = [8, 64], P = diag(8, 8) for the state D_1; and μ = [0.6, 4.8], P = diag(0.6, 0.6) for the states T and T_j.
The simulations we perform are introduced here: Case Study 1 provides a comparison of the EnKF-GMM, the PF, and the EnKF for a case with bimodal distributions and insignificant model-plant mismatch. Case Study 2 provides a comparison of the three estimators where the model-plant mismatch is significant. Case Study 3 compares the estimators for state estimation with uncertain parameters, but with the uncertain parameter not being estimated. Case Study 4 considers the same case as Case Study 3, but with combined state and parameter estimation. In Case Study 5, we consider an alternate version of the PF and use the simulation conditions of Case Study 2.

3.2. Comparison of State Estimation with the EnKF-GMM, EnKF, and PF (Case Study 1)

In this section, we present the results of applying the EnKF-GMM, EnKF, and PF algorithms on the PMMA process. To illustrate the performance of the estimators in cases where the states have multimodal distributions, bimodal process noise is applied to all the six states. The measurement noise is assumed to be Gaussian. The prior distribution of the state is also assumed to follow a GM distribution which contains two modes.
For Case Study 1, the true initial values of the states are:
x_0 = [5 kgmol/m³, 3 kgmol/m³, 320 K, 0.5 kgmol/m³, 0.5 kg/m³, 300 K]
The dynamics of the simulation describe how the system relaxes to a steady state from this initial condition. For the estimators, the initial particles are drawn from the prior distribution. The tuning parameters for the prior distribution are its mean and covariance. In the first case, a prior distribution with a small amount of bimodal process noise is tested for the three algorithms. The means of the two Gaussian modes of the prior distribution are:
μ_1 = [4 kgmol/m³, 2 kgmol/m³, 310 K, 0.49 kgmol/m³, 0.49 kg/m³, 295 K];
μ_2 = [6 kgmol/m³, 4 kgmol/m³, 330 K, 0.51 kgmol/m³, 0.51 kg/m³, 305 K]
The covariances of the modes of the prior distribution are:
P_1 = P_2 = diag(4, 4, 28, 8 × 10⁻¹, 8 × 10⁻⁴, 6)
The tuning parameters of the initial distribution indicate a state distribution with insignificant bimodality. The purpose of this simulation is to demonstrate the estimation performance of the three algorithms in the scenario where the state distribution shows insignificant multimodality.
The comparison of estimation results using the EnKF-GMM, EnKF, and PF is shown in Figure 1, with time steps on the x-axis (each time step is 0.3 h = 18 min). Table 2 shows the root mean squared error (RMSE) over the 25 time steps of the simulation for the six states and the NAMW for the three algorithms. In this case, the estimation results in Figure 1 and Table 2 show that the three algorithms have similar performance in the estimation of the six states; however, the EnKF-GMM has the best performance in the estimation of the NAMW. In addition, the converged variances of the state estimates, obtained from the estimated covariance matrix with the EnKF-GMM, are [10⁻⁴, 10⁻⁴, 1.2 × 10⁻⁴, 10⁻⁵, 2 × 10⁻⁴, 4 × 10⁻⁴], respectively, confirming the significance of the estimates. The PF performs better than the EnKF only for some states. Increasing the number of particles for each of the algorithms to 200 (results not shown) improves the performance of the PF slightly, but the same conclusions hold.
In Case Study 2, the multimodal features of the prior distribution are made more significant compared with the first case. The parameters of the prior distribution given below indicate that both modes lie far away from the true value, which also means that the initial condition mismatch is much larger. The true initial values of the states remain the same as the first case, and the process noise and measurement noise applied to the plant remain unchanged as well. The modified prior distribution is specified by:
μ_1 = [1 kgmol/m³, 1 kgmol/m³, 290 K, 0.49 kgmol/m³, 0.49 kg/m³, 270 K];
μ_2 = [10 kgmol/m³, 8 kgmol/m³, 350 K, 0.51 kgmol/m³, 0.51 kg/m³, 330 K];
P_1 = P_2 = diag(0.8, 0.8, 5.6, 8 × 10⁻², 8 × 10⁻³, 5.6)
In this case, the parameters of the prior distribution indicate that both of the modes lie near the tail of the likelihood function. The initial particles not only show significant multimodality, but also some degree of model-plant mismatch. The comparison of estimation using the EnKF-GMM, EnKF, and PF is shown in Figure 2 and the RMSE is shown in Table 3, and it is clear that the EnKF-GMM outperforms the other two estimators. As expected, the performance of the EnKF has worsened in this case because its Gaussian assumption on the prior and posterior distributions is violated in a significant manner. The PF does not show good performance either, and it is outperformed by the EnKF in the estimation of the NAMW. This is because the PF lacks robustness to plant-model mismatch [14], which is present in this case. Increasing the number of particles for all the estimators does not change these conclusions.
Figure 3 shows the evolution of the multimodal posterior distribution of one of the states (the monomer concentration) at time steps 1, 3, 4, and 9. Table 4 lists the corresponding estimation errors of the three algorithms at those time steps with respect to the true value of C_M. Figure 4 shows the evolution of the posterior distribution of another state (the jacket temperature) at time steps 2, 6, 9, and 10, and Table 5 shows the corresponding estimation errors of the three algorithms. These distributions are bimodal, and the results clearly show that the EnKF-GMM outperforms the other estimators in the presence of multimodal distributions.

3.3. Comparison of State and Parameter Estimation with the EnKF-GMM, EnKF and PF (Case Studies 3 and 4)

We consider the effects of parametric uncertainty in this section. The uncertain parameter chosen for these studies is E_p, the activation energy associated with the reaction rate parameter k_p. We choose E_p as the uncertain parameter because (based on dimensionless sensitivity analysis) the NAMW is highly sensitive to its value. We consider both state estimation and joint state and parameter estimation in this section.

3.3.1. State Estimation with Uncertain Parameter (Case Study 3)

In this sub-section, while E_p is an uncertain parameter and noise is added to its value at each time step in the simulation, the parameter is not estimated. The nominal value of E_p is set to E_p = 1.8283 × 10⁴ kJ/kgmol, and bimodal Gaussian noise with mode means μ_1 = 100, μ_2 = 100 and covariances P_1 = 50, P_2 = 50 is added to it. In addition, process and measurement noise with the same distributions as in the second case in Section 3.2 are included. Figure 5 shows the comparison of the estimation results using the three algorithms over 40 time steps, and Table 6 shows the corresponding RMSE. In this case, the EnKF-GMM shows a small improvement in state estimation performance over the other estimators, especially in the estimation of the NAMW.

3.3.2. State and Parameter Estimation with Uncertain Parameter (Case Study 4)

Next, we compare the performance of the estimators for joint state and parameter estimation. Once again, E_p is the uncertain parameter, and its nominal value is kept the same as in Case Study 3. The parameter E_p is treated as an augmented state for estimation. The prior distribution for E_p has the following characteristics: means of μ_1 = 1.9 × 10⁴ and μ_2 = 2.5 × 10⁴ for its two modes, and covariances of P_1 = 500 and P_2 = 500. Bimodal noise is added to each particle of the parameter, with means μ_1 = 100, μ_2 = 100 and covariances P_1 = 50, P_2 = 50. Except for the exclusion of process noise, the properties of the simulation are kept the same as in Case Study 3. Figure 6 shows the performance of the estimators in state estimation, and Figure 7 shows their performance in estimating the parameter E_p. While the performance of the EnKF in state estimation is comparable to that of the EnKF-GMM, the EnKF-GMM is clearly superior in parameter estimation. The PF has the worst performance among the estimators.

3.4. Alternate Point Estimates for the PF (Case Study 5)

In the PF, even though the full distribution is obtained, a point estimate for the states is usually obtained by choosing the expectation (mean) of the posterior particles. This is the method we have employed for the PF in the simulations described in the previous sections. However, if the distribution is multimodal, the mean may not necessarily represent the best point estimate, and the mode of the distribution (which is equivalent to the maximum a posteriori estimate) can provide a better estimate [14,31]. We investigate whether this approach can improve the performance of the PF, since we are considering cases where the distributions are multimodal. We apply k-means clustering to the posterior particles to identify the modes and obtain the maximum a posteriori estimate with the particle filter, and compare the estimation performance of this estimator, called the PF-mode, with the other estimators. The parameters of the simulations are similar to those of Case Study 2. Figure 8 shows the performance of the estimators, and the RMSE is given in Table 7. The PF-mode clearly outperforms the PF and the EnKF; however, the EnKF-GMM still has superior performance.
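The PF-mode point estimate described above can be sketched as follows. This is a Python sketch using a plain k-means loop rather than a specific library routine; the quantile-based initialization and the iteration budget are assumed choices, not the paper's settings.

```python
import numpy as np

def pf_mode_estimate(particles, M=2, iters=50):
    """PF-mode point estimate sketch: cluster the posterior particles with
    k-means, then report the centroid of the most populated cluster as the
    MAP-style estimate.

    particles: (N, nx) posterior particles; M: assumed number of modes.
    """
    # Initialize centroids at spread-out quantiles of the particle cloud
    centers = np.quantile(particles, np.linspace(0.1, 0.9, M), axis=0)
    for _ in range(iters):
        # Assign each particle to its nearest centroid
        d2 = ((particles[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties
        for j in range(M):
            if np.any(labels == j):
                centers[j] = particles[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=M)
    return centers[counts.argmax()]
```

Reporting the centroid of the largest cluster approximates the dominant mode of the posterior; as noted below, this can be misleading when the number of modes changes over time, which is where the weighted combination used by the EnKF-GMM has an advantage.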
The idea behind the PF-mode is very similar to that of the EnKF-GMM: both use clustering to extract modes from the posterior distribution and generate a point estimate from the information in those modes. However, the EnKF-GMM outperforms the PF-mode because it is more robust to poor initial estimates and model-plant mismatch. Moreover, if the number of modes in the state distributions varies with time, perhaps even collapsing to a single mode at some instants, using the mode as a point estimate is not necessarily superior to using the mean. The EnKF-GMM combines the modes of the distribution in proportion to their calculated weights to obtain a point estimate, and can adapt to these cases by adjusting the weights of the modes.
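The weight-proportional combination of modes can be sketched with a minimal one-dimensional EM fit. The function names and the quantile-based initialization are our assumptions, and this sketch deliberately omits the per-cluster Kalman update that the full EnKF-GMM performs:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Minimal EM for a 1-D Gaussian mixture.

    Returns (weights, means, variances); initial means are spread over the
    sample via quantiles so well-separated modes are found reliably."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
        resp = w * pdf
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances
        Nk = resp.sum(axis=0)
        w = Nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    return w, mu, var

def gmm_point_estimate(x, k=2):
    """Combine the mode means in proportion to their weights."""
    w, mu, _ = em_gmm_1d(x, k)
    return float(np.dot(w, mu))
```

Note that the combination Σ_k w_k μ_k equals the overall mixture mean; the advantage of the EnKF-GMM therefore comes from tracking each mode separately (and re-weighting or discarding modes as the distribution evolves), not from the final combination alone.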

4. Conclusions

We have proposed an estimator based on a Gaussian mixture model coupled with an ensemble Kalman filter (EnKF-GMM) that is capable of handling multimodal state distributions, and demonstrated its performance in simulations on a polymethyl methacrylate process. The EnKF-GMM clearly outperforms the particle filter (PF) and the EnKF in both state and parameter estimation with multimodal distributions. The EnKF is limited by the assumption of Gaussian distributions, and the particle filter’s performance is affected by its lack of robustness with respect to model-plant mismatch. A different choice for obtaining a point estimate with the particle filter, leading to a maximum a posteriori estimate, improves the performance of the PF, but the EnKF-GMM is still superior, indicating that it is the estimator of choice for systems with multimodal state distributions such as polymer processes.

Acknowledgments

The authors acknowledge financial support from the China Scholarship Council and the Natural Sciences and Engineering Research Council of Canada.

Author Contributions

All three authors conceived the work and participated in defining its scope. Ruoxia Li developed the algorithms and conducted the simulations described in the manuscript with inputs from Vinay Prasad and Biao Huang. Ruoxia Li wrote the initial drafts of the manuscript, and all authors contributed to the editing of the final manuscript and to the revisions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wilson, D.; Agarwal, M.; Rippin, D. Experiences implementing the extended Kalman filter on an industrial batch reactor. Comput. Chem. Eng. 1998, 22, 1653–1672. [Google Scholar] [CrossRef]
  2. Prasad, V.; Schley, M.; Russo, L.P.; Bequette, W.B. Product property and production rate control of styrene polymerization. J. Process Control 2002, 12, 353–372. [Google Scholar] [CrossRef]
  3. Jo, J.; Bankoff, S. Digital monitoring and estimation of polymerization reactor. AIChE J. 1976, 22, 361–368. [Google Scholar] [CrossRef]
  4. Kozub, D.; MacGregor, J. State estimation for semi-batch polymerization reactors. Chem. Eng. Sci. 1992, 47, 1047–1062. [Google Scholar] [CrossRef]
  5. McAuley, K.; MacGregor, J. On-line inference of polymer properties in an industrial polyethylene reactor. AIChE J. 1991, 37, 825–835. [Google Scholar] [CrossRef]
  6. McAuley, K.; MacGregor, J. Nonlinear product property control in industrial gas-phase polyethylene reactors. AIChE J. 1993, 39, 855–866. [Google Scholar] [CrossRef]
  7. Sriniwas, G.; Arkun, Y.; Schork, F. Estimation and control of an alpha-olefin polymerization reactor. J. Process Control 1994, 5, 303–313. [Google Scholar] [CrossRef]
  8. Gopalakrishnan, A.; Kaisare, N.S.; Narasimhan, S. Incorporating delayed and infrequent measurements in extended Kalman filter based nonlinear state estimation. J. Process Control 2011, 21, 119–129. [Google Scholar] [CrossRef]
  9. Evensen, G. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res. 1994, 99, 10143–10162. [Google Scholar] [CrossRef]
  10. Julier, S.; Uhlmann, J.; Durrant-Whyte, H. A new approach for filtering nonlinear systems. In Proceedings of the American Control Conference, Seattle, WA, USA, 21–23 June 1995.
  11. Julier, S.; Uhlmann, J.; Durrant-Whyte, H. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans. Autom. Control 2000, 45, 477–482. [Google Scholar] [CrossRef]
  12. Arulampalam, S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for on-line non-linear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188. [Google Scholar]
  13. Chen, T.; Morris, J.; Martin, E. Particle filters for state and parameter estimation in batch processes. J. Process Control 2005, 15, 665–673. [Google Scholar] [CrossRef]
  14. Shenoy, A.V.; Prakash, J.; Prasad, V.; Shah, S.L.; McAuley, K.B. Practical issues in state estimation using particle filters: Case studies with polymer reactors. J. Process Control 2013, 23, 120–131. [Google Scholar] [CrossRef]
  15. Shao, X.; Huang, B.; Lee, J.M. Constrained Bayesian state estimation: A comparative study and a new particle filter based approach. J. Process Control 2010, 20, 143–157. [Google Scholar] [CrossRef]
  16. Crowley, T.J.; Meadows, E.S.; Kostoulas, E.; Doyle, F.D. Control of particle size distribution described by a population balance model of semibatch emulsion polymerization. J. Process Control 2000, 10, 419–432. [Google Scholar] [CrossRef]
  17. Kiparissides, C. Challenges in particulate polymerization reactor modeling and optimization: A population balance perspective. J. Process Control 2006, 16, 205–224. [Google Scholar] [CrossRef]
  18. Sajjadi, S.; Brooks, B.W. Unseeded semibatch emulsion polymerization of butyl acrylate: Bimodal particle size distribution. J. Polym. Sci. A Polym. Chem. 2000, 38, 528–545. [Google Scholar] [CrossRef]
  19. Doyle, F.J.; Soroush, M.; Cordeiro, C. Control of product quality in polymerization processes. In Proceedings of the Sixth International Conference on Chemical Process Control, Tucson, AZ, USA, 7–12 January 2001; AIChE Press: New York, NY, USA, 2002; pp. 290–306. [Google Scholar]
  20. Flores-Cerrillo, J.; MacGregor, J.F. Control of particle size distributions in emulsion semibatch polymerization using mid-course correction policies. Ind. Eng. Chem. Res. 2002, 41, 1805–1814. [Google Scholar] [CrossRef]
  21. Bengtsson, T.; Snyder, C.; Nychka, D. Toward a nonlinear ensemble filter for high-dimensional systems. J. Geophys. Res. 2003, 108, 35–45. [Google Scholar] [CrossRef]
  22. Smith, K.W. Cluster ensemble Kalman filter. Tellus 2007, 59A, 749–757. [Google Scholar] [CrossRef]
  23. Dovera, L.; Della Rossa, E. Multimodal ensemble Kalman filtering using Gaussian mixture models. Comput. Geosci. 2011, 15, 307–323. [Google Scholar] [CrossRef]
  24. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. B 1977, 39, 1–38. [Google Scholar]
  25. Ormoneit, D.; Tresp, V. Improved Gaussian mixture density estimates using Bayesian penalty terms and network averaging. In Advances in Neural Information Processing Systems 8; Touretzky, D.S., Tesauro, G., Leen, T.K., Eds.; MIT Press: Cambridge, MA, USA, 1996; pp. 542–548. [Google Scholar]
  26. Ueda, N.; Nakano, R.; Ghahramani, Z.; Hinton, G.E. SMEM algorithm for mixture models. Neural Comput. 2000, 12, 2109–2128. [Google Scholar] [CrossRef] [PubMed]
  27. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464. [Google Scholar] [CrossRef]
  28. Hu, X.; Xu, L. Investigation on several model selection criteria for determining the number of cluster. Neural Inf. Process.-Lett. Rev. 2004, 4, 1–10. [Google Scholar]
  29. Silva-Beard, A.; Flores-Tlacuahuac, A. Effect of process design/operation on the steady-state operability of a methyl methacrylate polymerization reactor. Ind. Eng. Chem. Res. 1999, 38, 4790–4804. [Google Scholar] [CrossRef]
  30. Shenoy, A.V.; Prasad, V.; Shah, S.L. Comparison of unconstrained nonlinear state estimation techniques on a MMA polymer reactor. In Proceedings of the 9th International Symposium on Dynamics and Control of Process Systems (DYCOPS 2010), Leuven, Belgium, 5–9 July 2010.
  31. Bavdekar, V.A.; Shah, S.L. Computing point estimates from a non-Gaussian posterior distribution using a probabilistic k-means clustering approach. J. Process Control 2014, 24, 487–497. [Google Scholar] [CrossRef]
Figure 1. Comparison of the estimation performance of the ensemble Kalman filter (EnKF)-Gaussian mixture model (GMM), EnKF, and particle filter (PF) for the polymethyl methacrylate (PMMA) process with multimodal process noise (Case Study 1).
Figure 2. Comparison of the estimation performance of the EnKF-GMM, EnKF, and PF for the PMMA process with more significant multimodal process noise (Case Study 2).
Figure 3. Evolution of the multimodal posterior distributions of C M at time steps 1, 2, 4, and 9 (Case Study 2).
Figure 4. Evolution of the multimodal posterior distributions of T j at time steps 2, 6, 9, and 10 (Case Study 2).
Figure 5. Comparison of state estimation with the EnKF-GMM, EnKF, and PF for the PMMA process with uncertain parameter E p (Case Study 3).
Figure 6. Comparison of state estimation with the EnKF-GMM, EnKF, and PF for the PMMA process with uncertain parameters (Case Study 4).
Figure 7. Parameter estimation using the EnKF-GMM, EnKF, and PF (Case Study 4).
Figure 8. Comparison of state estimation with the EnKF-GMM, EnKF, PF, and PF-mode (Case Study 5).
Table 1. Operational parameters for the methyl methacrylate (MMA) polymerization reactor.

F = 1.0 m^3/h | M_m = 100.12 kg/kgmol
F_I = 0.0032 m^3/h | f* = 0.58
F_cw = 0.1588 m^3/h | R = 8.314 kJ/(kgmol·K)
C_m,in = 6.4678 kgmol/m^3 | ΔH = 57800 kJ/kgmol
C_I,in = 8.0 kgmol/m^3 | E_p = 1.8283 × 10^4 kJ/kgmol
T_in = 350 K | E_I = 1.2877 × 10^5 kJ/kgmol
T_w0 = 293.2 K | E_fm = 7.4478 × 10^4 kJ/kgmol
U = 720 kJ/(h·K·m^2) | E_tc = 2.9442 × 10^3 kJ/kgmol
A = 2.0 m^2 | E_td = 2.9442 × 10^3 kJ/kgmol
V = 0.1 m^3 | A_p = 1.77 × 10^9 m^3/(kgmol·h)
V_0 = 0.02 m^3 | A_I = 3.792 × 10^18 1/h
ρ = 866 kg/m^3 | A_fm = 1.0067 × 10^15 m^3/(kgmol·h)
ρ_w = 1000 kg/m^3 | A_tc = 3.8223 × 10^10 m^3/(kgmol·h)
C_p = 2.0 kJ/(kg·K) | A_td = 3.1457 × 10^11 m^3/(kgmol·h)
C_pw = 4.2 kJ/(kg·K) |
Table 2. RMSE of the Gaussian mixture model based ensemble Kalman filter (EnKF-GMM), ensemble Kalman filter (EnKF), and particle filter (PF) for the polymethyl methacrylate (PMMA) process with multimodal process noise (Case Study 1).

Variable | EnKF-GMM | EnKF | PF
C_M, kg·mol/m^3 | 0.20 | 0.20 | 0.33
C_I, kg·mol/m^3 | 0.24 | 0.20 | 0.33
T, K | 4.3 | 4.4 | 3.1
D_0, kg·mol/m^3 | 0.019 | 0.014 | 0.032
D_1, kg/m^3 | 11.85 | 11.53 | 10.44
T_j, K | 2.3 | 2.2 | 1.4
NAMW | 209 | 338 | 357
Table 3. RMSE of the EnKF-GMM, EnKF, and PF for the PMMA process with more significant multimodal process noise (Case Study 2).

Variable | EnKF-GMM | EnKF | PF
C_M, kg·mol/m^3 | 0.44 | 0.68 | 0.69
C_I, kg·mol/m^3 | 0.37 | 0.14 | 0.17
T, K | 5.8 | 11.8 | 14.4
D_0, kg·mol/m^3 | 0.042 | 0.062 | 0.078
D_1, kg/m^3 | 9.73 | 36.13 | 51.38
T_j, K | 5.1 | 8.2 | 9.2
NAMW | 559 | 1400 | 831
Table 4. Comparison of the estimation errors of the EnKF-GMM, EnKF, and PF for C_M at time steps 1, 3, 4, and 9 (in kg·mol/m^3) (Case Study 2).

Estimator | Time Step 1 | Time Step 3 | Time Step 4 | Time Step 9
EnKF-GMM | 0.23 | 0.14 | 0.40 | 0.04
EnKF | 2.06 | 1.06 | 0.80 | 0.10
PF | 3.60 | 2.20 | 1.65 | 0.22
Table 5. Comparison of the estimation errors of the EnKF-GMM, EnKF, and PF for T_j at time steps 2, 6, 9, and 10 (in K) (Case Study 2).

Estimator | Time Step 2 | Time Step 6 | Time Step 9 | Time Step 10
EnKF-GMM | 6.4 | 2.8 | 1.5 | 1.3
EnKF | 6.6 | 3.0 | 1.9 | 1.7
PF | 13.5 | 4.5 | 2.9 | 2.9
Table 6. RMSE of the EnKF-GMM, EnKF, and PF for state estimation in the case with uncertain parameter E_p (Case Study 3).

Variable | EnKF-GMM | EnKF | PF
C_M, kg·mol/m^3 | 0.29 | 0.26 | 0.32
C_I, kg·mol/m^3 | 0.12 | 0.10 | 0.27
T, K | 7.2 | 8.9 | 10.3
D_0, kg·mol/m^3 | 0.111 | 0.092 | 0.144
D_1, kg/m^3 | 32.27 | 35.11 | 45.34
T_j, K | 5.5 | 5.7 | 7.5
NAMW | 487 | 869 | 653
Table 7. RMSE of the EnKF-GMM, EnKF, PF, and PF-mode for state estimation (Case Study 5).

Variable | EnKF-GMM | EnKF | PF | PF-mode
C_M, kg·mol/m^3 | 0.44 | 0.68 | 0.68 | 0.85
C_I, kg·mol/m^3 | 0.37 | 0.14 | 0.17 | 0.55
T, K | 5.8 | 11.8 | 14.4 | 8.31
D_0, kg·mol/m^3 | 0.042 | 0.062 | 0.078 | 0.047
D_1, kg/m^3 | 9.73 | 36.13 | 51.38 | 13.05
T_j, K | 5.1 | 8.2 | 9.2 | 7.9
NAMW | 559 | 1400 | 831 | 706

Share and Cite

MDPI and ACS Style

Li, R.; Prasad, V.; Huang, B. Gaussian Mixture Model-Based Ensemble Kalman Filtering for State and Parameter Estimation for a PMMA Process. Processes 2016, 4, 9. https://doi.org/10.3390/pr4020009
