Article

LSTM and GRU Neural Networks as Models of Dynamical Processes Used in Predictive Control: A Comparison of Models Developed for Two Chemical Reactors

by Krzysztof Zarzycki * and Maciej Ławryńczuk
Faculty of Electronics and Information Technology, Institute of Control and Computation Engineering, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(16), 5625; https://doi.org/10.3390/s21165625
Submission received: 20 July 2021 / Revised: 16 August 2021 / Accepted: 17 August 2021 / Published: 20 August 2021

Abstract

This work thoroughly compares the efficiency of Long Short-Term Memory Networks (LSTMs) and Gated Recurrent Unit (GRU) neural networks as models of the dynamical processes used in Model Predictive Control (MPC). Two simulated industrial processes were considered: a polymerisation reactor and a neutralisation (pH) process. First, MPC prediction equations for both types of models were derived. Next, the efficiency of the LSTM and GRU models was compared for a number of model configurations. The influence of the order of dynamics and the number of neurons on the model accuracy was analysed. Finally, the efficiency of the considered models when used in MPC was assessed. The influence of the model structure on different control quality indicators and the calculation time was discussed. It was found that the GRU network, although it had a lower number of parameters than the LSTM one, may be successfully used in MPC without any significant deterioration of control quality.

1. Introduction

In Model Predictive Control (MPC) [1,2], a dynamical model of the controlled process is used to predict its behaviour over a certain time horizon and to optimise the control policy. This problem formulation leads to very good control quality, much better than that in classical control methods. As a result, MPC methods have been used for a great variety of processes, e.g., chemical reactors [3], heating, ventilation and air conditioning systems [4], robotic manipulators [5], electromagnetic mills [6], servomotors [7], electromechanical systems [8] and stochastic systems [9]. It must be pointed out that satisfactory control is only possible if the model used is precise enough. Although there are numerous types of dynamical models, e.g., fuzzy systems, polynomials, and piecewise linear structures [10], neural networks of different kinds [11] are very popular due to their excellent accuracy and simple structure [12]. In particular, Recurrent Neural Networks (RNNs) [13,14,15,16] can serve as a model as they are able to give predictions over the required horizon.
In theory, RNNs can be extremely useful in various machine learning tasks in which the data are time-dependent, such as the modelling of time series, speech synthesis or video analysis. In contrast to classical feedforward neural networks, RNNs can be used to create models and predictions from sequential data. In practice, however, their use is limited by one major drawback: the lack of long-term memory. RNNs have short-term memory capabilities, but they tend to forget the long-term input–output time dependencies during backpropagation training. This problem is caused by the vanishing gradient phenomenon, which was described in great detail in [17,18,19]. Many ways of limiting the influence of the vanishing gradient on the training process have been proposed, such as using different activation functions (e.g., ReLU) or batch normalisation. Another approach is to modify the network architecture in a way that improves the gradient flow during training. Residual Neural Networks (ResNets) proposed in [20], the Long Short-Term Memory (LSTM) structure first proposed in [18] and its modification—the Gated Recurrent Unit (GRU) architecture proposed in [21]—can serve as examples.
The unique long-term memory properties of LSTM and GRU neural networks made them widely popular in a large variety of machine learning tasks. Example applications of the LSTM architecture are: data classification [22], speech recognition [23,24], handwriting recognition [25], speech synthesis [26], text coherence tests [27], biometric authentication and anomaly detection [28], detecting deception from gaze and speech [29] and anomaly detection [30]. Similarly, example applications of the GRU structure are: facial expression recognition [31], human activity recognition [32], cyberbullying detection [33], defect detection [34], human activity surveillance [35], automated classification of cognitive workload tasks [36] and speaker identification [37].
Recently, LSTM networks have also been used to model dynamical processes. Examples are: a benchmark process [38], a pH reactor [39], a reverse osmosis plant [40], temperature control [41] or an autonomous mobility-on-demand system [42]. In all cited publications, it was shown that LSTM models are able to approximate the properties of dynamical processes; the models have very good accuracy. Some of these models have been used for prediction in MPC [40,41,42]; very good control quality has been reported. Although GRU networks are similar to LSTM ones and have many successful applications in classification and detection tasks, as mentioned in the previous paragraph, they are very rarely used as models of dynamical processes; e.g., a tandem-wing quadplane drone model was discussed in [43]. Hence, two important questions should be formulated:
(a)
What is the accuracy of the dynamical models based on the GRU networks, and how do they compare to the LSTM ones?
(b)
How do the GRU dynamical models perform in MPC, and how do they compare to the LSTM-based MPC approach?
Both of these issues are worth considering since the GRU networks have a simpler architecture and a lower number of parameters than the LSTM ones.
This work has three objectives:
(a)
A thorough comparison of LSTM and GRU neural networks as models of two dynamical processes, polymerisation and neutralisation (pH) reactors, is considered. An important question is whether or not the GRU network, although it has a simpler structure than the LSTM one, offers satisfactory modelling accuracy;
(b)
The derivation of MPC prediction equations for the LSTM and GRU models;
(c)
The development of MPC algorithms for the two aforementioned processes with different LSTM and GRU models used for prediction. An important question is whether or not the GRU network offers control quality comparable to that possible when the more complex LSTM structure is applied.
Unfortunately, to the best of the authors’ knowledge, the efficiency of LSTM and GRU networks as dynamical models and their performance in MPC have not been thoroughly compared in the literature; typically, the LSTM structures are used [40,41,42].
The article is organised in the following way. Section 2 describes the structures of the LSTM and GRU neural networks. Section 3 defines the MPC optimisation task and details how the two discussed types of neural models are used for prediction in MPC. Section 4 thoroughly compares the efficiency of LSTM and GRU neural networks used as models of the two dynamical systems. Moreover, the efficiency of both considered model classes is validated in MPC. Finally, Section 5 summarises the whole article.

2. LSTM and GRU Neural Networks

2.1. The LSTM Neural Network

The LSTM approach aims to create a model that has a long-term memory and, at the same time, is able to forget about unimportant information in the training data. To achieve this, three main differences in comparison to classical RNNs are introduced:
  • Two types of activation functions;
  • A cell state that serves as the long-term memory of the neuron;
  • The neuron is called a cell and has a complex structure consisting of four gates that regulate the information flow.

2.1.1. Activation Functions

In the classical RNNs, the most commonly used activation function is the tanh type:
\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}.
The output values of the hyperbolic tangent are in the range (−1, 1). This helps to regulate the data flow through the network and avoid the exploding gradient phenomenon [17,44]. In the LSTM networks, the usage of tanh is kept; however, the sigmoid activation function is additionally implemented. This function is defined as:
\sigma(x) = \frac{1}{1 + e^{-x}}.
The output values of the sigmoid function are in the range (0, 1). This allows the neural network to discard irrelevant information. If the output values are close to zero, they are not important and should be forgotten. If the values are close to one, they should be kept.
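For quick reference, the two activation functions can be written in a few lines of NumPy; this is an illustrative sketch only, not code from the paper:

```python
import numpy as np

def tanh(x):
    return np.tanh(x)                  # hyperbolic tangent, output in (-1, 1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))    # logistic sigmoid, output in (0, 1)

x = np.linspace(-6.0, 6.0, 7)
print(tanh(x))      # values squashed into (-1, 1)
print(sigmoid(x))   # values squashed into (0, 1)
```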

2.1.2. Hidden State and Cell State

In the classical RNN architecture, the hidden state is used as a memory of the network and an output of the hidden layer of the network. The LSTM networks additionally implement a cell state. In their case, the hidden state serves as a short-term working memory. On the other hand, the cell state is used as a long-term memory to keep information about important data from the past. As depicted in Figure 1, only a few linear operations are performed on the cell state. Therefore, the gradient flow during the backpropagation training is relatively undisturbed. This helps to limit the occurrence of the vanishing gradient problem.

2.1.3. Gates

The LSTM network has the ability to modify the value of the cell state through a mechanism called gates. The LSTM cell shown in Figure 1 consists of four gates:
  • The forget gate f decides which values of the previous cell state should be discarded and which should be kept;
  • The input gate i selects which values from the previous hidden state and the current input should be used to update the cell state, by passing them through the sigmoid function. Its output is then multiplied elementwise by the cell state candidate (see below);
  • The cell state candidate gate g first regulates the information flow in the network by using the tanh function on the previous hidden state and the current input. The product of tanh is multiplied by the input gate output to calculate the candidate for the current cell state. The candidate is then added to the previous cell state;
  • The output gate o first calculates the current hidden state by passing the previous hidden state and the current input through the sigmoid function to select which new information should be taken into account. Then, the current cell state value is passed through the tanh function. The products of both of those functions are finally multiplied.

2.1.4. LSTM Layer Architecture

The LSTM layer of a neural network is composed of n N neurons. The layer has n f input signals. For a network used as a dynamical model of the process represented by the general equation:
y(k) = f(x(k)) = f(u(k-1), \ldots, u(k-n_B), y(k-1), \ldots, y(k-n_A))
this parameter can be written as n f = n A + n B . The vector of the network’s input signals at the time instant k is then:
x(k) = [u(k-1), \ldots, u(k-n_B), y(k-1), \ldots, y(k-n_A)]^T.
When considering the entire LSTM network layer consisting of n N cells, the gates can be represented as vectors f , g , i , o , each of dimensionality n N × 1 . The LSTM layer of the network also contains a number of weights. The symbol W denotes the weights associated with the input signals x ; the symbol R denotes the so-called recursive weights, associated with the hidden state of the cell from the previous moment h ( k − 1 ) ; the symbol b denotes the constant (bias) components. The subscripts f, g, i or o appear next to all the weights; they indicate to which gate the weights belong. The network weights can therefore be written in matrix form as:
W = \begin{bmatrix} W_i \\ W_f \\ W_g \\ W_o \end{bmatrix}, \quad R = \begin{bmatrix} R_i \\ R_f \\ R_g \\ R_o \end{bmatrix}, \quad b = \begin{bmatrix} b_i \\ b_f \\ b_g \\ b_o \end{bmatrix}.
The matrices W i , W f , W g and W o have dimensionality n N × n f ; the matrices R i , R f , R g and R o have dimensionality n N × n N ; the vectors b i , b f , b g and b o have dimensionality n N × 1 .
At the time instant k, the following calculations are performed sequentially in the LSTM layer of the network:
i(k) = \sigma(W_i x(k) + R_i h(k-1) + b_i),
f(k) = \sigma(W_f x(k) + R_f h(k-1) + b_f),
g(k) = \tanh(W_g x(k) + R_g h(k-1) + b_g),
o(k) = \sigma(W_o x(k) + R_o h(k-1) + b_o).
The new cell state at the time instant k is then determined:
c(k) = f(k) \circ c(k-1) + i(k) \circ g(k).
Finally, the hidden state at the time instant k can be calculated:
h(k) = o(k) \circ \tanh(c(k)).
The symbol ∘ denotes the Hadamard product of the vectors. In other words, the vectors are multiplied elementwise. In Equation (10), this operation is used twice. The cell state from the previous time instant is multiplied by the values output by the forget gate. If those values are close to zero, the Hadamard product is close to zero as well, and therefore, the past information stored in the cell state is discarded. If the forget gate values are close to one, the past information remains mostly unchanged. Then, the outputs of the input gate and the cell state candidate gate are multiplied elementwise. The purpose of this operation is similar. If the input gate values are close to zero, no new information is added to the cell state. Otherwise, the previous cell state values are updated with the values from the cell state candidate gate. In Equation (11), the Hadamard product is close to zero when the output gate values are close to zero. In this situation, little information from the cell state is exposed in the new hidden state. Otherwise, the new hidden state is updated with the new values from the cell state.
The LSTM layer of the neural network is then connected to the fully connected layer, as shown in Figure 2. It has its weight vector W y of dimensionality 1 × n N and bias b y . The output of the network at the time instant k is calculated as follows:
y(k) = W_y h(k) + b_y.
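Equations (6)–(12) can be collected into a single forward pass. The following NumPy sketch is illustrative only: the weights are random placeholders and the function and variable names are ours, not a reproduction of the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, R, b, Wy, by):
    """One forward pass of the LSTM layer and the fully connected layer, Eqs. (6)-(12).

    W, R, b - dicts of gate weights keyed by 'i', 'f', 'g', 'o':
              W[.] is n_N x n_f, R[.] is n_N x n_N, b[.] has length n_N
    Wy, by  - fully connected layer weights (1 x n_N) and bias (scalar)
    """
    i = sigmoid(W['i'] @ x + R['i'] @ h_prev + b['i'])   # input gate, Eq. (6)
    f = sigmoid(W['f'] @ x + R['f'] @ h_prev + b['f'])   # forget gate, Eq. (7)
    g = np.tanh(W['g'] @ x + R['g'] @ h_prev + b['g'])   # cell state candidate, Eq. (8)
    o = sigmoid(W['o'] @ x + R['o'] @ h_prev + b['o'])   # output gate, Eq. (9)
    c = f * c_prev + i * g                               # cell state, Eq. (10); '*' is the Hadamard product
    h = o * np.tanh(c)                                   # hidden state, Eq. (11)
    y = (Wy @ h + by).item()                             # network output, Eq. (12)
    return y, h, c

# Example with random placeholder weights: n_N = 3 cells, n_f = n_A + n_B = 2 inputs
rng = np.random.default_rng(0)
n_N, n_f = 3, 2
W = {k: rng.normal(size=(n_N, n_f)) for k in 'ifgo'}
R = {k: rng.normal(size=(n_N, n_N)) for k in 'ifgo'}
b = {k: np.zeros(n_N) for k in 'ifgo'}
y, h, c = lstm_step(np.array([0.1, -0.2]), np.zeros(n_N), np.zeros(n_N),
                    W, R, b, rng.normal(size=(1, n_N)), 0.0)
```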

2.2. The GRU Neural Network

The GRU network is a modification of the LSTM concept, which aims to reduce the network's computational cost. There are some differences between the architectures, mainly:
  • The GRU cell lacks the output gate; therefore, it has fewer parameters;
  • The usage of the cell state is discarded. The hidden state serves both as the working and long-term memory of the network.
The single-GRU cell layout is presented in Figure 3. It consists of three gates:
  • The reset gate r is used to select which information to discard from the previous hidden state and input values;
  • The role of the update gate z is to select which information from the previous hidden state should be kept and passed along to the next steps;
  • The candidate state gate g calculates the candidate for the future hidden state. This is done by first multiplying the previous state by the reset gate's output. This step can be interpreted as forgetting unimportant information from the past. Next, new data from the input are added to the remaining information. Finally, the tanh function is applied to the data to regulate the information flow.
The current hidden state h ( k ) is calculated as follows. Firstly, the output of the update gate z is subtracted from one and the result is multiplied elementwise with the state candidate g. Then, the previous hidden state h ( k − 1 ) is multiplied by the unchanged output of the update gate z. The results of both of those operations are finally added. This means that if the values output by the update gate z are close to zero, more new information is added to the current state h. Alternatively, if the values output by the update gate z are close to one, the current state is mostly kept as it was at the previous time instants.
When considering the whole GRU layer of n N cells, the weight matrices W r , W z , W g have dimensions n N × n f , the matrices R r , R z , R g have dimensions n N × n N , and the vectors b r , b z , b g have dimensions n N × 1 . The matrices can be written as:
W = \begin{bmatrix} W_r \\ W_z \\ W_g \end{bmatrix}, \quad R = \begin{bmatrix} R_r \\ R_z \\ R_g \end{bmatrix}, \quad b = \begin{bmatrix} b_r \\ b_z \\ b_g \end{bmatrix}.
The following calculations are performed at the sampling time k:
r(k) = \sigma(W_r x(k) + R_r h(k-1) + b_r),
z(k) = \sigma(W_z x(k) + R_z h(k-1) + b_z),
g(k) = \tanh(W_g x(k) + r(k) \circ (R_g h(k-1)) + b_g),
h(k) = (1_{n_N \times 1} - z(k)) \circ g(k) + z(k) \circ h(k-1).
Similar to the LSTM layer, the GRU layer of the neural network is then connected to the fully connected layer. It has its weight vector W y of dimensionality 1 × n N and a constant component (bias) b y . The output of the network at the time instant k is determined by the hidden states of all cells of the GRU layer multiplied by the weights of the fully connected layer, according to the following relation:
y(k) = W_y h(k) + b_y.
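For comparison with the LSTM sketch of Section 2.1.4, a single GRU forward pass per Equations (14)–(18) may look as follows. Again, this is an illustrative sketch with our own naming; note that only three groups of weight matrices are needed instead of four.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, W, R, b, Wy, by):
    """One forward pass of the GRU layer and the fully connected layer, Eqs. (14)-(18).

    W, R, b - dicts of gate weights keyed by 'r', 'z', 'g' (three groups instead of
              the LSTM's four); Wy, by - fully connected layer weights and bias
    """
    r = sigmoid(W['r'] @ x + R['r'] @ h_prev + b['r'])        # reset gate, Eq. (14)
    z = sigmoid(W['z'] @ x + R['z'] @ h_prev + b['z'])        # update gate, Eq. (15)
    g = np.tanh(W['g'] @ x + r * (R['g'] @ h_prev) + b['g'])  # state candidate, Eq. (16)
    h = (1.0 - z) * g + z * h_prev                            # hidden state, Eq. (17)
    y = (Wy @ h + by).item()                                  # network output, Eq. (18)
    return y, h
```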

3. LSTM and GRU Neural Networks in Model Predictive Control

The manipulated variable, i.e., the input of the controlled process, is denoted by u, while the controlled one, i.e., the process output, is denoted by y. A good control algorithm is expected to calculate the value of the manipulated variable which leads to fast control, i.e., the process output should follow the changes of the set-point. Moreover, since fast control usually requires abrupt changes of the manipulated variable, which may be dangerous for the actuator, such situations should be penalised. Finally, it is necessary to take some constraints into account; they are usually imposed on the magnitude and the rate of change of the manipulated variable. In some cases, constraints can also be imposed on the process output variable.

3.1. The MPC Problem

The vector of decision variables calculated online at each sampling instant of MPC is defined as the increments of the manipulated variable:
\Delta u(k) = [\Delta u(k|k), \ldots, \Delta u(k+N_u-1|k)]^T
where the control horizon is denoted by N u . The general MPC optimisation problem is:
\min_{\Delta u(k)} J(k) = \sum_{p=1}^{N} \left( y^{sp}(k+p|k) - \hat{y}(k+p|k) \right)^2 + \lambda \sum_{p=0}^{N_u-1} \left( \Delta u(k+p|k) \right)^2
subject to
u^{min} \le u(k+p|k) \le u^{max}, \quad p = 0, \ldots, N_u-1,
\Delta u^{min} \le \Delta u(k+p|k) \le \Delta u^{max}, \quad p = 0, \ldots, N_u-1,
y^{min} \le \hat{y}(k+p|k) \le y^{max}, \quad p = 1, \ldots, N.
The cost function can be divided into two parts. The first part describes the control error: the sum of the squared differences between the set-point value y sp ( k + p | k ) and the output prediction y ^ ( k + p | k ) over the prediction horizon N. The ( k + p | k ) notation should be interpreted as follows: the prediction for the future instant k + p is calculated at the current instant k. The second part of the cost function consists of the squared changes of the manipulated variable multiplied by the weighting coefficient λ . When the whole cost function is taken into account, one can observe that it minimises both the control errors and the changes of the control signal. The weighting coefficient λ is used to fine-tune the procedure.
The constraints of the MPC optimisation problem are as follows:
  • The magnitude constraints u min and u max are enforced on the manipulated variable over the control horizon N u ;
  • The constraints Δ u min and Δ u max are imposed on the increments of the same variable over the control horizon N u ;
  • The constraints y min and y max are put on the predicted output variable over the prediction horizon N. A minimal code sketch of the cost function and the input-magnitude constraints is given below.
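The cost of Equation (20) and the magnitude constraints can be sketched in a form suitable for a generic nonlinear optimiser. In the sketch below, predict_outputs is a hypothetical, model-specific callable (the LSTM and GRU versions are discussed in Sections 3.2 and 3.3), and the increment constraints are assumed to be handled separately as simple box bounds on the decision vector.

```python
import numpy as np

def mpc_cost(du, y_sp, predict_outputs, lam):
    """Cost function of Eq. (20): squared tracking errors over N plus penalised moves.

    predict_outputs(du) is a hypothetical, model-specific callable returning
    y_hat(k+1|k), ..., y_hat(k+N|k) for the candidate moves du.
    """
    y_hat = predict_outputs(du)
    return float(np.sum((y_sp - y_hat) ** 2) + lam * np.sum(du ** 2))

def input_magnitude_constraints(du, u_prev, u_min, u_max):
    """Constraints u_min <= u(k+p|k) <= u_max of Eq. (20), returned as values that
    must be non-negative (the convention used by most NLP solvers).

    Since u(k+p|k) = u(k-1) + du(k|k) + ... + du(k+p|k), these constraints are
    linear in the decision vector.
    """
    u_traj = u_prev + np.cumsum(du)
    return np.concatenate([u_traj - u_min, u_max - u_traj])
```

The output constraints can be handled analogously, as further inequality constraints on the predicted trajectory ŷ(k+p|k).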
Once the optimisation procedure has calculated the decision vector (Equation (19)) by solving Equation (20), only its first element is applied to the process. The most common way of this application is given by the following equation:
u(k) = \Delta u(k|k) + u(k-1).
The whole computational scheme is then repeated at the next sampling instants.
In MPC [2], the general prediction equation for the sampling instant k + p is:
\hat{y}(k+p|k) = y(k+p|k) + d(k)
where p = 1 , \ldots , N . The output of the model for the sampling instant k + p calculated at the current instant k is y ( k + p | k ) , and the current estimation of the unmeasured disturbance acting on the process output is d ( k ) . Typically, it is assumed that the disturbance is constant over the whole prediction horizon, and its value is determined as the difference between the real (measured) value of the process output and the model output calculated using the process input and output signals up to the sampling instant k − 1 :
d(k) = y_m(k) - y(k|k-1).

3.2. The LSTM Neural Network in MPC

In the case of the LSTM model, to determine the predicted output, it is necessary to first calculate the prediction values of the cell state given by Equations (6)–(10) in the following way:
\hat{c}(k+1|k) = \sigma(W_f x(k+1|k) + R_f h(k) + b_f) \circ c(k) + \sigma(W_i x(k+1|k) + R_i h(k) + b_i) \circ \tanh(W_g x(k+1|k) + R_g h(k) + b_g)
\hat{c}(k+2|k) = \sigma(W_f x(k+2|k) + R_f \hat{h}(k+1|k) + b_f) \circ \hat{c}(k+1|k) + \sigma(W_i x(k+2|k) + R_i \hat{h}(k+1|k) + b_i) \circ \tanh(W_g x(k+2|k) + R_g \hat{h}(k+1|k) + b_g)
\hat{c}(k+p|k) = \sigma(W_f x(k+p|k) + R_f \hat{h}(k+p-1|k) + b_f) \circ \hat{c}(k+p-1|k) + \sigma(W_i x(k+p|k) + R_i \hat{h}(k+p-1|k) + b_i) \circ \tanh(W_g x(k+p|k) + R_g \hat{h}(k+p-1|k) + b_g).
Using Equations (6)–(9) and Equation (11), one can then calculate the prediction of the hidden state:
\hat{h}(k+1|k) = \sigma(W_o x(k+1|k) + R_o h(k) + b_o) \circ \tanh(\hat{c}(k+1|k))
\hat{h}(k+2|k) = \sigma(W_o x(k+2|k) + R_o \hat{h}(k+1|k) + b_o) \circ \tanh(\hat{c}(k+2|k))
\hat{h}(k+p|k) = \sigma(W_o x(k+p|k) + R_o \hat{h}(k+p-1|k) + b_o) \circ \tanh(\hat{c}(k+p|k)).
Finally, the prediction of the output signal can be calculated based on Equations (12) and (22) as:
\hat{y}(k+1|k) = W_y \hat{h}(k+1|k) + b_y + d(k)
\hat{y}(k+2|k) = W_y \hat{h}(k+2|k) + b_y + d(k)
\hat{y}(k+p|k) = W_y \hat{h}(k+p|k) + b_y + d(k).
Taking into account the input vector of the network (Equation (4)), for prediction over the prediction horizon, the vector of arguments of the network is:
x(k+1|k) = [u(k|k), u(k-1), \ldots, u(k-n_B+1), y(k), y(k-1), \ldots, y(k-n_A+1)]^T
x(k+2|k) = [u(k+1|k), u(k|k), \ldots, u(k-n_B+2), \hat{y}(k+1|k), y(k), \ldots, y(k-n_A+2)]^T
x(k+p|k) = [u(k+p-1|k), u(k+p-2|k), \ldots, u(k-n_B+p), \hat{y}(k+p-1|k), \hat{y}(k+p-2|k), \ldots, y(k-n_A+p)]^T.
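The recursion of Equations (24)–(35) can be sketched as a single free-run prediction routine. The code below is illustrative only: lstm_step_fn stands for the LSTM forward pass (e.g., the lstm_step() sketch of Section 2.1.4 with the weights already bound, for instance via functools.partial), and the history handling assumes the input vector layout of Equation (35).

```python
import numpy as np

def lstm_predict_horizon(u_future, u_past, y_past, h0, c0, d, lstm_step_fn,
                         n_A, n_B, N):
    """Free-run prediction over the horizon, Eqs. (24)-(35) (an illustrative sketch).

    u_future     - u(k|k), ..., u(k+N-1|k); the caller repeats the last move of the
                   control horizon up to N
    u_past       - [u(k-1), ..., u(k-n_B+1)], newest first
    y_past       - [y(k), ..., y(k-n_A+1)], newest first
    h0, c0       - current hidden and cell states of the model
    d            - constant disturbance estimate d(k), Eq. (23)
    lstm_step_fn - LSTM forward pass with the weights bound, returning (y, h, c)
    """
    u_hist = list(u_past)
    y_hist = list(y_past)
    h, c = np.copy(h0), np.copy(c0)
    y_hat = np.zeros(N)
    for p in range(1, N + 1):
        u_hist.insert(0, u_future[p - 1])            # u(k+p-1|k) becomes the newest input
        x = np.array(u_hist[:n_B] + y_hist[:n_A])    # x(k+p|k), Eq. (35)
        y_model, h, c = lstm_step_fn(x, h, c)        # Eqs. (24)-(31) with Eq. (12)
        y_hat[p - 1] = y_model + d                   # corrected prediction, Eq. (32)
        y_hist.insert(0, y_hat[p - 1])               # fed back in the next input vector
    return y_hat
```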

3.3. The GRU Neural Network in MPC

There is no cell state in the GRU neural networks; therefore, to calculate the predicted output signal values ŷ , only the prediction of the hidden state h needs to be evaluated first. This is performed based on Equations (14)–(17) in the following way:
\hat{h}(k+1|k) = (1_{n_N \times 1} - \sigma(W_z x(k+1|k) + R_z h(k) + b_z)) \circ \tanh(W_g x(k+1|k) + \sigma(W_r x(k+1|k) + R_r h(k) + b_r) \circ (R_g h(k)) + b_g) + \sigma(W_z x(k+1|k) + R_z h(k) + b_z) \circ h(k)
\hat{h}(k+2|k) = (1_{n_N \times 1} - \sigma(W_z x(k+2|k) + R_z \hat{h}(k+1|k) + b_z)) \circ \tanh(W_g x(k+2|k) + \sigma(W_r x(k+2|k) + R_r \hat{h}(k+1|k) + b_r) \circ (R_g \hat{h}(k+1|k)) + b_g) + \sigma(W_z x(k+2|k) + R_z \hat{h}(k+1|k) + b_z) \circ \hat{h}(k+1|k)
\hat{h}(k+p|k) = (1_{n_N \times 1} - \sigma(W_z x(k+p|k) + R_z \hat{h}(k+p-1|k) + b_z)) \circ \tanh(W_g x(k+p|k) + \sigma(W_r x(k+p|k) + R_r \hat{h}(k+p-1|k) + b_r) \circ (R_g \hat{h}(k+p-1|k)) + b_g) + \sigma(W_z x(k+p|k) + R_z \hat{h}(k+p-1|k) + b_z) \circ \hat{h}(k+p-1|k)
where 1 n N × 1 is a vector of ones of dimensionality n N × 1 . The prediction of the output signal, Equation (32), as well as the input vector, Equation (35), are the same as in the LSTM neural network model.
The proposed MPC control procedure may be summarised as follows:
  • The estimated disturbance d ( k ) is calculated from Equation (23):
    a. In the case of the LSTM network, the model output y ( k | k − 1 ) is calculated from Equations (6)–(12);
    b. In the case of the GRU network, the model output is calculated from Equations (14)–(18);
  • The MPC optimisation task is then performed. To calculate the output prediction, the cell and hidden state predictions must be calculated first:
    a. For the LSTM model, the predictions are calculated from Equations (24)–(32);
    b. For the GRU model, the state predictions are calculated from Equations (36)–(38) and the output prediction is generated as shown in Equation (32). The cost function is the same for both models and is given by Equation (20);
  • The first element of the calculated decision vector (Equation (19)) is applied to the process, i.e., u ( k ) = Δ u ( k | k ) + u ( k − 1 ) . A compact code sketch of this loop is given below.
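The receding-horizon procedure summarised above can be sketched compactly. In the paper, MATLAB's fmincon() with SQP was used (see Section 4.3); the sketch below substitutes SciPy's SLSQP solver as a stand-in and relies on hypothetical callables for the model, so it illustrates the structure of one MPC iteration rather than the actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_controller_step(y_meas, u_prev, y_sp_traj, predict_free_run, model_one_step,
                        N, N_u, lam, u_min, u_max, du_max):
    """One sampling instant of the MPC procedure summarised above (a sketch).

    predict_free_run(u_future) - hypothetical callable returning the model outputs
        y(k+1|k), ..., y(k+N|k) for a candidate input trajectory (LSTM or GRU model,
        Sections 3.2 and 3.3), without the disturbance term
    model_one_step() - hypothetical callable returning the model output y(k|k-1)
    """
    # Step 1: disturbance estimate, Eq. (23)
    d = y_meas - model_one_step()

    # Step 2: MPC optimisation task, Eq. (20)
    def cost(du):
        u_traj = u_prev + np.cumsum(du)                    # u(k|k), ..., u(k+N_u-1|k)
        u_future = np.concatenate([u_traj, np.full(N - N_u, u_traj[-1])])
        y_hat = predict_free_run(u_future) + d             # corrected predictions, Eq. (22)
        return np.sum((y_sp_traj - y_hat) ** 2) + lam * np.sum(du ** 2)

    def magnitude_ok(du):                                  # u_min <= u(k+p|k) <= u_max
        u_traj = u_prev + np.cumsum(du)
        return np.concatenate([u_traj - u_min, u_max - u_traj])

    res = minimize(cost, np.zeros(N_u), method='SLSQP',
                   bounds=[(-du_max, du_max)] * N_u,
                   constraints=[{'type': 'ineq', 'fun': magnitude_ok}])

    # Step 3: apply only the first increment, Eq. (21)
    return u_prev + res.x[0]
```

If output constraints are required, they can be added as further inequality constraints on predict_free_run(u_future) + d in the same way.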

4. Results of the Simulations

In order to compare the accuracy of the LSTM and GRU networks and their efficiency in MPC, we considered two dynamical systems: a polymerisation reactor and a neutralisation (pH) reactor.

4.1. Description of the Dynamical Systems

First, the two considered processes are briefly described. Moreover, a short description of the data preparation procedure is given.

4.1.1. Benchmark 1: The Polymerisation Reactor

The first considered benchmark was a polymerisation reaction taking place in a jacketed continuous stirred-tank reactor. The reaction was the free-radical polymerisation of methyl methacrylate with azo-bis-isobutyronitrile as the initiator and toluene as the solvent. The process input was the inlet initiator flow rate F I (m³ h⁻¹); the output was the value of the Number Average Molecular Weight ( NAMW ) of the product (kg kmol⁻¹). The detailed fundamental model of the process was given in [45]. The process was nonlinear: in particular, its static gain depended on the operating point. The polymerisation reactor is frequently used to evaluate model identification algorithms and advanced nonlinear control methods, e.g., [12,45,46].
The fundamental model of the polymerisation process, comprising four nonlinear differential equations, was solved using the Runge–Kutta 45 method to obtain training, validation and test datasets, each of them having 2000 samples. Every 50 samples, there was a step change of the control signal whose magnitude was chosen randomly. Next, since the process input and output signals had different magnitudes, these signals were scaled in the following way:
u = 100 (F_I - \bar{F}_I), \quad y = 0.0001 (NAMW - \overline{NAMW})
where \bar{F}_I = 0.016783 and \overline{NAMW} = 20,000 denote the values of the variables at the nominal operating point. The sampling time was 1.8 s.
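The data-generation procedure just described can be sketched as follows. The right-hand side of the fundamental model of [45], the initial state and the NAMW extraction are supplied by the user (they are not reproduced here), solve_ivp with RK45 plays the role of the Runge–Kutta 45 solver, and the relative step-magnitude range is an assumption of ours, not a value taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def generate_dataset(rhs, x0, namw_of_state, n_samples=2000, step_len=50, Ts=1.8,
                     F_I_nom=0.016783, NAMW_nom=20000.0, step_range=0.5, seed=0):
    """Excitation and scaling procedure described above (an illustrative sketch).

    rhs(t, x, F_I)   - right-hand side of the fundamental model of [45]
    x0               - state at the nominal operating point
    namw_of_state(x) - hypothetical helper extracting NAMW from the model state
    step_range       - assumed relative range of the random input steps
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    u_scaled = np.zeros(n_samples)
    y_scaled = np.zeros(n_samples)
    F_I = F_I_nom
    for k in range(n_samples):
        if k % step_len == 0:                                  # step change every 50 samples
            F_I = F_I_nom * (1.0 + rng.uniform(-step_range, step_range))
        sol = solve_ivp(rhs, (0.0, Ts), x, args=(F_I,), method='RK45')
        x = sol.y[:, -1]
        u_scaled[k] = 100.0 * (F_I - F_I_nom)                  # u = 100 (F_I - F_I_nom)
        y_scaled[k] = 1e-4 * (namw_of_state(x) - NAMW_nom)     # y = 0.0001 (NAMW - NAMW_nom)
    return u_scaled, y_scaled
```

The neutralisation dataset of Section 4.1.2 can be produced in the same way, with the corresponding scaling applied.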

4.1.2. Benchmark 2: The Neutralisation Reactor

The second considered benchmark was a neutralisation reactor. The process input was the base ( NaOH ) stream flow rate q 1 (mL/s); the output was the value of the pH of the product. The detailed fundamental model of the process was given in [47]. The process was nonlinear since its static and dynamic properties depended on the operating point. Hence, it is frequently used as a good benchmark to evaluate model identification algorithms and advanced nonlinear control methods, e.g., [46,47,48].
The fundamental model of the neutralisation process, comprising two nonlinear differential equations and a nonlinear algebraic equation, was solved using the Runge–Kutta 45 method to obtain training, validation and test datasets, each of them having 2000 samples. Every 50 samples, there was a step change of the control signal whose magnitude was chosen randomly. The process signals were scaled in the following way:
u = q_1 - \bar{q}_1, \quad y = \mathrm{pH} - \overline{\mathrm{pH}}
where \bar{q}_1 = 15.5 and \overline{\mathrm{pH}} = 7 denote the values of the variables at the nominal operating point. The sampling time was 10 s.

4.2. LSTM and GRU Neural Networks for Modelling of Polymerisation and Neutralisation Reactors

A number of LSTM and GRU models were trained for the two considered dynamic processes. All models were trained using the Adam optimisation algorithm. The maximum number of training epochs (iterations) was:
  • 500 for the models with n N 3 ;
  • 750 for the models with 3 < n N 7 ;
  • 1000 for the models with 7 < n N .
The training procedure was performed as follows:
  • The order of the dynamics of the LSTM model was set to n A = n B = 1 . The number of neurons in the hidden layer was set to n N = 1 . For the considered configuration, ten models were trained, and the best one was chosen;
  • The number of neurons was increased to two. Ten models were trained, and the best was chosen. This procedure was repeated until the number of neurons reached n N = 30 ;
  • The first two steps were repeated with the increased order of the dynamics n A = n B = 2 , n A = n B = 3 .
It is important to stress that setting the order of the dynamics to higher than n A = n B = 3 did not result in any significant increase of the modelling quality. Therefore, further experiments with n A = n B > 3 are not presented.
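The structure search just described can be expressed as a short selection loop. In the sketch below, train_model() and evaluate_mse() are hypothetical stand-ins for the actual Adam-based training and the validation procedure.

```python
import itertools

def select_models(train_model, evaluate_mse, orders=(1, 2, 3),
                  n_neurons=range(1, 31), n_trials=10):
    """Structure search described above (a sketch with hypothetical helpers).

    For every combination of the order of the dynamics (n_A = n_B) and the number
    of neurons n_N, ten models are trained and the one with the lowest validation
    error is kept; the epoch limits follow the schedule given in the text.
    """
    best = {}
    for order, n_N in itertools.product(orders, n_neurons):
        epochs = 500 if n_N <= 3 else (750 if n_N <= 7 else 1000)
        candidates = [train_model(n_A=order, n_B=order, n_N=n_N, max_epochs=epochs)
                      for _ in range(n_trials)]
        best[(order, n_N)] = min(candidates, key=evaluate_mse)
    return best
```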
It is an interesting question whether LSTM and GRU models without the recurrent input signals y ( k − 1 ) , \ldots , y ( k − n A ) can perform well in modelling tasks. In theory, the recurrent nature of the hidden state h should be sufficient to ensure good model quality. To verify this expectation, an additional series of models was trained. The training procedure was similar to the one described above, the only difference being that the model order of the dynamics was first set to n A = 0 , n B = 1 , then increased to n A = 0 , n B = 2 and, finally, to n A = 0 , n B = 3 .
The quality of all trained models was then validated with the mean squared error chosen as the quality index. The models were validated in the nonrecurrent Autoregressive with eXogenous input (ARX) mode and the Output Error (OE) recurrent mode. The model input vectors for the two considered cases are:
x_{ARX}(k) = [u(k-1), \ldots, u(k-n_B), y_{data}(k-1), \ldots, y_{data}(k-n_A)]^T
x_{rec}(k) = [u(k-1), \ldots, u(k-n_B), y_{model}(k-1), \ldots, y_{model}(k-n_A)]^T.
It is important to stress that in the case of the models with n A = 0 , the ARX and OE modes were the same.
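The two validation modes can be sketched as follows; model_step() is a hypothetical callable that evaluates a trained LSTM or GRU network for a single input vector built as in the equation above.

```python
import numpy as np

def validation_errors(model_step, u, y_data, n_A, n_B):
    """Mean squared errors in the ARX (one-step) and OE (recurrent) modes (a sketch)."""
    n0 = max(n_A, n_B)
    y_arx = list(y_data[:n0])                  # ARX mode: the model is fed measured outputs
    y_rec = list(y_data[:n0])                  # OE mode: the model is fed its own outputs
    for k in range(n0, len(y_data)):
        u_part = list(u[k - n_B:k][::-1])                        # u(k-1), ..., u(k-n_B)
        y_meas = list(y_data[k - n_A:k][::-1]) if n_A else []    # y_data(k-1), ..., y_data(k-n_A)
        y_mod = list(y_rec[k - n_A:k][::-1]) if n_A else []      # y_model(k-1), ..., y_model(k-n_A)
        y_arx.append(model_step(np.array(u_part + y_meas)))      # x_ARX(k)
        y_rec.append(model_step(np.array(u_part + y_mod)))       # x_rec(k)
    y_true = np.asarray(y_data[n0:])
    return (np.mean((y_true - np.asarray(y_arx[n0:])) ** 2),
            np.mean((y_true - np.asarray(y_rec[n0:])) ** 2))
```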
Taking into account the objective of this work, it is interesting to compare the accuracy of the LSTM and GRU models with different structures, defined by the number of neurons, n N , and the order of the model dynamics, determined by n A and n B . For the polymerisation reactor, the results for the chosen networks are given in Table 1 and Table 2, and Figure 4 depicts the model validation errors for all considered numbers of neurons. For the neutralisation reactor, the results for the chosen networks are given in Table 3 and Table 4, and Figure 5 depicts the model validation errors for all considered numbers of neurons. The following notation is used:
  • E t is the mean squared error for the training dataset in the ARX mode;
  • E v is the mean squared error for the validation dataset in the ARX mode;
  • E t rec is the mean squared error for the training dataset in the recurrent mode;
  • E v rec is the mean squared error for the validation dataset in the recurrent mode.
The presented results can be summarised in the following way:
  • In the case of the polymerisation reactor, the results achieved with the LSTM and GRU networks were comparable. As seen in Figure 4, the mean squared errors were similar for every combination of n A , n B and n N ;
  • In the case of the neutralisation reactor, the LSTM models ensured a better quality of modelling, especially for models with a low number of parameters. However, as seen in Figure 5, as the number of neurons increased, this difference became more and more negligible. This is again not surprising. GRU networks have fewer parameters than LSTM networks. Therefore, GRU models with a low number of neurons and a low order of the dynamics performed worse than their LSTM counterparts. As the models became bigger and more complex, the difference between their quality decreased.
  • Models with a higher number of neurons (15–30) ensured the best and most consistent modelling quality. This is not surprising, as the number of model parameters is directly proportional to the capacity to reproduce the behaviour of more complex processes. However, this can also be a main drawback of complex models, because the enormous number of parameters, as shown in Figure 6 and Figure 7, increases their computational cost significantly;
  • For the models with a medium number of neurons (3–10), the modelling quality was not consistent. In some cases, it was quite poor; in others, it even outperformed models with a huge number of neurons (an example can be found in Table 4, the GRU network with n A = n B = 1 , n N = 5 ). One can conclude that this group of models has a structure complex enough to represent the behaviour of the systems under investigation. The training procedure must be, however, performed many times, as training may sometimes not be successful. In other words, if the goal is to find the model with the minimum number of parameters and good quality, the medium-sized models are the best option;
  • Models with a low number of neurons (1–2) did not ensure a good modelling quality regardless of the neural network type and the model order of the dynamics, as shown in Figure 8 and Figure 9. These models had too few parameters to accurately represent the behaviour of the processes under study;
  • Interestingly enough, the order of the dynamics of the model seemed not to greatly impact the modelling quality. Models with a higher order were most commonly only slightly better than those with n A = n B = 1 . Only in the case of the neutralisation reactor with n A = 0 in Table 3 could a noticeable improvement be observed when n B was set to two. The unique long-term memory quality of the networks under study may be a cause of this phenomenon. The information about the important previous input and output signals from the past can be kept inside the hidden and cell states, and therefore, the networks can perform very well with only the most recent input values (i.e., n A = 0 , n B = 1 ).
Based on the observations summarised above, it can be concluded that it is a good practice to train a model with a medium number of neurons and a low order of the dynamics. This approach may require many training trials, but as a result, the model has a relatively low number of parameters; therefore, a lower computational cost can be achieved. A direct comparison of the polymerisation reactor models can be seen in Figure 10 and Figure 11. Both models performed very well, and the modelling errors were minimal. A similar comparison for the pH reactor can be seen in Figure 12 and Figure 13. The modelling quality was again very satisfactory. Here, it is important to stress that in the case of the GRU model with n A = 0 , it was necessary to choose one with a higher order of the dynamics to achieve results similar to those ensured by the simpler LSTM models.

4.3. LSTM and GRU Neural Network for the MPC of Polymerisation and Neutralisation Reactors

A few of the best-performing models were chosen to be used for prediction in the MPC control scheme. First, let us describe the tuning procedure of the MPC controller. It starts with the selection of the prediction horizon, which should be long enough to cover the dynamic behaviour of the process. However, if the horizons are too long, the computational cost of the optimisation task increases. The control horizon cannot be too short since it gives insufficient control quality, while its lengthening also increases the computational burden. The process of tuning was therefore as follows:
  • The constant weighting coefficient λ = 1 was assumed;
  • The prediction horizon N and the control horizon N u were set to have the same, arbitrarily chosen lengths. If the controller was not working properly, both horizons were lengthened;
  • The prediction horizon was gradually shortened, and its minimal possible length was chosen (with the condition N u = N );
  • The effect of changing the length of the control horizon on the resulting control quality was then assessed experimentally (e.g., assuming successively N u = 1 , 2 , 3 , 4 , 5 , 10 , , N ). The shortest possible control horizon was chosen;
  • Finally, after determining the horizon’s lengths, the weighting coefficient λ was adjusted.
After applying the tuning procedure on both processes under study, the following settings were determined:
  • N = 10 , N u = 5 , λ = 0.5 for the polymerisation process;
  • N = 10 , N u = 3 , λ = 0.5 for the neutralisation process.
Simulations of the MPC algorithms were performed with MATLAB. For optimisation, the fmincon() function was used with the following settings:
  • Optimisation algorithm—Sequential Quadratic Programming (SQP);
  • Finite differences type—centred.
MPC performance using the models without the recursive input signals ( n A = 0 ) proved to be very satisfactory. In the case of the polymerisation reactor, in Figure 14, minimal overshoot and a short settling time can be observed. Similar control quality was achieved for the neutralisation reactor, as depicted in Figure 15. Interestingly enough, for MPC with more complex models ( n A = n B ), the results were comparable, as demonstrated in Figure 16. In the case of the polymerisation system and the LSTM model, small oscillations around the set-point could be observed, as shown in Figure 17, and the overall control quality was slightly worse. Table 5 and Table 6 compare the simulation results of the MPC algorithms based on the LSTM and GRU models for the polymerisation and neutralisation processes, respectively. The following indicators used in process control performance assessment were considered [49]:
  • The sum of squared errors (E);
  • The Huber standard deviation ( σ H ) of the control error;
  • The rational entropy ( S r ) of the control error.
Additionally, the average time of calculation (t) during the whole simulation horizon (in seconds) was specified.
From the performed experiments, we were able to draw the following conclusions:
  • Both types of neural networks allowed for a successful application of the MPC control scheme. All control performance indicators, i.e., E, σ H and S r , showed that the GRU network models, when applied for prediction in MPC, lead to control quality very similar to that obtained when the LSTM networks are used. What is more, as the GRU models have fewer internal parameters, their computational cost and, therefore, the time of calculations are lower, as shown in Table 5 and Table 6;
  • It is advisable to choose models with a relatively simple structure and a low number of parameters to implement in the MPC scheme. More complex models often provide comparable or even worse quality of control, and the computation cost rises with the number of parameters of the model;
  • Minor model imperfections are reduced with great success by the feedback in MPC. An example of this phenomenon can be observed in the bottom plots in Figure 12, where the model outputs differ slightly from the validation data in some areas. However, when the models are implemented in the MPC scheme, as shown in Figure 16, the quality of control is very satisfactory. However, the negative feedback is not sufficient to ensure satisfactory control if the model itself has poor quality. Example simulation results for the polymerisation process are presented in Figure 18. As a result of a very bad model, the MPC algorithm leads to unacceptable control quality, i.e., the set-point is never achieved, and strong oscillations are observed. Example simulation results when an inaccurate model is used in MPC for the neutralisation process are presented in Figure 19. In this case, the overshoot is larger and the settling time is longer when compared with the MPC algorithm based on a good model, e.g., as shown in Figure 15.
It is important to stress that the above observations are true for the two considered processes.

5. Conclusions

Having performed numerous experiments with different structures of LSTM and GRU neural networks as models of dynamical systems used in the MPC of two chemical reactors, we found that the GRU network gives very good results. Firstly, it approximates the properties of the dynamical systems with good accuracy, comparable with that possible when the rudimentary LSTM model is used. Secondly, it gives very good results when used for prediction in MPC, very similar to those observed in the case of the LSTM models. It is necessary to point out that the number of model parameters is lower in the case of the GRU network. Hence, the use of the GRU network is recommended for modelling of dynamical processes and MPC.
Future work is planned to develop more computationally efficient MPC control schemes based on the GRU structure and for Multiple-Input Multiple-Output (MIMO) processes. Moreover, it is planned to develop GRU models and use them in MPC applied to the ball-on-plate laboratory process [8].

Author Contributions

Conceptualisation, K.Z. and M.Ł.; methodology, K.Z.; software, K.Z.; validation, K.Z. and M.Ł.; formal analysis, K.Z.; investigation, K.Z.; writing—original draft preparation, M.Ł.; writing—review and editing, K.Z. and M.Ł.; visualisation, K.Z. and M.Ł.; supervision, M.Ł. Both authors read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Maciejowski, J. Predictive Control with Constraints; Prentice Hall: Harlow, UK, 2002. [Google Scholar]
  2. Tatjewski, P. Advanced Control of Industrial Processes, Structures and Algorithms; Springer: London, UK, 2007. [Google Scholar]
  3. Nebeluk, R.; Marusak, P. Efficient MPC algorithms with variable trajectories of parameters weighting predicted control errors. Arch. Control Sci. 2020, 30, 325–363. [Google Scholar]
  4. Carli, R.; Cavone, G.; Ben Othman, S.; Dotoli, M. IoT Based Architecture for Model Predictive Control of HVAC Systems in Smart Buildings. Sensors 2020, 20, 781. [Google Scholar] [CrossRef] [Green Version]
  5. Rybus, T.; Seweryn, K.; Sąsiadek, J.Z. Application of predictive control for manipulator mounted on a satellite. Arch. Control Sci. 2018, 28, 105–118. [Google Scholar]
  6. Ogonowski, S.; Bismor, D.; Ogonowski, Z. Control of complex dynamic nonlinear loading process for electromagnetic mill. Arch. Control Sci. 2020, 30, 471–500. [Google Scholar]
  7. Horla, D. Experimental Results on Actuator/Sensor Failures in Adaptive GPC Position Control. Actuators 2021, 10, 43. [Google Scholar] [CrossRef]
  8. Zarzycki, K.; Ławryńczuk, M. Fast real-time model predictive control for a ball-on-plate process. Sensors 2021, 21, 3959. [Google Scholar] [CrossRef]
  9. Bania, P. An information based approach to stochastic control problems. Int. J. Appl. Math. Comput. Sci. 2020, 30, 47–59. [Google Scholar]
  10. Nelles, O. Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models; Springer: Berlin, Germany, 2001. [Google Scholar]
  11. Haykin, S. Neural Networks and Learning Machines; Pearson Education: Upper Saddle River, NJ, USA, 2009. [Google Scholar]
  12. Ławryńczuk, M. Computationally Efficient Model Predictive Control Algorithms: A Neural Network Approach; Studies in Systems, Decision and Control; Springer: Cham, Switzerland, 2014; Volume 3. [Google Scholar]
  13. Bianchi, F.M.; Maiorino, E.; Kampffmeyer, M.C.; Rizzi, A.; Jenssen, R. Recurrent Neural Networks for Short-Term Load Forecasting: An Overview and Comparative Analysis; Springer Briefs in Computer Science; Springer: Berlin, Germany, 2017. [Google Scholar]
  14. Hammer, B. Learning with Recurrent Neural Networks; Lecture Notes in Control and Information Sciences; Springer: Berlin, Germany, 2000; Volume 254. [Google Scholar]
  15. Mandic, D.P.; Chambers, J.A. Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability; Wiley: Chichester, UK, 2001. [Google Scholar]
  16. Rovithakis, G.A.; Christodoulou, M.A. Adaptive Control with Recurrent High-Order Neural Networks; Springer: Berlin, Germany, 2000. [Google Scholar]
  17. Bengio, Y.; Simard, P.; Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 1994, 5, 157–166. [Google Scholar] [CrossRef] [PubMed]
  18. Hochreiter, S. Untersuchungen zu Dynamischen Neuronalen Netzen. Master’s Thesis, Technical University Munich, Munich, Germany, 1991. [Google Scholar]
  19. Hochreiter, S.; Schmidhuber, J. Long Short-term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  20. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  21. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv 2014, arXiv:1412.3555. [Google Scholar]
  22. Islam, A.; Chang, K.H. Real-time AI-based informational decision-making support system utilizing dynamic text sources. Appl. Sci. 2021, 11, 6237. [Google Scholar] [CrossRef]
  23. Graves, A.; Schmidhuber, J. Offline handwriting recognition with multidimensional recurrent neural networks. In Advances in Neural Information Processing Systems; Koller, D., Schuurmans, D., Bengio, Y., Bottou, L., Eds.; Curran Associates, Inc.: La Jolla, CA, USA, 2009; Volume 21, pp. 1–8. [Google Scholar]
  24. Sak, H.; Senior, A.; Beaufays, F. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Proceedings of the Annual Conference of the International Speech Communication Association, Interspeech 2014, Singapore, 14–18 September 2014; pp. 338–342. [Google Scholar]
  25. Graves, A.; Abdel-Rahman, M.; Geoffrey, H. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 6645–6649. [Google Scholar]
  26. Capes, T.; Coles, P.; Conkie, A.; Golipour, L.; Hadjitarkhani, A.; Hu, Q.; Huddleston, N.; Hunt, M.; Li, J.; Neeracher, M.; et al. Siri on-device deep learning-guided unit selection text-to-speech system. In Proceedings of the Interspeech 2017, Stockholm, Sweden, 20–24 August 2017; pp. 4011–4015. [Google Scholar]
  27. Telenyk, S.; Pogorilyy, S.; Kramov, A. Evaluation of the coherence of Polish texts using neural network models. Appl. Sci. 2021, 11, 3210. [Google Scholar] [CrossRef]
  28. Ackerson, J.M.; Dave, R.; Seliya, N. Applications of recurrent neural network for biometric authentication & anomaly detection. Information 2021, 12, 272. [Google Scholar]
  29. Gallardo-Antolín, A.; Montero, J.M. Detecting deception from gaze and speech using a multimodal attention LSTM-based framework. Appl. Sci. 2021, 11, 6393. [Google Scholar] [CrossRef]
  30. Kulanuwat, L.; Chantrapornchai, C.; Maleewong, M.; Wongchaisuwat, P.; Wimala, S.; Sarinnapakorn, K.; Boonya-Aroonnet, S. Anomaly detection using a sliding window technique and data imputation with machine learning for hydrological time series. Water 2021, 13, 1862. [Google Scholar] [CrossRef]
  31. Bursic, S.; Boccignone, G.; Ferrara, A.; D’Amelio, A.; Lanzarotti, R. Improving the accuracy of automatic facial expression recognition in speaking subjects with deep learning. Appl. Sci. 2020, 10, 4002. [Google Scholar] [CrossRef]
  32. Chen, J.; Huang, X.; Jiang, H.; Miao, X. Low-cost and device-free human activity recognition based on hierarchical learning model. Sensors 2021, 21, 2359. [Google Scholar] [CrossRef] [PubMed]
  33. Fang, Y.; Yang, S.; Zhao, B.; Huang, C. Cyberbullying detection in social networks using Bi-GRU with self-attention mechanism. Information 2021, 12, 171. [Google Scholar] [CrossRef]
  34. Knaak, C.; von Eßen, J.; Kröger, M.; Schulze, F.; Abels, P.; Gillner, A. A spatio-temporal ensemble deep learning architecture for real-time defect detection during laser welding on low power embedded computing boards. Sensors 2021, 21, 4205. [Google Scholar] [CrossRef]
  35. Ullah, A.; Muhammad, K.; Ding, W.; Palade, V.; Haq, I.U.; Baik, S.W. Efficient activity recognition using lightweight CNN and DS-GRU network for surveillance applications. Appl. Soft Comput. 2021, 103, 107102. [Google Scholar] [CrossRef]
  36. Varshney, A.; Ghosh, S.K.; Padhy, S.; Tripathy, R.K.; Acharya, U.R. Automated classification of mental arithmetic tasks using recurrent neural network and entropy features obtained from multi-channel EEG signals. Electronics 2021, 10, 1079. [Google Scholar] [CrossRef]
  37. Ye, F.; Yang, J. A Deep Neural Network Model for Speaker Identification. Appl. Sci. 2021, 11, 3603. [Google Scholar] [CrossRef]
  38. Gonzalez, J.; Yu, W. Non-linear system modeling using LSTM neural networks. IFAC-PapersOnLine 2018, 51, 485–489. [Google Scholar] [CrossRef]
  39. Schwedersky, B.B.; Flesch, R.C.C.; Dangui, H.A.S. Practical nonlinear model predictive control algorithm for long short-term memory networks. IFAC-PapersOnLine 2019, 52, 468–473. [Google Scholar] [CrossRef]
  40. Karimanzira, D.; Rauschenbach, T. Deep learning based model predictive control for a reverse osmosis desalination plant. J. Appl. Math. Phys. 2020, 8, 2713–2731. [Google Scholar] [CrossRef]
  41. Jeon, B.K.; Kim, E.J. LSTM-based model predictive control for optimal temperature set-point planning. Sustainability 2021, 13, 894. [Google Scholar] [CrossRef]
  42. Iglesias, R.; Rossi, F.; Wang, K.; Hallac, D.; Leskovec, J.; Pavone, M. Data-driven model predictive control of autonomous mobility-on-demand systems. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 6019–6025. [Google Scholar]
  43. Okulski, M.; Ławryńczuk, M. A novel neural network model applied to modeling of a tandem-wing quadplane drone. IEEE Access 2021, 9, 14159–14178. [Google Scholar]
  44. Pascanu, R.; Mikolov, T.; Bengio, Y. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; Volume 28, pp. 1310–1318. [Google Scholar]
  45. Doyle, F.J.; Ogunnaike, B.A.; Pearson, R. Nonlinear model-based control using second-order Volterra models. Automatica 1995, 31, 697–714. [Google Scholar] [CrossRef]
  46. Ławryńczuk, M. Practical nonlinear predictive control algorithms for neural Wiener models. J. Process Control 2013, 23, 696–714. [Google Scholar] [CrossRef]
  47. Gómez, J.C.; Jutan, A.; Baeyens, E. Wiener model identification and predictive control of a pH neutralisation process. Proc. IEEE Part D Control Theory Appl. 2004, 151, 329–338. [Google Scholar] [CrossRef] [Green Version]
  48. Ławryńczuk, M. Modelling and predictive control of a neutralisation reactor using sparse Support Vector Machine Wiener models. Neurocomputing 2016, 205, 311–328. [Google Scholar] [CrossRef]
  49. Domański, P. Control Performance Assessment: Theoretical Analyses and Industrial Practice; Studies in Systems, Decision and Control; Springer: Cham, Switzerland, 2020; Volume 245. [Google Scholar]
Figure 1. The LSTM cell structure.
Figure 2. The topology of the LSTM and GRU networks.
Figure 3. The GRU cell structure.
Figure 4. The polymerisation reactor: LSTM and GRU model validation errors for different numbers of neurons n N .
Figure 5. The neutralisation reactor: LSTM and GRU model validation errors for different numbers of neurons n N .
Figure 6. The number of the parameters of the LSTM and GRU models as a function of the number of neurons and the order of the dynamics determined by n A = n B .
Figure 7. The number of parameters of the LSTM and GRU models as a function of the number of neurons and the order of the dynamics determined by n B ; n A = 0 .
Figure 8. The polymerisation reactor: the validation dataset vs. the output of the LSTM and GRU models for n N = 1 , n A = n B = 1 .
Figure 9. The neutralisation reactor: the validation dataset vs. the output of the LSTM and GRU models for n N = 1 , n A = n B = 1 .
Figure 10. The polymerisation reactor: the validation dataset vs. the output of the LSTM and GRU models for n N = 9 , n A = 0 , n B = 1 .
Figure 11. The polymerisation reactor: the validation dataset vs. the output of the LSTM and GRU models for n N = 10 , n A = n B = 1 .
Figure 12. The neutralisation reactor: the validation dataset vs. the output of the LSTM and GRU models for n N = 5 , n A = n B = 1 .
Figure 13. The neutralisation reactor: the validation dataset vs. the output of the LSTM and GRU models for n N = 8 , n A = 0 , n B = 3 .
Figure 14. The polymerisation reactor: MPC results with the LSTM and GRU models for n N = 9 , n A = 0 , n B = 1 .
Figure 15. The neutralisation reactor: MPC results with the LSTM and GRU models for n N = 8 , n A = 0 , n B = 3 .
Figure 16. The neutralisation reactor: MPC results with the LSTM and GRU models for n N = 5 , n A = n B = 1 .
Figure 17. The polymerisation reactor: MPC results with the LSTM and GRU models for n N = 10 , n A = n B = 1 .
Figure 18. The polymerisation reactor: MPC results with the LSTM and GRU models for n N = 1 , n A = n B = 1 .
Figure 19. The neutralisation reactor: MPC results with the LSTM and GRU models for n N = 1 , n A = n B = 1 .
Table 1. The polymerisation reactor: comparison of selected LSTM and GRU networks without the recurrent inputs ( n A = 0 ) in terms of the training ( E t ) and validation errors ( E v ).

| n B | n N | LSTM E t | LSTM E v | GRU E t | GRU E v |
| 1 | 1 | 10.22 | 12.17 | 6.58 | 11.08 |
| 1 | 2 | 2.51 | 4.06 | 1.58 | 2.26 |
| 1 | 3 | 1.85 | 2.98 | 1.00 | 1.87 |
| 1 | 4 | 1.21 | 1.85 | 0.58 | 0.99 |
| 1 | 5 | 0.35 | 0.88 | 0.61 | 1.06 |
| 1 | 10 | 0.08 | 0.19 | 0.29 | 0.65 |
| 1 | 15 | 0.02 | 0.08 | 0.16 | 0.30 |
| 1 | 20 | 0.06 | 0.13 | 0.09 | 0.16 |
| 1 | 25 | 0.07 | 0.19 | 0.16 | 0.31 |
| 2 | 1 | 5.21 | 7.33 | 9.25 | 14.10 |
| 2 | 2 | 0.83 | 1.46 | 2.56 | 4.58 |
| 2 | 3 | 1.41 | 2.67 | 2.02 | 3.00 |
| 2 | 4 | 0.30 | 0.59 | 0.57 | 1.19 |
| 2 | 5 | 0.50 | 1.09 | 0.26 | 0.63 |
| 2 | 10 | 0.06 | 0.19 | 0.19 | 0.39 |
| 2 | 15 | 0.14 | 0.26 | 0.24 | 0.50 |
| 2 | 20 | 0.13 | 0.23 | 0.11 | 0.24 |
| 2 | 25 | 0.08 | 0.17 | 0.15 | 0.31 |
| 3 | 1 | 7.68 | 11.05 | 2.96 | 5.24 |
| 3 | 2 | 1.37 | 2.42 | 1.09 | 1.76 |
| 3 | 3 | 1.45 | 2.49 | 1.01 | 1.82 |
| 3 | 4 | 0.55 | 1.01 | 0.66 | 1.27 |
| 3 | 5 | 0.80 | 1.49 | 0.22 | 0.55 |
| 3 | 10 | 0.08 | 0.18 | 0.10 | 0.21 |
| 3 | 15 | 0.07 | 0.24 | 0.24 | 0.65 |
| 3 | 20 | 0.07 | 0.16 | 0.14 | 0.27 |
| 3 | 25 | 0.06 | 0.17 | 0.20 | 0.38 |
Table 2. The polymerisation reactor: comparison of selected LSTM and GRU networks with the recurrent inputs in terms of the training (E_t) and validation (E_v) errors.

| n_A | n_B | n_N | E_t (LSTM) | E_v (LSTM) | E_t^rec (LSTM) | E_v^rec (LSTM) | E_t (GRU) | E_v (GRU) | E_t^rec (GRU) | E_v^rec (GRU) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 2.67 | 3.26 | 6.83 | 7.86 | 3.73 | 5.64 | 8.06 | 11.07 |
| 1 | 1 | 2 | 1.51 | 2.84 | 2.64 | 4.75 | 1.39 | 2.53 | 3.09 | 5.11 |
| 1 | 1 | 3 | 0.23 | 0.37 | 0.59 | 0.95 | 0.33 | 0.55 | 0.66 | 1.08 |
| 1 | 1 | 4 | 0.25 | 0.54 | 0.37 | 0.84 | 0.40 | 0.83 | 0.97 | 1.89 |
| 1 | 1 | 5 | 0.10 | 0.19 | 0.21 | 0.41 | 0.19 | 0.50 | 0.53 | 1.18 |
| 1 | 1 | 10 | 0.08 | 0.17 | 0.12 | 0.27 | 0.10 | 0.21 | 0.26 | 0.52 |
| 1 | 1 | 15 | 0.06 | 0.10 | 0.10 | 0.18 | 0.19 | 0.40 | 0.44 | 0.89 |
| 1 | 1 | 20 | 0.06 | 0.12 | 0.09 | 0.18 | 0.03 | 0.07 | 0.09 | 0.19 |
| 1 | 1 | 30 | 0.02 | 0.04 | 0.04 | 0.08 | 0.07 | 0.12 | 0.17 | 0.31 |
| 2 | 2 | 1 | 3.27 | 4.50 | 8.13 | 10.29 | 4.07 | 6.24 | 6.53 | 9.78 |
| 2 | 2 | 2 | 1.81 | 3.19 | 2.88 | 4.90 | 0.53 | 0.94 | 1.29 | 2.10 |
| 2 | 2 | 3 | 0.99 | 1.84 | 1.60 | 2.83 | 0.82 | 1.42 | 1.54 | 2.61 |
| 2 | 2 | 4 | 0.34 | 0.69 | 0.47 | 1.03 | 0.45 | 0.96 | 1.03 | 2.08 |
| 2 | 2 | 5 | 0.18 | 0.37 | 0.26 | 0.59 | 0.22 | 0.42 | 0.55 | 0.97 |
| 2 | 2 | 10 | 0.09 | 0.18 | 0.13 | 0.27 | 0.33 | 0.68 | 0.83 | 1.61 |
| 2 | 2 | 15 | 0.13 | 0.31 | 0.18 | 0.46 | 0.04 | 0.14 | 0.10 | 0.30 |
| 2 | 2 | 20 | 0.06 | 0.13 | 0.09 | 0.23 | 0.04 | 0.08 | 0.10 | 0.19 |
| 2 | 2 | 30 | 0.03 | 0.06 | 0.05 | 0.10 | 0.08 | 0.14 | 0.19 | 0.33 |
| 3 | 3 | 1 | 1.93 | 3.18 | 5.72 | 8.26 | 1.48 | 2.81 | 2.64 | 4.59 |
| 3 | 3 | 2 | 1.09 | 2.20 | 1.58 | 3.13 | 0.24 | 0.55 | 0.51 | 1.07 |
| 3 | 3 | 3 | 1.05 | 1.89 | 1.39 | 2.58 | 0.86 | 1.95 | 1.29 | 2.79 |
| 3 | 3 | 4 | 0.70 | 1.18 | 0.89 | 1.73 | 0.15 | 0.39 | 0.30 | 0.80 |
| 3 | 3 | 5 | 0.17 | 0.34 | 0.31 | 0.62 | 0.27 | 0.64 | 0.41 | 1.00 |
| 3 | 3 | 10 | 0.05 | 0.13 | 0.07 | 0.20 | 0.18 | 0.31 | 0.35 | 0.64 |
| 3 | 3 | 15 | 0.10 | 0.20 | 0.14 | 0.32 | 0.08 | 0.21 | 0.16 | 0.39 |
| 3 | 3 | 20 | 0.08 | 0.21 | 0.11 | 0.31 | 0.09 | 0.24 | 0.19 | 0.48 |
| 3 | 3 | 30 | 0.11 | 0.17 | 0.15 | 0.24 | 0.10 | 0.18 | 0.22 | 0.36 |
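One plausible reading of the E versus E^rec columns in Tables 2 and 4 is that E scores one-step-ahead predictions computed from measured past outputs, whereas E^rec scores a free-run (simulation-mode) evaluation in which the model's own past predictions are fed back as the output regressors. The sketch below is only an illustration of that reading, not the authors' code; `predict_one_step` is a hypothetical interface wrapping a trained one-step model, and the error is assumed to be of sum-of-squared-errors type.

```python
import numpy as np

def free_run_sse(predict_one_step, u, y, n_A, n_B):
    """Free-run (recurrent-mode) SSE; predict_one_step(regressor) -> y_hat(k) is hypothetical."""
    n = max(n_A, n_B)
    y_hat = list(y[:n])                       # seed the simulation with measured outputs
    for k in range(n, len(y)):
        # regressor: [u(k-1) ... u(k-n_B), y_hat(k-1) ... y_hat(k-n_A)]
        reg = np.r_[u[k - n_B:k][::-1], np.array(y_hat[k - n_A:k])[::-1]]
        y_hat.append(float(predict_one_step(reg)))     # feed predictions back
    return float(np.sum((np.array(y_hat[n:]) - np.asarray(y)[n:]) ** 2))
```

Scoring the same validation record once with measured outputs in the regressor and once with `free_run_sse` would yield E_v- and E_v^rec-style figures, respectively.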
Table 3. The neutralisation reactor: comparison of selected LSTM and GRU networks without the recurrent inputs (n_A = 0) in terms of the training (E_t) and validation (E_v) errors.

| n_B | n_N | E_t (LSTM) | E_v (LSTM) | E_t (GRU) | E_v (GRU) |
|---|---|---|---|---|---|
| 1 | 1 | 6.56 | 5.14 | 13.07 | 13.03 |
| 1 | 2 | 3.95 | 4.18 | 7.93 | 9.22 |
| 1 | 3 | 3.23 | 4.08 | 6.36 | 6.58 |
| 1 | 4 | 4.43 | 4.28 | 4.98 | 5.18 |
| 1 | 5 | 2.55 | 2.81 | 6.06 | 5.84 |
| 1 | 10 | 2.33 | 3.10 | 3.45 | 3.87 |
| 1 | 15 | 2.37 | 2.97 | 4.52 | 4.71 |
| 1 | 20 | 2.38 | 2.80 | 4.73 | 4.70 |
| 1 | 30 | 1.47 | 2.04 | 1.03 | 2.01 |
| 2 | 1 | 6.33 | 5.05 | 11.57 | 10.80 |
| 2 | 2 | 3.99 | 4.16 | 7.32 | 7.52 |
| 2 | 3 | 3.09 | 3.89 | 6.11 | 6.41 |
| 2 | 4 | 2.94 | 3.43 | 6.66 | 6.49 |
| 2 | 5 | 1.29 | 1.85 | 5.96 | 5.33 |
| 2 | 10 | 1.48 | 2.14 | 1.48 | 2.27 |
| 2 | 15 | 1.68 | 2.07 | 1.60 | 2.54 |
| 2 | 20 | 1.28 | 1.54 | 1.91 | 2.32 |
| 2 | 30 | 1.15 | 1.69 | 1.38 | 2.37 |
| 3 | 1 | 7.79 | 6.88 | 12.27 | 12.10 |
| 3 | 2 | 4.07 | 4.51 | 7.31 | 7.40 |
| 3 | 3 | 3.24 | 4.08 | 6.56 | 7.46 |
| 3 | 4 | 3.82 | 4.70 | 4.85 | 5.81 |
| 3 | 5 | 3.47 | 4.11 | 2.86 | 3.79 |
| 3 | 10 | 2.63 | 3.46 | 1.16 | 1.77 |
| 3 | 15 | 1.26 | 1.88 | 1.07 | 1.90 |
| 3 | 20 | 1.08 | 1.60 | 1.22 | 1.92 |
| 3 | 30 | 0.94 | 1.75 | 1.21 | 2.11 |
Table 4. The neutralisation reactor: comparison of selected LSTM and GRU networks with the recurrent inputs in terms of the training (E_t) and validation (E_v) errors.

| n_A | n_B | n_N | E_t (LSTM) | E_v (LSTM) | E_t^rec (LSTM) | E_v^rec (LSTM) | E_t (GRU) | E_v (GRU) | E_t^rec (GRU) | E_v^rec (GRU) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 2.46 | 2.92 | 5.00 | 5.17 | 2.39 | 3.25 | 5.98 | 7.79 |
| 1 | 1 | 2 | 4.22 | 3.91 | 5.11 | 4.50 | 1.62 | 2.28 | 4.38 | 5.73 |
| 1 | 1 | 3 | 2.22 | 2.74 | 3.89 | 4.44 | 1.58 | 2.31 | 3.80 | 5.30 |
| 1 | 1 | 4 | 3.02 | 3.20 | 4.70 | 4.61 | 1.98 | 2.72 | 3.91 | 4.77 |
| 1 | 1 | 5 | 2.56 | 2.97 | 3.81 | 3.98 | 0.77 | 1.32 | 1.55 | 2.33 |
| 1 | 1 | 10 | 1.55 | 1.96 | 2.36 | 2.72 | 1.62 | 2.26 | 3.42 | 4.50 |
| 1 | 1 | 15 | 2.19 | 2.76 | 3.64 | 4.05 | 2.19 | 2.76 | 3.64 | 4.05 |
| 1 | 1 | 20 | 1.44 | 2.13 | 2.50 | 3.58 | 1.44 | 2.13 | 2.50 | 3.58 |
| 1 | 1 | 30 | 1.11 | 1.68 | 2.13 | 2.86 | 1.11 | 1.68 | 2.13 | 2.86 |
| 2 | 2 | 1 | 2.19 | 2.72 | 3.80 | 4.50 | 2.36 | 3.25 | 5.51 | 7.25 |
| 2 | 2 | 2 | 2.63 | 3.02 | 5.15 | 5.38 | 1.99 | 2.78 | 4.34 | 5.44 |
| 2 | 2 | 3 | 2.01 | 2.77 | 3.16 | 3.99 | 1.97 | 2.84 | 3.88 | 5.17 |
| 2 | 2 | 4 | 2.74 | 3.43 | 4.14 | 4.61 | 2.36 | 3.19 | 4.32 | 5.28 |
| 2 | 2 | 5 | 2.60 | 3.15 | 3.21 | 3.53 | 2.22 | 2.93 | 3.64 | 4.68 |
| 2 | 2 | 10 | 1.14 | 1.67 | 1.64 | 2.40 | 1.93 | 2.45 | 3.51 | 3.99 |
| 2 | 2 | 15 | 1.55 | 2.03 | 2.28 | 2.67 | 1.55 | 2.03 | 2.28 | 2.67 |
| 2 | 2 | 20 | 0.93 | 1.29 | 1.45 | 1.82 | 0.93 | 1.29 | 1.45 | 1.82 |
| 2 | 2 | 30 | 1.30 | 1.68 | 1.85 | 2.19 | 1.30 | 1.68 | 1.85 | 2.19 |
| 3 | 3 | 1 | 2.05 | 2.50 | 3.78 | 4.25 | 2.79 | 3.39 | 6.15 | 6.66 |
| 3 | 3 | 2 | 2.87 | 3.34 | 4.11 | 4.22 | 3.91 | 4.37 | 7.75 | 7.13 |
| 3 | 3 | 3 | 1.99 | 2.70 | 2.82 | 3.56 | 1.76 | 2.40 | 4.14 | 5.31 |
| 3 | 3 | 4 | 2.57 | 3.12 | 3.69 | 3.98 | 1.84 | 2.50 | 3.63 | 4.57 |
| 3 | 3 | 5 | 2.59 | 2.99 | 3.73 | 3.72 | 2.16 | 2.82 | 3.98 | 4.68 |
| 3 | 3 | 10 | 0.76 | 1.22 | 1.36 | 2.12 | 1.69 | 2.42 | 3.47 | 4.50 |
| 3 | 3 | 15 | 0.78 | 1.22 | 1.15 | 1.65 | 0.78 | 1.22 | 1.15 | 1.65 |
| 3 | 3 | 20 | 1.48 | 2.00 | 2.03 | 2.51 | 1.48 | 2.00 | 2.03 | 2.51 |
| 3 | 3 | 30 | 1.29 | 1.77 | 1.82 | 2.14 | 1.29 | 1.77 | 1.82 | 2.14 |
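To make the model configurations reported in Tables 1–4 concrete, the sketch below shows how an LSTM (or GRU) network with n_N units, fed at each step with the last n_B process inputs and n_A process outputs, could be built and scored. It is a minimal illustration only, assuming TensorFlow/Keras and an SSE-type error; it is not the authors' implementation, the toy data merely stand in for the recorded reactor sequences, and the exact error definition in the paper may differ (e.g., by scaling).

```python
import numpy as np
import tensorflow as tf

def build_model(n_N: int, n_A: int, n_B: int, cell: str = "LSTM") -> tf.keras.Model:
    """Sequence model: regressor vectors of length n_A + n_B -> one-step output prediction."""
    Cell = tf.keras.layers.LSTM if cell == "LSTM" else tf.keras.layers.GRU
    return tf.keras.Sequential([
        tf.keras.Input(shape=(None, n_A + n_B)),  # variable-length sequence of regressors
        Cell(n_N, return_sequences=True),         # n_N recurrent units
        tf.keras.layers.Dense(1),                 # predicted process output y(k)
    ])

def make_regressors(u: np.ndarray, y: np.ndarray, n_A: int, n_B: int):
    """Builds [u(k-1)...u(k-n_B), y(k-1)...y(k-n_A)] -> y(k) pairs from one record."""
    n = max(n_A, n_B)
    X = [np.r_[u[k - n_B:k][::-1], y[k - n_A:k][::-1]] for k in range(n, len(y))]
    return np.array(X)[np.newaxis], y[n:].reshape(1, -1, 1)

# Toy input-output data standing in for the recorded reactor sequences (hypothetical).
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 500)
y = np.tanh(np.convolve(u, [0.3, 0.2, 0.1], mode="same")) + 0.01 * rng.standard_normal(500)

X, Y = make_regressors(u, y, n_A=1, n_B=1)
model = build_model(n_N=5, n_A=1, n_B=1, cell="GRU")
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=50, verbose=0)
E = float(np.sum((model.predict(X, verbose=0) - Y) ** 2))  # SSE-type model error
```

The same scoring applied to a separate validation record would give E_v-style values; the GRU variant differs from the LSTM one only in the recurrent cell and therefore has fewer parameters for the same n_N.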
Table 5. The polymerisation reactor: comparison of the quality indexes and the average calculation time for MPC with the LSTM and GRU models.

| n_N | n_A | n_B | E (LSTM) | σ_H (LSTM) | S_r (LSTM) | t (LSTM) | E (GRU) | σ_H (GRU) | S_r (GRU) | t (GRU) |
|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 0 | 1 | 2.67 × 10^7 | 667.70 | 1782.41 | 6.67 | 2.76 × 10^7 | 661.55 | 1608.18 | 7.64 |
| 6 | 0 | 1 | 2.71 × 10^7 | 677.17 | 1758.72 | 6.64 | 2.72 × 10^7 | 771.12 | 1653.33 | 7.68 |
| 7 | 0 | 1 | 2.80 × 10^7 | 630.39 | 1665.62 | 6.50 | 2.73 × 10^7 | 518.58 | 1568.32 | 7.62 |
| 8 | 0 | 1 | 2.75 × 10^7 | 627.67 | 1758.21 | 6.49 | 2.80 × 10^7 | 639.27 | 1742.63 | 7.89 |
| 9 | 0 | 1 | 2.74 × 10^7 | 477.86 | 1701.53 | 6.52 | 2.75 × 10^7 | 588.22 | 1603.82 | 8.15 |
| 10 | 0 | 1 | 2.79 × 10^7 | 454.59 | 1659.05 | 6.71 | 2.78 × 10^7 | 549.56 | 1623.56 | 8.05 |
| 5 | 1 | 1 | 2.63 × 10^7 | 1562.43 | 2092.48 | 6.69 | 2.77 × 10^7 | 509.75 | 1577.64 | 8.01 |
| 6 | 1 | 1 | 2.68 × 10^7 | 1536.20 | 1962.02 | 6.64 | 2.77 × 10^7 | 898.48 | 1743.70 | 7.62 |
| 7 | 1 | 1 | 2.73 × 10^7 | 367.47 | 1587.79 | 6.65 | 2.77 × 10^7 | 1209.39 | 1964.14 | 7.48 |
| 8 | 1 | 1 | 2.70 × 10^7 | 617.22 | 1729.51 | 6.67 | 2.71 × 10^7 | 759.49 | 1843.60 | 7.49 |
| 9 | 1 | 1 | 2.76 × 10^7 | 455.08 | 1688.71 | 6.72 | 2.75 × 10^7 | 686.34 | 1734.03 | 7.41 |
| 10 | 1 | 1 | 2.73 × 10^7 | 463.27 | 1688.71 | 6.61 | 2.78 × 10^7 | 469.35 | 1612.84 | 7.86 |
| 5 | 2 | 2 | 2.68 × 10^7 | 839.53 | 1785.23 | 6.79 | 2.78 × 10^7 | 591.74 | 1706.74 | 7.46 |
| 6 | 2 | 2 | 2.77 × 10^7 | 528.51 | 1705.00 | 6.50 | 2.80 × 10^7 | 860.32 | 1878.82 | 7.98 |
| 7 | 2 | 2 | 2.77 × 10^7 | 826.31 | 1710.11 | 5.61 | 2.72 × 10^7 | 413.74 | 1569.33 | 7.79 |
| 5 | 0 | 2 | 2.72 × 10^7 | 573.75 | 1726.27 | 7.68 | 2.74 × 10^7 | 824.28 | 1841.88 | 8.56 |
| 6 | 0 | 2 | 2.76 × 10^7 | 458.17 | 1731.28 | 7.66 | 2.74 × 10^7 | 611.42 | 1713.80 | 8.80 |
| 7 | 0 | 2 | 2.75 × 10^7 | 449.80 | 1676.01 | 7.93 | 2.74 × 10^7 | 499.09 | 1592.73 | 8.52 |
Table 6. The neutralisation reactor: comparison of the quality indexes and the average calculation time for MPC with the LSTM and GRU models.

| n_N | n_A | n_B | E (LSTM) | σ_H (LSTM) | S_r (LSTM) | t (LSTM) | E (GRU) | σ_H (GRU) | S_r (GRU) | t (GRU) |
|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 0 | 1 | 213.67 | 0.21 | 0.53 | 3.47 | 208.40 | 40.18 | 0.48 | 3.73 |
| 6 | 0 | 1 | 208.52 | 0.17 | 0.50 | 3.44 | 209.06 | 0.20 | 0.49 | 4.04 |
| 7 | 0 | 1 | 210.56 | 0.26 | 0.53 | 3.46 | 210.63 | 0.19 | 0.49 | 3.86 |
| 8 | 0 | 1 | 212.52 | 0.26 | 0.52 | 3.66 | 212.12 | 0.17 | 0.49 | 3.89 |
| 9 | 0 | 1 | 210.51 | 0.25 | 0.56 | 3.64 | 210.49 | 0.18 | 0.48 | 3.88 |
| 10 | 0 | 1 | 211.33 | 0.27 | 0.53 | 3.55 | 210.73 | 0.20 | 0.51 | 3.95 |
| 5 | 1 | 1 | 215.70 | 0.33 | 0.56 | 3.78 | 208.59 | 0.19 | 0.50 | 4.07 |
| 6 | 1 | 1 | 220.54 | 0.23 | 0.51 | 3.97 | 214.44 | 0.23 | 0.50 | 3.79 |
| 7 | 1 | 1 | 217.20 | 0.19 | 0.52 | 3.76 | 209.26 | 0.21 | 0.51 | 4.13 |
| 8 | 1 | 1 | 219.03 | 0.27 | 0.53 | 3.71 | 213.90 | 0.22 | 0.53 | 4.61 |
| 9 | 1 | 1 | 220.60 | 0.52 | 0.59 | 3.86 | 215.56 | 0.20 | 0.52 | 4.23 |
| 10 | 1 | 1 | 225.69 | 0.21 | 0.52 | 3.86 | 208.41 | 0.18 | 0.48 | 4.02 |
| 5 | 0 | 2 | 412.73 | 0.22 | 0.48 | 3.48 | 206.90 | 0.16 | 0.46 | 3.99 |
| 6 | 0 | 2 | 218.32 | 0.22 | 0.52 | 3.66 | 215.56 | 0.23 | 0.52 | 4.01 |
| 7 | 0 | 2 | 208.80 | 0.18 | 0.51 | 3.66 | 208.98 | 0.20 | 0.51 | 3.84 |
| 5 | 2 | 2 | 227.50 | 0.22 | 0.49 | 4.60 | 217.95 | 0.22 | 0.52 | 4.44 |
| 6 | 2 | 2 | 222.80 | 0.25 | 0.51 | 4.34 | 212.76 | 0.25 | 0.52 | 4.69 |
| 7 | 2 | 2 | 217.91 | 0.24 | 0.53 | 4.52 | 221.07 | 0.23 | 0.51 | 4.80 |
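As a hedged illustration of how the entries of Tables 5 and 6 could be collected, the sketch below gathers a sum-of-squared-control-errors index and the average per-iteration MPC computation time during a closed-loop run. It is not the authors' code: the `controller` and `process` callables are hypothetical interfaces, the error index is assumed to be of SSE type, the time unit is assumed to be milliseconds, and the σ_H and S_r indexes are not recomputed here.

```python
import time
import numpy as np

def run_closed_loop(controller, process, y_sp, n_steps):
    """controller(setpoint, y) -> u and process(u) -> y are hypothetical interfaces."""
    y, errors, times = 0.0, [], []
    for k in range(n_steps):
        t0 = time.perf_counter()
        u = controller(y_sp[k], y)                 # solve the MPC optimisation task
        times.append(time.perf_counter() - t0)
        y = process(u)                             # apply u to the simulated reactor
        errors.append(y_sp[k] - y)
    E = float(np.sum(np.square(errors)))           # sum of squared control errors
    t_avg_ms = 1e3 * float(np.mean(times))         # average MPC calculation time [ms]
    return E, t_avg_ms
```

Running such a loop with the LSTM-based and GRU-based MPC controllers on the same setpoint trajectory would allow a side-by-side comparison of the kind reported above.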
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
