Design of Neural Network Quantizers for Networked Control Systems

Abstract: Nowadays, networked control systems (NCSs) are widely implemented in many applications. However, several problems compromise the design of practical NCSs; one of them is the performance degradation of the system due to quantization. This paper aims to develop dynamic quantizers for NCSs, together with design methods that alleviate the effects of the quantization problem. We propose a class of dynamic quantizers implemented with neural networks and memories, which are tuned with time series data of the plant inputs and outputs. Since the proposed quantizer can be designed without model information of the system, it can be applied to systems with uncertainty or nonlinearity. Two types of quantizers are given, which differ in the structure of their neural networks. The effectiveness of these quantizers and their design method is verified with numerical examples, and their performances are compared using statistical analysis tools.


Introduction
Networked control systems (NCSs) are systems whose elements are physically separated but connected by communication channels. They have been around for some decades already and have been implemented successfully in many fields, such as industrial automation, robotics, and power grids. Although NCSs provide several advantages, it is well known that one of their problems is the system performance degradation caused by the data rate constraints of the communication channels [1][2][3]. When the operation signals of NCSs are transmitted over networks under data rate constraints, signal quantization is a fundamental process in which a continuous-valued signal is transformed into a discrete-valued one. However, a quantization error between the continuous-valued signal and the discrete-valued one occurs, and it affects the performance of the NCS. Therefore, an important task is to develop methods that minimize the influence of the quantization error on the performance of NCSs.
It has been proven that properly designed feedback-type dynamic quantizers are effective in reducing this degradation of the system's performance [4]. Several studies have considered the design of dynamic quantizers. For instance, a mathematical expression of an optimal dynamic quantizer for time-invariant linear plants was presented in [4], and an equivalent expression for nonlinear plants was introduced in [5]. Furthermore, design methods for dynamic quantizers that minimize the system's performance degradation while satisfying the channel's data rate constraints were developed in [6][7][8]. Then, in [9], event-triggered dynamic quantizers that reduce the traffic in the communication network were proposed. In these studies, the design of the quantizers is carried out using information from the plant; namely, these quantizers follow a model-based approach. Thus, if the model of the plant is inaccurate, the resulting quantizers may perform poorly.
Accordingly, in this paper, a data-driven approach is considered for the design of feedback-type dynamic quantizers. This paper presents a class of dynamic quantizers that are constructed using feedforward neural networks. These quantizers, called neural network quantizers, are designed using time series data of plant inputs and outputs. Some advantages of this approach are that a model of the plant is not required for the design, i.e., the design is model-free, and that the quantizer can be designed not only for linear but also for nonlinear plants. The selection of neural networks to perform this job is motivated by the fact that feedforward neural networks are very flexible and can represent any nonlinear function/system; in this sense, they work as universal approximators [10,11]. This property is especially important for the design of optimal dynamic quantizers because their structures are functions of the plants' models [4,5]. If the model of the plant is not given but is known to be linear, then the structure of the optimal quantizer is also known to be linear. However, if the plant is nonlinear and its model is absent, then the optimal quantizer's structure is unknown. Thus, the neural network can approximate the optimal quantizer's structure based on a series of plant input and output data.
This paper is structured as follows. First, we propose a class of dynamic quantizers composed of a feedforward neural network, memories, and a static quantizer. The proposed quantizer has two variations in neural network structure: one is based on a regression approach, and the other on a classification approach. Then, we formulate the quantizer design problem of finding the parameters of the neural network and the quantization interval for a given time series of plant inputs and outputs. Next, the effectiveness of these quantizers and their design method is verified with numerical examples. Finally, several design variations are considered in order to optimize the quantizer's performance, and comparisons among these variations are carried out.
It should be remarked that various results on quantizer design for networked control systems have been obtained, e.g., [12][13][14][15][16]. However, the contributions of this paper are distinguished from them as follows. The papers [12][13][14] focus on dynamic quantizers based on the zoom-in and zoom-out strategy, i.e., quantizers with a time-varying quantization interval. Besides, the paper [15] considers a static logarithmic quantizer, i.e., its quantization interval is not uniform. In contrast, this paper proposes a dynamic quantizer with a time-invariant and uniform interval. Furthermore, the paper [16] proposes a ∆Σ modulator, which is related to the proposed quantizer; however, while the result in [16] is restricted to the case of two quantization levels, this paper can deal with the multi-level case.
This paper is a journal version of our previous conference papers presented in [17,18]. The main differences between this paper and its predecessors are as follows. The system description and problem formulation are improved, and detailed explanations of the proposed quantizers are added. Then, we use the ANOVA test to analyze the various simulation results. Besides, we take into account different activation functions for the neural networks' hidden layers and compare different initialization methods for the network tuning.

Neural Network Quantizers
In this section, we first describe the networked control system considered in this paper. Then, we present a quantizer composed of neural networks and a static quantizer, called the neural network quantizer.

System Description
This paper considers the system depicted in Figure 1. This system is composed of a plant P, a communication channel that has no losses or delays, and the neural network quantizer Q NN proposed in this paper.
Figure 1. The networked control system considered in this paper: the plant P is connected over the communication channel to the neural network quantizer Q NN, which is built from a neural network, memories, and a static quantizer.

The plant is represented by the following single-input-single-output (SISO) model:

x(k + 1) = f(x(k), v(k)),
y(k) = g(x(k)),

where k ∈ {0} ∪ N is the discrete time, x ∈ R^{n P} is the state vector with initial value x(0) = x 0, v ∈ R is the input, and y ∈ R is the output. The functions f : R^{n P} × R → R^{n P} and g : R^{n P} → R are in general nonlinear mappings. It is assumed that f and g are continuous and smooth.
The quantizer Q NN, shown in Figure 1, is composed of a neural network, a static quantizer q, and a couple of memories S v and S y. The quantizer is represented by the following expressions:

y(k) = Γ 1(u(k), V(k), Y q(k)),
u q(k) = Γ 2(y(k)),
v(k) = q(u q(k)),

where u ∈ R is the input, y ∈ R^{K n L} is the output of the neural network, u q ∈ R is the input of q, and v ∈ R is the output of q, i.e., the output of Q NN. Note that d is the quantization interval and M is the number of quantization levels, which is determined from the data rate of the network channel. The signals V(k) and Y q(k) are the outputs of the memories S v and S y, respectively. They are time series of past values of the quantized inputs v(k) and the outputs y q(k) of the plant, given by

V(k) = [v(k − 1), v(k − 2), . . . , v(k − n V)],
Y q(k) = [y q(k − 1), y q(k − 2), . . . , y q(k − n Y)],

where n V and n Y are the dimensions of these memories. Thus, the proposed quantizer is tuned using the past input and output data of the plant, which means that the quantizer may capture the dynamics of the plant.

This paper proposes two types of neural network quantizers: Q NNR and Q NNC. They differ in the expressions of the nonlinear functions Γ 1(•), Γ 2(•), and q(•). Illustrations of Q NNR and Q NNC are shown in Figure 2. Although detailed explanations of the two quantizers are given in the following subsections, the main difference between them is the neural network's structure. In Q NNR, the network has only one output that shapes the input signal, i.e., the network is trained to perform regression. On the other hand, in Q NNC, the network has as many outputs as the number of quantization levels M; each output represents the probability that a given input is matched with a specific quantization level, i.e., the network is trained for classification. Besides, in Figure 2a, the numbers 2, 1, 1, 3, 3 denote the selected quantization levels, which are determined by the function Γ 2.
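As a reading aid, the following is a minimal Python sketch of one sampling step of Q NN under these definitions. The class layout, the memory update order, and the helper static_quantizer (sketched in the next subsection) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class NeuralNetworkQuantizer:
    """Minimal sketch of Q_NN: memories S_v and S_y, a feedforward net,
    and a static quantizer. The weights inside `net` and the interval `d`
    are the design parameters found later by differential evolution."""

    def __init__(self, net, d, M, n_V=5, n_Y=5):
        self.net = net            # callable: R^{1 + n_V + n_Y} -> R (regression case)
        self.d, self.M = d, M
        self.V = np.zeros(n_V)    # memory S_v: past quantized inputs v(k)
        self.Yq = np.zeros(n_Y)   # memory S_y: past plant outputs y_q(k)

    def step(self, u, y_q_prev=None):
        if y_q_prev is not None:                 # shift in the latest plant output
            self.Yq = np.roll(self.Yq, 1)
            self.Yq[0] = y_q_prev
        x = np.concatenate(([u], self.V, self.Yq))   # network input x(k)
        u_q = self.net(x)                             # Gamma_1 / Gamma_2
        v = static_quantizer(u_q, self.d, self.M)     # static quantizer q
        self.V = np.roll(self.V, 1)                   # shift in the new v(k)
        self.V[0] = v
        return v
```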


Regression Based Neural Network Quantizer
For the regression-based neural network quantizer Q NNR, the static quantizer q(•) is a regular finite-level static quantizer with saturation, as shown in Figure 3. It receives the continuous output u q(k) of the neural network directly and rounds it to the nearest discrete value to generate v(k). It has two parameters: the number of quantization levels M ∈ N and the quantization interval d ∈ R with d > 0. Figure 3 shows an example of this static quantizer with M = 4.

In this paper, a fully connected feedforward neural network is adopted to build the function Γ 1(•) in Equation (2); an example is shown in Figure 4. In this case, the function Γ 1 is the composition of n L hidden layers with activation function h(•) and a linear output layer:

z 0(k) = x(k), z i(k) = h(W i z i−1(k) + b i) (i = 1, . . . , n L), y(k) = W n L+1 z n L(k) + b n L+1,

where W i and b i are the weights and biases of the ith layer, and x(k) = [u(k), V(k), Y q(k)] is the network input.

Classification Based Neural Network Quantizer

The neural network Γ 1 in Q NNC is the same as that in Q NNR. The inputs of this network are the same as in the previous case, x(k) = [u(k), V(k), Y q(k)], and the hidden units' activation function h(•) is also the logistic sigmoid in Equation (7). The dimension of the output is K n L = M. Each output y i(k) of the network is associated with one quantization level and represents the probability that a given input is classified into that specific level. Therefore, the quantization level with the biggest probability is selected as the quantizer's output; its index is given by

i*(k) = arg max i ∈ {1, . . . , M} y i(k).
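To make the two selection rules concrete, the following is a minimal Python sketch of a uniform static quantizer q with saturation and of the classification-based level selection. The placement of the M levels (equally spaced and centered at zero) is an assumption for illustration; the paper's Figure 3 fixes the exact layout.

```python
import numpy as np

def static_quantizer(u_q, d, M):
    """Uniform M-level quantizer with interval d and saturation (sketch of q).
    The level grid is an assumption: M equally spaced levels centered at zero."""
    levels = d * (np.arange(M) - (M - 1) / 2.0)
    idx = np.argmin(np.abs(levels - u_q))   # round to the nearest level
    return levels[idx]                      # inputs beyond the grid saturate

def classification_select(y, d, M):
    """Q_NNC selection: pick the level whose output probability is largest."""
    i_star = int(np.argmax(y))              # index of the biggest probability
    levels = d * (np.arange(M) - (M - 1) / 2.0)
    return levels[i_star]
```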

Quantizer Design Problem
In this paper, it is assumed that the number of quantization levels M, the memory sizes n V and n Y , and the neural network structure K are given.Thus, the design parameters are the weight vector w and the quantization interval d of the neural network quantizer Q NN .
The performance of the quantizer Q NN in NCSs can be evaluated using a construction known as the error system, depicted in Figure 6. This system is composed of two branches. In the lower branch, the input signal u is applied directly to the plant P, which produces the ideal output y. In the upper branch, the effects of quantization are considered: u is applied to the quantizer Q NN, which generates the quantized signal v that is applied to the plant. The output of the plant in this case is represented by y q, and the difference e = y q − y is the error signal, which is used to evaluate the performance degradation of the system. By minimizing y q − y, the system composed of the quantizer Q NN and the plant P is optimally approximated, in terms of the input-output relation, to the plant P alone.

In this context, a quantity known as the performance index is used to measure the system's performance degradation. The performance index considered here is the sum-of-squares error function defined by

E(Q NN) = Σ_{k=1}^{n s} (y q(k) − y(k))^2,

where u(k) is used to build x(k) alongside V(k) and Y q(k), which are generated dynamically. It is necessary to make E(Q NN) as small as possible to keep the output error low. Then, the design of Q NN is set up as an optimization problem in which the performance index is minimized.
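Under these definitions, the performance index can be evaluated by simulating both branches of the error system. The sketch below assumes a plant simulator plant_step is available; in the paper's data-driven setting, the lower branch is instead replaced by the recorded output series Y introduced next.

```python
def performance_index(quantizer, plant_step, u_seq, x0):
    """Sum-of-squares error E(Q_NN) of the error system in Figure 6 (sketch).
    `plant_step(x, v) -> (x_next, y)` is a hypothetical stand-in for the plant;
    `quantizer` is a NeuralNetworkQuantizer as sketched earlier."""
    x_ideal, x_quant = x0.copy(), x0.copy()
    E, y_q_prev = 0.0, None
    for u in u_seq:
        x_ideal, y = plant_step(x_ideal, u)     # lower branch: v = u (ideal output)
        v = quantizer.step(u, y_q_prev)         # upper branch: quantized input
        x_quant, y_q = plant_step(x_quant, v)
        E += (y_q - y) ** 2                     # accumulate the squared error
        y_q_prev = y_q
    return E
```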
This paper assumes that, although the model of the plant is unknown, it is possible to feed the plant with inputs and measure its outputs. Then, a time series of inputs and outputs of the plant is available. These time series are represented as follows:

U = [u(1), u(2), . . . , u(n s)], Y = [y(1), y(2), . . . , y(n s)],

where n s is the length of the time series, namely, the number of samples. Notice that y(k) (k = 1, 2, . . . , n s) represents the output of the plant P when u(k) is applied directly to it, i.e., v(k) = u(k).
Then, the neural network quantizer design problem is formulated as follows.

Problem 1. Suppose that the time series data U and Y of the plant, the number of quantization levels M, the neural network structure K, and the memory sizes n V and n Y are given. Then, find the parameters of Q NN, i.e., the weight vector w and the quantization interval d, which minimize E(Q NN) under the condition that d > 0.
This design problem is nonlinear and nonconvex. Thus, it cannot be solved with convex optimization methods such as linear programming or quadratic programming. Moreover, conventional neural network training techniques based on error backpropagation cannot be used either, because the static quantizer in the loop makes the cost function non-differentiable. Therefore, alternative optimization methods should be used.
In this regard, metaheuristics stand out from the available options because of their flexibility and wide variety of implementations [19]. In particular, the differential evolution (DE) metaheuristic algorithm is used to design Q NN. This choice is justified by the fact that DE has proven to be effective in the training of neural networks [20,21] and has shown outstanding performance in the design of dynamic quantizers [9]. DE is a population-based metaheuristic algorithm inspired by the mechanism of biological evolution [22,23]. In this algorithm, a cost function J(θ) is evaluated iteratively over a population of candidate solutions, or individuals, θ i ∈ R^n; in each iteration, the individuals improve their values and move towards the best solution. Finally, the individual with the lowest cost value in the last iteration is regarded as the optimal solution. Some advantages of DE are that it is very easy to implement and has only two tuning parameters, the scale factor F and the crossover constant H, apart from the number of individuals N and the maximum number of iterations t max. Besides, DE shows very good exploration capabilities and converges fast to global optima. DE has many versions and variations; the one considered in this study is the classical DE/best/1/bin strategy, described in Algorithm 1.
Step 1 (Evaluation): The cost function J(θ) is evaluated for each θ i, and the base vector θ base = θ l is selected by l = arg min i∈{1,2,...,N} J(θ i). If t = t max, then θ base is the final solution; otherwise, go to Step 2.

Step 2 (Mutation): For each θ i, a mutant vector M i is generated by M i = θ base + F (θ τ1,i − θ τ2,i), where τ 1,i and τ 2,i are random indexes subject to i ≠ τ 1,i ≠ τ 2,i ≠ l.

Step 3 (Crossover): For each θ i, a trial vector T i is generated by binomial crossover: each component of T i is copied from M i with probability H and from θ i otherwise.

Step 4 (Selection): The members of the next generation t + 1 are selected by θ i ← T i if J(T i) ≤ J(θ i), and θ i is kept otherwise; then t ← t + 1 and go to Step 1.
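For reference, the following is a compact Python sketch of the DE/best/1/bin strategy of Algorithm 1, with the paper's hyperparameters as defaults. For brevity, the random indexes are not forced to differ from i and l as in Step 2, and the population is initialized uniformly in [−1, 1]^n; the paper initializes d and w separately, as described below.

```python
import numpy as np

def de_best_1_bin(J, n, N=500, t_max=2000, F=0.6, H=0.9, rng=None):
    """DE/best/1/bin sketch for minimizing a cost J over R^n."""
    rng = rng or np.random.default_rng()
    pop = rng.uniform(-1.0, 1.0, size=(N, n))       # illustrative initialization
    cost = np.array([J(p) for p in pop])
    for _ in range(t_max):
        best = pop[np.argmin(cost)]                 # theta_base (Step 1)
        for i in range(N):
            r1, r2 = rng.choice(N, size=2, replace=False)
            mutant = best + F * (pop[r1] - pop[r2]) # Step 2: mutation
            cross = rng.random(n) < H               # Step 3: binomial crossover
            cross[rng.integers(n)] = True           # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            c = J(trial)                            # Step 4: greedy selection
            if c <= cost[i]:
                pop[i], cost[i] = trial, c
    return pop[np.argmin(cost)]
```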
Since the design parameters of Q NN are w and d, an individual for the DE algorithm has the form θ = [d w] with dimension n = 1 + n w. Of these parameters, the weight vector w is not affected by any constraint, but the quantization interval d should always be positive (d > 0). DE has no direct way to handle the constraints of the optimization problem, since it was designed to solve unconstrained problems. Then, in order to manage the constraint condition, a method developed by Maruta et al. in [24] is employed. This method transforms the constrained optimization problem into an unconstrained one: the cost to be minimized equals the performance index E(θ) in Equation (9) whenever θ is feasible, and takes a value larger than any feasible cost otherwise. This constraint-management method ensures that d is positive.

The learning resulting from the training of a deep neural network depends highly on the initial weights of the network, because many of the learning techniques are in essence local searches. Therefore, it is very important to initialize the network's weights appropriately [25,26]. There are several ways to initialize neural networks for training. The most common method is uniformly random initialization, where values sampled from a certain interval with a uniform probability distribution are assigned to the weights and biases of the network. The initialization intervals are selected heuristically, but they are usually small and close to zero; popular choices are the intervals [−1, 1] and [−0.5, 0.5]. Another prevalent type of initialization was developed by Glorot and Bengio in [27]. This method is known as Xavier uniform initialization (from Xavier Glorot). In this method, the weights w i of each layer in the network are initialized by random uniform sampling in the interval [−l i, l i], where the limit l i is a function of the number of neurons K i of the considered layer, the number of neurons K i−1 in the previous layer, and the hidden layers' activation function h. Up to an h-dependent gain, the limit is

l i = √(6 / (K i−1 + K i)).
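A sketch of the two initialization methods follows. The gain of 4 for the logistic sigmoid follows common practice for Glorot-style initialization; the exact gain and interval used in the paper are assumptions here.

```python
import numpy as np

def init_weights(layer_sizes, method="xavier", h="tanh", rng=None):
    """Layer-wise weight initialization sketch: uniform random in a small
    fixed interval, or Xavier/Glorot uniform with an activation-dependent gain.
    Biases (omitted here) are commonly initialized to zero."""
    rng = rng or np.random.default_rng()
    weights = []
    for K_prev, K_i in zip(layer_sizes[:-1], layer_sizes[1:]):
        if method == "xavier":
            gain = 4.0 if h == "sigmoid" else 1.0   # assumed sigmoid gain
            l_i = gain * np.sqrt(6.0 / (K_prev + K_i))
        else:                                       # uniform random initialization
            l_i = 0.5                               # e.g., the interval [-0.5, 0.5]
        weights.append(rng.uniform(-l_i, l_i, size=(K_i, K_prev)))
    return weights
```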

Numerical Simulations
To verify that the proposed neural network quantizers and their design method work properly, several numerical simulations were performed. In these simulations, a discrete-time, nonlinear, stable plant is used; it is a modified version of the plant shown in [5]. The initial state is x 0 = [0.1, −0.2]^T, and the same input signal u(k) is used in all the examples. The evaluation interval is L = 1000, which implies that the number of samples taken is n s = 1000.

The quantizers are constructed with n Y = n V = 5, M ∈ {2, 8}, and neural networks with n L ∈ {2, 4}. Given the size of the memories and the dimension of u(k), all the networks have inputs of dimension K 0 = 11. The neural networks' structure depends on the type of quantizer and on M. Table 1 summarizes the structures of the quantizers used in the simulations. For the regression case (R), the network structure and the dimension n w of w are independent of M; this is not the case for the classification type (C). Table 1 also compares the n w of each network. The hyperparameters of DE are N = 500, t max = 2000, F = 0.6, and H = 0.9, and the simulations were performed N run = 50 times for each considered case. Since the individuals have the form θ = [d w], the dimensions n of the optimization problems are the ones shown in the last column of Table 1. From Table 1, it is possible to see that Q NNC has more parameters than Q NNR, which is a factor that influences the performance of the proposed design method.
The DE individuals are initialized as follows. The first element d is uniformly sampled from the interval (0, 1]. The network weights are initialized using the uniform random and the Xavier uniform initialization methods described in Section 3. After running the DE algorithm N run = 50 times for each considered case, the quantizers Q NN with the lowest E(w, d) are selected as the optimal quantizers. Then, in order to test these quantizers, the error system in Figure 6 is fed with the input signal u(k) for each case. All the quantizers work properly and show good performance. For instance, Figure 7 depicts the signals resulting from applying u(k) to the system with the quantizers designed for M = 2 and n L = 2. This figure shows that the output signals y q(k) obtained with quantization follow the ideal output signal y(k) closely and that the error between them is small in both cases. The inputs u q(k) of the static quantizers are also shown for comparison.
To further validate this observation, Figure 8 shows the output signals of the system where the neural network quantizers were designed for M = 2 and n L = 4, and Figure 9 shows the ones for M = 8 with n L = 2 and n L = 4. From these, we see that the proposed quantizer works well. In addition, the results with the static quantizer q and with the optimal dynamic quantizer proposed in [5] are shown in Figure 10 for comparison. The value of the performance index for the static quantizer is E(q) = 7.9961, and that for the optimal dynamic quantizer is 3.6002. On the other hand, the performance of the proposed quantizer Q NNR for M = 2 and n L = 2 is E(Q NNR) = 3.73724, and that for M = 2 and n L = 4 is E(Q NNR) = 3.66764. From this comparison, we see that the proposed quantizer achieves much higher performance than the static quantizer and comes close to the optimal dynamic quantizer, although the proposed quantizer is designed with time series data of the plant inputs and outputs, i.e., without model information of the plant. Therefore, we can confirm that the neural network in the proposed quantizer appropriately captures the dynamics of the plant from the time series data of the plant input and output.
The minimum values of the performance indexes in Equation (9), found by DE, are listed in Table 2, together with the average performance indexes and their standard deviations. The values in this table are divided according to M, initialization method, n L, and type (regression or classification). Two initialization methods are implemented: uniform random (Urand) and Xavier. Drawing conclusions from this table by simple observation is difficult. For example, looking at the minimum values of E(Q NN) in the case of M = 2, it is possible to say that Q NNC has better performance (smaller E(Q NN)) than Q NNR in most cases, but the average values do not always corroborate this observation. For M = 8, Q NNR has the smallest value of E(Q NN) in each case. However, simple observation gives no evidence of a significant difference in the performance of these types of quantizers. Therefore, the analysis of variance (ANOVA) is used to check whether there are significant differences among these values.
Because many factors influence E(Q NN), the 3-way ANOVA (ANOVA with three factors) is used. The considered factors are the quantizer type (Type), the initialization method (Init.), and the number of layers n L. The categories of each factor are known as elements; for instance, the elements of the factor Type are R (Q NNR: regression) and C (Q NNC: classification). M is not taken as a factor because M = 8 always gives smaller E(Q NN) than M = 2, so it is not necessary to check which one gives better results. The considered significance level is α = 0.05. The goal is to determine whether there is some statistical difference among the means of E(Q NN) obtained by the design methods.
A summary of this test is shown in Table 3. The ANOVA test shows whether there are significant differences among sets of data. With a 3-way ANOVA, it is possible to detect not only a significant difference among the elements of a single factor but also among combinations of elements of different factors. In this particular case, it tells whether there is a significant difference between Q NNR and Q NNC, and also whether there are differences among the combinations of the quantizer types and the initialization methods. The 3-way ANOVA test is run separately for M = 2 and M = 8. For M = 2, a significant difference is found only for the initialization method. For M = 8, significant differences are found for all the factors and their combinations, with the exception of the combination of the quantizer type and n L.

A further 3-way ANOVA, comparing the hidden layer activation functions for M = 8, is summarized in Table 5. From its results, we see the following things. First, there is a significant difference between Q NNR and Q NNC, and Q NNR outperforms Q NNC. Second, there is a difference between the initialization methods, and the Urand method exhibits better performance than the Xavier method; these results corroborate the ones in Table 3 previously obtained for M = 8 and h = sigm. Third, the performances of the considered activation functions vary significantly: the one with the best performance is h = tanh, and the one with the lowest performance is h = ReLU.
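The paper does not state which tool performs the ANOVA; as one way to reproduce the analysis, the 3-way ANOVA with all interaction terms can be run in Python with statsmodels. The file name and column names below are hypothetical placeholders for the results of the N run DE runs.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical results table: one row per DE run with the obtained E(Q_NN)
# and the three factors (Type, Init, nL); column names are illustrative.
df = pd.read_csv("results_M8.csv")

# 3-way ANOVA with all interactions among Type, Init, and nL.
model = ols("E ~ C(Type) * C(Init) * C(nL)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)
print(table)   # p-values below alpha = 0.05 indicate significant factors
```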

Conclusions
This paper introduced the concept of neural network quantizers, which are designed using a set of inputs and outputs of the plant. These quantizers are aimed at systems in which the model of the plant is unknown or unreliable. They are constructed using feedforward neural networks and static quantizers. Two types of neural network quantizers were proposed: the regression-based type Q NNR and the classification-based type Q NNC. In addition, a design method based on differential evolution was proposed for these quantizers. By means of several numerical examples, it was found that both types of neural network quantizers are effective, alongside their DE-based design method. Furthermore, many variations were considered in the construction of these quantizers; these variations are reflected in the network structure, the initialization method, and the hidden layer activation function, and their performances were compared using statistical analysis tools.

Figure 2. Difference between the neural network quantizer based on regression and the one based on classification. (a) Regression-based approach: the neural net has one output and shapes the original signal. (b) Classification-based approach: the number of neural network outputs is the same as the number of quantization levels, and each output corresponds to the probability that an original signal is classified into a specific quantization level.

Figure 7. Signals resulting from applying u(k) to the system with the Q NN designed for M = 2 and n L = 2. The black lines represent the signals without quantization (u(k), y(k)) and the blue ones are the signals with quantization (v(k), u q(k), y q(k)).

Figure 8. Output signals y q(k) (blue) and y(k) (black) resulting from applying u(k) to the error system with Q NN designed for M = 2 and n L = 4.

Figure 9. Output signals y q(k) (blue) and y(k) (black) resulting from applying u(k) to the error system with Q NN designed for M = 8, n L = 2 (upper figure) and n L = 4 (lower figure).

Figure 10. Signals resulting from applying u(k) to the systems with the static quantizer q in Figure 3 and the optimal dynamic quantizer proposed in [5]. The black lines represent the signals without quantization (u(k), y(k)) and the blue ones are the signals with quantization (v(k), u q(k), y q(k)).

Table 5. Tukey pairwise comparisons for the 3-way ANOVA of the activation function comparison (M = 8). Grouping information using the Tukey test and 95% confidence; means that do not share a letter are significantly different.