Abstract
Nowadays, networked control systems (NCSs) are widely implemented in many applications. However, several problems compromise the design of practical NCSs, one of them being the performance degradation caused by quantization. This paper develops dynamic quantizers for NCSs, together with design methods that alleviate the effects of quantization. We propose a class of dynamic quantizers implemented with neural networks and memories, which can be tuned using time series data of the plant inputs and outputs. Since the proposed quantizers can be designed without model information of the system, they can be applied to systems with uncertainty or nonlinearity. Two types of quantizers are presented, which differ from each other in the structure of their neural networks. The effectiveness of these quantizers and their design method is verified through numerical examples, and their performances are compared using statistical analysis tools.
1. Introduction
Networked control systems (NCSs) are systems whose elements are physically separated but connected through communication channels. They have been around for decades and have been implemented successfully in many fields, such as industrial automation, robotics, and power grids. Although NCSs provide several advantages, it is well known that one of their problems is the performance degradation caused by data rate constraints in the communication channels [,,]. When the operation signals of an NCS are transmitted over networks under data rate constraints, signal quantization, in which a continuous-valued signal is transformed into a discrete-valued one, is a fundamental process. However, quantization introduces an error between the continuous-valued signal and the discrete-valued one, and this error affects the performance of the NCS. Therefore, an important task is to develop methods that minimize the influence of the quantization error on the performance of NCSs.
It has been proven that properly designed feedback-type dynamic quantizers are effective in reducing this degradation of the system's performance []. Several studies have considered the design of dynamic quantizers. For instance, a mathematical expression of an optimal dynamic quantizer for time-invariant linear plants was presented in [], and an equivalent expression for nonlinear plants was introduced in []. Furthermore, design methods for dynamic quantizers that minimize the system's performance degradation while satisfying the channel's data rate constraints were developed in [,,]. Then, event-triggered dynamic quantizers that reduce the traffic in the communication network were proposed in []. In these studies, the quantizers are designed using information from the plant model; namely, they follow a model-based approach. Thus, if the model of the plant is inaccurate, the resulting quantizers may perform poorly.
Accordingly, this paper considers a data-driven approach to the design of feedback-type dynamic quantizers. It presents a class of dynamic quantizers constructed from feedforward neural networks. These quantizers, called neural network quantizers, are designed using time series data of the plant inputs and outputs. Two advantages of this approach are that a model of the plant is not required for the design, i.e., the design is model-free, and that the quantizers can be designed not only for linear but also for nonlinear plants. The choice of neural networks for this job is motivated by the fact that feedforward neural networks are very flexible and can represent any nonlinear function/system; in this sense, they work as universal approximators [,]. This property is especially important for the design of optimal dynamic quantizers because their structures are functions of the plant model [,]. If the model of the plant is not given but the plant is known to be linear, then the structure of the optimal quantizer is also known to be linear. However, if the plant is nonlinear and its model is unavailable, the structure of the optimal quantizer is unknown. In this case, a neural network can approximate the optimal quantizer's structure based on time series data of the plant inputs and outputs.
This paper is structured as follows. First, we propose a class of dynamic quantizers composed of a feedforward neural network, memories, and a static quantizer. The proposed quantizer has two variations in its neural network structure: one follows a regression-based approach, and the other a classification-based approach. Then, we formulate the quantizer design problem, which finds the parameters of the neural network and the quantization interval for given time series data of the plant inputs and outputs. Next, the effectiveness of these quantizers and their design method is verified with numerical examples. Finally, several design variations are considered in order to optimize the quantizer performance, and comparisons among these variations are carried out.
It should be remarked that various results on quantizer design for networked control systems have been obtained, e.g., [,,,,]. However, the contributions of this paper are distinguished from them as follows. The papers [,,] focus on dynamic quantizers based on the zoom-in/zoom-out strategy, i.e., quantizers with a time-varying quantization interval. The paper [] considers a static logarithmic quantizer, i.e., one whose quantization interval is not uniform. In contrast, this paper proposes a dynamic quantizer with a time-invariant and uniform interval. Furthermore, the paper [] proposes a ΔΣ modulator, which is related to the proposed quantizer. While the result in [] is restricted to the case of two quantization levels, this paper can deal with the multi-level case.
This paper is a journal version of our previous conference papers presented in [,]. The main differences between this paper and its predecessors are as follows. The system description and problem formulation are improved, and detailed explanations of the proposed quantizers are added. We use the ANOVA test to analyze the various simulation results. In addition, we take into account different activation functions for the hidden layers of the neural networks and compare different initialization methods for the network tuning.
2. Neural Network Quantizers
In this section, we first describe the networked control system considered here. Then, we present a quantizer composed of a neural network and a static quantizer, called the neural network quantizer.
2.1. System Description
This paper considers the system depicted in Figure 1. This system is composed of a plant P, a communication channel with no losses or delays, and the neural network quantizer proposed in this paper.
Figure 1.
Considered system with a neural network quantizer.
The plant is represented by the following single-input single-output (SISO) model:

$$x(k+1) = f\bigl(x(k), v(k)\bigr), \qquad y(k) = g\bigl(x(k)\bigr),$$

where $k \in \{0, 1, 2, \dots\}$ is the discrete time, $x(k)$ is the state vector with initial value $x(0) = x_0$, $v(k)$ is the input, and $y(k)$ is the output. The functions $f$ and $g$ are in general nonlinear mappings. It is assumed that $f$ and $g$ are continuous and smooth.
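To make this setup concrete, the following Python sketch simulates a plant of this form. The maps `f` and `g` below are hypothetical placeholders (the paper treats the true plant as unknown), and all names are illustrative.

```python
import numpy as np

def f(x, v):
    # Hypothetical nonlinear state map (placeholder, for illustration only).
    return np.tanh(0.8 * x + 0.5 * v)

def g(x):
    # Hypothetical output map (placeholder): read out the first state.
    return float(x[0])

def simulate(v_seq, x0):
    """Run the plant x(k+1) = f(x(k), v(k)), y(k) = g(x(k)) over an input sequence."""
    x, ys = x0, []
    for v in v_seq:
        ys.append(g(x))   # record y(k)
        x = f(x, v)       # advance the state
    return np.array(ys)

y = simulate(v_seq=np.sin(0.1 * np.arange(100)), x0=np.zeros(2))
```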
The quantizer shown in Figure 1 is composed of a neural network, a static quantizer, and a pair of memories. The quantizer is represented by the following expression:

$$w(k) = \Phi\bigl(u(k), \phi_v(k), \phi_y(k)\bigr), \qquad v(k) = q\bigl(w(k)\bigr),$$

where $u(k)$ is the input, $w(k)$ is the output of the neural network, i.e., the input of the static quantizer $q$, and $v(k)$ is the output of $q$, i.e., the output of the whole quantizer. Note that $d$ is the quantization interval, and $M$ is the number of quantization levels, which is determined from the data rate of the network channel. The signals $\phi_v(k)$ and $\phi_y(k)$ are the outputs of the two memories, respectively. They are time series of past values of the quantized inputs and of the outputs of the plant, and they are given by

$$\phi_v(k) = \bigl[v(k-1), \dots, v(k-n_v)\bigr]^{\top}, \qquad \phi_y(k) = \bigl[y(k-1), \dots, y(k-n_y)\bigr]^{\top},$$

where $n_v$ and $n_y$ are the dimensions of these memories. Thus, the proposed quantizer is tuned using the past input and output data of the plant. This means that the quantizer may capture the dynamics of the plant.
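As a minimal sketch, the two memories can be realized as fixed-length buffers. The dimensions `n_v = n_y = 3` and the names `mem_v`, `mem_y` below are illustrative assumptions.

```python
from collections import deque

# Fixed-length buffers of past quantized inputs and past plant outputs.
n_v, n_y = 3, 3  # memory dimensions (assumed values, for illustration)
mem_v = deque([0.0] * n_v, maxlen=n_v)
mem_y = deque([0.0] * n_y, maxlen=n_y)

def update_memories(v_k, y_k):
    """Push the newest quantized input and plant output; the oldest
    entries drop out automatically because of the fixed maxlen."""
    mem_v.appendleft(v_k)
    mem_y.appendleft(y_k)
    return list(mem_v), list(mem_y)  # phi_v(k+1), phi_y(k+1)
```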
This paper proposes two types of neural network quantizers: a regression-based one and a classification-based one. The two quantizers differ in the expressions of the nonlinear functions that form them. Illustrations of the two quantizers are shown in Figure 2. Although detailed explanations are given in the following subsections, the main difference between them lies in the neural network structure. In the regression-based quantizer, the network has only one output, which shapes the input signal, i.e., the network is trained to perform regression. In the classification-based quantizer, the network has as many outputs as the considered number of quantization levels M. Each output represents the probability that a given input is matched with a specific quantization level, i.e., the network is trained for classification. Besides, in Figure 2a, the numbers indicate the selected quantization levels, which are determined by the static quantizer.
Figure 2.
Difference between the neural network quantizer based on regression and the one based on classification. (a) Regression-based approach. The neural network has one output and shapes the original signal; (b) Classification-based approach. The number of neural network outputs is the same as that of quantization levels, and each output corresponds to the probability that an original signal is classified into a specific quantization level.
2.2. Regression Based Neural Network Quantizer
For the regression-based neural network quantizer, the static quantizer is a regular finite-level static quantizer with saturation, as shown in Figure 3. It directly receives the continuous output of the neural network and rounds it to the nearest discrete value to generate the quantized signal. It has two parameters: the number of quantization levels $M$ and the quantization interval $d > 0$. Figure 3 shows an example of this static quantizer.
Figure 3.
Example of a static quantizer.
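The following Python sketch implements a uniform M-level static quantizer with saturation of the kind described above. Centering the M levels around zero is an assumption here, since the exact level placement of Figure 3 is not recoverable from the text.

```python
import numpy as np

def static_quantizer(w, d, M):
    """Uniform M-level static quantizer with saturation (a sketch).
    Levels are spaced d apart and centered around zero (assumption)."""
    levels = d * (np.arange(M) - (M - 1) / 2.0)   # e.g. M=4: -1.5d, -0.5d, 0.5d, 1.5d
    idx = int(round(w / d + (M - 1) / 2.0))       # index of the nearest level
    idx = min(max(idx, 0), M - 1)                 # saturation at the extremes
    return levels[idx]

print(static_quantizer(0.7, d=0.5, M=4))  # -> 0.75 (nearest of -0.75, -0.25, 0.25, 0.75)
```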
In this paper, a fully connected feedforward neural network is adopted to build the function $\Phi$ in Equation (2). An example is shown in Figure 4. In this case, the function $\Phi$ is given by the layer-by-layer composition described below.
Figure 4.
Example of a fully connected three-layer feedforward neural network.
The following elements can be recognized in the network: the input units $z_i^0$, the output units $z_i^L$, and the hidden units $z_i^l$ ($l = 1, \dots, L-1$), where $L$ is the number of layers in the network. Note that the inputs of this network are $u(k)$, $\phi_v(k)$, and $\phi_y(k)$, stacked into the input vector $z^0$. Besides, the size of the neural network is specified by the number of units in each layer. Each neuron performs a nonlinear transformation of a weighted summation of the previous layer's outputs as follows:

$$z_j^l = \sigma\Bigl(\sum_i w_{ji}^l \, z_i^{l-1}\Bigr),$$

where $w_{ji}^l$ represents the weight of the connection that goes from the $i$th neuron in layer $l-1$ to the $j$th neuron in layer $l$. Notice that a simplified notation is used here: instead of explicit biases, constant units $z_0^l = 1$ are included in the network. Because these elements are constants, their respective connection weights serve as bias parameters. The weights of all the connections in the network are collected in a vector $w$, called the weights vector, of dimension $n_w$. Furthermore, $\sigma(\cdot)$ represents the nonlinear transformation and is called the activation function. Many functions can serve as activation functions, such as the logistic sigmoid, the hyperbolic tangent, and the rectified linear unit (ReLU). In this paper, we adopt the most commonly used, the logistic sigmoid function:

$$\sigma(a) = \frac{1}{1 + e^{-a}}.$$
Finally, since the network in the regression-based quantizer has a single output unit, the function $\Phi$ in Equation (2) is given by that unit's output $z_1^L$.
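The following sketch implements this forward pass with the bias-as-constant-unit convention described above. The layer sizes and the assumption of a linear output unit are illustrative.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def nn_forward(z, weights):
    """Forward pass of a fully connected feedforward network (sketch).
    `weights` is a list of matrices W[l] of shape (n_l, n_{l-1} + 1);
    a constant unit (1.0) is appended to each layer's output so that
    the last column of W[l] acts as the bias, as in the text."""
    for l, W in enumerate(weights):
        z = np.append(z, 1.0)                       # bias unit
        a = W @ z                                   # weighted summation
        # hidden layers use the sigmoid; a linear output unit is assumed
        z = sigmoid(a) if l < len(weights) - 1 else a
    return z

# Example: u(k), phi_v(k), phi_y(k) stacked into one input vector of size 5.
rng = np.random.default_rng(0)
z_in = rng.standard_normal(5)
weights = [rng.standard_normal((8, 6)), rng.standard_normal((1, 9))]
w_out = nn_forward(z_in, weights)  # scalar network output for the regression case
```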
2.3. Classification Based Neural Network Quantizer
For the classification-based neural network quantizer, the static quantizer is not a conventional one. Its input comes from a set of indexes $\{1, 2, \dots, M\}$, each of which refers to a specific quantization level. Thus, the static quantizer is adapted to match each index to the corresponding quantization level, as Figure 5 shows. This quantizer is also defined by the number of quantization levels $M$ and the quantization interval $d$.
Figure 5.
Example of a static quantizer adapted for the classification-based approach.
The neural network in the classification-based quantizer is the same as that in the regression-based one. The inputs of this network are the same as in the previous case, and the hidden units' activation function is also the logistic sigmoid in Equation (7). The dimension of the output layer is $M$. Each output of the network is associated with one quantization level and represents the probability that a given input is classified into that quantization level. Therefore, the quantization level with the largest probability is selected as the network's output, and the selected index is given by

$$j^{*}(k) = \arg\max_{j \in \{1, \dots, M\}} z_j^{L}.$$
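A sketch of this selection rule, reusing the zero-centered level placement assumed in the static quantizer sketch above; treating the raw network outputs as class probabilities (with no explicit softmax) is also an assumption.

```python
import numpy as np

def classify_level(nn_outputs, d, M):
    """Map the M network outputs to a quantization level (sketch).
    The index with the largest output, interpreted as a class
    probability, selects the level."""
    j = int(np.argmax(nn_outputs))        # most probable class index
    return d * (j - (M - 1) / 2.0)        # corresponding level (zero-centered)

v = classify_level(np.array([0.1, 0.7, 0.15, 0.05]), d=0.5, M=4)  # -> -0.25
```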
3. Quantizer Design Problem
In this paper, it is assumed that the number of quantization levels $M$, the memory sizes $n_v$ and $n_y$, and the neural network structure are given. Thus, the design parameters of the neural network quantizer are the weights vector $w$ and the quantization interval $d$.
The performance of the quantizer in NCSs can be evaluated using a construction known as the error system, depicted in Figure 6. This system is composed of two branches. In the lower branch, the input signal u is applied directly to the plant P, which produces the ideal output y. In the upper branch, the effects of quantization are considered: u is applied to the quantizer, which generates the quantized signal v that is applied to the plant. The output of the plant in this case is represented by $y_Q$, and the difference between $y$ and $y_Q$ is the error signal. The error signal is used to evaluate the performance degradation of the system. By minimizing it, the system composed of the quantizer and the plant P can optimally approximate the plant P in terms of the input-output relation.
Figure 6.
Error system.
In this context, a parameter known as the performance index is used to measure the system's performance degradation. The performance index considered here is the sum-of-squares error function defined by

$$E = \sum_{k=0}^{K-1} \bigl(y(k) - y_Q(k)\bigr)^2,$$

where $K$ is the number of samples considered, the input sequence u is used to build y, and v and $y_Q$ are generated dynamically by the quantizer and the plant. It is necessary to make E as small as possible to keep the output error low. The design of the quantizer is therefore set up as an optimization problem in which the performance index is minimized.
This paper assumes that, although the plant model is unknown, it is possible to feed the plant with inputs and measure its outputs. Then, time series of plant inputs and outputs are available. These time series are represented as follows:

$$U = \bigl(u(0), u(1), \dots, u(K-1)\bigr), \qquad Y = \bigl(y(0), y(1), \dots, y(K-1)\bigr),$$

where $K$ is the length of the time series, namely, the number of samples. Notice that $y(k)$ represents the output of the plant P when $u(k)$ is applied directly to it, i.e., without quantization.
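With such data available, the performance index can be evaluated by re-simulating the quantized branch of the error system, as in the following sketch. The callables `quantizer` and `plant_step` and their signatures are assumptions made for illustration.

```python
def performance_index(u_seq, y_seq, quantizer, plant_step, x0):
    """Sum-of-squares performance index E (a sketch).

    u_seq      : input time series U
    y_seq      : ideal outputs Y, recorded without quantization
    quantizer  : callable u(k) -> v(k) (may keep internal memories)
    plant_step : callable (x, v) -> (y_Q, x_next), a simulator of P
    """
    x, E = x0, 0.0
    for k, u in enumerate(u_seq):
        v = quantizer(u)              # quantized input v(k)
        y_q, x = plant_step(x, v)     # plant output under quantization
        E += (y_seq[k] - y_q) ** 2    # accumulate squared output error
    return E
```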
Then, the neural network quantizer design problem is formulated as follows:
Problem 1.
Suppose that the time series data U and Y of the plant, the number of quantization levels $M$, the neural network structure, and the memory sizes $n_v$ and $n_y$ are given. Then, find the parameters of the quantizer, i.e., the weights vector $w$ and the quantization interval $d$, which minimize $E$ under the condition that $d > 0$.
This design problem is nonlinear and nonconvex. Thus, it cannot be solved using convex optimization methods such as linear programming or quadratic programming. Moreover, conventional neural network training techniques based on error backpropagation cannot be used either, because the static quantizer inside the system is not differentiable. Therefore, alternative optimization methods should be used.
In this regard, metaheuristics stand out from the available options because of their flexibility and the wide variety of implementations []. In particular, the differential evolution (DE) metaheuristic algorithm is used to design the quantizers. This choice is justified by the fact that DE has proven effective in the training of neural networks [,] and has shown outstanding performance in the design of dynamic quantizers []. DE is a population-based metaheuristic algorithm inspired by the mechanism of biological evolution [,]. In this algorithm, the cost function is evaluated iteratively over a population of possible solutions, or individuals; in each iteration, the individuals improve their values and move towards the best solution. Finally, the individual with the lowest fitness value in the last iteration is regarded as the optimal solution. Some advantages of DE are that it is very easy to implement and that it has only two tuning parameters, the scale factor F and the crossover constant H, apart from the number of individuals N and the maximum number of iterations. Besides, DE shows very good exploration capabilities and converges quickly to global optima. DE has many versions and variations; the one considered in this study is the classical DE/best/1/bin strategy, described in Algorithm 1.
Algorithm 1: DE (DE/best/1/bin strategy)

Initialization: Given $N$, $F$, $H$, the maximum number of iterations, and the initial search space, set the iteration counter to zero and randomly select $N$ individuals in the search space.

Step 1 (Evaluation): Evaluate the cost function $J$ for each individual $x_i$ and determine the best individual $x_{best} = \arg\min_i J(x_i)$.

Step 2 (Mutation): For each $x_i$, generate a mutant vector $m_i = x_{best} + F\,(x_{r_1} - x_{r_2})$, where $r_1$ and $r_2$ are distinct randomly chosen indexes different from $i$.

Step 3 (Crossover): For each $x_i$ and each coordinate $j$, generate a trial vector $t_i$ by $t_{i,j} = m_{i,j}$ if $\mathrm{rand}_j \le H$ or $j = j_r$, and $t_{i,j} = x_{i,j}$ otherwise, where $\mathrm{rand}_j$ is uniform on $[0,1]$ and $j_r$ is a randomly chosen coordinate.

Step 4 (Selection): Select the members of the next generation by $x_i \leftarrow t_i$ if $J(t_i) \le J(x_i)$, keeping $x_i$ otherwise; then return to Step 1 until the maximum number of iterations is reached.
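As a concrete sketch of Algorithm 1, the following Python implementation follows the DE/best/1/bin steps above. The function name, default hyperparameter values, and search bounds are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def de_best_1_bin(cost, n, N=50, F=0.6, H=0.9, max_iter=200,
                  bounds=(-1.0, 1.0), seed=0):
    """Minimal DE/best/1/bin sketch. `cost` maps an n-vector to a scalar."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(bounds[0], bounds[1], size=(N, n))   # initial population
    fit = np.array([cost(p) for p in pop])                  # Step 1: evaluation
    for _ in range(max_iter):
        best = pop[np.argmin(fit)]
        for i in range(N):
            r1, r2 = rng.choice([j for j in range(N) if j != i], 2, replace=False)
            mutant = best + F * (pop[r1] - pop[r2])         # Step 2: mutation
            jr = rng.integers(n)                            # coordinate forced to cross
            mask = rng.random(n) <= H
            mask[jr] = True
            trial = np.where(mask, mutant, pop[i])          # Step 3: binomial crossover
            f_trial = cost(trial)
            if f_trial <= fit[i]:                           # Step 4: greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()
```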
Since the design parameters are $w$ and $d$, an individual for the DE algorithm has the form $s = [d, w^{\top}]^{\top}$ with dimension $n = n_w + 1$. Of these parameters, the weights vector is not affected by any constraint, but the quantization interval must always be positive ($d > 0$). DE has no direct way to handle the constraints of the optimization problem, since it was designed to solve unconstrained problems. Then, in order to manage the constraint condition, a method developed by Maruta et al. in [] is employed. This method transforms the constrained optimization problem into an unconstrained one whose cost equals the performance index $E$ in Equation (9) whenever $d > 0$ and takes a large penalty value otherwise. This constraint management method ensures that $d$ remains positive.
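The following wrapper illustrates the idea of this constraint handling in a simplified form. The exact penalty of the method cited in the text differs; this is only a sketch under that assumption.

```python
PENALTY = 1e12  # large constant; the cited method's actual penalty form differs

def constrained_cost(s, E):
    """Wrap the performance index E so that individuals violating d > 0
    are heavily penalized (a simplified stand-in for the cited method).
    s = [d, w_1, ..., w_nw]."""
    d, w = s[0], s[1:]
    if d <= 0.0:
        return PENALTY + abs(d)   # push the search back toward d > 0
    return E(w, d)
```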
The learning that results from training a deep neural network depends strongly on the initial weights of the network, because many learning techniques are, in essence, local searches. Therefore, it is very important to initialize the network's weights appropriately [,]. There are several ways to initialize a neural network for training. The most common method is uniformly random initialization, in which values sampled from a certain interval with a uniform probability distribution are assigned to the weights and biases of the network. The initialization intervals are usually small and close to zero. Another prevalent type of initialization was developed by Glorot and Bengio []. This method is known as Xavier uniform initialization (from Xavier Glorot). In this method, the weights of each layer in the network are initialized using random uniform sampling in a specific interval,

$$W^{(l)} \sim \mathcal{U}\bigl[-\varepsilon_l, \varepsilon_l\bigr],$$

where $W^{(l)}$ represents the weights of the $l$th layer. The limit $\varepsilon_l$ of the interval is a function of the number of neurons in the considered layer, $n_l$, the number of neurons in the previous layer, $n_{l-1}$, and the hidden layers' activation function. The limit is the following:

$$\varepsilon_l = c \sqrt{\frac{6}{n_{l-1} + n_l}},$$

where $c$ is an activation-dependent constant ($c = 1$ for the hyperbolic tangent and $c = 4$ for the logistic sigmoid []).
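A sketch of this initialization follows. The activation-dependent gain matches Glorot and Bengio's recommendation (gain 4 for the logistic sigmoid); the function name and signature are illustrative.

```python
import numpy as np

def xavier_uniform(n_in, n_out, activation="sigmoid", rng=None):
    """Xavier/Glorot uniform initialization (sketch)."""
    rng = rng or np.random.default_rng()
    gain = 4.0 if activation == "sigmoid" else 1.0     # per Glorot & Bengio (2010)
    limit = gain * np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_out, n_in))

W1 = xavier_uniform(n_in=6, n_out=8)   # e.g., first hidden layer of the earlier example
```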
4. Numerical Simulations
To verify that the proposed neural network quantizers and their design method work properly, several numerical simulations were performed. In these simulations, the following discrete-time, nonlinear, and stable plant is used.
This plant is a modified version of the plant shown in []. The initial state is , and the input signal used in the examples is given by
The evaluation interval is , which implies that the number of samples taken is .
The quantizers are constructed with neural networks of various sizes. Given the sizes of the memories, all the networks have inputs of the same dimension. The neural network structure depends on the type of quantizer and on M. Table 1 summarizes the structures of the quantizers used in the simulations. For the regression case (R), the network structure and the dimension of the weights vector are independent of M. This is not the case for the classification type (C). Table 1 also shows a comparison of the weight-vector dimensions of the networks.
Table 1.
Simulation conditions.
The hyperparameters of DE are , , , and , and the simulations were performed times for each considered case. Since the individuals have the form given above, the dimensions n of the optimization problems are the ones shown in the last column of Table 1. From Table 1, it is possible to see that the classification-based networks have more parameters than the regression-based ones; this is a factor that influences the performance of the proposed design method.
The DE individuals are initialized as follows. The first element, d, is uniformly sampled from the interval . The network weights are initialized using the uniform random and the Xavier uniform initialization methods described in Section 3.
After the DE runs are completed for each considered case, the quantizers with the lowest performance index are selected as the optimal quantizers. Then, in order to test these quantizers, the error system in Figure 6 is fed with the input signal in each case. The results show that all the quantizers work properly and exhibit good performance. For instance, Figure 7 depicts the signals resulting from applying the input to the system with the designed quantizers in two of the cases. This figure shows that the output signals obtained under quantization follow the ideal output signal closely and that the error between them is small in both cases. The inputs of the static quantizers are also shown for comparison.
Figure 7.
Signals resulting from applying the input to the system with the quantizers designed in the considered cases. The black lines represent the signals without quantization, and the blue ones are the signals when quantization is applied.
To further validate this observation, Figure 8 shows the output signals of the system with the neural network quantizers designed for one set of conditions, and Figure 9 shows those for another. From these, we see that the proposed quantizer works well.
Figure 8.
Output signals with quantization (blue) and without quantization (black) resulting from applying the input to the error system with the designed quantizers.
Figure 9.
Output signals with quantization (blue) and without quantization (black) resulting from applying the input to the error system with the quantizers designed for the two considered cases (upper and lower figures).
In addition, the results with a static quantizer and with the optimal dynamic quantizer proposed in [] are shown in Figure 10 for comparison. The value of the performance index for the static quantizer is and that for the optimal dynamic quantizer is . On the other hand, the performance of the proposed quantizer for and is and that for and is . From this comparison, we see that the proposed quantizer achieves higher performance than the static quantizer. We also find that the performance of the proposed quantizer is close to that of the optimal dynamic quantizer, even though the proposed quantizer is designed from time series data of the plant inputs and outputs, i.e., without model information of the plant. Therefore, we can confirm that the neural network in the proposed quantizer appropriately captures the dynamics of the plant from the time series data of the plant inputs and outputs.
Figure 10.
Signals resulting from applying the input to the systems with the static quantizer in Figure 3 and with the optimal dynamic quantizer proposed in []. The black lines represent the signals without quantization, and the blue ones are the signals when quantization is applied.
The minimum values of the performance indexes in Equation (9) found by DE are listed in Table 2. In addition, this table lists the average performance indexes and their standard deviations. The values in this table are grouped according to M, initialization method, and quantizer type (regression or classification). Two initialization methods are implemented: uniform random (Urand) and Xavier.
Table 2.
Performance index analysis.
Drawing conclusions from this table by simple observation is difficult. For example, looking at the minimum values of the performance index for one of the considered M, it is possible to say that the regression-based quantizers perform better (smaller performance index) than the classification-based ones in most cases. The average values, however, do not always corroborate this observation. For the other value of M, a single type attains the smallest value in each case. Hence, there is no clear evidence of a significant difference in the performance of these types of quantizers. Therefore, the analysis of variance (ANOVA) is used to check whether there are significant differences among these values.
Because many factors influence the performance index, a three-way ANOVA (ANOVA with three factors) is used. The considered factors are the quantizer type (Type), the initialization method (Init.), and the number of layers. The categories of each factor are known as elements. For instance, the elements of the factor Type are R (regression) and C (classification). M is not taken as a factor because one of the two values consistently gives smaller performance indexes than the other, so it is not necessary to check which one gives better results. The considered significance level is . The goal is to determine whether there is some statistical difference among the mean performance indexes of the design methods.
A summary of this test is shown in Table 3. The ANOVA test shows whether there are significant differences among sets of data. With a three-way ANOVA, it is possible to see not only whether there is a significant difference among the elements of a factor but also among combinations of elements of different factors. In this particular case, it tells whether there is a significant difference between the regression and classification types, and also whether there are differences among the combinations of quantizer types and initialization methods. The three-way ANOVA test is run separately for the two values of M. For one of them, a significant difference is found only for the initialization method. For the other, significant differences are found for all the factors and their combinations, with the exception of one combination involving the quantizer type.
Table 3.
Tukey pairwise comparison for the three-way ANOVA, single factors. Grouping information using the Tukey test. Means that do not share a letter are significantly different.
So far, only one type of activation function, the logistic sigmoid, has been used in the hidden layers of the neural networks. However, other activation functions can be used. In this section, two additional activation functions are considered: the hyperbolic tangent (tanh) and the rectified linear unit (ReLU). These functions are defined by

$$\tanh(a) = \frac{e^{a} - e^{-a}}{e^{a} + e^{-a}}, \qquad \mathrm{ReLU}(a) = \max(0, a).$$
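For reference, the three hidden-layer activations compared in this section can be written compactly as NumPy one-liners:

```python
import numpy as np

# The three hidden-layer activations compared in the simulations.
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
tanh    = np.tanh
relu    = lambda a: np.maximum(0.0, a)

print(sigmoid(0.0), tanh(0.0), relu(-2.0))  # -> 0.5 0.0 0.0
```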
Several numerical simulations were performed to compare the performance of the neural network quantizers built with these functions. The settings of these simulations are the same as in the previous cases, but they were carried out for only one value of M. These simulations were run times for each case. The results are summarized in Table 4.
Table 4.
Performance index summary for the activation function comparison.
As before, it is difficult to draw conclusions from the table by simple observation. Therefore, the ANOVA test is used to analyze the data. In this case, four factors influence the results: M, the initialization method, the quantizer type, and the activation function. However, because the influence of M is already understood, the focus in this section is on the quantizer type, the initialization method, the activation function, and the interactions among them. Therefore, a three-way ANOVA general linear model of the performance index versus quantizer type (Type), initialization method (Init), and activation function is considered. The significance level used in this analysis is . The analysis of variance showed that the statistical null hypothesis that all the means are equal was rejected for every single factor and for their combinations. This means that in each case there is at least one element that significantly differs from the others. The Tukey pairwise comparison is then performed to identify the differences among the quantizer design elements.
The results of this test are summarized in Table 5, from which we observe the following. First, there is a significant difference between the regression-based and classification-based quantizers, and the regression-based one outperforms the other. Second, there is a difference between the initialization methods, with the Urand method exhibiting better performance than the Xavier method. These results corroborate the ones previously shown in Table 3. Third, the table shows that the performances of the considered activation functions vary significantly; the one with the best performance is tanh, while one of the other functions exhibits the lowest performance.
Table 5.
Tukey pairwise comparison for the three-way ANOVA in the activation function comparison. Grouping information using the Tukey test. Means that do not share a letter are significantly different.
5. Conclusions
This paper introduces the concept of neural network quantizers, which are designed using a set of input and output data of the plant. These quantizers are aimed at systems in which the model of the plant is unknown or unreliable. They are constructed using feedforward neural networks and static quantizers. Two types of neural network quantizers are proposed: a regression-based type and a classification-based type. In addition, a design method based on differential evolution is proposed for these quantizers.
By means of several numerical examples, it was found that both types of neural network quantizers, together with their DE-based design method, are effective. Furthermore, many variations were considered in the construction of these quantizers. These variations are reflected in the number of quantization levels, the number of layers of the network, the network initialization technique (Urand, Xavier), and the hidden layers' activation functions (sigmoid, tanh, ReLU). Several conclusions were reached based on the analysis of variance performed on the simulation results. Among the most important are that the quantizers based on regression outperform the ones based on classification, that the best initialization method is the random uniform (Urand), and that the activation function that gives the best performance is tanh.
Author Contributions
Conceptualization, Y.M. and J.E.R.R.; methodology, Y.M. and J.E.R.R.; software, J.E.R.R.; validation, J.E.R.R.; formal analysis, J.E.R.R.; investigation, J.E.R.R.; data curation, J.E.R.R.; writing—original draft preparation, J.E.R.R.; writing—review and editing, J.E.R.R. and Y.M.; visualization, J.E.R.R. and Y.M.; project administration, Y.M.; funding acquisition, Y.M.
Funding
This research was partly supported by JSPS KAKENHI (16H06094) and a research grant from The Mazda Foundation.
Acknowledgments
The authors would like to thank Professor Kenji Sugimoto, Nara Institute of Science and Technology, for his support.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Hespanha, J.P.; Naghshtabrizi, P.; Xu, Y. A Survey of Recent Results in Networked Control Systems. Proc. IEEE 2007, 95, 138–162.
- Gupta, R.A.; Chow, M. Networked Control System: Overview and Research Trends. IEEE Trans. Ind. Electron. 2010, 57, 2527–2535.
- Mahmoud, M.S.; Hamdan, M.M. Fundamental issues in networked control systems. IEEE/CAA J. Autom. Sin. 2018, 5, 902–922.
- Azuma, S.; Sugie, T. Optimal dynamic quantizers for discrete-valued input control. Automatica 2008, 44, 396–406.
- Azuma, S.; Sugie, T. Dynamic Quantization of Nonlinear Control Systems. IEEE Trans. Autom. Control 2012, 57, 875–888.
- Okajima, H.; Sawada, K.; Matsunaga, N. Dynamic Quantizer Design Under Communication Rate Constraints. IEEE Trans. Autom. Control 2016, 61, 3190–3196.
- Sawada, K.; Okajima, H.; Matsunaga, N.; Minami, Y. Dynamic quantizer design for MIMO systems based on communication rate constraint. In Proceedings of the 37th Annual Conference of the IEEE Industrial Electronics Society, Melbourne, Australia, 7–10 November 2011; pp. 2572–2577.
- Ramirez, J.E.; Minami, Y.; Sugimoto, K. Design of finite-level dynamic quantizers by using covariance matrix adaptation evolution strategy. Int. J. Innov. Comput. Inf. Control 2016, 795–808.
- Ramirez, J.E.R.; Minami, Y.; Sugimoto, K. Synthesis of event-triggered dynamic quantizers for networked control systems. Expert Syst. Appl. 2018, 188–194.
- Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 1989, 303–314.
- Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 359–366.
- Moustakis, N.; Yuan, S.; Baldi, S. An adaptive approach to zooming-based control for uncertain systems with input quantization. In Proceedings of the 2018 European Control Conference (ECC), Limassol, Cyprus, 13–15 June 2018; pp. 2423–2428.
- Liu, K.; Fridman, E.; Johansson, K.H. Dynamic quantization of uncertain linear networked control systems. Automatica 2015, 59, 248–255.
- Ren, W.; Xiong, J. Quantized Feedback Stabilization of Nonlinear Systems With External Disturbance. IEEE Trans. Autom. Control 2018, 63, 3167–3172.
- Takijiri, K.; Ishii, H. Networked control of uncertain systems via the coarsest quantization and lossy communication. Syst. Control Lett. 2018, 119, 57–63.
- Almakhles, D.; Swain, A.; Nasiri, A.; Patel, N. An Adaptive Two-Level Quantizer for Networked Control Systems. IEEE Trans. Control Syst. Technol. 2017, 25, 1084–1091.
- Rodriguez Ramirez, J.E.; Minami, Y.; Sugimoto, K. Neural network quantizers for discrete-valued input control. In Proceedings of the 2017 11th Asian Control Conference (ASCC), Gold Coast, Australia, 12–15 December 2017; pp. 2019–2024.
- Rodriguez Ramirez, J.E.; Minami, Y.; Sugimoto, K. Design of Quantizers with Neural Networks: Classification Based Approach. In Proceedings of the 2018 International Symposium on Nonlinear Theory and Its Applications (NOLTA2018), Tarragona, Spain, 2–6 September 2018; pp. 312–315.
- Ojha, V.K.; Abraham, A.; Snášel, V. Metaheuristic design of feedforward neural networks: A review of two decades of research. Eng. Appl. Artif. Intell. 2017, 60, 97–116.
- Ilonen, J.; Kamarainen, J.K.; Lampinen, J. Differential Evolution Training Algorithm for Feed-Forward Neural Networks. Neural Process. Lett. 2003, 93–105.
- Pérez, J.; Cabrera, J.A.; Castillo, J.J.; Velasco, J.M. Bio-inspired spiking neural network for nonlinear systems control. Neural Netw. 2018, 15–25.
- Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
- Price, K.; Storn, R.M.; Lampinen, J.A. Differential Evolution: A Practical Approach to Global Optimization, 1st ed.; Natural Computing Series; Springer: Berlin/Heidelberg, Germany, 2005.
- Maruta, I.; Kim, T.H.; Sugie, T. Fixed-structure controller synthesis: A meta-heuristic approach using simple constrained particle swarm optimization. Automatica 2009, 553–559.
- Vapnik, V.N. Statistical Learning Theory; Wiley-Interscience: Hoboken, NJ, USA, 1998; Chapter 9; pp. 395–399.
- Alpaydin, E. Introduction to Machine Learning; The MIT Press: Cambridge, MA, USA, 2010; pp. 233–277.
- Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; Volume 9, pp. 249–256.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).