Abstract
This work addresses parameter estimation for a class of neural systems with limit cycles. An identification model is formulated based on the discretized neural model. To estimate the parameter vector in the identification model, recursive least-squares and stochastic gradient algorithms, together with their multi-innovation versions obtained by introducing an innovation vector, are proposed. Simulation results on the FitzHugh–Nagumo model indicate that the proposed algorithms achieve the expected effectiveness.
1. Introduction
1.1. Background
The nervous system is a large and complex system composed of an enormous number of nerve cells. The performance of neurons is closely related to the function of the nervous system. Oscillatory activity is ubiquitous in the nervous system, and its synchronous behavior has become an increasingly active topic in neuroinformatics [1,2]. Although great progress has been made in information coding and in understanding the nonlinear dynamics of neural oscillator systems, both in neuroscience and from the system-control perspective, there is relatively little research on the modeling and control of neural oscillator systems. In a typical neural system, only a limited set of states is measurable, e.g., via the voltage-clamp technique, while the internal parameters of the system can only be estimated from measurable output sequences such as the membrane voltage. Sometimes, it is even impossible for biologists to obtain the modeling information due to various technical difficulties. For example, in the FitzHugh–Nagumo model, the membrane voltage of the neuron is measurable while the model control parameters are unknown, so one must rely on the parameter identification methods of control theory. In particular, parameter estimation for neural models is used to estimate the uncertain parameters from the available input and output data.
1.2. Parameter Estimation in Neural Model
System parameter estimation has received substantial attention over several decades [3,4,5,6,7]. One reason is its broad application to mechanical systems [8,9], neuron systems [10], communication systems [11], structural systems [12], power systems [13], etc. Existing black-box parameter identification methods for nonlinear systems are simple and universal, but their identification accuracy and convergence guarantees do not carry over universally. Therefore, accurately identifying the system parameters of spiking neuron models from finite measurements presents a far more challenging modeling problem. Examples of parameter identification methods for neural models include the time/frequency-domain method [14], the maximum likelihood method [15], the self-organizing state-space-model approach [16], the heuristic optimization method [10], and so on. In [14], a time-domain method and a frequency-domain method are applied to estimate the parameters of a neuronal model consisting of a soma coupled to a uniform dendritic cylinder, where the time-domain method is more prone to estimation errors in the cable parameters. Vavoulis et al. [16] presented an adaptive sampling algorithm using the self-organizing state-space model to estimate parameters in a Hodgkin–Huxley-type model of single neurons and to reduce the variance of the parameter estimates. Mullowney et al. [15] proposed a maximum likelihood methodology for parameter estimation in a leaky integrate-and-fire neuronal model when the only available data are the interspike intervals, i.e., the times between firings. In [10], a global heuristic search method using in-vitro and in-vivo electrophysiological data was explored for the identification of an Izhikevich-type neuron model.
In spite of these advances, parameter estimation of the FitzHugh–Nagumo (FHN) model has received only limited attention; existing approaches include Bayesian statistical methods [17,18] and least-squares algorithms [19,20]. In [17], a Bayesian framework was proposed for drift parameter estimation of the stochastic FHN model. Arnold and Lloyd [18] employed nonlinear filtering for periodic, time-varying parameter estimation, illustrated by estimating the external voltage parameter of the FHN model. In comparison, from the recursive estimation point of view, we perform parameter inference for the FHN model in this paper with data generated from the model. Concha and Garrido [19] presented two methodologies based on the least-squares algorithm to estimate the parameters of the FHN model; the two methods are only suitable for the case of a noncontinuous input current stimulus, at the cost of a linear integral filter. Che et al. [20] employed the recursive least-squares algorithm for parameter estimation of the FHN model, which requires the first and second time derivatives of the membrane potential and requires the input current stimulus to be continuously differentiable. It is well known that the stochastic gradient algorithm is an important class of stochastic approximation methods, which have received much attention and have been widely used for different systems, such as Hammerstein systems [21], Wiener systems [22] and sampled systems [23]. Though the stochastic gradient algorithm has a slower convergence rate than the least-squares algorithms, it requires less computational effort. In this paper, to improve the convergence rate of the stochastic gradient algorithm, we extend the innovation concept in [24] and explore the multi-innovation stochastic gradient algorithm for parameter estimation of the FHN neuron system.
The parameterized FHN model in [20] can also be estimated using the multi-innovation recursive least-squares algorithm proposed in this paper, and better parameter estimation accuracy will be obtained due to the use of past innovations and the repeated use of the available data. Moreover, from a computational standpoint, identification algorithms with adequate accuracy and convergence are desired for parameter estimation of neural models. In particular, it seems that little effort has been made toward parameter estimation of the FHN model using stochastic gradient methods.
1.3. Contributions
Inspired by these works, we propose four parameter estimation algorithms for the FHN neuron system with limit cycle and external disturbance. The contributions of this paper include the following:
- We formulate the FHN neuron system as an identification model based on the explicit forward Euler method.
- We propose a recursive least-squares algorithm and a stochastic gradient algorithm to estimate the unknown parameters of the model.
- We extend the innovation concept in [24], and explore the multi-innovation recursive least-squares algorithm and the multi-innovation stochastic gradient algorithm for parameter estimation of the FHN neuron system.
- We show that a faster convergence rate and better accuracy can be achieved by using past innovations and the repeated use of the available data.
1.4. Organization
The organization of the paper is as follows. Section 2 describes the FHN neural model. Section 3 formulates the identification model and presents the parameter estimation problem. The proposed algorithms are given in Section 4, where we estimate the unknown parameters using four different algorithms and summarize corresponding algorithm procedures. Section 5 provides the computational simulations. Finally, we draw some conclusions in Section 6.
2. The Spiking Neuron Model
In this section, we introduce the general spiking neuron model. A conductance-based spiking neuron model is described by

ẋ(t) = f(x(t)) + gJ(t) + d(t),    (1)

where x(t) ∈ Rⁿ is the state, which usually consists of the membrane potential, gating variables, recovery variables and adaptation variables; d(t) denotes the disturbance in the neuron system; J(t) is the external input injected into the neuron; and g ∈ Rⁿ is a constant vector, which is typically taken as g = (1, 0, …, 0)ᵀ when the external input acts on the membrane potential.
The system described by (1) is quite general, as it includes many spiking neuron models such as the Hodgkin–Huxley model, the Hindmarsh–Rose (HR) model, the FHN model and so on. In this paper, we focus on parameter estimation of the FHN model using different identification algorithms, but the proposed approach is applicable to other neuron models in a similar manner. The FHN model in a dimensionless form [25,26] can be written in the form of (1) as

v̇(t) = v(t) − v³(t)/3 − w(t) + J + d(t),
ẇ(t) = ε[v(t) + a − bw(t)],    (2)

where v denotes the voltage potential of the neuron membrane and w denotes the inactivation of the sodium channels; d(t) denotes the disturbance in the neuron system and is white noise with zero mean. The parameters ε, a and b are unknown and are to be estimated later in detail. In this paper, we consider a constant external input resulting in periodic spiking dynamics, and the external input J will also be estimated. Note that for suitable parameter values and a suitable constant input current, the neural system (2) without disturbance exhibits periodic spiking dynamics and converges to a limit cycle (see Figure 1).
Figure 1.
A limit cycle of the FHN model.
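As a concrete illustration of the dynamics above, the following sketch simulates the dimensionless FHN model with the explicit forward Euler method. The parameter values a = 0.7, b = 0.8, ε = 0.08 and J = 0.5 are illustrative assumptions (a classic limit-cycle regime), not necessarily the values used in this paper's experiments:

```python
import numpy as np

def fhn_euler(a=0.7, b=0.8, eps=0.08, J=0.5, T=0.01, steps=20000,
              v0=0.0, w0=0.0, noise_std=0.0, seed=0):
    """Forward-Euler simulation of the dimensionless FHN model:
        dv/dt = v - v^3/3 - w + J + d(t)
        dw/dt = eps * (v + a - b*w)
    """
    rng = np.random.default_rng(seed)
    v = np.empty(steps + 1)
    w = np.empty(steps + 1)
    v[0], w[0] = v0, w0
    for k in range(steps):
        d = noise_std * rng.standard_normal()          # disturbance sample
        v[k + 1] = v[k] + T * (v[k] - v[k]**3 / 3.0 - w[k] + J + d)
        w[k + 1] = w[k] + T * eps * (v[k] + a - b * w[k])
    return v, w

v, w = fhn_euler()
```

Plotting w against v for the later part of the trajectory traces out a closed orbit analogous to the limit cycle in Figure 1 (under the assumed parameter values).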
In the real nervous system, the parameters of spiking neurons are difficult to measure or determine directly, so the identification of system parameters has always been an important topic in the fields of neural computing and system control. Motivated by this, the present work is directed towards developing efficient algorithms to identify the parameters of spiking neuron models.
3. The Identification Model of Spiking Neurons
Usually, the membrane potential and the inactivation of the sodium channels are sampled discretely in experiments. Hence, let us consider the discretized system obtained by applying the explicit forward Euler method with step size T to the FHN model:

v(k+1) = v(k) + T[v(k) − v³(k)/3 − w(k) + J + d(k)],
w(k+1) = w(k) + Tε[v(k) + a − bw(k)],    (3)

where T is the sampling period, assumed to be sufficiently small, and k represents the time instant.
Denote y(k) := [v(k+1) − v(k)]/T for k = 0, 1, 2, …. Then, from (3) we have

y(k) = v(k) − v³(k)/3 − w(k) + J + d(k).    (4)

Define the information vector φ(k) := [v(k), v³(k), w(k), 1]ᵀ and the parameter vector θ := [θ₁, θ₂, θ₃, J]ᵀ, where θ₁, θ₂ and θ₃ denote the coefficients of the voltage equation (equal to 1, −1/3 and −1 in the form (2)), which are treated as unknown; the recovery equation is parameterized in the same way, so that ε, a and b can be recovered from its coefficients. Therefore, Equation (4) can be rewritten as

y(k) = φᵀ(k)θ + d(k),    (5)

which is called the identification model. Apparently, to estimate the parameters of the neuron system, it is equivalent to estimate the parameter vector θ.
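The construction of the identification model for the voltage equation can be sketched as follows; the coefficient names c1, c2, c3 are hypothetical labels for the unknown voltage-equation coefficients, and the recovery equation would be handled analogously:

```python
import numpy as np

def build_regression(v, w, T):
    """Form the identification model y(k) = phi(k)^T theta for the voltage
    equation of the Euler-discretized model, with
        y(k)   = (v[k+1] - v[k]) / T
        phi(k) = [v[k], v[k]**3, w[k], 1]
        theta  = [c1, c2, c3, J]   (c1, c2, c3: voltage-equation coefficients)
    """
    y = (v[1:] - v[:-1]) / T
    Phi = np.column_stack([v[:-1], v[:-1]**3, w[:-1], np.ones(len(v) - 1)])
    return Phi, y

# consistency check on data generated exactly by the discrete model
T = 0.01
theta_true = np.array([1.0, -1.0 / 3.0, -1.0, 0.5])    # [c1, c2, c3, J]
v = np.zeros(200)
w = np.zeros(200)
for k in range(199):
    phi = np.array([v[k], v[k]**3, w[k], 1.0])
    v[k + 1] = v[k] + T * (phi @ theta_true)
    w[k + 1] = w[k] + T * 0.08 * (v[k] + 0.7 - 0.8 * w[k])
Phi, y = build_regression(v, w, T)
```

On noise-free data generated by the discrete model itself, the regression holds exactly: Phi @ theta_true reproduces y.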
4. Parameter Estimation of the Spiking Neurons
4.1. Least-Squares Estimation Algorithms
To estimate the parameter vector θ, let us consider the cost function

J(θ) = ∑_{j=1}^{L} λ^{L−j}[y(j) − φᵀ(j)θ]²,

where L denotes the data length and λ ∈ (0, 1] is a forgetting factor. Using the least-squares principle to solve this optimization problem, we obtain the following recursive least-squares (RLS) parameter estimation formulas [3,27] to estimate the parameter vector θ:

θ̂(k) = θ̂(k−1) + K(k)[y(k) − φᵀ(k)θ̂(k−1)],
K(k) = P(k−1)φ(k)/[λ + φᵀ(k)P(k−1)φ(k)],
P(k) = [P(k−1) − K(k)φᵀ(k)P(k−1)]/λ,    (6)

where λ is the forgetting factor, K(k) is the vector of adjustment gains, θ̂(k) is the estimate of θ at time instant k, P(k) is the covariance matrix with P(0) = p₀Iₙ, and p₀ is a large positive number.
To initialize the RLS algorithm, the initial value θ̂(0) is generally taken to be a zero vector or a small real vector, e.g., θ̂(0) = δ·1ₙ, with δ a small positive constant and 1ₙ an n-dimensional column vector whose elements are all 1. Now, the RLS parameter estimation algorithm for the spiking neuron model is summarized in Algorithm 1.
Algorithm 1: The RLS algorithm.
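A minimal sketch of the RLS recursion with a forgetting factor, following the standard update order (innovation, gain, covariance); the initial values theta = 0 and P = p0*I follow the initialization described above:

```python
import numpy as np

def rls(Phi, y, lam=0.995, p0=1e6):
    """Recursive least squares with forgetting factor lam.

    Per step: innovation e = y(k) - phi(k)^T theta,
              gain       K = P phi / (lam + phi^T P phi),
              updates    theta <- theta + K e,  P <- (P - K phi^T P) / lam.
    """
    n = Phi.shape[1]
    theta = np.zeros(n)          # zero initial estimate
    P = p0 * np.eye(n)           # p0: large positive number
    for phi, yk in zip(Phi, y):
        e = yk - phi @ theta
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * e
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

# sanity check: recover a known parameter vector from noiseless data
rng = np.random.default_rng(1)
Phi = rng.standard_normal((500, 3))
theta_true = np.array([2.0, -1.0, 0.5])
theta_hat = rls(Phi, Phi @ theta_true, lam=1.0)
```

With lam = 1 the recursion reduces to ordinary recursive least squares; lam < 1 discounts old data, which helps when parameters drift.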
To obtain better estimation accuracy, we modify the algorithm in (6) and apply a multi-innovation recursive least-squares (MIRLS) parameter estimation algorithm [7] to (5) by introducing an innovation length p. By defining the stacked information matrix and stacked output vector as

Φ(p, k) := [φ(k), φ(k−1), …, φ(k−p+1)]ᵀ ∈ R^{p×n},
Y(p, k) := [y(k), y(k−1), …, y(k−p+1)]ᵀ ∈ Rᵖ,    (7)

the innovation vector can be expressed as

E(p, k) := Y(p, k) − Φ(p, k)θ̂(k−1) ∈ Rᵖ.    (8)

From here, we present the following MIRLS iterative formulas:

θ̂(k) = θ̂(k−1) + K(k)E(p, k),
K(k) = P(k−1)Φᵀ(p, k)[λIₚ + Φ(p, k)P(k−1)Φᵀ(p, k)]^{−1},
P(k) = [P(k−1) − K(k)Φ(p, k)P(k−1)]/λ.    (9)
The MIRLS parameter estimation algorithm for the FHN model is summarized in Algorithm 2.
Algorithm 2: The MIRLS algorithm.
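The MIRLS recursion can be sketched as follows, stacking the p most recent regressors and outputs so that the scalar innovation becomes an innovation vector and the gain becomes a matrix; this is a sketch of the standard multi-innovation form, not the paper's exact listing:

```python
import numpy as np

def mirls(Phi_data, y_data, p=3, lam=0.995, p0=1e6):
    """Multi-innovation RLS: at time k, the p most recent regressors are
    stacked into a p-by-n matrix Phi and the scalar innovation becomes the
    innovation vector E = Y - Phi theta; the gain K becomes n-by-p.
    """
    n = Phi_data.shape[1]
    theta = np.zeros(n)
    P = p0 * np.eye(n)
    for k in range(len(y_data)):
        j0 = max(0, k - p + 1)               # fewer rows during start-up
        Phi = Phi_data[j0:k + 1]
        E = y_data[j0:k + 1] - Phi @ theta   # innovation vector
        m = Phi.shape[0]
        K = P @ Phi.T @ np.linalg.inv(lam * np.eye(m) + Phi @ P @ Phi.T)
        theta = theta + K @ E
        P = (P - K @ Phi @ P) / lam
    return theta

# sanity check: recover a known parameter vector from noiseless data
rng = np.random.default_rng(1)
Phi_data = rng.standard_normal((500, 3))
theta_true = np.array([2.0, -1.0, 0.5])
theta_hat = mirls(Phi_data, Phi_data @ theta_true, p=3, lam=1.0)
```

Each sample is revisited p times through the sliding window, which is exactly the repeated use of data that the multi-innovation idea relies on.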
4.2. Stochastic Gradient Estimation Algorithms
Let ‖X‖² := tr(XXᵀ) denote the norm of the matrix X. For the model (5), we present the following stochastic gradient (SG) parameter estimation formulas to estimate the parameter vector θ:

θ̂(k) = θ̂(k−1) + φ(k)[y(k) − φᵀ(k)θ̂(k−1)]/r(k),
r(k) = λr(k−1) + ‖φ(k)‖²,  r(0) = 1,    (10)

where λ ∈ (0, 1] is the forgetting factor. Note that a smaller λ results in faster convergence, but the price paid is larger parameter fluctuations. In this paper, since the SG algorithm converges slowly in parameter estimation of the FHN neuron, we choose a moderate λ.
The SG parameter estimation algorithm for the spiking neuron model is summarized in Algorithm 3.
Algorithm 3: The SG algorithm.
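A sketch of the SG recursion: the step is the current regressor scaled by the innovation and normalized by r(k), which accumulates the squared regressor norms with forgetting factor lam:

```python
import numpy as np

def sg(Phi, y, lam=1.0, r0=1.0):
    """Stochastic gradient estimation:
        r(k)     = lam * r(k-1) + ||phi(k)||^2      (lam: forgetting factor)
        theta(k) = theta(k-1) + phi(k) * e(k) / r(k)
    Since r(k) >= ||phi(k)||^2, each step is non-expansive (stable).
    """
    n = Phi.shape[1]
    theta = np.zeros(n)
    r = r0
    for phi, yk in zip(Phi, y):
        e = yk - phi @ theta
        r = lam * r + phi @ phi
        theta = theta + phi * e / r
    return theta

# sanity check: slow but steady convergence on noiseless data
rng = np.random.default_rng(2)
Phi = rng.standard_normal((3000, 3))
theta_true = np.array([2.0, -1.0, 0.5])
theta_hat = sg(Phi, Phi @ theta_true, lam=0.99)
```

Per step this costs only O(n) operations, versus O(n²) for RLS, which is the computational advantage noted above.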
Similarly, to obtain better estimation accuracy, we can derive the multi-innovation stochastic gradient (MISG) parameter estimation formulas [28], using the Φ(p, k) and E(p, k) defined in Section 4.1, as follows:

θ̂(k) = θ̂(k−1) + Φᵀ(p, k)E(p, k)/r(k),
r(k) = λr(k−1) + ‖Φ(p, k)‖²,  r(0) = 1.    (11)
Compared with the SG algorithm in (10), which uses only the current data, the innovation in (11) uses not only the current innovation but also the past innovations, and thus improves the convergence rate. Moreover, the available data are used repeatedly in the MISG algorithm, and such a treatment can enhance the accuracy of the parameter estimation [24]. The MISG parameter estimation algorithm for the FHN model is summarized in Algorithm 4.
Algorithm 4: The MISG algorithm.
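The MISG recursion can be sketched as follows: a stacked window of the p most recent samples replaces the single regressor, and the normalized step uses the Frobenius norm of the stacked matrix. This is a sketch of the standard multi-innovation form under the same hypothetical regressor layout as above:

```python
import numpy as np

def misg(Phi_data, y_data, p=3, lam=1.0, r0=1.0):
    """Multi-innovation SG: theta(k) = theta(k-1) + Phi^T E / r(k), where
    Phi stacks the p most recent regressors, E is the stacked innovation
    vector, and r(k) = lam * r(k-1) + ||Phi||_F^2.
    """
    n = Phi_data.shape[1]
    theta = np.zeros(n)
    r = r0
    for k in range(len(y_data)):
        j0 = max(0, k - p + 1)                  # fewer rows during start-up
        Phi = Phi_data[j0:k + 1]
        E = y_data[j0:k + 1] - Phi @ theta      # innovation vector
        r = lam * r + np.linalg.norm(Phi)**2    # Frobenius norm squared
        theta = theta + Phi.T @ E / r
    return theta

# sanity check: recover a known parameter vector from noiseless data
rng = np.random.default_rng(2)
Phi_data = rng.standard_normal((3000, 3))
theta_true = np.array([2.0, -1.0, 0.5])
theta_hat = misg(Phi_data, Phi_data @ theta_true, p=3, lam=0.99)
```

Setting p = 1 recovers the plain SG recursion, mirroring the reduction noted for MIRLS and RLS.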
5. Simulations
For the purpose of illustration, we consider the FHN model [25,26] described in the dimensionless form (2). The FHN model is first discretized based on the explicit forward Euler method with a sufficiently small step size T. The simulations are then performed from a given initial value based on the identification algorithms in Section 4, with a forgetting factor λ chosen for the RLS and MIRLS algorithms and likewise for the SG and MISG algorithms. To compare the estimation performance of the RLS, SG, MIRLS and MISG algorithms, we apply these four algorithms to estimate the parameter vector θ of the neuron system, respectively. Note that when p = 1, the MIRLS algorithm reduces to the RLS algorithm and the MISG algorithm reduces to the SG algorithm. For the MIRLS and MISG algorithms, two cases with different innovation lengths are considered. To quantify the estimation accuracy and clearly compare the performance of the RLS, SG, MIRLS and MISG algorithms, we consider two different noise levels. The parameter estimates and their errors are shown in Table 1, Table 2, Table 3 and Table 4 for the different noise variances. The estimation errors versus k are shown in Figure 2 and Figure 3, where the estimation error is defined as δ := ‖θ̂(k) − θ‖/‖θ‖.
Table 1.
The RLS estimates and errors.
Table 2.
The MIRLS estimates and errors ().
Table 3.
The SG estimates and errors.
Table 4.
The MISG estimates and errors ().
Figure 2.
Parameter estimation errors versus k using RLS and MIRLS.
Figure 3.
Parameter estimation errors versus k using SG and MISG.
From Table 1 and Table 2, it is seen that both the RLS and MIRLS algorithms have a fast convergence rate and a high estimation accuracy, which can also be observed from Figure 2 and Figure 3. On the other hand, Table 3 and Table 4 show that the SG and MISG algorithms require a considerably larger data length to achieve an acceptable estimation accuracy. Though not shown in the tables, the MIRLS algorithm with the larger innovation length enjoys the most accurate parameter estimates and the fastest convergence rate. From Figure 2 and Figure 3, we can see that, for the same batch of data, the SG algorithm extracts less information from the measured data and uses it less efficiently than the RLS algorithm, which results in a much slower convergence. Hence, to show the effect clearly, the results of the RLS-type and SG-type algorithms are shown in two separate figures. Meanwhile, from Figure 2, it can be found that, due to the high efficiency of the RLS algorithm, the MIRLS algorithm offers only limited additional improvement in the accuracy of parameter estimation. However, the advantage of the MIRLS algorithm is expected to become apparent in the case of missing data, which remains our future work. From Figure 3, it is clear that the MISG algorithm has a faster convergence rate than the SG algorithm, and the MISG estimates have higher accuracy than the SG estimates. In fact, the parameter estimation errors of the MISG algorithm become smaller and smaller as the innovation length p increases.
6. Conclusions
In this paper, we have addressed the parameter estimation problem of the FHN neuron model with limit cycles by utilizing the RLS, MIRLS, SG and MISG algorithms. The MIRLS and MISG identification algorithms, which take past innovations into account, have been applied to improve the identification accuracy. The framework and algorithms used here could serve as a template for performing parameter inference on more complex neuronal models. Finally, simulation results have been provided to corroborate the effectiveness of the proposed algorithms.
Author Contributions
X.L., X.C. and B.C. conceived and designed the theoretical framework; X.L. and X.C. performed the experiments; X.L., X.C. and B.C. analyzed the data; and X.L. wrote the paper with contributions from all authors. All authors read and approved the submitted manuscript, agreed to be listed, and accepted this version for publication.
Funding
This work is partially supported by National Natural Science Foundation of China (61473136) and Natural Science Foundation of Jiangsu Higher Education Institutions of China (18KJB180026).
Acknowledgments
The authors thank the anonymous reviewers for their valuable comments.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Buzsaki, G.; Draguhn, A. Neuronal oscillations in cortical networks. Science 2004, 304, 1926–1929. [Google Scholar] [CrossRef] [PubMed]
- Singer, W. Neuronal synchrony: A versatile code for the definition of relations. Neuron 1999, 24, 49–65. [Google Scholar] [CrossRef]
- Ljung, L. System Identification: Theory for the User; Prentice-Hall: Englewood Cliffs, NJ, USA, 1987. [Google Scholar]
- Ljung, L. Perspectives on system identification. Annu. Rev. Control 2010, 34, 1–12. [Google Scholar] [CrossRef]
- Juang, J.N.; Phan, M.Q. Identification and Control of Mechanical Systems; Cambridge University Press: Cambridge, UK, 2001. [Google Scholar]
- Moonen, M.; Ramos, J. A subspace algorithm for balanced state space system identification. IEEE Trans. Autom. Control 1993, 38, 1727–1729. [Google Scholar] [CrossRef]
- Ding, F. System Identification-New Theory and Methods; Science Press: Beijing, China, 2013. [Google Scholar]
- Pappalardo, C.M.; Guida, D. A time-domain system identification numerical procedure for obtaining linear dynamical models of multibody mechanical systems. Archiv. Appl. Mech. 2018, 88, 1325–1347. [Google Scholar] [CrossRef]
- Pappalardo, C.M.; Guida, D. System identification algorithm for computing the modal parameters of linear mechanical systems. Machines 2018, 6, 1–20. [Google Scholar]
- Lynch, E.P.; Houghton, C.J. Parameter estimation of neuron models using in-vitro and in-vivo electrophysiological data. Front. Neuroinf. 2015, 9, 10. [Google Scholar] [CrossRef] [PubMed]
- Duan, C.; Zhan, Y. The response of a linear monostable system and its application in parameters estimation for PSK signals. Phys. Lett. A 2016, 380, 1358–1362. [Google Scholar] [CrossRef]
- Pappalardo, C.M.; Guida, D. System identification and experimental modal analysis of a frame structure. Eng. Lett. 2018, 26, 56–68. [Google Scholar]
- Kenné, G.; Ahmed-Ali, T.; Lamnabhi-Lagarrigue, F.; Arzandé, A. Nonlinear systems time-varying parameter estimation: application to induction motors. Electr. Power Syst. Res. 2008, 78, 1881–1888. [Google Scholar] [CrossRef]
- Tabak, J.; Murphey, R.; Moore, L.E. Parameter estimation methods for single neuron models. J. Comput. Neurosci. 2000, 9, 215–236. [Google Scholar] [CrossRef] [PubMed]
- Mullowney, P.; Iyengar, S. Parameter estimation for a leaky integrate-and-fire neuronal model from ISI data. J. Comput. Neurosci. 2008, 24, 179–194. [Google Scholar] [CrossRef] [PubMed]
- Vavoulis, D.V.; Straub, V.A.; Aston, J.A.D.; Feng, F. A self-organizing state-space-model approach for parameter estimation in Hodgkin-Huxley-type models of single neurons. PLoS Comput. Biol. 2012, 8, e1002401. [Google Scholar] [CrossRef] [PubMed]
- Jensen, A.; Ditlevsen, S.; Kessler, M.; Papaspiliopoulos, O. Markov chain Monte Carlo approach to parameter estimation in the FitzHugh–Nagumo model. Phys. Rev. E 2012, 86, 041114. [Google Scholar] [CrossRef] [PubMed]
- Arnold, A.; Lloyd, A.L. An approach to periodic, time-varying parameter estimation using nonlinear filtering. Inverse Probl. 2018, 34, 105005. [Google Scholar] [CrossRef]
- Concha, A.; Garrido, R. Parameter estimation of the FitzHugh–Nagumo neuron model using integrals over finite time periods. J. Comput. Nonlinear Dyn. 2015, 10, 021023. [Google Scholar] [CrossRef]
- Che, Y.; Geng, L.; Han, C.; Cui, S.; Wang, J. Parameter estimation of the FitzHugh–Nagumo model using noisy measurements for membrane potential. Chaos 2012, 22, 023139. [Google Scholar] [CrossRef] [PubMed]
- Ding, F. Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling. Appl. Math. Model. 2013, 37, 1694–1704. [Google Scholar] [CrossRef]
- Li, J.; Hua, C.; Tang, Y.; Guan, X. Stochastic gradient with changing forgetting factor-based parameter identification for Wiener systems. Appl. Math. Lett. 2014, 33, 40–45. [Google Scholar] [CrossRef]
- Chen, J.; Lv, L.; Ding, R. Multi-innovation stochastic gradient algorithms for dual-rate sampled systems with preload nonlinearity. Appl. Math. Lett. 2013, 26, 124–129. [Google Scholar] [CrossRef]
- Ding, F.; Chen, T. Performance analysis of multi-innovation gradient type identification methods. Automatica 2007, 43, 1–14. [Google Scholar] [CrossRef]
- Keener, J.; Sneyd, J. Mathematical Physiology; Springer: New York, NY, USA, 2009; pp. 1–47. [Google Scholar]
- Danzl, P.; Hespanha, J.; Moehlis, J. Event-based minimum-time control of oscillatory neuron models: phase randomization, maximal spike rate increase, and desynchronization. Biol. Cybern. 2009, 101, 387–399. [Google Scholar] [CrossRef] [PubMed]
- Ding, F.; Wang, Y.J.; Dai, J.Y.; Li, Q.S.; Chen, Q.J. A recursive least squares parameter estimation algorithm for output nonlinear autoregressive systems using the input-output data filtering. J. Franklin Inst. 2017, 354, 6938–6955. [Google Scholar] [CrossRef]
- Xu, L.; Ding, F.; Gu, Y.; Alsaedi, A.; Hayat, T. A multi-innovation state and parameter estimation algorithm for a state space system with d-step state-delay. Signal Process. 2017, 140, 97–103. [Google Scholar] [CrossRef]
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).


