
Parameter Estimation of a Class of Neural Systems with Limit Cycles

Institute of System Engineering, Jiangnan University, 1800 Lihu Road, Wuxi 214122, China
* Author to whom correspondence should be addressed.
Algorithms 2018, 11(11), 169; https://doi.org/10.3390/a11110169
Submission received: 11 September 2018 / Revised: 22 October 2018 / Accepted: 23 October 2018 / Published: 26 October 2018
(This article belongs to the Special Issue Parameter Estimation Algorithms and Its Applications)

Abstract

This work addresses parameter estimation of a class of neural systems with limit cycles. An identification model is formulated based on the discretized neural model. To estimate the parameter vector in the identification model, recursive least-squares and stochastic gradient algorithms, together with their multi-innovation versions obtained by introducing an innovation vector, are proposed. Simulation results for the FitzHugh–Nagumo model demonstrate the effectiveness of the proposed algorithms.

1. Introduction

1.1. Background

The nervous system is a large and complex system composed of a vast number of nerve cells. The performance of neurons is closely related to the function of the nervous system. Oscillatory activity is ubiquitous in the nervous system, and its synchronous behavior has become an increasingly active topic in neuroinformatics [1,2]. Although great progress has been made in information coding and in understanding the nonlinear dynamics of neural oscillator systems, both in neuroscience and from the system-control perspective, there is relatively little research on the modeling and control of neural oscillator systems. In a typical neural system, only a limited set of states is measurable using the voltage-clamp technique, so the internal parameters of the system can only be estimated from the measurable output sequences, such as the membrane voltage. Sometimes it is even impossible for biologists to obtain the modeling information due to various technical difficulties. For example, in the FitzHugh–Nagumo model, the membrane voltage of the neuron is measurable while the model control parameters are unknown, so their determination relies on the parameter identification methods of control theory. In particular, parameter estimation for neural models is used to estimate the uncertain parameters from the available input and output data.

1.2. Parameter Estimation in Neural Model

System parameter estimation has received substantial attention over the past several decades [3,4,5,6,7]. One reason is its broad application to mechanical systems [8,9], neuron systems [10], communication systems [11], structural systems [12], power systems [13], etc. Existing black-box parameter identification methods for nonlinear systems are simple and universal, but the identification accuracy and the convergence of such algorithms cannot be guaranteed in general. Therefore, accurately identifying the system parameters of spiking neuron models from finite measurements presents a far more challenging modeling problem. Examples of parameter identification methods for neural models include the time/frequency-domain method [14], the maximum likelihood method [15], the self-organizing state-space-model approach [16], the heuristic optimization method [10], and so on. In [14], a time-domain method and a frequency-domain method are applied to estimate the parameters of a neuronal model consisting of a soma coupled to a uniform dendritic cylinder, where the time-domain method is more prone to estimation errors in the cable parameters. Vavoulis et al. [16] presented an adaptive sampling algorithm using the self-organizing state-space model to estimate parameters in a Hodgkin–Huxley-type model of single neurons and to achieve a reduced variance of the parameter estimates. Mullowney et al. [15] proposed a maximum likelihood methodology for parameter estimation in a leaky integrate-and-fire neuronal model where the only available data are the interspike intervals, i.e., the times between firings. In [10], a global heuristic search method using in-vitro and in-vivo electrophysiological data was explored for the identification of an Izhikevich-type neuron model.
In spite of these advances, only limited attention has been paid to parameter estimation of the FitzHugh–Nagumo (FHN) model; existing approaches include Bayesian statistical approaches [17,18] and least-squares algorithms [19,20]. In [17], a Bayesian framework was proposed for drift parameter estimation of the stochastic FHN model. Arnold and Lloyd [18] employed nonlinear filtering for periodic, time-varying parameter estimation, illustrated by the estimation of the external voltage parameter in the FHN model. In comparison, from the recursive estimation point of view, we perform parameter inference for the FHN model in this paper with data generated from the model. Concha and Garrido [19] presented two methodologies based on the least-squares algorithm to estimate the parameters of the FHN model. The two methods were only suitable for the case of a noncontinuous input current stimulus, at the cost of a linear integral filter. Che et al. [20] employed the recursive least-squares algorithm for parameter estimation of the FHN model, which requires the first and second time derivatives of the membrane potential and a continuously differentiable input current stimulus. It is well known that the stochastic gradient algorithm is an important class of stochastic approximation methods, which have received much attention and have been widely used for different systems, such as Hammerstein systems [21], Wiener systems [22] and sampled systems [23]. Although the stochastic gradient algorithm has a slower convergence rate than the least-squares algorithms, it requires less computational effort. In this paper, to improve the convergence rate of the stochastic gradient algorithm, we extend the innovation concept in [24] and explore the multi-innovation stochastic gradient algorithm for parameter estimation of the FHN neuron system.
The parameterized FHN model in [20] can also be handled by the multi-innovation recursive least-squares algorithm proposed in this paper, and a better parameter estimation accuracy will be obtained due to the use of past innovations and the repeated use of the available data. However, from the computational point of view, identification algorithms with guaranteed identification accuracy and convergence are desired for parameter estimation of neural models. In particular, it seems that little effort has been made toward parameter estimation of the FHN model using stochastic gradient methods.

1.3. Contributions

Inspired by these works, we propose four parameter estimation algorithms for the FHN neuron system with limit cycles and external disturbance. The contributions of this paper include the following:
  • We formulate the FHN neuron system as an identification model based on the explicit forward Euler method.
  • We propose a recursive least-squares algorithm and a stochastic gradient algorithm to estimate the unknown parameters of the model.
  • We extend the innovation concept in [24] and explore the multi-innovation recursive least-squares algorithm and the multi-innovation stochastic gradient algorithm for parameter estimation of the FHN neuron system.
  • We show that a faster convergence rate and better accuracy can be achieved using the innovation and repeated available data.

1.4. Organization

The organization of the paper is as follows. Section 2 describes the FHN neural model. Section 3 formulates the identification model and presents the parameter estimation problem. The proposed algorithms are given in Section 4, where we estimate the unknown parameters using four different algorithms and summarize corresponding algorithm procedures. Section 5 provides the computational simulations. Finally, we draw some conclusions in Section 6.

2. The Spiking Neuron Model

In this section, we introduce the general spiking neuron models. A conductance-based spiking neuron model is described by
$$\dot{x}(t) = F(x(t)) + B\,u(t) + \xi(t), \tag{1}$$
where x(t) ∈ R^n is the state, which usually consists of the membrane potential, gating variable, recovery variable and adaptation variable; ξ(t) ∈ R^n denotes the disturbance in the neuron system; u(t) ∈ R is the external input injected into the neuron; and B ∈ R^n is a constant vector, typically taken as B = [1, 0, …, 0]^T (with n − 1 zeros) when the external input acts on the membrane potential.
The system described by (1) is quite general as it includes many spiking neuron models such as the Hodgkin-Huxley model, the Hindmarsh–Rose (HR) model, the FHN model and so on. In this paper, we shall focus on parameter estimation of the FHN model using different identification algorithms, but the proposed approach is applicable to other neuron models in a similar manner. The FHN model in a dimensionless form [25,26] can be written in the form of (1)
$$\begin{bmatrix} \dot{v}(t) \\ \dot{w}(t) \end{bmatrix} = \begin{bmatrix} \mu\big( v(v-a)(b-v) - w + u(t) \big) \\ c_1 v - c_2 w \end{bmatrix} + \xi(t), \tag{2}$$
where v denotes the voltage potential of the neuron membrane and w denotes the inactivation of the sodium channels; ξ ∈ R² denotes the disturbance in the neuron system and is white noise with zero mean. The unknown parameters a, b, c1, c2, μ are to be estimated later in detail. In this paper, we consider a constant external input u(t) ≡ J resulting in periodic spiking dynamics, and the external input J will also be estimated. Note that when the parameters are specified as a = 0.1, b = 1, c1 = 1, c2 = 0.5, μ = 100 and the input current value is taken as J = 0.5, the neural system starting from (v(0), w(0)) = (0.3, 0.6) without disturbance exhibits periodic spiking dynamics and converges to a limit cycle (see Figure 1).
In the real nervous system, the parameters of spiking neurons are difficult to measure or determine; hence, the identification of system parameters has always been an important topic in the fields of neural computing and system control. Motivated by this, the present work is directed towards developing efficient algorithms to identify the parameters of spiking neuron models.
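As a quick numerical illustration of the limit-cycle behavior just described, the following Python sketch integrates the FHN model (2) by the forward Euler method with the parameter values quoted above; the step size h and the number of steps are assumptions made for illustration, not values taken from the paper.

```python
import numpy as np

def simulate_fhn(v0=0.3, w0=0.6, h=1e-3, steps=20_000,
                 a=0.1, b=1.0, c1=1.0, c2=0.5, mu=100.0, J=0.5):
    """Forward-Euler simulation of the dimensionless FHN model (2) with u(t) = J."""
    v = np.empty(steps + 1)
    w = np.empty(steps + 1)
    v[0], w[0] = v0, w0
    for k in range(steps):
        # membrane-potential dynamics: cubic nonlinearity, recovery variable, input
        dv = mu * (v[k] * (v[k] - a) * (b - v[k]) - w[k] + J)
        # recovery-variable dynamics
        dw = c1 * v[k] - c2 * w[k]
        v[k + 1] = v[k] + h * dv
        w[k + 1] = w[k] + h * dw
    return v, w

v, w = simulate_fhn()
```

Starting from (0.3, 0.6), the trajectory settles onto sustained large-amplitude relaxation oscillations, consistent with the limit cycle of Figure 1; plotting w against v traces the closed orbit.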

3. The Identification Model of Spiking Neurons

Usually, the neuron membrane and the inactivation of the sodium channels are discretely sampled in experiment. Hence, let us consider the discretized system of the FHN model using the explicit forward Euler method with step size T as follows
$$\frac{x(k+1) - x(k)}{T} = f(x(k)) + \xi(k), \tag{3}$$
where T is the sampling period, which is sufficiently small; k represents the time instant kT; and
$$f(x(k)) = \begin{bmatrix} \mu\big( v(k)\,(v(k)-a)\,(b-v(k)) - w(k) + J \big) \\ c_1 v(k) - c_2 w(k) \end{bmatrix}.$$
Denote v_{k,m} := v(k − m), w_{k,m} := w(k − m) and ξ_{k,m} := ξ(k − m) for m = 1, 2, …. Then, we have
$$\begin{bmatrix} \dfrac{v(k) - v(k-1)}{T} \\[4pt] \dfrac{w(k) - w(k-1)}{T} \end{bmatrix} = \begin{bmatrix} \mu\big( v_{k,1}(v_{k,1}-a)(b-v_{k,1}) - w_{k,1} + J \big) \\ c_1 v_{k,1} - c_2 w_{k,1} \end{bmatrix} + \xi_{k,1}. \tag{4}$$
Define y(k) = [y_1(k), y_2(k)]^T with y_1(k) = (v(k) − v(k−1))/T and y_2(k) = (w(k) − w(k−1))/T, the parameter vector
$$\theta = \big[\, \mu,\; (a+b)\mu,\; ab\mu,\; \mu J,\; c_1,\; c_2 \,\big]^{T},$$
and the information matrix
$$\phi(k) = \begin{bmatrix} -v_{k,1}^{3} - w_{k,1} & v_{k,1}^{2} & -v_{k,1} & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & v_{k,1} & -w_{k,1} \end{bmatrix} \in \mathbb{R}^{2 \times 6}.$$
Therefore, Equation (4) can be rewritten as
$$y(k) = \phi(k)\,\theta + \xi_{k,1}, \tag{5}$$
which is called the identification model. Apparently, estimating the parameters a, b, μ, J, c1, c2 is equivalent to estimating the parameter vector θ.
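The identification model (5) can be checked numerically: on noise-free data generated by the Euler discretization, y(k) must coincide with ϕ(k)θ up to floating-point rounding. The Python sketch below builds the parameter vector θ = [μ, (a+b)μ, abμ, μJ, c1, c2]^T and the regressor ϕ(k); the sign conventions are inferred to be consistent with the true values listed later in Tables 1–4, and the step size and trajectory length are illustrative assumptions.

```python
import numpy as np

a, b, c1, c2, mu, J = 0.1, 1.0, 1.0, 0.5, 100.0, 0.5
# theta = [mu, (a+b)mu, ab*mu, mu*J, c1, c2]^T as in the identification model (5)
theta = np.array([mu, (a + b) * mu, a * b * mu, mu * J, c1, c2])

def phi(v1, w1):
    """Information matrix phi(k), built from the states v(k-1), w(k-1)."""
    return np.array([
        [-v1**3 - w1, v1**2, -v1, 1.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, v1, -w1],
    ])

# Noise-free forward-Euler trajectory of the FHN model (assumed step size T)
T, N = 1e-3, 500
v = np.empty(N + 1); w = np.empty(N + 1)
v[0], w[0] = 0.3, 0.6
for k in range(N):
    v[k + 1] = v[k] + T * mu * (v[k] * (v[k] - a) * (b - v[k]) - w[k] + J)
    w[k + 1] = w[k] + T * (c1 * v[k] - c2 * w[k])

# y(k) = [(v(k)-v(k-1))/T, (w(k)-w(k-1))/T]^T should equal phi(k) @ theta
resid = max(
    np.max(np.abs(np.array([(v[k] - v[k - 1]) / T, (w[k] - w[k - 1]) / T])
                  - phi(v[k - 1], w[k - 1]) @ theta))
    for k in range(1, N + 1)
)
print(resid)  # at floating-point rounding level
```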

4. Parameter Estimation of the Spiking Neurons

4.1. Least-Squares Estimation Algorithms

To estimate the parameters θ , let us consider the cost function
$$J(\theta) = \sum_{k=1}^{L_d} \sum_{i=1}^{2} \big( y_i(k) - \phi_i(k)\,\theta \big)^2,$$
where L_d denotes the data length and ϕ_i(k) denotes the i-th row of ϕ(k). Using the least-squares principle to solve this optimization problem, we obtain the following recursive least-squares (RLS) parameter estimation formulas [3,27] to estimate the parameter vector θ.
$$\begin{aligned} \hat{\theta}(k) &= \hat{\theta}(k-1) + L(k)\,\varepsilon(k), \\ \varepsilon(k) &= \begin{bmatrix} y_1(k) - \phi_1(k)\,\hat{\theta}(k-1) \\ y_2(k) - \phi_2(k)\,\hat{\theta}(k-1) \end{bmatrix}, \\ L(k) &= P(k-1)\,\phi^{T}(k)\,\big[ \lambda I + \phi(k)\,P(k-1)\,\phi^{T}(k) \big]^{-1}, \\ P(k) &= \big( I - L(k)\,\phi(k) \big)\,P(k-1), \end{aligned} \tag{6}$$
where λ ∈ (0, 1] is the forgetting factor, P(k) ∈ R^{ñ×ñ} is the covariance matrix of adjustment gains with ñ = 6, P(0) = p_0 I, and p_0 is a large positive number.
To initialize the RLS algorithm, the initial value θ̂(0) is generally taken to be a zero vector or a small real vector, e.g., θ̂(0) = 10^{−6} · 1_ñ, with 1_ñ being an ñ-dimensional column vector whose elements are all 1. Now, the RLS parameter estimation algorithm for the spiking neuron model is summarized in Algorithm 1.
Algorithm 1 RLS algorithm
(1)
Discretize the FHN model based on the explicit forward Euler method with step size T.
(2)
Initialization: set the forgetting factor λ ∈ (0, 1] and let k = 1, θ̂(0) = 1_ñ/p_0, P(0) = p_0 I, p_0 = 10^6.
(3)
Collect the measurement state data and determine a data length L d . Then, form the output data y ( k ) and the information vector ϕ ( k ) .
(4)
Compute the gain L(k) and the covariance matrix P(k) by
$$L(k) = P(k-1)\,\phi^{T}(k)\,\big[ \lambda I + \phi(k)\,P(k-1)\,\phi^{T}(k) \big]^{-1}, \qquad P(k) = \big( I - L(k)\,\phi(k) \big)\,P(k-1),$$
respectively.
(5)
Update the estimate θ̂(k) by θ̂(k) = θ̂(k−1) + L(k)ε(k), where
$$\varepsilon(k) = \begin{bmatrix} y_1(k) - \phi_1(k)\,\hat{\theta}(k-1) \\ y_2(k) - \phi_2(k)\,\hat{\theta}(k-1) \end{bmatrix}.$$
(6)
If k < L d , increase k by 1 and go to step (3); otherwise, stop the procedure and output the estimate θ ^ ( L d ) of the parameter vector θ .
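The steps of Algorithm 1 can be sketched compactly in Python. The simulated data below are noise-free, and the step size and data length are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

a, b, c1, c2, mu, J = 0.1, 1.0, 1.0, 0.5, 100.0, 0.5
theta_true = np.array([mu, (a + b) * mu, a * b * mu, mu * J, c1, c2])

def phi(v1, w1):
    return np.array([[-v1**3 - w1, v1**2, -v1, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0, v1, -w1]])

# Step (1): noise-free forward-Euler data on the limit cycle (assumed step size)
T, N = 1e-3, 5000
v = np.empty(N + 1); w = np.empty(N + 1)
v[0], w[0] = 0.3, 0.6
for k in range(N):
    v[k + 1] = v[k] + T * mu * (v[k] * (v[k] - a) * (b - v[k]) - w[k] + J)
    w[k + 1] = w[k] + T * (c1 * v[k] - c2 * w[k])

# Step (2): initialization per Algorithm 1
lam, p0, n = 0.99, 1e6, 6
th = np.full(n, 1.0 / p0)          # theta_hat(0) = 1_n / p0
P = p0 * np.eye(n)                 # P(0) = p0 * I

# Steps (3)-(6): RLS recursion (6)
for k in range(1, N + 1):
    ph = phi(v[k - 1], w[k - 1])                       # 2 x 6 information matrix
    y = np.array([(v[k] - v[k - 1]) / T, (w[k] - w[k - 1]) / T])
    eps = y - ph @ th                                  # innovation
    L = P @ ph.T @ np.linalg.inv(lam * np.eye(2) + ph @ P @ ph.T)
    th = th + L @ eps
    P = (np.eye(n) - L @ ph) @ P

delta = np.linalg.norm(th - theta_true) / np.linalg.norm(theta_true)
print(np.round(th, 3), delta)  # estimate approaches [100, 110, 10, 50, 1, 0.5]
```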
To obtain better estimation accuracy, we modify algorithm (6) and apply a multi-innovation recursive least-squares (MIRLS) parameter estimation algorithm [7] to (5) by introducing an innovation length p. Defining the stacked output vector Y(p, k) and the information matrix Φ(p, k) as
$$Y(p,k) = \big[\, y^{T}(k),\; y^{T}(k-1),\; \ldots,\; y^{T}(k-p+1) \,\big]^{T} \in \mathbb{R}^{2p}, \tag{7}$$
$$\Phi(p,k) = \big[\, \phi^{T}(k),\; \phi^{T}(k-1),\; \ldots,\; \phi^{T}(k-p+1) \,\big] \in \mathbb{R}^{6 \times 2p}, \tag{8}$$
the innovation vector E ( p , k ) can be expressed as
$$E(p,k) = Y(p,k) - \Phi^{T}(p,k)\,\hat{\theta}(k-1).$$
From here, we present the following MIRLS iterative formulas
$$\begin{aligned} \hat{\theta}(k) &= \hat{\theta}(k-1) + L(k)\,E(p,k), \\ E(p,k) &= Y(p,k) - \Phi^{T}(p,k)\,\hat{\theta}(k-1), \\ L(k) &= P(k-1)\,\Phi(p,k)\,\big[ \lambda I + \Phi^{T}(p,k)\,P(k-1)\,\Phi(p,k) \big]^{-1}, \\ P(k) &= \big( I - L(k)\,\Phi^{T}(p,k) \big)\,P(k-1). \end{aligned} \tag{9}$$
The MIRLS parameter estimation algorithm for the FHN model is summarized in Algorithm 2.
Algorithm 2 MIRLS algorithm
(1)
Discretize the FHN model based on the explicit forward Euler method with step size T.
(2)
Initialization: set the forgetting factor λ ∈ (0, 1] and let k = 1, θ̂(0) = 1_ñ/p_0, P(0) = p_0 I, p_0 = 10^6.
(3)
Collect the measurement state data and determine a data length L d . Then, form the output data y ( k ) and the information vector ϕ ( k ) .
(4)
Given an innovation length p, form Y ( p , k ) by (7) and Φ ( p , k ) by (8).
(5)
Compute the gain matrix L(k) and the covariance matrix P(k) by
$$L(k) = P(k-1)\,\Phi(p,k)\,\big[ \lambda I + \Phi^{T}(p,k)\,P(k-1)\,\Phi(p,k) \big]^{-1}$$
and
$$P(k) = \big( I - L(k)\,\Phi^{T}(p,k) \big)\,P(k-1),$$
respectively.
(6)
Update the estimate θ̂(k) by θ̂(k) = θ̂(k−1) + L(k)E(p, k), where
$$E(p,k) = Y(p,k) - \Phi^{T}(p,k)\,\hat{\theta}(k-1).$$
(7)
If k < L d , increase k by 1 and go to step (3); otherwise, stop the procedure and output the estimate θ ^ ( L d ) of the parameter vector θ .
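The MIRLS recursion (9) differs from the RLS recursion only in that it stacks the last p outputs and regressors. The following Python sketch uses p = 3 on noise-free data; all simulation settings are illustrative assumptions.

```python
import numpy as np

a, b, c1, c2, mu, J = 0.1, 1.0, 1.0, 0.5, 100.0, 0.5
theta_true = np.array([mu, (a + b) * mu, a * b * mu, mu * J, c1, c2])

def phi(v1, w1):
    return np.array([[-v1**3 - w1, v1**2, -v1, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0, v1, -w1]])

# Noise-free forward-Euler data (assumed step size)
T, N, p = 1e-3, 5000, 3
v = np.empty(N + 1); w = np.empty(N + 1)
v[0], w[0] = 0.3, 0.6
for k in range(N):
    v[k + 1] = v[k] + T * mu * (v[k] * (v[k] - a) * (b - v[k]) - w[k] + J)
    w[k + 1] = w[k] + T * (c1 * v[k] - c2 * w[k])

def y_of(k):
    return np.array([(v[k] - v[k - 1]) / T, (w[k] - w[k - 1]) / T])

lam, p0, n = 0.99, 1e6, 6
th = np.full(n, 1.0 / p0)
P = p0 * np.eye(n)
for k in range(p, N + 1):
    # Stacked quantities (7) and (8): Y in R^{2p}, Phi in R^{6 x 2p}
    Y = np.concatenate([y_of(k - j) for j in range(p)])
    Phi = np.hstack([phi(v[k - j - 1], w[k - j - 1]).T for j in range(p)])
    E = Y - Phi.T @ th                                  # innovation vector
    L = P @ Phi @ np.linalg.inv(lam * np.eye(2 * p) + Phi.T @ P @ Phi)
    th = th + L @ E
    P = (np.eye(n) - L @ Phi.T) @ P

delta = np.linalg.norm(th - theta_true) / np.linalg.norm(theta_true)
print(np.round(th, 3), delta)
```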

4.2. Stochastic Gradient Estimation Algorithms

Let ‖X‖² := tr[X Xᵀ] denote the squared (Frobenius) norm of the matrix X. For the model (5), we present the following stochastic gradient (SG) parameter estimation formulas to estimate the parameter vector θ:
$$\begin{aligned} \hat{\theta}(k) &= \hat{\theta}(k-1) + \frac{1}{r(k)}\,\phi^{T}(k)\,\varepsilon(k), \\ \varepsilon(k) &= \begin{bmatrix} y_1(k) - \phi_1(k)\,\hat{\theta}(k-1) \\ y_2(k) - \phi_2(k)\,\hat{\theta}(k-1) \end{bmatrix}, \\ r(k) &= \alpha\, r(k-1) + \|\phi(k)\|^{2}, \qquad r(0) = 1, \end{aligned} \tag{10}$$
where α is the forgetting factor. Note that a smaller α results in faster convergence, at the price of larger parameter fluctuations. In this paper, since the SG algorithm converges slowly for parameter estimation of the FHN neuron, we choose a moderate α = 0.8.
The SG parameter estimation algorithm for the spiking neuron model is summarized in Algorithm 3.
Algorithm 3 SG algorithm
(1)
Discretize the FHN model based on the explicit forward Euler method with step size T.
(2)
Initialization: set a small forgetting factor α ∈ (0, 1] and let k = 1, θ̂(0) = 1_ñ/p_0, r(0) = 1, p_0 = 10^6.
(3)
Collect the measurement state data and determine a data length L d . Then, form the output data y ( k ) and the information vector ϕ ( k ) .
(4)
Compute r(k) by r(k) = α r(k−1) + ‖ϕ(k)‖².
(5)
Update the estimate θ̂(k) by θ̂(k) = θ̂(k−1) + ϕᵀ(k)ε(k)/r(k), where ε(k) = [y₁(k) − ϕ₁(k)θ̂(k−1), y₂(k) − ϕ₂(k)θ̂(k−1)]ᵀ.
(6)
If k = L_d/2, reset the forgetting factor α to a larger value in (0, 1]; in either case, go to step (7).
(7)
If k < L d , increase k by 1 and go to step (3); otherwise, stop the procedure and output the estimate θ ^ ( L d ) of the parameter vector θ .
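A Python sketch of Algorithm 3 on noise-free data follows (simulation settings are illustrative assumptions, and the α-reset of step (6) is omitted for brevity). Consistent with the discussion above, the SG algorithm needs far more samples than RLS to approach the true parameters:

```python
import numpy as np

a, b, c1, c2, mu, J = 0.1, 1.0, 1.0, 0.5, 100.0, 0.5
theta_true = np.array([mu, (a + b) * mu, a * b * mu, mu * J, c1, c2])

def phi(v1, w1):
    return np.array([[-v1**3 - w1, v1**2, -v1, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0, v1, -w1]])

# Noise-free forward-Euler data (assumed step size)
T, N = 1e-3, 20_000
v = np.empty(N + 1); w = np.empty(N + 1)
v[0], w[0] = 0.3, 0.6
for k in range(N):
    v[k + 1] = v[k] + T * mu * (v[k] * (v[k] - a) * (b - v[k]) - w[k] + J)
    w[k + 1] = w[k] + T * (c1 * v[k] - c2 * w[k])

# SG recursion (10) with forgetting factor alpha = 0.8
alpha, p0 = 0.8, 1e6
th = np.full(6, 1.0 / p0)
r = 1.0
for k in range(1, N + 1):
    ph = phi(v[k - 1], w[k - 1])
    y = np.array([(v[k] - v[k - 1]) / T, (w[k] - w[k - 1]) / T])
    r = alpha * r + np.sum(ph ** 2)        # r(k) = alpha r(k-1) + ||phi(k)||^2
    th = th + ph.T @ (y - ph @ th) / r     # normalized gradient step

delta = np.linalg.norm(th - theta_true) / np.linalg.norm(theta_true)
print(delta)  # decreases slowly, but well below its initial value of ~1
```

Because r(k) ≥ ‖ϕ(k)‖², each update is non-expansive in the parameter error, which is why the recursion is stable even with a crude initialization.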
Similarly, to obtain better estimation accuracy, we can derive the multi-innovation stochastic gradient (MISG) parameter estimation formulas [28], using the Y(p, k) and Φ(p, k) defined in Section 4.1, as follows:
$$\begin{aligned} \hat{\theta}(k) &= \hat{\theta}(k-1) + \frac{1}{r(k)}\,\Phi(p,k)\,E(p,k), \\ E(p,k) &= Y(p,k) - \Phi^{T}(p,k)\,\hat{\theta}(k-1), \\ r(k) &= \alpha\, r(k-1) + \|\Phi(p,k)\|^{2}, \qquad r(0) = 1. \end{aligned} \tag{11}$$
Compared with the SG algorithm in (10), which uses only the current data, the innovation vector E(p, k) in (11) contains not only the current innovation but also past innovations, which can improve the convergence rate over the SG algorithm. Moreover, the available data are used repeatedly in the MISG algorithm, and such a treatment can enhance the accuracy of the parameter estimation [24]. The MISG parameter estimation algorithm for the FHN model is summarized in Algorithm 4.
Algorithm 4 MISG algorithm
(1)
Discretize the FHN model based on the explicit forward Euler method with step size T.
(2)
Initialization: set a small forgetting factor α ∈ (0, 1] and let k = 1, θ̂(0) = 1_ñ/p_0, r(0) = 1, p_0 = 10^6.
(3)
Collect the measurement state data and determine a data length L d . Then, form the output data y ( k ) and the information vector ϕ ( k ) .
(4)
Given an innovation length p, form Y ( p , k ) by (7) and Φ ( p , k ) by (8).
(5)
Compute r(k) by r(k) = α r(k−1) + ‖Φ(p, k)‖².
(6)
Update the estimate θ̂(k) by θ̂(k) = θ̂(k−1) + Φ(p, k)E(p, k)/r(k), where E(p, k) = Y(p, k) − Φᵀ(p, k)θ̂(k−1).
(7)
If k = L_d/2, reset the forgetting factor α to a larger value in (0, 1]; in either case, go to step (8).
(8)
If k < L d , increase k by 1 and go to step (3); otherwise, stop the procedure and output the estimate θ ^ ( L d ) of the parameter vector θ .
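To mirror the comparison made later in Section 5, the sketch below runs SG and MISG (p = 3) side by side on the same noise-free data; all simulation settings are illustrative assumptions. The stacked innovation lets MISG reuse the last p data pairs at every step.

```python
import numpy as np

a, b, c1, c2, mu, J = 0.1, 1.0, 1.0, 0.5, 100.0, 0.5
theta_true = np.array([mu, (a + b) * mu, a * b * mu, mu * J, c1, c2])

def phi(v1, w1):
    return np.array([[-v1**3 - w1, v1**2, -v1, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0, v1, -w1]])

T, N, p, alpha, p0 = 1e-3, 20_000, 3, 0.8, 1e6
v = np.empty(N + 1); w = np.empty(N + 1)
v[0], w[0] = 0.3, 0.6
for k in range(N):
    v[k + 1] = v[k] + T * mu * (v[k] * (v[k] - a) * (b - v[k]) - w[k] + J)
    w[k + 1] = w[k] + T * (c1 * v[k] - c2 * w[k])

def y_of(k):
    return np.array([(v[k] - v[k - 1]) / T, (w[k] - w[k - 1]) / T])

# SG, i.e., MISG with p = 1 (recursion (10))
th_sg, r = np.full(6, 1.0 / p0), 1.0
for k in range(1, N + 1):
    ph = phi(v[k - 1], w[k - 1])
    r = alpha * r + np.sum(ph ** 2)
    th_sg = th_sg + ph.T @ (y_of(k) - ph @ th_sg) / r

# MISG with innovation length p (recursion (11))
th_mi, r = np.full(6, 1.0 / p0), 1.0
for k in range(p, N + 1):
    Y = np.concatenate([y_of(k - j) for j in range(p)])
    Phi = np.hstack([phi(v[k - j - 1], w[k - j - 1]).T for j in range(p)])
    r = alpha * r + np.sum(Phi ** 2)       # r(k) = alpha r(k-1) + ||Phi(p,k)||^2
    th_mi = th_mi + Phi @ (Y - Phi.T @ th_mi) / r

ds = np.linalg.norm(th_sg - theta_true) / np.linalg.norm(theta_true)
dm = np.linalg.norm(th_mi - theta_true) / np.linalg.norm(theta_true)
print(ds, dm)
```

In the paper's own experiments (Table 3 versus Table 4), the MISG estimates with p = 3 are consistently more accurate than the SG estimates at the same k; this sketch is only an indicative reproduction of that trend.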

5. Simulations

For the purpose of illustration, we consider the FHN model [25,26] described in the dimensionless form (2). In the simulation, the parameters are taken as a = 0.1, b = 1, c1 = 1, c2 = 0.5, μ = 100 and J = 0.5. The FHN model is first discretized based on the explicit forward Euler method with step size T = 10 ms. Then, taking the initial values v(0) = 0.3, w(0) = 0.6, we perform the simulations based on the identification algorithms in Section 4. With the data length L_d = 20,000, the forgetting factor λ is chosen as 0.99 for the RLS and MIRLS algorithms. For both the SG and MISG algorithms, we take the forgetting factor α = 0.8. To compare the estimation performance of the RLS, SG, MIRLS and MISG algorithms, we apply these four algorithms to estimate the parameter vector θ of the neuron system, respectively. Note that, when p = 1, the MIRLS algorithm reduces to the RLS algorithm and the MISG algorithm reduces to the SG algorithm. For the MIRLS and MISG algorithms, two cases with different innovation lengths, p = 3 and p = 5, are considered. To quantify the estimation accuracy and clearly compare the performance of the four algorithms, we consider two different noise levels, i.e., σ² = 0.2² and σ² = 0.5². The parameter estimates and their errors for the different noise variances are shown in Table 1, Table 2, Table 3 and Table 4. The estimation errors δ versus k are shown in Figure 2 and Figure 3, where δ := ‖θ̂ − θ‖/‖θ‖.
From Table 1 and Table 2, it is seen that both the RLS and MIRLS algorithms have a fast convergence rate and a high estimation accuracy, which can also be observed from Figure 2 and Figure 3. On the other hand, Table 3 and Table 4 show that the SG and MISG algorithms require a data length of around L_d = 20,000 to achieve an acceptable estimation accuracy. Though not shown in the tables, the MIRLS algorithm with p = 5 enjoys the most accurate parameter estimates and the fastest convergence rate. From Figure 2 and Figure 3, we can see that, for the same batch of data, the SG algorithm extracts less information from the measured data and uses it less efficiently than the RLS algorithm, which results in much slower convergence. Hence, to show the effects clearly, the results of the RLS-type and SG-type algorithms are shown in two separate figures. Meanwhile, from Figure 2, it can be found that, due to the high efficiency of the RLS algorithm, the MIRLS algorithm has limited room to further improve the accuracy of the parameter estimation. However, the computational advantage of the MIRLS algorithm should become apparent in the case of missing data, which remains our future work. From Figure 3, it is clear that the MISG algorithm has a faster convergence rate than the SG algorithm (i.e., the MISG algorithm with p = 1), and the MISG estimates with p = 3 and p = 5 have higher accuracy than the SG estimates. In fact, the parameter estimation errors of the MISG algorithm become smaller and smaller as the innovation length p increases.

6. Conclusions

In this paper, we have addressed the parameter estimation problem of the FHN neuron model with limit cycles by utilizing the RLS, MIRLS, SG and MISG algorithms. The MIRLS and MISG identification algorithms, which take past innovations into account, have been applied to improve the identification accuracy. The framework using these algorithms could serve as a template for performing parameter inference on more complex neuronal models. Finally, simulation results have been provided to corroborate the effectiveness of the proposed algorithms.

Author Contributions

X.L., X.C. and B.C. conceived and designed the theoretical framework; X.L. and X.C. performed the experiments; X.L., X.C. and B.C. analyzed the data; and X.L. wrote the paper with contributions from all authors. All authors read and approved the submitted manuscript, agreed to be listed, and accepted this version for publication.

Funding

This work is partially supported by National Natural Science Foundation of China (61473136) and Natural Science Foundation of Jiangsu Higher Education Institutions of China (18KJB180026).

Acknowledgments

The authors thank the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Buzsaki, G.; Draguhn, A. Neuronal oscillations in cortical networks. Science 2004, 304, 1926–1929. [Google Scholar] [CrossRef] [PubMed]
  2. Singer, W. Neuronal synchrony: A versatile code for the definition of relations. Neuron 1999, 24, 49–65. [Google Scholar] [CrossRef]
  3. Ljung, L. System Identification: Theory for the User; Prentice-Hall: Englewood Cliffs, NJ, USA, 1987. [Google Scholar]
  4. Ljung, L. Perspectives on system identification. Annu. Rev. Control 2010, 34, 1–12. [Google Scholar] [CrossRef] [Green Version]
  5. Juang, J.N.; Phan, M.Q. Identification and Control of Mechanical Systems; Cambridge University Press: Cambridge, UK, 2001. [Google Scholar]
  6. Moonen, M.; Ramos, J. A subspace algorithm for balanced state space system identification. IEEE Trans. Autom. Control 1993, 38, 1727–1729. [Google Scholar] [CrossRef]
  7. Ding, F. System Identification-New Theory and Methods; Science Press: Beijing, China, 2013. [Google Scholar]
  8. Pappalardo, C.M.; Guida, D. A time-domain system identification numerical procedure for obtaining linear dynamical models of multibody mechanical systems. Archiv. Appl. Mech. 2018, 88, 1325–1347. [Google Scholar] [CrossRef]
  9. Pappalardo, C.M.; Guida, D. System identification algorithm for computing the modal parameters of linear mechanical systems. Machines 2018, 6, 1–20. [Google Scholar]
  10. Lynch, E.P.; Houghton, C.J. Parameter estimation of neuron models using in-vitro and in-vivo electrophysiological data. Front. Neuroinf. 2015, 9, 10. [Google Scholar] [CrossRef] [PubMed]
  11. Duan, C.; Zhan, Y. The response of a linear monostable system and its application in parameters estimation for PSK signals. Phys. Lett. A 2016, 380, 1358–1362. [Google Scholar] [CrossRef]
  12. Pappalardo, C.M.; Guida, D. System identification and experimental modal analysis of a frame structure. Eng. Lett. 2018, 26, 56–68. [Google Scholar]
  13. Kenné, G.; Ahmed-Ali, T.; Lamnabhi-Lagarrigue, F.; Arzandé, A. Nonlinear systems time-varying parameter estimation: application to induction motors. Electr. Power Syst. Res. 2008, 78, 1881–1888. [Google Scholar] [CrossRef]
  14. Tabak, J.; Murphey, R.; Moore, L.E. Parameter estimation methods for single neuron models. J. Comput. Neurosci. 2000, 9, 215–236. [Google Scholar] [CrossRef] [PubMed]
  15. Mullowney, P.; Iyengar, S. Parameter estimation for a leaky integrate-and-fire neuronal model from ISI data. J. Comput. Neurosci. 2008, 24, 179–194. [Google Scholar] [CrossRef] [PubMed]
  16. Vavoulis, D.V.; Straub, V.A.; Aston, J.A.D.; Feng, F. A self-organizing state-space-model approach for parameter estimation in Hodgkin-Huxley-type models of single neurons. PLoS Comput. Biol. 2012, 8, e1002401. [Google Scholar] [CrossRef] [PubMed]
  17. Jensen, A.; Ditlevsen, S.; Kessler, M.; Papaspiliopoulos, O. Markov chain Monte Carlo approach to parameter estimation in the FitzHugh–Nagumo model. Phys. Rev. E 2012, 86, 041114. [Google Scholar] [CrossRef] [PubMed]
  18. Arnold, A.; Lloyd, A.L. An approach to periodic, time-varying parameter estimation using nonlinear filtering. Inverse Probl. 2018, 34, 105005. [Google Scholar] [CrossRef]
  19. Concha, A.; Garrido, R. Parameter estimation of the FitzHugh–Nagumo neuron model using integrals over finite time periods. J. Comput. Nonlinear Dyn. 2015, 10, 021023. [Google Scholar] [CrossRef]
  20. Che, Y.; Geng, L.; Han, C.; Cui, S.; Wang, J. Parameter estimation of the FitzHugh–Nagumo model using noisy measurements for membrane potential. Chaos 2012, 22, 023139. [Google Scholar] [CrossRef] [PubMed]
  21. Ding, F. Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling. Appl. Math. Model. 2013, 37, 1694–1704. [Google Scholar] [CrossRef]
  22. Li, J.; Hua, C.; Tang, Y.; Guan, X. Stochastic gradient with changing forgetting factor-based parameter identification for Wiener systems. Appl. Math. Lett. 2014, 33, 40–45. [Google Scholar] [CrossRef]
  23. Chen, J.; Lv, L.; Ding, R. Multi-innovation stochastic gradient algorithms for dual-rate sampled systems with preload nonlinearity. Appl. Math. Lett. 2013, 26, 124–129. [Google Scholar] [CrossRef]
  24. Ding, F.; Chen, T. Performance analysis of multi-innovation gradient type identification methods. Automatica 2007, 43, 1–14. [Google Scholar] [CrossRef]
  25. Keener, J.; Sneyd, J. Mathematical Physiology; Springer: New York, NY, USA, 2009; pp. 1–47. [Google Scholar]
  26. Danzl, P.; Hespanha, J.; Moehlis, J. Event-based minimum-time control of oscillatory neuron models: phase randomization, maximal spike rate increase, and desynchronization. Biol. Cybern. 2009, 101, 387–399. [Google Scholar] [CrossRef] [PubMed]
  27. Ding, F.; Wang, Y.J.; Dai, J.Y.; Li, Q.S.; Chen, Q.J. A recursive least squares parameter estimation algorithm for output nonlinear autoregressive systems using the input-output data filtering. J. Franklin Inst. 2017, 354, 6938–6955. [Google Scholar] [CrossRef]
  28. Xu, L.; Ding, F.; Gu, Y.; Alsaedi, A.; Hayat, T. A multi-innovation state and parameter estimation algorithm for a state space system with d-step state-delay. Signal Process. 2017, 140, 97–103. [Google Scholar] [CrossRef]
Figure 1. A limit cycle of the FHN model.
Figure 2. Parameter estimation errors δ versus k using RLS and MIRLS.
Figure 3. Parameter estimation errors δ versus k using SG and MISG.
Table 1. The RLS estimates and errors.

| σ² | k | μ | (a+b)μ | abμ | μJ | c1 | c2 | δ (%) |
|---|---|---|---|---|---|---|---|---|
| 0.2² | 10 | 3.7906 | −0.7233 | 0.6077 | 2.9444 | 2.6491 | 0.3230 | 98.2022 |
| | 20 | 1.6180 | 4.6890 | −6.8895 | 2.7357 | 1.8070 | 0.5639 | 97.0998 |
| | 50 | 94.7731 | 102.8236 | 8.3295 | 47.5574 | 0.9996 | 0.6066 | 5.9548 |
| | 100 | 99.1037 | 108.8017 | 9.7432 | 49.5408 | 1.0207 | 0.5576 | 1.0100 |
| | 150 | 99.2001 | 108.8906 | 9.7232 | 49.5928 | 1.0401 | 0.5390 | 0.9256 |
| | 200 | 99.5227 | 109.3771 | 9.8946 | 49.7620 | 1.0404 | 0.5346 | 0.5272 |
| 0.5² | 10 | 19.0174 | −2.3066 | 7.1256 | 10.3718 | 4.2456 | 0.4154 | 91.6760 |
| | 20 | 11.4943 | 28.0891 | −11.5647 | 8.2026 | 2.0516 | 0.9504 | 82.3618 |
| | 50 | 93.5511 | 100.3815 | 7.4373 | 47.0843 | 1.2176 | 0.9968 | 7.7788 |
| | 100 | 98.8246 | 108.3058 | 9.5814 | 49.3799 | 1.0948 | 0.6696 | 1.4012 |
| | 150 | 98.9660 | 108.4155 | 9.5199 | 49.4549 | 1.1355 | 0.6290 | 1.2950 |
| | 200 | 99.7218 | 109.5163 | 9.8858 | 49.8592 | 1.1210 | 0.5971 | 0.3861 |
| True values | | 100.0000 | 110.0000 | 10.0000 | 50.0000 | 1.0000 | 0.5000 | |
Table 2. The MIRLS estimates and errors (p = 3).

| σ² | k | μ | (a+b)μ | abμ | μJ | c1 | c2 | δ (%) |
|---|---|---|---|---|---|---|---|---|
| 0.2² | 10 | 7.9850 | −1.6092 | 2.5997 | 4.8664 | 2.4115 | 0.3541 | 96.5310 |
| | 20 | 3.4710 | 8.9666 | −7.0001 | 3.6340 | 1.7240 | 0.5638 | 94.2987 |
| | 50 | 98.1459 | 107.4963 | 9.4579 | 49.1372 | 0.9894 | 0.6051 | 2.0867 |
| | 100 | 99.6327 | 109.4124 | 9.8139 | 49.8019 | 1.0195 | 0.5559 | 0.4751 |
| | 150 | 99.5767 | 109.3555 | 9.7988 | 49.7747 | 1.0364 | 0.5407 | 0.5280 |
| | 200 | 99.7563 | 109.6468 | 9.9260 | 49.8776 | 1.0393 | 0.5327 | 0.2896 |
| 0.5² | 10 | 33.4907 | −4.5706 | 13.6947 | 17.0438 | 3.7755 | 0.4402 | 86.9092 |
| | 20 | 24.8108 | 41.9383 | −8.4833 | 14.4792 | 1.9920 | 0.9177 | 69.3805 |
| | 50 | 96.1344 | 104.1168 | 8.4045 | 48.2817 | 1.1906 | 0.9819 | 4.7325 |
| | 100 | 99.3021 | 108.8667 | 9.6535 | 49.6153 | 1.0919 | 0.6652 | 0.9166 |
| | 150 | 99.3244 | 108.8691 | 9.6029 | 49.6278 | 1.1276 | 0.6338 | 0.9145 |
| | 200 | 99.9346 | 109.7613 | 9.9147 | 49.9651 | 1.1189 | 0.5926 | 0.1935 |
| True values | | 100.0000 | 110.0000 | 10.0000 | 50.0000 | 1.0000 | 0.5000 | |
Table 3. The SG estimates and errors.

| σ² | k | μ | (a+b)μ | abμ | μJ | c1 | c2 | δ (%) |
|---|---|---|---|---|---|---|---|---|
| 0.2² | 500 | 13.3084 | −7.8497 | 13.6492 | 11.1506 | 1.0467 | 0.4382 | 96.3408 |
| | 1000 | 21.9925 | −4.8275 | 9.1816 | 21.6546 | 1.0218 | 0.5516 | 90.1498 |
| | 5000 | 61.4462 | 38.2353 | 2.5148 | 31.3570 | 1.0511 | 0.5234 | 53.3865 |
| | 10,000 | 79.7290 | 72.4560 | 5.9073 | 40.7280 | 1.0606 | 0.5298 | 27.9030 |
| | 15,000 | 88.3858 | 90.8851 | 7.0928 | 46.6020 | 1.0202 | 0.5696 | 14.5130 |
| | 20,000 | 94.5236 | 99.8808 | 8.9198 | 47.4423 | 0.9398 | 0.3133 | 7.5321 |
| 0.5² | 500 | 15.7472 | −7.8692 | 14.3078 | 8.5323 | 1.0953 | 0.3400 | 95.9265 |
| | 1000 | 22.1267 | −4.0507 | 8.4011 | 23.2820 | 1.0782 | 0.6592 | 89.5044 |
| | 5000 | 61.7189 | 39.9100 | 1.8494 | 34.0418 | 1.1217 | 0.5866 | 52.0776 |
| | 10,000 | 81.1552 | 73.8568 | 5.7945 | 40.9238 | 0.8981 | 0.5475 | 26.7046 |
| | 15,000 | 90.2873 | 91.2729 | 7.7195 | 45.1400 | 1.0227 | 0.6214 | 13.8507 |
| | 20,000 | 95.0617 | 100.5981 | 8.9675 | 47.9177 | 0.8727 | 0.0131 | 6.9244 |
| True values | | 100.0000 | 110.0000 | 10.0000 | 50.0000 | 1.0000 | 0.5000 | |
Table 4. The MISG estimates and errors (p = 3).

| σ² | k | μ | (a+b)μ | abμ | μJ | c1 | c2 | δ (%) |
|---|---|---|---|---|---|---|---|---|
| 0.2² | 500 | 19.1634 | −11.8220 | 23.2437 | 16.5106 | 1.1374 | 0.4206 | 95.8047 |
| | 1000 | 35.0060 | −8.8759 | 18.5691 | 37.0688 | 1.0426 | 0.5921 | 86.7670 |
| | 5000 | 85.1399 | 57.0311 | −0.6404 | 42.6241 | 1.0992 | 0.5409 | 35.9599 |
| | 10,000 | 94.1694 | 90.4475 | 6.3213 | 47.8311 | 1.0994 | 0.5577 | 13.2635 |
| | 15,000 | 97.3389 | 103.1461 | 8.3135 | 49.9311 | 1.0338 | 0.6045 | 4.8003 |
| | 20,000 | 99.1619 | 107.5047 | 9.5947 | 49.7100 | 0.9374 | 0.2108 | 1.7150 |
| 0.5² | 500 | 23.1916 | −11.9815 | 24.6474 | 12.4417 | 1.2785 | 0.2591 | 95.2371 |
| | 1000 | 35.3666 | −7.5615 | 17.1582 | 39.9121 | 1.1419 | 0.7780 | 85.7224 |
| | 5000 | 85.5678 | 59.2415 | −0.6955 | 44.1461 | 1.2350 | 0.6827 | 34.4612 |
| | 10,000 | 95.2375 | 91.8121 | 6.4314 | 47.7921 | 0.9014 | 0.6650 | 12.2575 |
| | 15,000 | 98.4425 | 103.4602 | 8.5951 | 49.2526 | 1.0869 | 0.7208 | 4.3983 |
| | 20,000 | 99.6694 | 108.1012 | 9.7009 | 50.1496 | 0.9396 | −0.2527 | 1.3341 |
| True values | | 100.0000 | 110.0000 | 10.0000 | 50.0000 | 1.0000 | 0.5000 | |

Citation: Lou, X.; Cai, X.; Cui, B. Parameter Estimation of a Class of Neural Systems with Limit Cycles. Algorithms 2018, 11, 169. https://doi.org/10.3390/a11110169