Article

Parameter Estimation for Hindmarsh–Rose Neurons

by Alexander L. Fradkov 1,2, Aleksandr Kovalchukov 1,2 and Boris Andrievsky 1,2,*
1 Control of Complex Systems Lab., Institute of Problems in Mechanical Engineering, Russian Academy of Sciences (IPME RAS), 61 Bol’shoy pr. V.O., 199178 Saint Petersburg, Russia
2 Faculty of Mathematics and Mechanics, Saint-Petersburg State University, 198504 Saint Petersburg, Russia
* Author to whom correspondence should be addressed.
Electronics 2022, 11(6), 885; https://doi.org/10.3390/electronics11060885
Submission received: 7 February 2022 / Revised: 1 March 2022 / Accepted: 9 March 2022 / Published: 11 March 2022
(This article belongs to the Special Issue Nonlinear Estimation Advances and Results)

Abstract

In this paper, a new adaptive model of a neuron, based on the third-order Hindmarsh–Rose model of a single neuron, is proposed. A learning algorithm for adaptive identification of the neuron parameters is proposed and analyzed both theoretically and by computer simulation. The algorithm is based on the Lyapunov function approach and a reduced adaptive observer. It allows one to estimate the parameters of a population of neurons if they are synchronized. Rigorous stability conditions for synchronization and identification are presented.

1. Introduction

The human brain is one of the most complex systems existing on Earth. Many researchers have worked for years to understand how the human brain functions, to create its model, and to reproduce its structure [1,2]. Recently, the study of whole-brain dynamics models has begun [3,4]. However, the overwhelming complexity demands a variety of simplified models of different degrees of complexity. The simplest models are represented as networks of biological neuron models connected by couplings. Typical models of this class are based on the celebrated Hodgkin–Huxley model and its simplifications: FitzHugh–Nagumo (FHN), Morris–Lecar (ML), Hindmarsh–Rose (HR), etc. New models still appear in the literature, e.g., [5]. The HR model is the simplest one that can exhibit most kinds of biological neuron behavior, such as spiking and bursting.
Chong et al. [6] studied the activity of neurons at the macroscopic level, described by what are known in the literature as “neural mass models”. To estimate the neurons’ activity, a robust circle criterion observer was proposed, and its application to estimating the average membrane potential of neuron populations in a single cortical column model was demonstrated. Further steps were based on network models. However, well-understood linear network models [7,8] are not suitable for modeling networks of biological neurons. A number of approaches to state estimation for neural populations are based on synchronization; see [9,10] and the references therein.
An important part of brain network modeling is the identification of the model parameters. Its first step is the identification of the building blocks: the models of single neurons or their subpopulations. Ideally, it should be performed based on the neuron state measurements. However, there exists the problem of uncertainty and incomplete measurements. A number of such approaches based on adaptation and learning are described in the literature. Dong et al. [11] dealt with the identification of the FHN model dynamics, employing a deterministic learning and interpolation method. For global identification, the FHN model was transformed in [11] into a set of ordinary differential equations (ODEs), and the dynamics of the approximation system were then identified by employing deterministic learning. In [12], an approach to identifying the topology and parameters of HR neural networks was proposed. For this purpose, the so-called generalized extremal optimization (GEO) was introduced, and a heuristic identification algorithm was employed. Identifying the topology of HR neural networks was also considered by Zhao et al. [13], who employed a sinusoidal disturbance to identify the topology at the stage when the complex network achieves synchronization. It was demonstrated by simulations that, compared with disturbing all the nodes, disturbing the key nodes alone can achieve a very good effect. In [14], an adaptive observer for asymptotic estimation of the parameters and states of a model of interconnected cortical columns was presented. The adopted model is capable of realistically reproducing the patterns seen on (intracranial) electroencephalograms (EEGs). The estimation of the parameters and states allows a better understanding of the mechanisms underlying neurological phenomena and can be used to predict the onset of epileptic seizures. Tang et al. [15] studied the effect of electromagnetic induction on the electrical activity of neurons, and a variable for the magnetic flow was used to extend the HR neuron model. Simulations demonstrated that the neuron model proposed in [15] can show multiple modes of electrical activity, depending on the time delay and the external forcing current. In [16], an approach based on adaptive observers was developed for partial identification of the HR model parameters. Malik and Mir [17] studied the synchronization of HR neurons, demonstrating that the coupled system shows several behaviors depending on the parameters of the HR model and the coupling function. Recently, Xu [18] proposed using an impulse response identification experiment with dynamical observations of increasing data length to capture the real-time information of systems and to serve online identification. In [18], a separable Newton recursive parameter estimation approach was developed, and its efficacy was demonstrated by Monte Carlo tests.
The models of biological neural networks and single neurons have many applications, e.g., in brain–computer interfaces [19,20,21]. A practical application of coupled HR neural networks was also discussed in [17]. In particular, it was demonstrated that the spiking network successfully encodes and decodes a time-varying input. Synchronization of the artificial HR neurons was also studied in [22,23]. An adaptive controller that provides synchronization of two connected HR neurons using only the output signal of the reference neuron was suggested in [22]. Andreev and Maksimenko [24] considered synchronization in a coupled neural network with inhibitory coupling. It was shown in [24] that in the case of a discrete neuron model, the periodic dynamics are manifested in the alternate excitation of various neural ensembles, whereas periodic modulation of the synchronization index of neural ensembles was observed in the continuous-time model.
In this paper, the problem of identifying the parameters of the HR neuron model is considered. To solve it, a reduced-order adaptation algorithm based on the speed gradient (SG) method and the feedback Kalman–Yakubovich lemma (FKYL) [25,26] is proposed. A rigorous statement about the convergence of the parameter estimates to the true values is formulated, and its proof is given. The performance of the identification procedure was analyzed by computer simulation.
The remainder of the paper is organized as follows. The problem statement is presented in Section 2. Section 3 presents the design of the adaptation algorithm. The main results are given in Section 4. Computer simulation results are described in Section 5. Concluding remarks and future work directions in Section 6 finalize the paper.

2. Problem Statement

Consider the classical Hindmarsh–Rose (HR) neuron model [27]:
$$\begin{aligned}
\dot{x}_1(t) &= x_2(t) - a\,x_1^3(t) + b\,x_1^2(t) - x_3(t) + I,\\
\dot{x}_2(t) &= c - d\,x_1^2(t) - x_2(t),\\
\dot{x}_3(t) &= \varepsilon s\,x_1(t) - \varepsilon s r - \varepsilon\,x_3(t).
\end{aligned}\tag{1}$$
The variable $x_1(t)$ is the membrane potential, while $x_2(t)$ and $x_3(t)$ are the fast and slow ionic currents, respectively. The variables $x_1(t)$, $x_2(t)$, $x_3(t)$ form the state vector $x(t)$. The values $a$, $b$, $c$, $d$, $\varepsilon$, $s$, $r$, $I$ are model parameters. All quantities are considered dimensionless.
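For illustration, the following minimal sketch (not part of the original study) integrates model (1) with SciPy; the parameter values are those of the regular bursting regime of Figure 1, while the time span and initial state are assumptions made for the example.

```python
# Sketch: simulating the HR model (1). Parameter values follow Figure 1
# (regular bursting); horizon and initial state are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def hr_rhs(t, x, a, b, c, d, eps, s, r, I):
    x1, x2, x3 = x
    return [x2 - a * x1**3 + b * x1**2 - x3 + I,    # membrane potential
            c - d * x1**2 - x2,                     # fast ionic current
            eps * s * x1 - eps * s * r - eps * x3]  # slow ionic current

params = (1.0, 3.0, 1.0, 5.0, 0.003, 4.0, 1.0, 0.0)  # a, b, c, d, eps, s, r, I
sol = solve_ivp(hr_rhs, (0.0, 2000.0), [0.1, 0.0, 0.0], args=params, max_step=0.1)
x1_trace = sol.y[0]  # membrane potential trace exhibiting bursting
```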
Assume that x 1 ( t ) , x 2 ( t ) , x 3 ( t ) can be measured. In some cases (e.g., in in vitro experiments), this assumption is quite realistic. In order to estimate the state and parameters of the HR neuron model (1), introduce an auxiliary system—adaptive model—as follows:
$$\begin{aligned}
\dot{z}_1(t) &= z_2(t) - \hat{a}\,x_1^3(t) + \hat{b}\,x_1^2(t) - z_3(t) + \hat{I} + k_1\big(x_1(t) - z_1(t)\big),\\
\dot{z}_2(t) &= \hat{c} - \hat{d}\,x_1^2(t) - z_2(t),\\
\dot{z}_3(t) &= \widehat{\varepsilon s}\,x_1(t) - \widehat{\varepsilon s r} - \varepsilon\,z_3(t).
\end{aligned}\tag{2}$$
The variables z 1 ( t ) ,   z 2 ( t ) ,   z 3 ( t ) represent the state vector z ( t ) of the adaptive model (2). The term k 1 ( x 1 ( t ) z 1 ( t ) ) is the stabilizing term, typical for observers.
In this study, we assume for simplicity that the value $\varepsilon$, characterizing the relative rate of the fast and slow currents, is known. Then, one may introduce the vector of tunable parameters $\theta = [\theta_1, \ldots, \theta_7]^T$, where $\theta_1 = \hat{a}$, $\theta_2 = \hat{b}$, $\theta_3 = \hat{I}$, $\theta_4 = \hat{c}$, $\theta_5 = \hat{d}$, $\theta_6 = \widehat{\varepsilon s}$, $\theta_7 = \widehat{\varepsilon s r}$. The problem is to design an adaptation/learning algorithm for $\theta$ ensuring the goals:
$$\lim_{t\to\infty}\|z(t) - x(t)\| = 0,\tag{3}$$
$$\lim_{t\to\infty}\|\theta(t) - \theta^*\| = 0,\tag{4}$$
where $\theta^* = [a,\; b,\; I,\; c,\; d,\; \varepsilon s,\; \varepsilon s r]^T$ is the vector of true parameter values.

3. Adaptation/Learning Algorithm Design

The equations for the state estimation error e ( t )   =   x ( t ) z ( t ) are as follows:
$$\dot{e} = A_k e + B_1\big(\tilde\theta_1(-x_1^3) + \tilde\theta_2\,x_1^2 + \tilde\theta_3\cdot 1\big) + B_2\big(\tilde\theta_4 + \tilde\theta_5(-x_1^2)\big) + B_3\big(\tilde\theta_6\,x_1 + \tilde\theta_7(-1)\big),\tag{5}$$
where $\tilde\theta_i = \theta_i^* - \theta_i(t)$ are the parameter errors and
$$A_k = \begin{pmatrix} -k_1 & 1 & -1\\ 0 & -1 & 0\\ 0 & 0 & -\varepsilon \end{pmatrix},\quad B_1 = \begin{pmatrix}1\\0\\0\end{pmatrix},\quad B_2 = \begin{pmatrix}0\\1\\0\end{pmatrix},\quad B_3 = \begin{pmatrix}0\\0\\1\end{pmatrix}.$$
For the design of the adaptation/learning algorithm, we used the speed gradient method [28], which suggests changing the tunable vector $\theta(t)$ in the direction opposite to the gradient, with respect to $\theta$, of the rate of change of the goal function $Q(e)$, where $e = x - z$, along the trajectories of the system (1), (2). The quadratic goal function $Q(e) = e^T P e$ is chosen, where $P = P^T > 0$ is a positive-definite matrix to be determined later. Since $Q(e)$ depends only on $e$, the error model (5) can be used instead of (1), (2) at this stage. Applying the SG methodology [25,28] yields the following algorithm:
$$\begin{aligned}
\dot{\hat\theta}_1 &= \gamma\,e^T P B_1\cdot(-x_1^3), & \dot{\hat\theta}_2 &= \gamma\,e^T P B_1\cdot x_1^2, & \dot{\hat\theta}_3 &= \gamma\,e^T P B_1\cdot 1,\\
\dot{\hat\theta}_4 &= \gamma\,e^T P B_2\cdot 1, & \dot{\hat\theta}_5 &= \gamma\,e^T P B_2\cdot(-x_1^2), & \dot{\hat\theta}_6 &= \gamma\,e^T P B_3\cdot x_1,\\
\dot{\hat\theta}_7 &= \gamma\,e^T P B_3\cdot(-1).
\end{aligned}\tag{6}$$
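A minimal sketch of how the coupled system (1), (2) with the adaptation law (6) could be simulated, assuming explicit Euler integration; the gains, step size, and horizon are illustrative, and P is any solution of the Lyapunov equation (9) discussed in the next section.

```python
# Sketch: simulating (1), (2), (6) by explicit Euler integration.
# th holds the estimates in the order [a^, b^, I^, c^, d^, (eps s)^, (eps s r)^].
import numpy as np

def identify(x0, z0, theta0, P, k1=1.0, gamma=1.0, eps=0.003,
             a=1.0, b=3.0, c=1.0, d=5.0, s=4.0, r=1.0, I=0.0,
             dt=1e-3, T=200.0):
    x, z, th = np.array(x0, float), np.array(z0, float), np.array(theta0, float)
    B = np.eye(3)  # columns are B1, B2, B3
    for _ in range(int(T / dt)):
        x1 = x[0]
        # true neuron (1)
        dx = np.array([x[1] - a*x1**3 + b*x1**2 - x[2] + I,
                       c - d*x1**2 - x[1],
                       eps*s*x1 - eps*s*r - eps*x[2]])
        # adaptive model (2)
        dz = np.array([z[1] - th[0]*x1**3 + th[1]*x1**2 - z[2] + th[2]
                       + k1*(x1 - z[0]),
                       th[3] - th[4]*x1**2 - z[1],
                       th[5]*x1 - th[6] - eps*z[2]])
        e = x - z
        # speed-gradient adaptation (6)
        w1, w2, w3 = e @ P @ B[:, 0], e @ P @ B[:, 1], e @ P @ B[:, 2]
        dth = gamma * np.array([w1*(-x1**3), w1*x1**2, w1,
                                w2, w2*(-x1**2), w3*x1, -w3])
        x, z, th = x + dt*dx, z + dt*dz, th + dt*dth
    return th  # expected to approach [a, b, I, c, d, eps*s, eps*s*r]
```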

4. Main Results

The following proposition concerning the convergence of the adaptive model to the true one holds.
Theorem 1.
Let the parameter $\varepsilon$ of the HR neuron model be known, and let $k_1 > 0$ and $\gamma > 0$. Then, the goals (3), (4) are achieved for any initial conditions of the system (1), (2), (6).
The proof is based on exploiting the Lyapunov function:
$$V(e, \tilde\theta) = e^T P e + \|\tilde\theta\|^2/(2\gamma).\tag{7}$$
Evaluating the derivative of (7) along the trajectories of the system (1), (2), (6), one obtains:
$$\dot{V} = e^T\big(A_k^T P + P A_k\big)e + 2e^T P B_1\big({-\tilde\theta_1} x_1^3 + \tilde\theta_2 x_1^2 + \tilde\theta_3\big) + 2e^T P B_2\big(\tilde\theta_4 - \tilde\theta_5 x_1^2\big) + 2e^T P B_3\big(\tilde\theta_6 x_1 - \tilde\theta_7\big) + \sum_{i=1}^{7}\tilde\theta_i\dot{\tilde\theta}_i/\gamma.\tag{8}$$
Applying (6) to (8) yields $\dot{V} = e^T(A_k^T P + P A_k)e$. The condition $\dot{V} < 0$ for all $e \neq 0$ is fulfilled when
$$A_k^T P + P A_k = -Q,\tag{9}$$
where $Q = Q^T > 0$. Since the eigenvalues of $A_k$ have negative real parts, the matrix $P$ introduced above is obtained as the solution of the Lyapunov equation (9). The boundedness of $V(e(t))$ follows from [29], where it is shown that the solutions of system (1) are bounded and that the synchronization error is bounded as well.
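As a side note, $P$ can be computed numerically; below is a sketch with SciPy, assuming the illustrative values $k_1 = 1$, $\varepsilon = 0.003$, and $Q = E$.

```python
# Sketch: solving the Lyapunov equation (9) for P.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

k1, eps = 1.0, 0.003  # illustrative values
A_k = np.array([[-k1, 1.0, -1.0],
                [0.0, -1.0, 0.0],
                [0.0, 0.0, -eps]])
Q = np.eye(3)
# solve_continuous_lyapunov(a, q) solves a X + X a^T = q,
# so A_k^T P + P A_k = -Q is obtained with a = A_k^T, q = -Q.
P = solve_continuous_lyapunov(A_k.T, -Q)
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)  # P is positive definite
```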
Definition 1.
A time-varying vector function $f : [0, +\infty) \to \mathbb{R}^m$ is persistently exciting (PE) if it is bounded and there exist $T > 0$ and $\alpha > 0$ such that for all $t > 0$:
$$\int_t^{t+T} f(s)\,f(s)^T\,ds \ge \alpha E,\tag{10}$$
where $E$ is the identity matrix.
In Theorem 5.1 of Fradkov [25], conditions for achieving goal (4) are derived. That theorem is reformulated below for the problem of interest.
Theorem 2.
If the trajectories of the systems (1), (2) are bounded as long as $e(t)$ and $\theta(t)$ are bounded, all roots of $\det(\lambda E - A_k)$ have negative real parts, and the functions $f_1(t) = [-x_1^3\;\; x_1^2\;\; 1]^T$, $f_2(t) = [1\;\; -x_1^2]^T$, $f_3(t) = [x_1\;\; -1]^T$ are PE, then the tunable parameters converge to the true values, and the goals (3), (4) are achieved.
Persistent excitation of a vector $\Phi(t)$ is equivalent to the existence of $\alpha > 0$, $T > 0$, $t_0 > 0$ such that for any $h \in \mathbb{R}^m$ with $|h| = 1$ and any $t > t_0$, $\max_{[t,\,t+T]}|\Phi(s)^T h| > \alpha$ [28]. The proof of PE for the signals above is quite simple. Suppose that at some moment $t$, $[-x_1(t)^3\;\; x_1(t)^2\;\; 1]\,h = 0$, i.e., $-x_1(t)^3 h_1 + x_1(t)^2 h_2 + h_3 = 0$. As a polynomial equation in $x_1$, it has no more than three separate roots; hence, for an arbitrarily close moment $t + \Delta t$, the equality fails. The PE property of the other regressors is proven similarly.
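The PE condition (10) can also be checked numerically along a simulated trajectory. The following sketch (an illustration, not from the paper) estimates the PE margin of the regressor $f_1$ over windows of length $T$, given samples of $x_1$ on a uniform time grid.

```python
# Sketch: numerical PE check for f1 = [-x1^3, x1^2, 1]^T via windowed Gram matrices.
import numpy as np

def pe_margin(x1, dt, T_win):
    """Smallest eigenvalue of int_t^{t+T} f1 f1^T ds over tested windows.

    A strictly positive return value indicates PE with alpha equal to that
    margin on the tested horizon. x1 must span more than one window.
    """
    n = int(T_win / dt)
    F = np.stack([-x1**3, x1**2, np.ones_like(x1)])  # f1(t) as columns
    margins = []
    for start in range(0, len(x1) - n, n):
        seg = F[:, start:start + n]
        gram = seg @ seg.T * dt  # Riemann-sum approximation of the integral in (10)
        margins.append(np.linalg.eigvalsh(gram)[0])
    return min(margins)
```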
Remark 1.
The identification algorithm (6) requires the measurement of all the state variables of the system. It is aimed at studying real neural cells “in vitro”, i.e., during real experiments with single cells, where the measurement of all the auxiliary variables can be implemented.

5. Computer Simulation

5.1. Neuron Modeling

For the systems (1), (2) and the algorithm (6), we conducted mathematical modeling to verify that the approach adequately solves the formulated problem. We built the neuron model (1) and observed different dynamical behaviors, which are presented in Figure 1, Figure 2 and Figure 3.
We chose the numerical values of $a$, $b$, $c$, $d$, $\varepsilon$, $s$, $r$, $I$ from [30], where a bifurcation analysis of the HR neuron was performed: $a = 1$, $c = 1$, $d = 5$, $s = 4$; the values of $b$, $r$, $\varepsilon$, and $I$ are given in the figure captions.
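For reference, the three regimes shown in Figures 1–3 correspond to the following parameter sets, collected here from the figure captions:

```python
# Regime-specific parameters from the captions of Figures 1-3;
# a = 1, c = 1, d = 5, s = 4 are fixed as in [30].
regimes = {
    "regular bursting":   dict(b=3.0, r=1.0, eps=0.003, I=0.0),  # Figure 1
    "irregular bursting": dict(b=2.8, r=1.6, eps=0.01,  I=3.7),  # Figure 2
    "regular spiking":    dict(b=3.0, r=1.0, eps=0.003, I=2.0),  # Figure 3
}
```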

5.2. Identification of the Neuron Model

Second, we simulated the systems (1) and (2) together in order to carry out the identification process by Algorithm (6). The parameters of the observer were adjusted online by the adaptation algorithm. The identification process can be seen in Figure 4: the observer parameters converge to the reference neuron parameters. Moreover, the synchronization error converges to zero, which is shown in Figure 5.
Further simulations showed that the convergence depends on the small parameter $\varepsilon$, and too small values may result in the divergence of the identification process. The reason is the linear part of the error equation (5). One may add a stabilizing (regularization) term $k e_3 = k(x_3(t) - z_3(t))$ to the third equation of (2). The linear part of $\dot{e}$ then takes the form:
$$A_k = \begin{pmatrix} -k_1 & 1 & -1\\ 0 & -1 & 0\\ 0 & 0 & -\varepsilon - k \end{pmatrix}.$$
After this modification, the convergence becomes faster. The accelerated identification process can be seen in Figure 6 and Figure 7. The synchronization error for irregular bursting is shown in Figure 8.
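A short sketch illustrating the effect of the regularization on the linear error dynamics; the values of $k_1$, $\varepsilon$, and $k$ are illustrative.

```python
# Sketch: the regularization term k*e3 shifts the slow eigenvalue of A_k
# from -eps to -(eps + k), which explains the faster convergence.
import numpy as np

def A_k(k1, eps, k=0.0):
    return np.array([[-k1, 1.0, -1.0],
                     [0.0, -1.0, 0.0],
                     [0.0, 0.0, -(eps + k)]])

print(np.linalg.eigvals(A_k(1.0, 0.003)))         # slow mode at -0.003
print(np.linalg.eigvals(A_k(1.0, 0.003, k=1.0)))  # slow mode moved to -1.003
```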

5.3. Robustness of Identification with Respect to Noise

Consider a time-varying, ultimately bounded disturbance $v(t)$ entering the first equation of model (1). If $v_{max}$ is small enough [31], then the adaptive system has globally bounded solutions. The identification process for the system with the disturbance is presented in Figure 9.
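A sketch of how such a disturbance could be generated in the simulation; the mean and variance follow Figure 9, while the clipping level v_max is an illustrative assumption that enforces boundedness.

```python
# Sketch: bounded Gaussian disturbance as in Figure 9 (mu = 4, sigma^2 = 0.01).
import numpy as np

rng = np.random.default_rng(seed=1)

def v(mu=4.0, sigma=0.1, v_max=5.0):
    # Gaussian sample clipped to [-v_max, v_max]; v_max is an assumed bound
    return float(np.clip(rng.normal(mu, sigma), -v_max, v_max))

# Inside the Euler loop of the identification sketch above, the disturbance
# enters the first equation of (1):  dx[0] += v()
```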

6. Conclusions

A new algorithm for Hindmarsh–Rose neuron model parameter estimation, based on the Lyapunov function approach and a reduced adaptive observer, was proposed. It allows one to estimate the parameters of a population of neurons if they are synchronized. Rigorous stability conditions for synchronization and identification were presented. Future work will be aimed at taking disturbances and measurement noises into account. It would also be interesting to use the Hindmarsh–Rose model instead of the FitzHugh–Nagumo model for studying gamma oscillation activity in the brain [32]. In future research, it is also planned to apply the results of [33], where a state filter for a time-delay state-space system with unknown parameters estimated from noisy observations was proposed, and of [34], where a gradient approach for adaptive filter design based on the fractional-order derivative and a linear filter was developed.

Author Contributions

Conceptualization, A.L.F.; data curation, A.K. and B.A.; formal analysis, A.L.F. and A.K.; funding acquisition, A.L.F.; investigation, A.K. and B.A.; methodology, A.L.F.; project administration, A.L.F.; software, A.K.; supervision, A.L.F.; writing—original draft, A.K. and B.A.; writing—review and editing, A.L.F. and B.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Higher Education of the Russian Federation (Project No. 075-15-2021-573, performed in the IPME RAS). The mathematical formulation of the problem was performed partly in SPbU under the support of SPbU Grant ID 84912397.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EEG	electroencephalogram
FHN	FitzHugh–Nagumo
FKYL	feedback Kalman–Yakubovich lemma
HR	Hindmarsh–Rose
LTI	linear time-invariant
ML	Morris–Lecar
ODE	ordinary differential equation
PE	persistent excitation
SG	speed gradient

References

  1. Ashby, W.R. Design for a Brain; Wiley: New York, NY, USA, 1960. [Google Scholar]
  2. Rabinovich, M.; Friston, K.J.; Varona, P. Principles of Brain Dynamics: Global State Interactions; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
  3. Breakspear, M. Dynamic models of large-scale brain activity. Nat. Neurosci. 2017, 20, 340–352. [Google Scholar] [CrossRef] [PubMed]
  4. Cofré, R.; Herzog, R.; Mediano, P.A.; Piccinini, J.; Rosas, F.E.; Sanz Perl, Y.; Tagliazucchi, E. Whole-brain models to explore altered states of consciousness from the bottom up. Brain Sci. 2020, 10, 626. [Google Scholar] [CrossRef] [PubMed]
  5. Belyaev, M.; Velichko, A. A Spiking Neural Network Based on the Model of VO2-Neuron. Electronics 2019, 8, 1065. [Google Scholar] [CrossRef] [Green Version]
  6. Chong, M.; Postoyan, R.; Nešić, D.; Kuhlmann, L.; Varsavsky, A. A robust circle criterion observer with application to neural mass models. Automatica 2012, 48, 2986–2989. [Google Scholar] [CrossRef] [Green Version]
  7. Dzhunusov, I.A.; Fradkov, A.L. Synchronization in networks of linear agents with output feedbacks. Automat. Remote Control 2011, 72, 1615–1626. [Google Scholar] [CrossRef]
  8. Furtat, I.; Fradkov, A.; Tsykunov, A. Robust synchronization of linear dynamical networks with compensation of disturbances. Intern. J. Robust Nonlinear Control. 2014, 24, 2774–2784. [Google Scholar] [CrossRef]
  9. Lehnert, J.; Hövel, P.; Selivanov, A.; Fradkov, A.L.; Schöll, E. Controlling cluster synchronization by adapting the topology. Phys. Rev. E 2014, 90, 042914. [Google Scholar] [CrossRef] [Green Version]
  10. Plotnikov, S. Synchronization conditions in networks of Hindmarsh–Rose systems. Cybern. Phys. 2021, 10, 254–259. [Google Scholar] [CrossRef]
  11. Dong, X.; Si, W.; Wang, C. Global Identification of FitzHugh–Nagumo Equation via Deterministic Learning and Interpolation. IEEE Access 2019, 7, 107334–107345. [Google Scholar] [CrossRef]
  12. Wang, L.; Yang, G.; Yeung, L. Identification of Hindmarsh–Rose Neuron Networks Using GEO Metaheuristic. In Proceedings of the Second International Conference on Advances in Swarm Intelligence-Volume Part I, Chiang Mai, Thailand, 26–30 July 2019; Springer: Berlin/Heidelberg, Germany, 2011; pp. 455–463. [Google Scholar]
  13. Zhao, J.; Aziz-Alaoui, M.A.; Bertelle, C.; Corson, N. Sinusoidal disturbance induced topology identification of Hindmarsh–Rose neural networks. Sci. China Inf. Sci. 2016, 59, 112205. [Google Scholar] [CrossRef]
  14. Postoyan, R.; Chong, M.; Nešić, D.; Kuhlmann, L. Parameter and state estimation for a class of neural mass models. In Proceedings of the 51st IEEE Conference Decision Control (CDC 2012), Maui, HI, USA, 10–13 December 2012; pp. 2322–2327. [Google Scholar] [CrossRef]
  15. Tang, K.; Wang, Z.; Shi, X. Electrical Activity in a Time-Delay Four-Variable Neuron Model under Electromagnetic Induction. Front. Comput. Neurosci. 2017, 11. [Google Scholar] [CrossRef] [Green Version]
  16. Mao, Y.; Tang, W.; Liu, Y.; Kocarev, L. Identification of biological neurons using adaptive observers. Cogn. Process. 2009, 10. [Google Scholar] [CrossRef]
  17. Malik, S.; Mir, A. Synchronization of Hindmarsh Rose Neurons. Neural Netw. 2020, 123, 372–380. [Google Scholar] [CrossRef]
  18. Xu, L. Separable Newton Recursive Estimation Method Through System Responses Based on Dynamically Discrete Measurements with Increasing Data Length. Int. J. Control Autom. Syst. 2022, 20, 432–443. [Google Scholar] [CrossRef]
  19. Bonci, A.; Fiori, S.; Higashi, H.; Tanaka, T.; Verdini, F. An Introductory Tutorial on Brain–Computer Interfaces and Their Applications. Electronics 2021, 10, 560. [Google Scholar] [CrossRef]
  20. Chung, M.A.; Lin, C.W.; Chang, C.T. The Human–Unmanned Aerial Vehicle System Based on SSVEP–Brain Computer Interface. Electronics 2021, 10, 25. [Google Scholar] [CrossRef]
  21. Choi, H.; Lim, H.; Kim, J.W.; Kang, Y.J.; Ku, J. Brain Computer Interface-Based Action Observation Game Enhances Mu Suppression in Patients with Stroke. Electronics 2019, 8, 1466. [Google Scholar] [CrossRef] [Green Version]
  22. Kovalchukov, A. Adaptive identification and synchronization for two Hindmarsh–Rose neurons. In Proceedings of the 2021 5th Scientific School Dynamics of Complex Networks and their Applications (DCNA), Kaliningrad, Russia, 13–15 September 2021; pp. 108–111. [Google Scholar] [CrossRef]
  23. Semenov, D.; Fradkov, A. Adaptive control of synchronization for the heterogeneous Hindmarsh–Rose network. In Proceedings of the 3rd IFAC Workshop on Cyber-Physical & Human Systems (CPHS), Shanghai, China, 3–5 December 2020. [Google Scholar] [CrossRef]
  24. Andreev, A.; Maksimenko, V. Synchronization in coupled neural network with inhibitory coupling. Cybern. Phys. 2019, 8, 199–204. [Google Scholar] [CrossRef]
  25. Fradkov, A.L. Cybernetical Physics: From Control of Chaos to Quantum Control; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  26. Andrievsky, B.R.; Churilov, A.N.; Fradkov, A.L. Feedback Kalman–Yakubovich lemma and its applications to adaptive control. In Proceedings of the 35th IEEE Conference Decision Control, Kobe, Japan, 13 December 1996; Volume 4, pp. 4537–4542. [Google Scholar]
  27. Hindmarsh, J.L.; Rose, R. A model of neuronal bursting using three coupled first order differential equations. Proc. R. Soc. London. Ser. B. Biol. Sci. 1984, 221, 87–102. [Google Scholar]
  28. Fradkov, A.L.; Miroshnik, I.V.; Nikiforov, V.O. Nonlinear and Adaptive Control of Complex Systems; Mathematics and Its Applications; Springer: Dordrecht, The Netherlands, 1999; Volume MAIA 491. [Google Scholar] [CrossRef]
  29. Semenov, D.M.; Fradkov, A.L. Adaptive synchronization in the complex heterogeneous networks of Hindmarsh–Rose neurons. Chaos Solitons Fractals 2021, 150, 111170. [Google Scholar] [CrossRef]
  30. Storace, M.; Linaro, D.; de Lange, E. The Hindmarsh–Rose neuron model: Bifurcation analysis and piecewise-linear approximations. Chaos Interdiscip. J. Nonlinear Sci. 2008, 18, 033128. [Google Scholar] [CrossRef]
  31. Annaswamy, A.; Fradkov, A. A historical perspective of adaptive control and learning. Annu. Rev. Control 2021, 52, 18–41. [Google Scholar] [CrossRef]
  32. Sevasteeva, E.; Plotnikov, S.; Lynnyk, V. Processing and model design of the gamma oscillation activity based on FitzHugh–Nagumo model and its interaction with slow rhythms in the brain. Cybern. Phys. 2021, 10, 265–272. [Google Scholar] [CrossRef]
  33. Zhang, X.; Ding, F. Adaptive parameter estimation for a general dynamical system with unknown states. Int. J. Robust Nonlinear Control 2020, 30, 1351–1372. [Google Scholar] [CrossRef]
  34. Zhang, X.; Ding, F. Optimal Adaptive Filtering Algorithm by Using the Fractional-Order Derivative. IEEE Signal Process. Lett. 2022, 29, 399–403. [Google Scholar] [CrossRef]
Figure 1. Regular bursting, b = 3, r = 1, ε = 0.003, I = 0.
Figure 2. Irregular bursting, b = 2.8, r = 1.6, ε = 0.01, I = 3.7.
Figure 3. Regular spiking, b = 3, r = 1, ε = 0.003, I = 2.
Figure 4. Identification process, γ = 1.
Figure 5. Synchronization error for bursting, γ = 1.
Figure 6. Identification process for the bursting regime, γ = 1.
Figure 7. Identification process for the irregular bursting regime, γ = 1.
Figure 8. Synchronization error.
Figure 9. Identification with bounded Gaussian disturbance (μ = 4, σ² = 0.01).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
