Article

Estimating the Parameters of Fitzhugh–Nagumo Neurons from Neural Spiking Data

by
Resat Ozgur Doruk
*,† and
Laila Abosharb
Department of Electrical and Electronics Engineering, Atılım University, Incek, Golbasi, 06836 Ankara, Turkey
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Brain Sci. 2019, 9(12), 364; https://doi.org/10.3390/brainsci9120364
Submission received: 31 October 2019 / Revised: 3 December 2019 / Accepted: 5 December 2019 / Published: 9 December 2019
(This article belongs to the Collection Collection on Theoretical and Computational Neuroscience)

Abstract
A theoretical and computational study on the estimation of the parameters of a single Fitzhugh–Nagumo model is presented. This work differs from conventional system identification in that the measured data consist only of discrete and noisy neural spiking (spike time) data, which contain no amplitude information. The goal is achieved by applying a maximum likelihood estimation approach in which the likelihood function is derived from point process statistics. The firing rate of the neuron was assumed to be related to the membrane potential variable through a nonlinear map (logistic sigmoid). The stimulus data were generated by a phased cosine Fourier series having fixed amplitude and frequency but a randomly drawn phase (redrawn at each repeated trial). Various values of amplitude, stimulus component size, and sample size were applied to examine the effect of the stimulus on the identification process. Results are presented in tabular and graphical forms, including statistical analysis (mean and standard deviation of the estimates). We also tested our model using realistic data from previous research (H1 neurons of blowflies) and found that the estimates tend to converge.

1. Introduction

The application of computational tools in neuroscience has been an emerging field of research over the last 50 years. The Hodgkin–Huxley model [1] is a striking development in theoretical and computational neuroscience. Here, the membrane potential and its bursting properties are modeled as a fourth-order nonlinear system. In addition to the membrane potential, it describes the behavior of the sodium and potassium ion channels. Its nonlinear properties have led some researchers to search for simpler nonlinear differential equations. One such attempt is the second-order Morris–Lecar model [2], which lumps the ion channel activation dynamics into a single recovery variable; it is still a conductance-based model. Further simplifications involve the complete elimination of physical parameters such as ion conductances. Two major examples are the second-order Fitzhugh–Nagumo [3,4] and the third-order Hindmarsh–Rose [5] models. These can model the pulses and bursts occurring in the membrane potential without the need for physical parameters such as ion conductances. In addition, as in the Morris–Lecar model, the behaviors of the ion channels are lumped into generic variables.
In cases where only the input/output (stimulus/response) relationship is important, general neural network models can be a good choice. Some examples from the literature are static feed-forward models [6,7] and nonlinear recurrent dynamical neural network models [8,9]. Dynamical neural network models can be structured such that one recovers membrane potential information (bursts can be explicitly reproduced) or just the instantaneous firing rate as the output [10]. In addition, sometimes only the statistical properties of the stimulus/response pair are important, and thus statistical black-box models are taken into account [11,12].
Regardless of the chosen model, stimulus/response data are required to obtain an accurate relationship. Depending on the experiment, these data may be continuous or discrete in nature. In an in-vitro environment such as a patch clamp experiment, one may record a full time-dependent profile of the membrane potential. This allows computational biologists to perform identification (parameter estimation) based on traditional minimum mean square estimation (MMSE) techniques. However, in an in-vivo experiment, it is very difficult to collect continuous data revealing exact (or at least acceptably accurate) membrane potential information. If a microelectrode contacts a living neuron's membrane, the resistive and capacitive properties of the electrode may alter the operation of the neuron. This is undesirable, as one would then not be modeling a realistically functioning neuron at the end of the identification process.
In [7,8], it is suggested that one can record the successive action potential timings if the electrodes are suitably placed in the surroundings of the membrane. With that, one is able to form a neural spike train holding the discrete timings of the spikes (or of the action potential bursts). Of course, a spike train carries no amplitude information. However, this does not mean that model identification is hopeless. In [13], it is suggested that neural spike timings largely obey Inhomogeneous Poisson Point Processes (IPPPs). Since an IPPP can be approximated by a local Bernoulli process [14], it is convenient to derive suitable likelihood functions and apply statistical parameter identification techniques to them.
In addition, previous research suggests that the transmitted neural information is not directly coded by the membrane potential level but rather vested in the firing rate [15], the interspike intervals (ISIs) [16], or the individual timings of the spikes [17]. Thus, training neuron models from discrete and stochastic spiking data is expected to be a beneficial approach to understanding the computational features of our nervous system.
Concerning the application of statistical techniques based on point process likelihoods to neural modeling, there are a few research works in the related literature. The authors of [6,7] applied the maximum a-posteriori (MAP) estimation technique to the identification of the weights of a static feed-forward model of the auditory cortex of marmoset monkeys. The authors of [8,9] presented a computational study aiming at the estimation of the network parameters and time constants of a dynamical recurrent neural network model using the point process maximum likelihood technique. The authors of [18] applied likelihood techniques to generate models for point process information coding. The authors of [19] trained a state space model from point process neural spiking data.
In a few research studies, Fitzhugh–Nagumo models are involved in stochastic neural spiking related work. For example, the authors of [20] dealt with the interspike interval statistics when the original Fitzhugh–Nagumo model is modified to include noisy inputs. The number of small amplitude oscillations has a random nature and tends to have an asymptotically geometric distribution. Bashkirtseva et al. [21] studied the effect of stochastic dynamics represented by a standard Wiener process on the limit cycle behavior. In [22], the authors performed research on the hypoelliptic stochastic properties of Fitzhugh–Nagumo neurons and studied the effect of those properties on the neural spiking behavior of Fitzhugh–Nagumo models. Finally, Zhang et al. [23] investigated the stochastic resonance occurring in Fitzhugh–Nagumo models when trichotomous noise is present. They found that, when the stimulus power is not sufficient to generate firing responses, trichotomous noise itself may trigger the firing.
In this research, we treated a conventional single Fitzhugh–Nagumo equation [3,4] as a computational model to form a theoretical stimulus/response relationship. We were interested in the algorithmic details of the modeling. Thus, we modified the original equation to provide firing rate output instead of the membrane potential. Based on the findings in [8,9,10], we mapped the firing rate and membrane potential of the neuron by a gained logistic sigmoid function. Sigmoid functions have a significance in neuron models as they are a feasible way of mapping the ion channel activation dynamics and membrane potential [1,2].
Although the output of our model is the neural firing rate, the responses from in vivo neurons are stochastic neural spike timings. To obtain representative data, we simulated the Fitzhugh–Nagumo neurons with a set of true reference parameters and then generated the spikes from the output firing rate by simulating an Inhomogeneous Poisson process on it.
The parameter estimation procedure was based on the maximum likelihood method. Similar to that of Eden [14], the likelihood was derived from the local Bernoulli approximation of the inhomogeneous Poisson process. The likelihood depends on the individual spike timings rather than the mean firing rate (which is the case in the Poisson distribution's probability mass function).
The stimulus was modeled as a Fourier series in phased cosine form. This choice was made to investigate the performance of the estimation when the same stimulus as that in [8,9] was applied. In the computational framework of this research, the stimulus was applied for a duration of 30 ms. This is a relatively short duration, chosen to speed up the computation. From some studies (e.g., [24,25,26]), one can infer that such short-duration stimuli may be feasible for fast spiking neurons.
In addition, fast spiking responses obtained from a single long random stimulus can be partitioned into segments of short duration, such as 30 ms. Thus, the approach in this research can also be utilized in modeling studies that involve longer-duration stimuli.
In addition to the computational features of this study, we also investigated the performance of our developments when the training data are taken from a realistic experiment. To achieve this goal, we used the data generated by de Ruyter and Bialek [27]. The data from that research comprise a 20 min recording of neural spiking responses obtained from H1 neurons of the blowfly vision system against a white noise random stimulus. The response was divided into segments of 500 ms, and the developed algorithms were applied. Each 500 ms segment can be thought of as an independent stimulus and its associated response.

2. Materials and Methods

2.1. Fitzhugh–Nagumo Model

The Fitzhugh–Nagumo (FN) model is a second-order polynomial nonlinear differential equation with two states representing the membrane potential ($V$) and a recovery variable ($W$), which lumps all ion channel related processes into one state. Mathematically, it can be represented as shown below [28]:
$\dot{V} = V - dV^3 - W + I$
$\dot{W} = c(V + a - bW)$
The above model has four parameters $[a, b, c, d]$ determining its properties. In the original text associated with the FN models, the coefficient of $V^3$ is $1/3$; however, in this work, we do not assume that the coefficient of the cubic term is constant, and we assign a parameter $d$ to it. In Equation (1), $I$ represents the stimulus exciting the neuron. It can be thought of as an electric current.
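As an illustration of Equation (1), the model can be integrated with a simple forward Euler scheme. This is a Python sketch (the study itself used MATLAB), and the parameter values used in the usage example are generic placeholders, not the true values of Table 1:

```python
import numpy as np

def simulate_fn(a, b, c, d, I, dt, t_end, v0=0.0, w0=0.0):
    """Integrate the Fitzhugh-Nagumo model of Equation (1) by forward Euler.

    I is a function of time giving the stimulus current."""
    n_steps = int(round(t_end / dt))
    V = np.empty(n_steps + 1)
    W = np.empty(n_steps + 1)
    V[0], W[0] = v0, w0
    for i in range(n_steps):
        t = i * dt
        # dV/dt = V - d*V^3 - W + I(t)
        V[i + 1] = V[i] + dt * (V[i] - d * V[i]**3 - W[i] + I(t))
        # dW/dt = c*(V + a - b*W)
        W[i + 1] = W[i] + dt * (c * (V[i] + a - b * W[i]))
    return V, W
```

For example, `simulate_fn(0.7, 0.8, 0.08, 1/3, lambda t: 0.5, 0.01, 30.0)` traces a bounded limit-cycle response under a constant stimulus; in the actual study the time bin was $\delta t = 10\ \mu s$ and the stimulus came from Equation (4).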
In the introduction, we stated that we need a relationship between the membrane potential variable $V$ and the firing rate of the neuron, and that we can construct such a relationship through a nonlinear sigmoidal map as shown below:
$r = \dfrac{F}{1 + \exp(-V)}$
where $r$ is the firing rate of the neuron in $\mathrm{ms}^{-1}$ and $F$ is the maximum firing rate parameter. Thus, one has five parameters to estimate, which can be expressed vectorially as:
$\theta = [a, b, c, d, F]$
We denote by $\hat{\theta}$ the estimates of $\theta$. In the application, we needed the true values of $\theta$ so that we could generate the spikes representing the data collected in a realistic experiment. These are available in Table 1.
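A minimal sketch of the rate map in Equation (2), again in Python for illustration:

```python
import numpy as np

def firing_rate(V, F):
    """Equation (2): logistic sigmoid map from membrane potential V
    to firing rate (in ms^-1), saturating at the maximum rate F."""
    return F / (1.0 + np.exp(-np.asarray(V, dtype=float)))
```

At $V = 0$ the rate is $F/2$, and for large $V$ it saturates at $F$, matching the sigmoid's role of bounding the firing rate.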

2.2. Stimulus

The signal for stimulation was modeled using a phased cosine Fourier series as:
$I = \sum_{n=1}^{N_U} A_n \cos(\omega_n t + \phi_n)$
where $A_n$ represents the amplitude, $\omega_n = 2\pi f_0 n$ stands for the frequency of the $n$th Fourier component in rad/s, and $\phi_n$ stands for the phase of the component in radians. The amplitude $A_n$ and the base frequency $f_0$ (in Hz) were kept constant, whereas the phase $\phi_n$ was drawn randomly from a uniform distribution in $[-\pi, \pi]$ radians. The amplitude parameter $A_n$ was the same for all modes $n$ and was set to $A_n = A_{\max}$.
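For illustration, the stimulus of Equation (4) can be sampled as follows (a Python sketch; the amplitude, base frequency, and component count in the usage example are arbitrary placeholders, not the study's settings):

```python
import numpy as np

def phased_cosine_stimulus(t, A_max, f0, N_U, rng=None):
    """Equation (4): a sum of N_U cosines with fixed amplitude A_max,
    harmonics of the base frequency f0, and uniformly random phases
    in [-pi, pi] (redrawn at each trial)."""
    rng = np.random.default_rng() if rng is None else rng
    phases = rng.uniform(-np.pi, np.pi, size=N_U)
    t = np.asarray(t, dtype=float)
    n = np.arange(1, N_U + 1)
    # omega_n = 2*pi*f0*n; one column per Fourier component
    return (A_max * np.cos(2 * np.pi * f0 * np.outer(t, n) + phases)).sum(axis=1)
```

Redrawing the phases while keeping $A_{\max}$ and $f_0$ fixed is what makes each repeated trial a statistically independent stimulus.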

2.3. Neural Spiking and Point Processes

We stated in the introduction that neural spiking is a point process that largely obeys an Inhomogeneous Poisson Process (IPP). A homogeneous Poisson process is characterized by an event rate $\lambda$ and has the Poisson probability mass function defined by:
$\mathrm{Prob}\left[N(t+\Delta t) - N(t) = k\right] = \dfrac{e^{-\lambda}\lambda^k}{k!}$
where $k$ is the number of events that occur in the interval $[t, t+\Delta t]$. In the simplest case, $\lambda$ is constant in that interval. In neural operation, the process is much more complex and a constant event rate is insufficient; thus, we refer to a time-varying event rate, which is actually equivalent to the firing rate $r(t)$ of the neuron (refer to Equation (2)). This yields an inhomogeneous Poisson point process, with the event rate $\lambda$ replaced by the mean firing rate defined by:
$\lambda = \int_t^{t+\Delta t} r(\tau)\, d\tau$
Now, the term $k$ represents the spike count in the interval $[t, t+\Delta t]$, which is statistically related to the firing rate $r(t)$; $\lambda$ now represents the mean spike count for the time-varying firing rate $r(t)$; and $N(\tau)$ stands for the cumulative number of spikes up to time $\tau$, making $N(t+\Delta t) - N(t)$ the spike count for the interval $[t, t+\Delta t]$.
Now, let us take a spike train $(t_1, t_2, \ldots, t_K)$ in the time interval $(0, T)$. Here, $0 \leq t_1 \leq t_2 \leq \cdots \leq t_K \leq T$; thus, $t$ and $\Delta t$ become $0$ and $T$. The spike train can be defined by the series of time stamps of the $K$ spikes. As a result, the likelihood density function of any spike train $(t_1, t_2, \ldots, t_K)$ is obtained from the inhomogeneous Poisson process [14,30] in the following way:
$p(t_1, t_2, \ldots, t_K) = \exp\left(-\int_0^T r(t, x, \theta)\, dt\right) \prod_{k=1}^{K} r(t_k, x, \theta)$
This function gives the likelihood of a given spike train $(t_1, t_2, \ldots, t_K)$ occurring under the rate function $r(t, x, \theta)$, which depends on the model parameters and the applied stimulus.

2.4. Maximum Likelihood Methods and Parameter Estimation

The parameters to be estimated appear as a vector:
$\theta = [\theta_1, \ldots, \theta_5], \qquad \hat{\theta} = [\hat{\theta}_1, \ldots, \hat{\theta}_5]$
to cover all the parameters in Equation (3). The maximum likelihood estimation here relies on the likelihood function proposed in Equation (7) and includes each individual spike timing. Estimation theory asserts that maximum likelihood estimation is asymptotically efficient, attaining the Cramér–Rao bound in the large-data regime. Therefore, to extend the likelihood function in Equation (7) to settings with numerous spike trains initiated by numerous stimuli, a series of $M$ stimuli is assumed. Let the $m$th stimulus ($m = 1, \ldots, M$) initiate a spike train containing $K_m$ spikes in the time window $[0, T]$, with spike timings $S_m = \left(t_1^{(m)}, t_2^{(m)}, \ldots, t_{K_m}^{(m)}\right)$. According to Equation (7), the likelihood function for the spike train $S_m$ can be determined as:
$p(S_m \mid \theta) = \exp\left(-\int_0^T r^{(m)}(t)\, dt\right) \prod_{k=1}^{K_m} r^{(m)}\left(t_k^{(m)}\right)$
in which $r^{(m)}$ represents the firing rate due to the $m$th stimulus. Note that the rate function $r^{(m)}$ depends entirely on the neuron parameters $\theta$ and the stimulus. On the left-hand side of Equation (9), its dependence on the neuron parameters $\theta$ is made explicit.
Supposing the stimulus and its elicited responses in each mth trial are independent, one can derive a joint likelihood function as:
$L(S_1, S_2, \ldots, S_M \mid \theta) = \prod_{m=1}^{M} p(S_m \mid \theta)$
To improve its convexity, we can take the natural logarithm and derive the log-likelihood function shown below:
$l(S_1, S_2, \ldots, S_M \mid \theta) = -\sum_{m=1}^{M} \int_0^T r^{(m)}(t)\, dt + \sum_{m=1}^{M} \sum_{k=1}^{K_m} \ln r^{(m)}\left(t_k^{(m)}\right)$
Finally, the maximum likelihood estimate of the parameter vector $\theta$ is obtained by:
$\hat{\theta}_{ML} = \arg\max_{\theta}\, l(S_1, S_2, \ldots, S_M \mid \theta)$
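The log-likelihood of Equation (11) can be sketched as follows (a Python illustration; the rate functions are assumed to be already evaluated at the candidate parameters, and a simple rectangle-rule sum approximates the integral, as in the study the integration used small time bins):

```python
import numpy as np

def neg_log_likelihood(rate_fns, spike_trains, T, dt=1e-3):
    """Negative of the joint log-likelihood in Equation (11).

    rate_fns[m] maps an array of times to the firing rate r^(m)(t) of trial m;
    spike_trains[m] holds the spike timestamps t_k^(m) of trial m."""
    grid = np.arange(0.0, T, dt)
    ll = 0.0
    for r, spikes in zip(rate_fns, spike_trains):
        ll -= r(grid).sum() * dt                    # -integral of r^(m) over [0, T]
        ll += np.log(r(np.asarray(spikes))).sum()   # +sum of log r^(m)(t_k^(m))
    return -ll  # negated, since numerical optimizers typically minimize
```

A minimizer applied to this function over $\theta$ (e.g., MATLAB's fmincon in the study) yields the estimate of Equation (12).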

2.5. Spike Generation for Data Collection

Since this study was computational and targeted the development of an algorithm to be applied in a realistic experiment, we needed a solid approach to generate a dataset representing the output of a realistic experiment. In the current research, the data were a set of neural spike trains bearing the individual spike timings with no amplitude information. In addition, since the neural spiking process largely obeys inhomogeneous Poisson statistics, we could achieve that goal by simulating an inhomogeneous Poisson process with $r(t)$ as its event rate. There are several algorithms for simulating an inhomogeneous Poisson process; the local Bernoulli approximation [14], thinning [31], and time-scale transformation [32] are examples.
If the time bin is sufficiently small (e.g., $\delta t = 10\ \mu s$) that at most one spike fits in it, one can use the local Bernoulli approximation to generate the neural spiking data very easily. This is also a reasonable choice when the neuron models are integrated by discrete solvers such as the Euler or Runge–Kutta methods. A summary of the related algorithm is given below [8]:
  • Given the firing rate of a neuron as $r(t)$:
  • Find the probability of firing at time $t_i$ by evaluating $p_i = r(t_i)\,\delta t$, where $\delta t$ is the integration interval. It should be a small real number, such as 1 ms.
  • Draw a random number $x_{\mathrm{rand}} \sim U[0, 1]$, uniformly distributed in the interval $[0, 1]$. Here, $U$ stands for a uniform distribution.
  • If $p_i > x_{\mathrm{rand}}$, fire a spike at $t = t_i$; else, do nothing.
  • Collect the spikes as $S = [t_1, \ldots, t_{N_s}]$, where $N_s$ is the total number of spikes collected from one simulation.
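The steps above can be sketched as follows (a Python illustration; the rate samples and bin width in the usage example are placeholders):

```python
import numpy as np

def generate_spikes(rate, dt, rng=None):
    """Local Bernoulli approximation of an inhomogeneous Poisson process.

    rate: array of firing-rate samples r(t_i); dt: bin width, small enough
    that at most one spike fits in a bin. Returns spike timestamps."""
    rng = np.random.default_rng() if rng is None else rng
    rate = np.asarray(rate, dtype=float)
    t = np.arange(rate.size) * dt
    p = rate * dt                          # per-bin firing probability p_i = r(t_i)*dt
    fired = rng.uniform(size=rate.size) < p
    return t[fired]                        # S = [t_1, ..., t_Ns]
```

For a constant rate, the expected spike count is simply rate × duration, which is a quick sanity check on any implementation.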

3. Application

In this section, we introduce a simulation-based approach to evaluate the parameters of a firing rate-based single Fitzhugh–Nagumo neuron model. The process in brief appears as follows:
  • A single run of simulation lasted for T f = 30 ms.
  • The stimulus amplitude $A_{\max}$ and base frequency $f_0$ were assigned prior to each trial $m$. The phase angles $\phi_n$ were assigned randomly, as defined in Section 2.2.
  • The firing rate profile was obtained by integrating the FN model in Equation (1) for T f = 30 ms using a time bin of δ t = 10 μ s . The integration was performed at the true values of the parameters in Table 1 to generate the actual firing rate information r m ( t ) of current mth trial.
  • Using the approach presented in Section 2.5, the spike train S m of the mth trial was generated from the firing rate r m ( t ) . The number of spikes was K m at the mth trial.
  • The simulation was repeated $N_{it}$ times to collect several statistically independent spike trains, i.e., $m = 1, \ldots, N_{it}$.
  • The neural spiking data needed by Equation (11) were obtained in the fifth step. However, the firing rate $r_m(t)$ in Equation (11) should be computed at the current iterate of the optimization.
  • An optimization algorithm (e.g., fmincon) was run on the joint likelihood function in Equation (11) to obtain the maximum likelihood estimates of the parameters ($\hat{\theta}_{ML}$ in Equation (12)).

3.1. Optimization Algorithm

To perform maximum likelihood estimation (i.e., to solve the problem defined in Equation (12)), we needed an optimizer. Most optimizers target a local minimum and thus require multiple initial guesses to increase the probability of finding the global optimum. However, this is a time-consuming task, and in a problem similar to that of this research, duration is a crucial parameter. This is even more critical when the algorithms are used in a physiological experiment. Some optimization algorithms, such as genetic algorithms, pattern search, or simulated annealing, do not require the online computation of gradients, but they are computationally expensive and will most likely require a longer duration. Thus, in this research, we preferred a gradient-based algorithm and utilized MATLAB's fmincon routine. It is based on interior-point algorithms (a modified Newton's method) and allows lower and upper bounds to be set on the result. As all parameters of an FN model are positive, a zero lower bound prevents unnecessary parameter sweeps.

3.2. Simulation Scenarios

In this section, we introduce the results of the parameter estimation in tabular form: the variation of the mean estimated values $\hat{\theta} = [\hat{\theta}_1, \hat{\theta}_2, \hat{\theta}_3, \hat{\theta}_4, \hat{\theta}_5]$ of the parameters $\theta = [\theta_1, \theta_2, \theta_3, \theta_4, \theta_5]$. The scenario information for the present problem appears in Table 2. To show the impact of the number of stimulus components $N_U$, the amplitude level $A_{\max}$, and the number of trials $N_{it}$, the problem was re-run for a set of different values of those parameters.
The initial conditions of the states representing the membrane potential $V$ and the recovery activity $W$ in Equation (1) were assumed to be $V(0) = 0$ and $W(0) = 0$. This is a reasonable choice, as we did not have any information about them.
A typical stimulus/response relationship can be seen in Figure 1. Here, the stimulus parameters are $A_{\max} = 100$, $f_0 = 10/3$ kHz, and $N_U = 5$. The nominal parameters in Table 1 were used in this simulation.

3.3. Estimation of Parameters Using Realistic Data

As stated at the end of the Introduction, we were also interested in the results of the estimation when the stimulus/response data (collected spike trains) come from realistic neurons. Although performing an experiment may not be possible, one can use data from repositories or other sites on the web. We used the data collected in an experiment performed by de Ruyter and Bialek [27]. Here, the stimulus was of the white noise type and the response was measured from H1 neurons of the blowfly vision system. The data are available as a MATLAB workspace file at http://www.gatsby.ucl.ac.uk/~dayan/book/exercises/c1/data/c1p8.mat. In this dataset, a single stimulus of 20 min duration stimulates the H1 neurons of the flies. We divided these 1200 s long data into 2400 segments, each 500 ms long. Thus, our algorithm was applied as if there were 2400 independent stimuli of 500 ms duration. Since the stimulus was random, we could assume that the segments were triggered by independent stimuli. The algorithm was provided with subsets of the data having 25, 50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000, 2100, 2200, 2300, and 2400 samples (in other words, the value of $N_{it}$).
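The segmentation described above can be sketched as follows (a Python illustration; `segment_spikes` is a hypothetical helper, not part of the original MATLAB workflow):

```python
import numpy as np

def segment_spikes(spike_times, seg_len, n_segments):
    """Split one long spike recording into trial-like segments.

    spike_times: sorted spike timestamps in seconds; each segment's spikes are
    re-referenced to the segment start, mimicking n_segments independent trials
    of duration seg_len (e.g., 2400 segments of 0.5 s from a 1200 s recording)."""
    spike_times = np.asarray(spike_times, dtype=float)
    segments = []
    for m in range(n_segments):
        t0, t1 = m * seg_len, (m + 1) * seg_len
        s = spike_times[(spike_times >= t0) & (spike_times < t1)] - t0
        segments.append(s)
    return segments
```

Because the underlying stimulus is random white noise, treating each re-referenced segment as an independent trial is the assumption the paper relies on.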

4. Results

In this section, the results of our example problem are presented. The maximum likelihood estimates ($\hat{\theta}_{ML}$) of the parameters ($\theta$) in Equation (3) were obtained by maximizing Equation (10) using MATLAB's fmincon routine.
The relevant results can be categorized under two headings:
  • The variations of the mean estimated values of $\theta$ ($\hat{\theta}_{ML}$) against varying sample size $N_{it}$, amplitude level $A_{\max}$, stimulus component size $N_U$, and base frequency $f_0$ are presented in Section 4.1.
  • The variations of the standard deviations of the estimated parameters against varying sample size $N_{it}$, amplitude level $A_{\max}$, stimulus component size $N_U$, and base frequency $f_0$ are presented in Section 4.2.

4.1. Mean Estimated Values

One can see the variation of the mean estimated values of each parameter in Equation (3) against the number of samples N i t , amplitude A max , component size N U , and base frequency f 0 of the stimulus in Table 3, Table 4, Table 5 and Table 6, respectively.

4.2. Standard Deviations

One can see the variation of the standard deviations of the estimates of each parameter in Equation (3) against the number of samples N i t , amplitude A max , component size N U , and base frequency f 0 of the stimulus in Table 7, Table 8, Table 9 and Table 10, respectively.
In addition to the tabular results, the variation of the standard deviations are also presented in graphical forms in Figure 2, Figure 3, Figure 4 and Figure 5.

4.3. Results of Estimation from Realistic Data

As mentioned in Section 3.3, we also utilized realistic data obtained from H1 neurons of blowflies [27]; a more detailed discussion is available in that section. The variation of the estimated values of the neuron parameters $[a, b, c, d, F]$ against the sample sizes is available in Table 11. Table 12 shows the relative error with respect to the case with the previous sample-size setting. The relative error was computed with the following scheme:
$E_R(k) = \dfrac{\left|\hat{\theta}(k) - \hat{\theta}(k-1)\right|}{\left|\hat{\theta}(k-1)\right|}$
where $k$ indexes the cases in Table 11, identified by the sample size parameter $N_{it}$. Here, $k$ does not start from $k = 1$ because we did not have any data for the cases $N_{it} < 25$. Thus, in Table 12, the $k$ value starts from $k = 2$: in its first column, the relative error of the case with $N_{it} = 50$ was computed against the case with $N_{it} = 25$. Similarly, the relative error of the case with $N_{it} = 100$ was computed against the case with $N_{it} = 50$, and so on. Examining Table 12, we observe that the relative errors ($E_R$) of the parameters $[a, b, c, d, F]$ decrease as the sample size increases (as $k$ progresses). Although the relative error fluctuates, the magnitude of the fluctuation tends to decrease, especially after the case with $N_{it} = 600$.
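The relative-error scheme of Equation (13) can be sketched as follows (a Python illustration, applied element-wise to each parameter across successive sample-size cases):

```python
import numpy as np

def relative_errors(estimates):
    """Equation (13): relative change between successive estimates theta_hat(k).

    estimates: array of shape (n_cases, n_params), one row per sample-size case
    (N_it = 25, 50, 100, ...). Returns one row fewer than the input."""
    est = np.asarray(estimates, dtype=float)
    return np.abs(np.diff(est, axis=0)) / np.abs(est[:-1])
```

A shrinking output down the rows is the convergence signature discussed for Table 12.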

4.4. Statistical Testing of the Parameter Estimation with Realistic Data

To test the validity of the results of Section 4.3, one needs to perform a statistical comparison test. To achieve this goal, we performed a Kolmogorov–Smirnov test on the interspike intervals of the spike trains obtained from the H1 neuron measurement data and of the spike trains simulated with one of the parameter sets $[a, b, c, d, F]$ in Table 11. As one set of measurements is not statistically adequate, we used superimposed spike sequences. As they were obtained from independent stimuli, their statistical nature was not disturbed. As in the estimation experiment, we superimposed the spike sequences in the response segments of both the realistic measurements and the simulated output of our model. We then performed a two-sample Kolmogorov–Smirnov test (one sample from the realistic response and one from the simulated response of our model). We applied different segment lengths and plotted the variation of the p-values. The tool used in the application was MATLAB's kstest2(x1,x2) routine (here, x1 and x2 are two samples from similar or dissimilar distributions). We used the parameter estimates from the last column of Table 11. The relevant results are shown in Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11. From those outcomes, one can note that the p-value starts crossing the p = 0.05 line after about 80 samples of measurement. This is to be expected statistically, as such hypothesis tests require large numbers of samples to yield strong results.
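The quantity behind this comparison can be illustrated as follows (a Python sketch of the two-sample Kolmogorov–Smirnov statistic that MATLAB's kstest2 is built on; the p-value computation is omitted here, and the ISI samples would be obtained as `np.diff` of the spike timestamps):

```python
import numpy as np

def ks_statistic(x1, x2):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance between
    the empirical CDFs of samples x1 and x2 (e.g., two ISI samples)."""
    x1, x2 = np.sort(np.asarray(x1, float)), np.sort(np.asarray(x2, float))
    both = np.concatenate([x1, x2])
    # empirical CDFs evaluated at every observed point
    cdf1 = np.searchsorted(x1, both, side='right') / x1.size
    cdf2 = np.searchsorted(x2, both, side='right') / x2.size
    return np.max(np.abs(cdf1 - cdf2))
```

Identical samples give a statistic of 0, completely disjoint samples give 1; kstest2 converts this statistic into the p-values plotted in Figures 6 through 11.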

5. Conclusions

In this paper, we present a theoretical and computational study aiming at the identification of the parameters of a single Fitzhugh–Nagumo model from stochastic discrete neural spiking data. To pursue this goal, we needed to modify the classical Fitzhugh–Nagumo model so that the output generates a firing rate instead of a membrane potential. We transformed the membrane potential information into that of a time dependent firing rate through a nonlinear map in sigmoidal form. The spiking data that are representative of an experimental application were obtained by simulating the Fitzhugh–Nagumo model and an Inhomogeneous Poisson process together. To assess the performance of the work, we repeated the simulations under different sample sizes (the number of repeated trials), stimulus component sizes, and stimulus base frequencies and amplitudes. The variation of mean estimated values and standard deviations are presented as results. The following concluding remarks can be made:
  • The estimation algorithm showed a stable behavior for all examined conditions, as shown in Table 2.
  • The results in Table 3, Table 4, Table 5 and Table 6 show that the mean estimated values are closest to the true values of the parameters in Table 1 when $N_{it} = 100$, $N_U = 5$, $f_0 = 0.333$ kHz, and $A_{\max} = 25$.
  • In general, the standard deviations of the estimates decrease with increasing sample size $N_{it}$ (Figure 2). For parameters $b$ and $c$, there is a slightly oscillating behavior in the standard deviation values (Figure 2b,c): the standard deviations for $N_{it} = 100$ are slightly larger than those for $N_{it} = 200$. This may seem inferior to the results of other studies (e.g., [8]). However, one should bear in mind that the model in [8] is a type of generic recurrent neural network, which is known to have universal approximation capabilities [33]. Thus, one should expect the standard deviations of the network weight estimates to correlate better with the stimulus parameters when a generic model with universal approximation capability is utilized for model fitting. In addition, the absolute standard deviations of the estimates in this research are smaller. Thus, the overall results can be considered successful.
  • For most of the parameters ($a, b, c, d$), the standard deviations worsen as the amplitude parameter $A_{\max}$ increases (Figure 3). The only exception is the maximum firing rate parameter $F$, whose standard deviation improves when the amplitude level $A_{\max}$ increases. Concerning the mean estimated values, changing the amplitude from $A_{\max} = 25$ to $A_{\max} = 200$ does not make a sensible difference. Thus, keeping $A_{\max} = 25$ seems a good choice.
  • The standard deviations of the estimates showed a small improvement for a large number of stimulus components $N_U$ (Figure 4). However, based on the mean estimated values, keeping it small, together with the amplitude parameter $A_{\max}$, seems a viable choice.
  • Concerning the stimulus base frequency $f_0$, it seems better to keep it in the lower part of the range ($0.333 \leq f_0 \leq 3.333$ kHz) applied in this research (i.e., $f_0 \leq 1$ kHz).
  • To assess the performance of our model with more realistic data and longer stimuli, we performed an estimation using the data from previous research [27]. We divided a 20 min recording into 2400 segments of 500 ms each. The stimulus was randomly generated, and thus each segment was treated as an independent experiment. The estimates of the parameters tend to converge to a final value with increasing sample size $N_{it}$. This can be seen from the relative errors in Table 12: the errors become smaller and the fluctuations diminish as the sample size grows. As a result, our model can be used in modeling studies where the computational features of neural signal processing are important.
  • The Kolmogorov–Smirnov testing reveals that our modified Fitzhugh–Nagumo computational model can successfully describe the statistics of the stimulus/response relationship.
In general, the obtained results are promising. However, a slight improvement may be obtained if an optimal stimulus profile is generated prior to the identification process. The theory of optimal design of experiments [34] may be beneficial in this respect. An application to continuous-time recurrent neural network models is available in [9]; it appears to reduce the mean squared errors of the network weight estimates (and thus also their variance). This may be a part of future studies on the same topic.
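The building blocks of the simulation-based identification used here (a phased-cosine Fourier stimulus, a logistic sigmoid rate map, Poisson spike generation by thinning [31], and the point-process log-likelihood that the estimation maximizes) can be sketched as follows. This is an illustrative Python sketch, not the study's MATLAB implementation: the sigmoid slope `beta` and threshold `v0` are placeholder values, and the rate is driven directly by the stimulus instead of the Fitzhugh–Nagumo membrane potential of Equation (1).

```python
import math
import random

def cosine_stimulus(a_max, f0, n_u, phases):
    """Phased-cosine Fourier stimulus: n_u components at multiples of the
    base frequency f0 (Hz), fixed amplitude, random per-trial phases."""
    def u(t):
        return (a_max / n_u) * sum(
            math.cos(2.0 * math.pi * f0 * (k + 1) * t + phases[k])
            for k in range(n_u))
    return u

def logistic_rate(v, f_max=100.0, beta=0.02, v0=0.0):
    """Logistic sigmoid map from a potential-like variable to a firing
    rate in Hz, bounded above by f_max (beta and v0 are placeholders)."""
    return f_max / (1.0 + math.exp(-beta * (v - v0)))

def thinning(rate, rate_max, t_end, rng):
    """Lewis-Shedler thinning [31]: propose candidates from a homogeneous
    Poisson process at rate_max; accept each with prob rate(t)/rate_max."""
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_max)
        if t > t_end:
            return spikes
        if rng.random() < rate(t) / rate_max:
            spikes.append(t)

def log_likelihood(spikes, rate, t_end, n_grid=2000):
    """Inhomogeneous-Poisson log-likelihood of one spike train:
    sum_k log rate(t_k) - integral_0^T rate(t) dt (trapezoidal)."""
    h = t_end / n_grid
    integral = sum(0.5 * h * (rate(i * h) + rate((i + 1) * h))
                   for i in range(n_grid))
    return sum(math.log(rate(t)) for t in spikes) - integral

rng = random.Random(7)
phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(5)]
u = cosine_stimulus(100.0, 333.3, 5, phases)  # A_max = 100, f0 = 333.3 Hz, N_U = 5
rate = lambda t: logistic_rate(u(t))
spikes = thinning(rate, 100.0, 1.0, rng)
ll = log_likelihood(spikes, rate, 1.0)
```

In the actual method, the membrane potential obtained from the Fitzhugh–Nagumo dynamics replaces the stimulus in the rate map, and the parameter estimates are found by maximizing this log-likelihood accumulated over repeated trials.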

Author Contributions

L.A. wrote the MATLAB codes, performed the simulations, collected the necessary data, and prepared the tabular and graphical results. R.O.D. performed the validation of the theoretical framework on which this research is based, polished the text, and brought the manuscript into its final form.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the Editor-in-Chief, associate editors, and reviewers for their valuable contributions to the improvement of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hodgkin, A.L.; Huxley, A.F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500.
  2. Morris, C.; Lecar, H. Voltage oscillations in the barnacle giant muscle fiber. Biophys. J. 1981, 35, 193–213.
  3. FitzHugh, R. Impulses and physiological states in theoretical models of nerve membrane. Biophys. J. 1961, 1, 445–466.
  4. Nagumo, J.; Arimoto, S.; Yoshizawa, S. An active pulse transmission line simulating nerve axon. Proc. IRE 1962, 50, 2061–2070.
  5. Hindmarsh, J.L.; Rose, R. A model of neuronal bursting using three coupled first order differential equations. Proc. R. Soc. Lond. B 1984, 221, 87–102.
  6. DiMattina, C.; Zhang, K. Active data collection for efficient estimation and comparison of nonlinear neural models. Neural Comput. 2011, 23, 2242–2288.
  7. DiMattina, C.; Zhang, K. Adaptive stimulus optimization for sensory systems neuroscience. Front. Neural Circuits 2013, 7, 101.
  8. Doruk, R.O.; Zhang, K. Fitting of dynamic recurrent neural network models to sensory stimulus-response data. J. Biol. Phys. 2018, 44, 449–469.
  9. Doruk, O.R.; Zhang, K. Adaptive stimulus design for dynamic recurrent neural network models. Front. Neural Circuits 2019, 12, 119.
  10. Miller, K.D.; Fumarola, F. Mathematical equivalence of two common forms of firing rate models of neural networks. Neural Comput. 2012, 24, 25–31.
  11. Barlow, H.B. Possible principles underlying the transformation of sensory messages. Sens. Commun. 1961, 1, 217–234.
  12. Fairhall, A.L.; Lewen, G.D.; Bialek, W.; van Steveninck, R.R.D.R. Efficiency and ambiguity in an adaptive neural code. Nature 2001, 412, 787.
  13. Shadlen, M.N.; Newsome, W.T. Noise, neural codes and cortical organization. Curr. Opin. Neurobiol. 1994, 4, 569–579.
  14. Czanner, G.; Eden, U.T.; Wirth, S.; Yanike, M.; Suzuki, W.A.; Brown, E.N. Analysis of between-trial and within-trial neural spiking dynamics. J. Neurophysiol. 2008, 99, 2672–2693.
  15. Adrian, E.D.; Zotterman, Y. The impulses produced by sensory nerve-endings: Part II. The response of a single end-organ. J. Physiol. 1926, 61, 151–171.
  16. Singh, C.; Levy, W.B. A consensus layer V pyramidal neuron can sustain interpulse-interval coding. PLoS ONE 2017, 12, e0180839.
  17. Smith, E.C.; Lewicki, M.S. Efficient auditory coding. Nature 2006, 439, 978.
  18. Paninski, L. Maximum likelihood estimation of cascade point-process neural encoding models. Netw. Comput. Neural Syst. 2004, 15, 243–262.
  19. Smith, A.C.; Brown, E.N. Estimating a state-space model from point process observations. Neural Comput. 2003, 15, 965–991.
  20. Berglund, N.; Landon, D. Mixed-mode oscillations and interspike interval statistics in the stochastic FitzHugh–Nagumo model. Nonlinearity 2012, 25, 2303.
  21. Bashkirtseva, I.; Ryashko, L.; Slepukhina, E. Noise-induced oscillating bistability and transition to chaos in Fitzhugh–Nagumo model. Fluct. Noise Lett. 2014, 13, 1450004.
  22. Leon, J.R.; Samson, A. Hypoelliptic stochastic FitzHugh–Nagumo neuronal model: Mixing, up-crossing and estimation of the spike rate. Ann. Appl. Probab. 2018, 28, 2243–2274.
  23. Zhang, H.; Yang, T.; Xu, Y.; Xu, W. Parameter dependence of stochastic resonance in the FitzHugh–Nagumo neuron model driven by trichotomous noise. Eur. Phys. J. B 2015, 88, 125.
  24. Arabzadeh, E.; Panzeri, S.; Diamond, M.E. Deciphering the spike train of a sensory neuron: Counts and temporal patterns in the rat whisker pathway. J. Neurosci. 2006, 26, 9216–9226.
  25. Walsh, D.A.; Brown, J.T.; Randall, A.D. In vitro characterization of cell-level neurophysiological diversity in the rostral nucleus reuniens of adult mice. J. Physiol. 2017, 595, 3549–3572.
  26. Sakai, M.; Chimoto, S.; Qin, L.; Sato, Y. Neural mechanisms of interstimulus interval-dependent responses in the primary auditory cortex of awake cats. BMC Neurosci. 2009, 10, 10.
  27. De Ruyter, R.; Bialek, W. Timing and counting precision in the blowfly visual system. In Models of Neural Networks IV; Springer: Berlin, Germany, 2002; pp. 313–371.
  28. Doruk, R.Ö.; Ihnish, H. Bifurcation control of Fitzhugh–Nagumo models. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi 2018, 22, 375–391.
  29. Izhikevich, E.M.; FitzHugh, R. FitzHugh–Nagumo model. Scholarpedia 2006, 1, 1349.
  30. Brown, E.N.; Barbieri, R.; Ventura, V.; Kass, R.E.; Frank, L.M. The time-rescaling theorem and its application to neural spike train data analysis. Neural Comput. 2002, 14, 325–346.
  31. Lewis, P.A.; Shedler, G.S. Simulation of nonhomogeneous Poisson processes by thinning. Nav. Res. Logist. Q. 1979, 26, 403–413.
  32. Klein, R.W.; Roberts, S.D. A time-varying Poisson arrival process generator. Simulation 1984, 43, 193–195.
  33. Schäfer, A.M.; Zimmermann, H.G. Recurrent neural networks are universal approximators. In International Conference on Artificial Neural Networks; Springer: Berlin/Heidelberg, Germany, 2006; pp. 632–640.
  34. Pukelsheim, F. Optimal Design of Experiments; SIAM Classics in Applied Mathematics; SIAM: Philadelphia, PA, USA, 1993; Volume 50.
Figure 1. A typical stimulus and response pattern. In the first pane, a Fourier series stimulus with parameters A max = 100 , f 0 = 333 Hz, and N U = 5 is displayed. In the second pane, the neural spiking pattern of the Fitzhugh–Nagumo model in Equation (1) with the nominal parameters in Table 1 obtained after Poisson simulation can be seen.
Figure 2. The variation of the individual standard deviations (or relative errors) of the estimates against the sample (iteration) size N i t . Other stimulus parameters are N U = 5 , A max = 100 , and f 0 = 333.3 Hz. For most parameters, the relative errors improve with increasing sample size, although some, such as b, show neither improvement nor degradation. In general, the relative error levels remain small.
Figure 3. The variation of the individual standard deviations (or relative errors) of the estimates against the stimulus amplitude parameter A max . Other stimulus parameters are N i t = 100 , N U = 5 , and f 0 = 333.3 Hz. Except for parameter F, raising the stimulus amplitude yields no improvement. In general, however, the relative error levels remain small.
Figure 4. The variation of the individual standard deviations (or relative errors) of the estimates against the stimulus component size N U . Other stimulus parameters are N i t = 100 , A max = 100 , and f 0 = 333.3 Hz. Stimuli with a small ( N U = 5 ) or large ( N U = 30 ) component size can be preferred. In general, the relative error levels also remain small in this case.
Figure 5. The variation of the individual standard deviations (or relative errors) of the estimates against the base frequency f 0 . Other stimulus parameters are N i t = 100 , A max = 100 , and N U = 5 . The frequencies are in KHz. Although the overall relative error levels are small, a mid-frequency range such as 1 ≤ f 0 ≤ 7/3 KHz can be preferred.
Figure 6. The variation of the Kolmogorov–Smirnov test p value with the number of samples N i t obtained from both measurements (simulation and realistic measurement). Here, the segment size is 500 ms.
Figure 7. The variation of the Kolmogorov–Smirnov test p value with the number of samples N i t obtained from both measurements (simulation and realistic measurement). Here, the segment size is 1 s.
Figure 8. The variation of the Kolmogorov–Smirnov test p value with the number of samples N i t obtained from both measurements (simulation and realistic measurement). Here, the segment size is 2 s.
Figure 9. The variation of the Kolmogorov–Smirnov test p value with the number of samples N i t obtained from both measurements (simulation and realistic measurement). Here, the segment size is 3 s.
Figure 10. The variation of the Kolmogorov–Smirnov test p value with the number of samples N i t obtained from both measurements (simulation and realistic measurement). Here, the segment size is 4 s.
Figure 11. The variation of the Kolmogorov–Smirnov test p value with the number of samples N i t obtained from both measurements (simulation and realistic measurement). Here, the segment size is 6 s.
Table 1. The nominal parameters of the FN model in Equations (1) and (2). These were evaluated using the information in [29].
Parameter | Value
a | 0.08
b | 0.056
c | 0.064
d | 0.333
F | 100
Table 2. Data for the simulation scenario.
Parameter | Symbol | Value
Simulation Time | T_f | 30 ms
Number of Trials | N_it | 25, 50, 100, 200
# of Components in Stimulus | N_U | 5, 10, 20, 30
Method of Optimization | N/A | Interior-Point Gradient Descent (MATLAB)
# of True Parameters | Size(θ) | 5
Stimulus Amplitude (μA) | A_max | 25, 50, 100, 200
Base Frequency | f_0 | 1/3, 1, 7/3, 10/3 KHz
Table 3. Estimated value vs. N i t ( N U = 5, A max = 100, and f 0 = 333.3 Hz).
N_it | θ̂_1 | θ̂_2 | θ̂_3 | θ̂_4 | θ̂_5
5 | 0.0781 | 0.0504 | 0.0627 | 0.3348 | 100.0135
50 | 0.0953 | 0.0816 | 0.0731 | 0.3317 | 99.9960
100 | 0.0870 | 0.0635 | 0.0695 | 0.3326 | 99.9933
200 | 0.0840 | 0.0597 | 0.0694 | 0.3325 | 100.0065
Table 4. Estimated value vs. N U ( N i t = 100, A max = 100, and f 0 = 333.3 Hz).
N_U | θ̂_1 | θ̂_2 | θ̂_3 | θ̂_4 | θ̂_5
5 | 0.0781 | 0.0504 | 0.0627 | 0.3348 | 100.0135
10 | 0.0811 | 0.0436 | 0.0595 | 0.3333 | 99.9927
20 | 0.0849 | 0.0618 | 0.0801 | 0.3326 | 99.9943
30 | 0.0770 | 0.0505 | 0.0636 | 0.3331 | 99.9920
Table 5. Estimated value vs. A max ( N i t = 100, N U = 5, and f 0 = 333.3 Hz).
A_max | θ̂_1 | θ̂_2 | θ̂_3 | θ̂_4 | θ̂_5
25 | 0.0817 | 0.0549 | 0.0638 | 0.3337 | 99.9980
50 | 0.0809 | 0.0586 | 0.0699 | 0.3330 | 100.0008
100 | 0.0781 | 0.0504 | 0.0627 | 0.3348 | 100.0135
200 | 0.0767 | 0.0505 | 0.0608 | 0.3322 | 99.9894
Table 6. Estimated value vs. f 0 ( N i t = 100, N U = 5, and A max = 100). Frequencies are in KHz.
f_0 | θ̂_1 | θ̂_2 | θ̂_3 | θ̂_4 | θ̂_5
1/3 | 0.0856 | 0.0637 | 0.0712 | 0.3315 | 99.9942
1 | 0.0796 | 0.0550 | 0.0641 | 0.3364 | 100.0124
5/3 | 0.0861 | 0.0566 | 0.0627 | 0.3327 | 100.0195
7/3 | 0.0870 | 0.0635 | 0.0695 | 0.3326 | 99.9933
Table 7. Standard deviations vs. N i t ( N U = 5, A max = 100, and f 0 = 333.3 Hz).
N_it | σ(θ_1) | σ(θ_2) | σ(θ_3) | σ(θ_4) | σ(θ_5)
5 | 0.0423 | 0.0487 | 0.0366 | 0.0057 | 0.0895
50 | 0.0350 | 0.0387 | 0.0321 | 0.0021 | 0.0770
100 | 0.0339 | 0.0446 | 0.0246 | 0.0017 | 0.0634
200 | 0.0235 | 0.0343 | 0.0276 | 0.0023 | 0.02995
Table 8. Standard deviations vs. N U ( N i t = 100, A max = 100, and f 0 = 333.3 Hz).
N_U | σ(θ_1) | σ(θ_2) | σ(θ_3) | σ(θ_4) | σ(θ_5)
5 | 0.0258 | 0.0345 | 0.0196 | 0.0024 | 0.0399
10 | 0.0287 | 0.0406 | 0.0356 | 0.0016 | 0.0444
20 | 0.0337 | 0.0457 | 0.0485 | 0.0015 | 0.0499
30 | 0.0165 | 0.0204 | 0.0149 | 0.0016 | 0.0189
Table 9. Standard deviations vs. A max ( N i t = 100, N U = 5, and f 0 = 333.3 Hz).
A_max | σ(θ_1) | σ(θ_2) | σ(θ_3) | σ(θ_4) | σ(θ_5)
25 | 0.0151 | 0.0216 | 0.0137 | 0.0022 | 0.0671
50 | 0.0181 | 0.0275 | 0.0232 | 0.0023 | 0.0640
100 | 0.0258 | 0.0345 | 0.0196 | 0.0024 | 0.0399
200 | 0.0311 | 0.0388 | 0.0289 | 0.0034 | 0.0264
Table 10. Standard deviations vs. f 0 ( N i t = 100, N U = 5, and A max = 100). The frequencies are in KHz.
f_0 | σ(θ_1) | σ(θ_2) | σ(θ_3) | σ(θ_4) | σ(θ_5)
1/3 | 0.0178 | 0.0312 | 0.0158 | 0.0043 | 0.0407
1 | 0.0129 | 0.0165 | 0.0067 | 0.0064 | 0.0329
5/3 | 0.0258 | 0.0364 | 0.0254 | 0.0034 | 0.0447
7/3 | 0.0339 | 0.0446 | 0.0246 | 0.0017 | 0.0634
Table 11. The variation of the estimated parameters a , b , c , d , F against increasing sample size N i t in the estimation using realistic stimulus/response data obtained from H1 neurons of the blowfly.
Case # | N_it | â | b̂ | ĉ | d̂ | F̂
1 | 25 | 255.7506 | 23.1953 | 344.3629 | 0.0000 | 185.6737
2 | 50 | 209.6757 | 21.3999 | 288.8835 | 0.0814 | 157.9571
3 | 100 | 233.4375 | 21.2668 | 266.9164 | 0.0492 | 154.7241
4 | 200 | 238.6861 | 21.1010 | 242.4651 | 0.0571 | 150.1093
5 | 300 | 244.5549 | 20.8891 | 239.7912 | 0.0777 | 145.6895
6 | 400 | 238.0263 | 20.1484 | 227.6343 | 0.1002 | 145.9515
7 | 500 | 220.6098 | 19.5167 | 212.1591 | 0.1091 | 142.7544
8 | 600 | 209.2398 | 18.9435 | 203.5418 | 0.1155 | 140.1229
9 | 700 | 208.3796 | 18.6725 | 200.2183 | 0.1180 | 138.9247
10 | 800 | 205.1722 | 18.6186 | 196.1978 | 0.1294 | 138.2120
11 | 900 | 206.8349 | 18.7251 | 195.6544 | 0.1247 | 137.1808
12 | 1000 | 204.2514 | 18.5038 | 192.3779 | 0.1250 | 135.9998
13 | 1100 | 201.7751 | 18.6313 | 191.4930 | 0.1164 | 136.7989
14 | 1200 | 199.1862 | 18.7457 | 190.4784 | 0.1237 | 136.2337
15 | 1300 | 196.8611 | 18.6375 | 190.3953 | 0.1201 | 135.1311
16 | 1400 | 198.3144 | 18.5702 | 190.7353 | 0.1230 | 135.3718
17 | 1500 | 196.1595 | 18.3109 | 189.0624 | 0.1306 | 134.3871
18 | 1600 | 192.2135 | 17.9623 | 185.5415 | 0.1447 | 133.7077
19 | 1700 | 190.5854 | 17.8516 | 183.7031 | 0.1508 | 133.3508
20 | 1800 | 190.7481 | 17.8419 | 184.6075 | 0.1495 | 133.5511
21 | 1900 | 192.3369 | 17.8900 | 185.2415 | 0.1473 | 133.6132
22 | 2000 | 194.9553 | 18.0284 | 185.7370 | 0.1495 | 133.5813
23 | 2100 | 198.5889 | 18.1381 | 187.4582 | 0.1452 | 134.3980
24 | 2200 | 200.3984 | 18.1539 | 188.0695 | 0.1366 | 134.8025
25 | 2300 | 201.9018 | 18.2673 | 188.5241 | 0.1356 | 134.8863
26 | 2400 | 201.6645 | 18.2587 | 187.8792 | 0.1357 | 135.2327
Table 12. The relative error levels against the sample size N i t . The errors were computed as the relative differences between the parameter values of the current case k and the previous case k − 1 in Table 11. With increasing sample size, the estimates tend to fluctuate less.
N_it | e_a | e_b | e_c | e_d | e_F
50 | 0.18016 | 0.07741 | 0.16111 | Inf | 0.14928
100 | 0.11333 | 0.00622 | 0.07604 | 0.39521 | 0.02047
200 | 0.02248 | 0.00779 | 0.09161 | 0.16078 | 0.02983
300 | 0.02459 | 0.01004 | 0.01103 | 0.35951 | 0.02944
400 | 0.02670 | 0.03546 | 0.05070 | 0.28976 | 0.00180
500 | 0.07317 | 0.03135 | 0.06798 | 0.08886 | 0.02191
600 | 0.05154 | 0.02937 | 0.04062 | 0.05941 | 0.01843
700 | 0.00411 | 0.01431 | 0.01633 | 0.02123 | 0.00855
800 | 0.01539 | 0.00288 | 0.02008 | 0.09665 | 0.00513
900 | 0.00810 | 0.00572 | 0.00277 | 0.03619 | 0.00746
1000 | 0.01249 | 0.01182 | 0.01675 | 0.00208 | 0.00861
1100 | 0.01212 | 0.00689 | 0.00460 | 0.06885 | 0.00588
1200 | 0.01283 | 0.00614 | 0.00530 | 0.06300 | 0.00413
1300 | 0.01167 | 0.00577 | 0.00044 | 0.02875 | 0.00809
1400 | 0.00738 | 0.00361 | 0.00179 | 0.02359 | 0.00178
1500 | 0.01087 | 0.01397 | 0.00877 | 0.06198 | 0.00727
1600 | 0.02012 | 0.01904 | 0.01862 | 0.10819 | 0.00505
1700 | 0.00847 | 0.00616 | 0.00991 | 0.04207 | 0.00267
1800 | 0.00085 | 0.00054 | 0.00492 | 0.00871 | 0.00150
1900 | 0.00833 | 0.00270 | 0.00343 | 0.01443 | 0.00047
2000 | 0.01361 | 0.00773 | 0.00267 | 0.01429 | 0.00024
2100 | 0.01864 | 0.00608 | 0.00927 | 0.02861 | 0.00611
2200 | 0.00911 | 0.00087 | 0.00326 | 0.05905 | 0.00301
2300 | 0.00750 | 0.00625 | 0.00242 | 0.00735 | 0.00062
2400 | 0.00118 | 0.00048 | 0.00342 | 0.00091 | 0.00257
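The error definition in the caption of Table 12 can be reproduced directly from Table 11. A small Python check (the helper function is ours, for illustration) using the first two cases ( N i t = 25 and 50), which yields the first row of Table 12:

```python
def relative_errors(prev, curr):
    """Relative change of each estimate between consecutive cases of
    Table 11: e = |theta_k - theta_{k-1}| / |theta_{k-1}|, reported as
    Inf when the previous estimate is exactly zero."""
    return [abs(c - p) / abs(p) if p != 0.0 else float("inf")
            for p, c in zip(prev, curr)]

# Cases 1 and 2 of Table 11 (N_it = 25 and 50): estimates of a, b, c, d, F
case1 = [255.7506, 23.1953, 344.3629, 0.0000, 185.6737]
case2 = [209.6757, 21.3999, 288.8835, 0.0814, 157.9571]
errs = relative_errors(case1, case2)
```

The d estimate is exactly zero at N i t = 25, which is why e_d starts at Inf in the first row of Table 12.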
