Article

Regularized Spectral Spike Response Model: A Neuron Model for Robust Parameter Reduction

1 Nanjing Institute of Intelligent Technology, Nanjing 210000, China
2 Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100000, China
3 University of Chinese Academy of Sciences, Beijing 100000, China
* Author to whom correspondence should be addressed.
Brain Sci. 2022, 12(8), 1008; https://doi.org/10.3390/brainsci12081008
Submission received: 30 May 2022 / Revised: 12 July 2022 / Accepted: 25 July 2022 / Published: 29 July 2022
(This article belongs to the Collection Collection on Theoretical and Computational Neuroscience)

Abstract: The modeling procedure of current biological neuron models is hindered by either hyperparameter optimization or overparameterization, which limits their application to a variety of biologically realistic tasks. This article proposes a novel neuron model called the Regularized Spectral Spike Response Model (RSSRM) to address these issues. The selection of hyperparameters is avoided by the model structure and fitting strategy, while the number of parameters is constrained by regularization techniques. Twenty firing simulation experiments indicate the superiority of RSSRM. In particular, after pruning more than 99% of its parameters, RSSRM with 100 parameters achieves an RMSE of 5.632 in membrane potential prediction, a VRD of 47.219, and an F1-score of 0.95 in spike train forecasting with correct timing (±1.4 ms), which are 25%, 99%, 55%, and 24% better than the average of other neuron models with the same number of parameters in RMSE, VRD, F1-score, and correct timing, respectively. Moreover, RSSRM with 100 parameters achieves a memory use of 10 KB and a runtime of 1 ms during inference, which is more efficient than the Izhikevich model.

1. Introduction

Biological neurons, the fundamental building blocks of the brain, have been the subject of extensive research since the 1900s, and numerous models have been developed to describe them from various perspectives. The Hodgkin–Huxley Model [1], the FitzHugh–Nagumo Model [2], and the Morris–Lecar Model [3] attempt to represent the biophysical properties of real neurons. The Leaky Integrate-and-Fire (LIF) Model [4], the Quadratic Integrate-and-Fire Model [5], the Exponential Integrate-and-Fire Model [6], and the Izhikevich Model [7] are designed from a phenomenological viewpoint. In addition to these two categories, which consist of systems of differential equations, the Point Process Model [8], the Linear-Nonlinear-Poisson (LNP) Model [9], and the Generalized Linear Model (GLM) [10] describe the input-output relation of biological neurons through a combination of a linear function representing subthreshold membrane potentials and a nonlinear function simulating spike firing rates. Although various neuron models provide valuable insights into biological neurons, it remains difficult to replicate the full functionality of real neurons: expressing the dynamics of a single biological neuron requires a five-layer, fully connected artificial neural network built with deep learning technologies [11].
Either hyperparameter selection or overparameterization is cumbersome in current neuron models. First, it is challenging for neuron models to search the hyperparameter space and select an appropriate value for each hyperparameter. Since there is no effective technique for estimating hyperparameters, strategies for hyperparameter optimization depend on experts' domain knowledge and brute-force search algorithms, which are extremely laborious and time-consuming. Moreover, hyperparameter selection is intrinsic to neuron models built on systems of differential equations. The Izhikevich model [7], for instance, is a two-dimensional dynamical system composed of ordinary differential equations with hyperparameters: the mechanism of the neuron is described by the ordinary differential equations, while each set of hyperparameter values corresponds to a particular firing behavior. Solving such a system of differential equations is also computationally intensive. The iterative nature of numerical integration techniques such as Euler's method and the Runge–Kutta method necessitates a substantial number of iterations to acquire solutions. Noise complicates the situation further, and stochastic differential equation solvers increase the computational burden even more. Second, overparameterization signifies a tremendous number of parameters. Since each parameter carries information regarding the spike history, neuron models such as the Point Process Model invariably involve a large number of parameters [8]. Although increasing the number of parameters in a model tends to improve its accuracy, the resulting increase in computing load can impede the model's computational efficiency.
In this article, a neuron model called the Regularized Spectral Spike Response Model (RSSRM) is proposed, which combines the Spike Response Model (SRM) [12] with the Fourier basis function. By combining their benefits, the model can overcome the aforementioned obstacles. Specifically, a data-driven algorithm for parameter estimation is implemented without hyperparameter optimization, and the enormous number of parameters is reduced by means of suitable regularization techniques.
The remaining sections of this paper are structured as follows. The Spike Response Model, the Fourier basis function, and regularization methods are reviewed in Section 2, followed by an introduction to the Regularized Spectral Spike Response Model. In Section 3, data preparation, extensive experiments, evaluation metrics, and a comprehensive model comparison are presented, while in Section 4, a discussion is provided.

2. Materials and Methods

This section provides an overview of the Spike Response Model, the Fourier basis function, and regularization methods. Then, the Regularized Spectral Spike Response Model, a novel neuron model, is introduced.

2.1. Spike Response Model

The Spike Response Model (SRM) is a neuron model that describes the input-output relation of biological neurons. It takes injected currents and spike history as inputs and produces membrane potentials. In contrast to other neuron models that are developed using differential equations, SRM is composed of linear filters and has the form
$\hat{u}(t) = u_{\mathrm{rest}} + \int_{0}^{\infty} \kappa(s)\, I(t-s)\, ds + \sum_{f} \eta(t - t^{f}),$
in which $\hat{u}$ is the estimated membrane potential, $u_{\mathrm{rest}}$ is the resting potential, $I$ is the injected current, $\kappa$ is the filter for $I$, and $\sum_{f} \eta(t - t^{f})$ is the filter over the spike history $\{t^{f}\}$. Specifically, $\int_{0}^{\infty} \kappa(s) I(t-s)\, ds$ is a low-pass filter that processes the information of the injected currents. For example, when the input currents are noisy, such a low-pass filter can filter out extraneous information and maintain stable, clear signals. It has been proven that linear filters are essential components for portraying neuronal behavior in the subthreshold regime [13]. $\eta(t - t^{f})$ is a function of the spike history, which accounts for neuronal characteristics such as refractoriness after spiking.
Although linear filters form the foundation of SRM, there are strong connections between it and other neuron models that comprise a system of differential equations. First, SRM is a generalized LIF model when its analytical solution has been obtained [12]. In addition, there is a relationship between SRM and the Hodgkin–Huxley model under certain conditions [14]. Furthermore, a fast-spiking neuron model [15] can be well represented by SRM [16].
In addition to equivalent model structures between SRM and neuron models driven by a system of differential equations, the performance of SRM appears promising. A number of experiments have demonstrated that SRM is competitive in comparison to other neuron models [17].
It is crucial to select filters κ and η since they determine the performance of SRM. There are three primary methods. First, hand-crafted filters are proposed based on specific neural data. For example, the background firing rate data of cat spinal motoneurons [18] can be reproduced by SRM using filters of the following structure [12]
$\kappa(t - \hat{t}, s) = \frac{R}{\tau_{m}} \left[ 1 - e^{-\frac{t - \hat{t}}{\tau_{\mathrm{rec}}}} \right] e^{-\frac{s}{\tau_{m}}}\, \Theta(s)\, \Theta(t - \hat{t} - s),$
$\eta(t - \hat{t}) = \eta_{0}\, e^{-\frac{t - \hat{t}}{\tau_{\mathrm{ref}}}}\, \Theta(t - \hat{t}),$
in which $\hat{t}$ indicates the last firing time. Filter $\kappa(t - \hat{t}, s)$ is a modified version of filter $\kappa(s)$, which takes $t - \hat{t}$ as an extra input. $\Theta$ represents the Heaviside step function with $\Theta(s) = 1$ for $s > 0$ and $\Theta(s) = 0$ otherwise. The variables $R$, $\tau_{m}$, $\tau_{\mathrm{rec}}$, $\eta_{0}$, and $\tau_{\mathrm{ref}}$ are hyperparameters. Second, filter $\eta$ can be derived from data using domain knowledge, and then filter $\kappa$ can be obtained by numerical implementation. The spike-triggered average (STA) is specifically employed to extract $\eta$. Experts are able to design an appropriate function to describe $\eta$ based on its shape. The filter $\kappa$ can then be determined by solving the Wiener–Hopf equation [19]. Third, a linear approximation can be used to generate filters $\kappa$ and $\eta$, where numerical optimization is performed to avoid manually selecting filters [20]. In this scenario, SRM has the form
$\hat{u}(t) = u_{\mathrm{rest}} + \sum_{s=0}^{S} \kappa_{s}\, I_{t-s} + \sum_{q=1}^{Q} \eta_{q}\, S(t-q),$
in which $S(t) = \sum_{f} \delta(t - t^{f})$ represents the spike history, $Q$ is the number of time lags of $S(t)$, and $S$ is the number of time lags of the injected currents $I$. A time lag conveys information pertaining to a prior or current event. For instance, the membrane potentials at times $t-2$ and $t-1$ represent two time lags for the membrane potential at time $t$, with lag $= 2$ and $1$, respectively. In Equation (4), to predict the membrane potential $u$ at time $t$, we use the information of the injected currents from time $t$ to time $t-S$ and the spike history from time $t-1$ to time $t-Q$, which corresponds to $S$ time lags of the injected currents with lag $= 0, 1, 2, \ldots, S$ and $Q$ time lags of the spike history with lag $= 1, 2, \ldots, Q$.
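As an illustration of the linear approximation in Equation (4), the following minimal sketch builds the lagged design matrix from discretized injected currents and a binary spike-history vector and estimates $u_{\mathrm{rest}}$, $\kappa$, and $\eta$ by ordinary least squares. The array names, the default lag counts, and the plain least-squares fit are illustrative assumptions; the experiments in Section 3.2 use a shared numerical optimizer with a small L2 penalty instead.

```python
# A minimal sketch of the linearly approximated SRM (Equation (4)), assuming the
# injected current `I` and a binary spike-history vector `spikes` are NumPy arrays.
import numpy as np

def build_srm_design(I, spikes, S, Q):
    """Stack S+1 lags of the current and Q lags of the spike history per time step."""
    T = len(I)
    t0 = max(S, Q)                               # first index with a full lag window
    rows = []
    for t in range(t0, T):
        current_lags = I[t - S:t + 1][::-1]      # I(t), I(t-1), ..., I(t-S)
        history_lags = spikes[t - Q:t][::-1]     # S(t-1), ..., S(t-Q)
        rows.append(np.concatenate(([1.0], current_lags, history_lags)))
    return np.array(rows), t0

def fit_srm(I, spikes, u, S=500, Q=500):
    """Estimate u_rest and the filters kappa, eta by ordinary least squares."""
    X, t0 = build_srm_design(I, spikes, S, Q)
    w, *_ = np.linalg.lstsq(X, u[t0:], rcond=None)
    u_rest, kappa, eta = w[0], w[1:S + 2], w[S + 2:]
    return u_rest, kappa, eta
```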
It is difficult for SRM to solve a variety of biologically realistic tasks using the first or the second method. Since selecting filters involves hyperparameter optimization, it is arduous for specialists to develop filters that correspond to each task. The third method of applying a linear approximation for filter selection is preferred, and since it is a data-driven method, every filter can be obtained automatically from the data. This approach will be performed to fit SRM in Section 3. However, overparameterization is an issue caused by this method. SRM must memorize a large amount of information about injected currents and spike history in order to achieve a high level of performance, which dramatically increases the number of parameters and restricts its applications to biologically realistic tasks due to intensive computation and large memory occupancy.

2.2. Fourier Basis Function

The Fourier series is a powerful tool in various fields. In mathematics, it is an efficient alternative for solving a system of partial differential equations [21]. In deep learning, it approximates non-differentiable functions to expedite the training process [22]. In data analysis, it improves the performance of rainfall prediction models [23]. In neuroscience, it is used to investigate the dynamics of neural spike trains [24].
The Fourier series has the form
$f(t) = a_{0} + \sum_{i=1}^{\infty} \left[ a_{i} \cos(2\pi t i / T) + b_{i} \sin(2\pi t i / T) \right], \quad t = 1, \ldots, n,$
in which $T$ is the period, and $\{a_{i}\}_{i=0}^{\infty}$ and $\{b_{i}\}_{i=1}^{\infty}$ are the coefficients of the cosine and sine components, respectively.
In a matrix form, it is represented as
$f(\mathbf{t}) = F\mathbf{c} =
\begin{bmatrix}
1 & \cos\left(2\pi\frac{1\cdot 1}{n}\right) & \sin\left(2\pi\frac{1\cdot 1}{n}\right) & \cos\left(2\pi\frac{2\cdot 1}{n}\right) & \sin\left(2\pi\frac{2\cdot 1}{n}\right) & \cdots \\
1 & \cos\left(2\pi\frac{1\cdot 2}{n}\right) & \sin\left(2\pi\frac{1\cdot 2}{n}\right) & \cos\left(2\pi\frac{2\cdot 2}{n}\right) & \sin\left(2\pi\frac{2\cdot 2}{n}\right) & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \\
1 & \cos\left(2\pi\frac{1\cdot n}{n}\right) & \sin\left(2\pi\frac{1\cdot n}{n}\right) & \cos\left(2\pi\frac{2\cdot n}{n}\right) & \sin\left(2\pi\frac{2\cdot n}{n}\right) & \cdots
\end{bmatrix}
\begin{bmatrix}
a_{0} \\ a_{1} \\ b_{1} \\ a_{2} \\ b_{2} \\ \vdots
\end{bmatrix},$
in which $\mathbf{t} = [1, \ldots, n]^{T}$, $F$ is a matrix called the Fourier basis [25], and $\mathbf{c}$ is a vector of coefficients.
Considering the stability of estimating coefficients, the number of columns in F should not be larger than the number of rows [26]. Specifically, if n is odd, then
$f(t) = a_{0} + \sum_{i=1}^{(n-1)/2} \left[ a_{i} \cos\left(\frac{2\pi i t}{T}\right) + b_{i} \sin\left(\frac{2\pi i t}{T}\right) \right],$
and
$F =
\begin{bmatrix}
1 & \cos\left(2\pi\frac{1\cdot 1}{n}\right) & \sin\left(2\pi\frac{1\cdot 1}{n}\right) & \cos\left(2\pi\frac{2\cdot 1}{n}\right) & \sin\left(2\pi\frac{2\cdot 1}{n}\right) & \cdots & \cos\left(2\pi\frac{\frac{n-1}{2}\cdot 1}{n}\right) & \sin\left(2\pi\frac{\frac{n-1}{2}\cdot 1}{n}\right) \\
1 & \cos\left(2\pi\frac{1\cdot 2}{n}\right) & \sin\left(2\pi\frac{1\cdot 2}{n}\right) & \cos\left(2\pi\frac{2\cdot 2}{n}\right) & \sin\left(2\pi\frac{2\cdot 2}{n}\right) & \cdots & \cos\left(2\pi\frac{\frac{n-1}{2}\cdot 2}{n}\right) & \sin\left(2\pi\frac{\frac{n-1}{2}\cdot 2}{n}\right) \\
\vdots & \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots \\
1 & \cos\left(2\pi\frac{1\cdot n}{n}\right) & \sin\left(2\pi\frac{1\cdot n}{n}\right) & \cos\left(2\pi\frac{2\cdot n}{n}\right) & \sin\left(2\pi\frac{2\cdot n}{n}\right) & \cdots & \cos\left(2\pi\frac{\frac{n-1}{2}\cdot n}{n}\right) & \sin\left(2\pi\frac{\frac{n-1}{2}\cdot n}{n}\right)
\end{bmatrix}_{n \times n}.$
If n is even, then
$f(t) = a_{0} + \sum_{i=1}^{n/2 - 1} \left[ a_{i} \cos\left(\frac{2\pi i t}{T}\right) + b_{i} \sin\left(\frac{2\pi i t}{T}\right) \right] + a_{\frac{n}{2}} (-1)^{t},$
and
$F =
\begin{bmatrix}
1 & \cos\left(2\pi\frac{1\cdot 1}{n}\right) & \sin\left(2\pi\frac{1\cdot 1}{n}\right) & \cos\left(2\pi\frac{2\cdot 1}{n}\right) & \sin\left(2\pi\frac{2\cdot 1}{n}\right) & \cdots & \cos\left(2\pi\frac{(\frac{n}{2}-1)\cdot 1}{n}\right) & \sin\left(2\pi\frac{(\frac{n}{2}-1)\cdot 1}{n}\right) & -1 \\
1 & \cos\left(2\pi\frac{1\cdot 2}{n}\right) & \sin\left(2\pi\frac{1\cdot 2}{n}\right) & \cos\left(2\pi\frac{2\cdot 2}{n}\right) & \sin\left(2\pi\frac{2\cdot 2}{n}\right) & \cdots & \cos\left(2\pi\frac{(\frac{n}{2}-1)\cdot 2}{n}\right) & \sin\left(2\pi\frac{(\frac{n}{2}-1)\cdot 2}{n}\right) & (-1)^{2} \\
\vdots & \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\
1 & \cos\left(2\pi\frac{1\cdot n}{n}\right) & \sin\left(2\pi\frac{1\cdot n}{n}\right) & \cos\left(2\pi\frac{2\cdot n}{n}\right) & \sin\left(2\pi\frac{2\cdot n}{n}\right) & \cdots & \cos\left(2\pi\frac{(\frac{n}{2}-1)\cdot n}{n}\right) & \sin\left(2\pi\frac{(\frac{n}{2}-1)\cdot n}{n}\right) & (-1)^{n}
\end{bmatrix}_{n \times n}.$
One of the intriguing properties of the Fourier basis is its orthogonality. Orthogonality improves the estimation of the coefficients $\{a_{i}\}$ and $\{b_{i}\}$ because it indicates that the columns of $F$ are independent of each other. Additionally, when performing variable selection, it is simple to filter out irrelevant columns and retain only the essential ones.
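The following minimal sketch constructs the odd-$n$ Fourier basis matrix defined above and numerically checks the orthogonality of its columns; the function name and the choice $n = 11$ are illustrative.

```python
# Builds the n-by-n Fourier basis for t = 1, ..., n (odd-n case) and verifies that
# the Gram matrix F^T F is diagonal, i.e., the columns are mutually orthogonal.
import numpy as np

def fourier_basis(n):
    assert n % 2 == 1, "this sketch covers the odd-n layout only"
    t = np.arange(1, n + 1)
    cols = [np.ones(n)]
    for i in range(1, (n - 1) // 2 + 1):
        cols.append(np.cos(2 * np.pi * i * t / n))
        cols.append(np.sin(2 * np.pi * i * t / n))
    return np.column_stack(cols)

F = fourier_basis(11)
gram = F.T @ F                                        # off-diagonal entries vanish
print(np.allclose(gram, np.diag(np.diag(gram))))      # True: columns are orthogonal
```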
The basis function approach, a nonparametric regression technique, offers a flexible way to fit data [27]. It has the form [28]
$y_{t} = \beta_{0} + \beta_{1} b_{1}(x_{t}) + \beta_{2} b_{2}(x_{t}) + \beta_{3} b_{3}(x_{t}) + \cdots + \epsilon_{t},$
in which $y_{t}$ is the dependent variable, $x_{t}$ is the independent variable, and $b_{1}(\cdot), b_{2}(\cdot), b_{3}(\cdot), \ldots$ represent basis functions, which are transformations of $x_{t}$.
The Fourier basis function applies the Fourier basis in the basis function approach and is described by
$y_{t} = \beta_{0} + \beta_{11} b_{11}(x_{t}) + \beta_{12} b_{12}(x_{t}) + \beta_{21} b_{21}(x_{t}) + \beta_{22} b_{22}(x_{t}) + \beta_{31} b_{31}(x_{t}) + \beta_{32} b_{32}(x_{t}) + \cdots + \epsilon_{t},$
in which the basis functions are $b_{i1}(x_{t}) = \cos\left(\frac{2\pi i t}{n}\right) x_{t}$ and $b_{i2}(x_{t}) = \sin\left(\frac{2\pi i t}{n}\right) x_{t}$.
The orthogonality of the Fourier basis is inherited by the Fourier basis function, so it has advantages in both coefficient estimation and variable selection.

2.3. Regularization

Regularization is a technique for shrinking the magnitude of model parameters to restrict model complexity. A complex model may capture noisy signals within a given dataset, resulting in large variance and poor performance on other datasets. By penalizing the scale of the parameters, regularization methods reduce the variance and yield a smoother fitted model.
Regularization is incorporated into the loss function during the model fitting procedure, and constrained parameters are acquired by numerical optimization. The loss function has the form
$E = \mathrm{error} + \mathrm{regularization},$
where the error term measures the goodness-of-fit of the model, while the regularization term shrinks the magnitude of the parameters. Some classic regularization methods are Ridge [29], LASSO [30], the Elastic Net [31], and SCAD [32].
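As a minimal sketch of the loss above, the snippet below adds either an L1 (LASSO) or L2 (Ridge) penalty to a sum-of-squares error term; the variable names and the form of the error term are illustrative assumptions.

```python
# Regularized loss = goodness-of-fit error + penalty on the parameter magnitudes.
import numpy as np

def regularized_loss(w, X, y, lam, penalty="l1"):
    """Sum-of-squares error plus a penalty that shrinks the parameter magnitudes."""
    error = 0.5 * np.sum((y - X @ w) ** 2)
    if penalty == "l1":                 # LASSO: drives small weights exactly to zero
        reg = lam * np.sum(np.abs(w))
    else:                               # Ridge: shrinks weights smoothly toward zero
        reg = lam * np.sum(w ** 2)
    return error + reg
```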

2.4. Regularized Spectral Spike Response Model

The Regularized Spectral Spike Response Model (RSSRM) is a neuron model that combines the Spectral Spike Response Model (SSRM) and regularization methods, where SSRM is constructed by a combination of SRM and the Fourier basis function. Specifically, SSRM has the form
$\hat{u}(t) = u_{\mathrm{rest}} + \sum_{j=1}^{J} w_{j}\, b_{t,j}[I(t)] + \sum_{k=1}^{K} w_{k}\, b_{t-1,k}[S(t-1)],$
in which $\hat{u}(t)$ is the estimated membrane potential, $u_{\mathrm{rest}}$ is the resting potential, $I$ is the injected current, and $S(t) = \sum_{f} \delta(t - t^{f})$ is the spike history. $b_{t,i}$ is the Fourier basis function that projects the input currents and the spike history from the time domain to the spectral domain. Specifically, $b_{t,i}[x(t)] = \cos(i \omega t)\, x(t) + \sin(i \omega t)\, x(t)$, where $\omega = \frac{2\pi}{T}$ and $T$ is the length of time.
Similar to the third method for selecting filters κ and η in SRM mentioned in Section 2.1, a linear approximation is performed on SSRM so that filters κ and η , referred to as parameters w here, are obtained by numerical optimization methods.
The loss function is
$E = \frac{1}{2} \sum_{t} \left[ u(t) - \hat{u}(t) \right]^{2} + \lambda \left( \sum_{j=1}^{J} |w_{j}| + \sum_{k=1}^{K} |w_{k}| \right),$
where the first term is the Mean Squared Error and the second term is L1 regularization [30], which shrinks the magnitude of parameters to zero so that the number of parameters is reduced. The hyperparameter $\lambda$ controls the degree of parameter shrinkage.
After mapping to the spectral domain, SSRM is able to exploit the potential of data, requiring significantly fewer time lags than SRM. Specifically, Section 3.2 shows that with zero time lag for the injected currents and one time lag for the spike history, SSRM outperforms SRM in membrane potential prediction and spike train forecasting under the scenario of parameter reduction.
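The following sketch illustrates one way to assemble the SSRM features defined above, with zero time lag for the injected currents and one time lag for the spike history, and to fit them with an off-the-shelf L1-regularized (LASSO) solver. The numbers of basis terms J and K, the penalty strength, and the use of scikit-learn are illustrative assumptions rather than the paper's settings.

```python
# SSRM feature construction followed by an L1-regularized fit; a sketch only.
import numpy as np
from sklearn.linear_model import Lasso

def ssrm_features(I, spikes, J, K):
    """Fourier basis features of the current at lag 0 and the spike history at lag 1."""
    T = len(I)
    t = np.arange(1, T)                     # predict u(t) for t = 1, ..., T-1
    omega = 2 * np.pi / T
    cur, hist = I[1:], spikes[:-1]          # I(t) and S(t-1)
    cols = []
    for j in range(1, J + 1):
        cols.append((np.cos(j * omega * t) + np.sin(j * omega * t)) * cur)
    for k in range(1, K + 1):
        cols.append((np.cos(k * omega * (t - 1)) + np.sin(k * omega * (t - 1))) * hist)
    return np.column_stack(cols)

def fit_rssrm(I, spikes, u, J=200, K=200, alpha=0.05):
    X = ssrm_features(I, spikes, J, K)
    model = Lasso(alpha=alpha, fit_intercept=True, max_iter=50_000).fit(X, u[1:])
    return model                            # model.intercept_ plays the role of u_rest
```

Coefficients driven exactly to zero by the L1 penalty correspond to pruned parameters, which is how the parameter reduction in RSSRM is realized.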

3. Results

In this section, the data, including twenty classic firing behaviors of biological neurons, are generated, followed by exhaustive experiments of model fitting for membrane potentials and spike trains, respectively. Then, evaluation metrics are introduced, and comprehensive model comparisons are undertaken to evaluate the performance of each model.

3.1. Data

The data of twenty spiking behaviors of real neurons are generated from the Izhikevich model by altering the hyperparameter values [7,33]. It is a two-dimensional dynamic system where the first equation employs the quadratic integrate-and-fire model [20], and has the form
$\tau_{m} \frac{dv}{dt} = (v - v_{\mathrm{rest}})(v - \theta) - R u + R I,$
$\tau_{u} \frac{du}{dt} = a (v - v_{\mathrm{rest}}) - u + b \tau_{u} \sum_{f} \delta(t - t^{f}),$
with the after-spike resetting
if $v = \theta_{\mathrm{reset}}$, then $t^{f} \leftarrow t$ and $u \leftarrow u_{r}$,
in which $v$ is the membrane potential, $u$ is the auxiliary variable, $\tau_{m}$ and $\tau_{u}$ are the membrane time constants, $v_{\mathrm{rest}}$ is the resting potential, $\theta$ is the critical voltage for spike initiation by a short current pulse, $R$ is the membrane resistance, $\sum_{f} \delta(t - t^{f})$ is the spike history, $\theta_{\mathrm{reset}}$ is the numerical threshold, $u_{r}$ is the resting potential for the auxiliary variable, and $a$ and $b$ are hyperparameters.
By fitting the spike initiation dynamics of a cortical neuron, dependent parameters are estimated. The simplified form is described below [7]
$\frac{dv}{dt} = 0.04 v^{2} + 5 v + 140 - u + I,$
$\frac{du}{dt} = a (b v - u),$
with the after-spike resetting
if $v \geq 30\ \mathrm{mV}$, then $v \leftarrow c$ and $u \leftarrow u + d$,
in which $v$ is the membrane potential, $u$ is the auxiliary variable, and $a$, $b$, $c$, and $d$ are hyperparameters.
To simulate the twenty spiking patterns of biological neurons, the injected currents $I$, the initial values $v_{0}$ and $u_{0}$, and the value of each hyperparameter are provided. Then, Euler's method, a first-order numerical integration scheme, is applied to solve the two-dimensional system of ordinary differential equations of the Izhikevich model [34]. Figure 1 shows the twenty spiking modes generated by the Izhikevich model, which are (a) tonic spiking, (b) phasic spiking, (c) tonic bursting, (d) phasic bursting, (e) mixed mode, (f) spike frequency adaptation, (g) class 1 excitable, (h) class 2 excitable, (i) spike latency, (j) subthreshold oscillation, (k) resonator, (l) integrator, (m) rebound spike, (n) rebound burst, (o) threshold variability, (p) bistability, (q) depolarizing after-potential, (r) accommodation, (s) inhibition-induced spiking, and (t) inhibition-induced bursting.
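For illustration, the following sketch generates one firing pattern from the simplified Izhikevich model above using Euler's method; the hyperparameter values, time step, and step-current protocol are typical tonic-spiking settings assumed here, not necessarily those used to produce Figure 1.

```python
# Euler integration of the simplified Izhikevich model with after-spike resetting;
# the parameter values below are assumed tonic-spiking defaults for illustration.
import numpy as np

def izhikevich(a=0.02, b=0.2, c=-65.0, d=6.0, I_amp=14.0,
               T=1000, dt=0.25, v0=-70.0):
    steps = int(T / dt)
    v, u = v0, b * v0
    vs, spikes = np.zeros(steps), np.zeros(steps)
    for n in range(steps):
        I = I_amp if n * dt > 10 else 0.0                  # step current after 10 ms
        v += dt * (0.04 * v ** 2 + 5 * v + 140 - u + I)    # Euler update of v
        u += dt * (a * (b * v - u))                        # Euler update of u
        if v >= 30.0:                                      # after-spike resetting
            vs[n], spikes[n] = 30.0, 1.0
            v, u = c, u + d
        else:
            vs[n] = v
    return vs, spikes
```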
Although the Izhikevich model strikes a balance between biological plausibility and computational efficiency [7], its flexibility in tackling biologically realistic tasks is limited. Given that the simplicity of the Izhikevich model is derived from fitting the spike initiation dynamics of a cortical neuron [7], it is unclear whether the advantages of such a model generalize. In addition, the Izhikevich model produces a limited number of firing patterns: each of the twenty firing behaviors corresponds to a particular set of hyperparameter values. In reality, biological neurons are diverse and sensitive to varying amplitudes of injected currents [35,36]. Consequently, the resulting spiking patterns are complicated, and it is difficult for the Izhikevich model to obtain the relevant hyperparameter values. The inability of the Izhikevich model to tackle various biologically realistic tasks is due to its restricted modeling of a single biological neuron and its hyperparameter-associated representation of firing behaviors. It is therefore necessary to develop a novel neuron model that is flexible enough to generate spiking behaviors supporting a variety of biologically realistic tasks while avoiding the use of hyperparameters.

3.2. Experiments

Six neuron models, the Spike Response Model (SRM), the Regularized Spike Response Model (RSRM), the Spectral Spike Response Model (SSRM), the Regularized Spectral Spike Response Model (RSSRM), the Principal Component Regression (PCR) [28], and the Raised Cosine Basis Function Regression (RCR) [37,38,39], are applied to fit the data of the twenty firing behaviors. The dependent variable, the membrane potential, is shared, whereas the independent variables differ across models. Specifically, the independent variables of SRM are the injected currents and the spike history with different numbers of time lags. SSRM takes as inputs the Fourier basis function applied to the injected currents and to the spike history with a single time lag. RSRM is SRM with L1 regularization for parameter reduction, while RSSRM is SSRM with L1 regularization. The features of PCR are modified inputs of SRM, where principal component analysis, a technique for dimension reduction, is performed to transform the independent variables of SRM into principal components; the first few principal components are then selected as the features of PCR to decrease the dimension. RCR takes as predictors the raised cosine basis function applied to the injected currents and to the spike history with a single time lag, where the raised cosine basis function captures temporal signals close to the time of a spike with relatively few hyperparameters [39]. The details of each neuron model are shown in Table 1.
There are two aspects of the model fitting that should be emphasized. First, parameter estimation is undertaken using the same numerical optimization method. Some models contain more independent variables than observations, yielding high-dimensional input data and the associated curse of dimensionality. To compare all neuron models fairly, an L2 penalty with the hyperparameter $\lambda = 0.01$ is included in the optimization process to reduce the variability of the parameter estimates. Second, since the goal is to assess the effect of parameter reduction on the neuron models, the entire dataset is used for model fitting instead of splitting the data of spiking patterns into training and test sets, and the generalizability of the models is not considered here.
In addition to prominent membrane potential prediction, excellent spike train forecasting is expected from neuron models. However, the relationship between estimated membrane potentials and predicted spike trains is not obvious: poor membrane potential estimates may stem from the subthreshold regime while the estimated spike trains remain outstanding. Three algorithms are proposed to convert estimated membrane potentials into estimated spike trains with a reasonable tolerance for mismatches between the actual and estimated spike trains.
Before discussing each algorithm, the following notations are introduced. The lowercase letters represent scalars, while the bold ones indicate vectors. In addition, calligraphic uppercase letters signify sets.
Algorithm 1 aims to convert membrane potentials to spike trains. The quantile q in line 1 is defined as a threshold value in predicted membrane potentials, where the estimated firing ratio r × β is larger than the true firing proportion r to take fluctuations in predicted membrane potentials into account. For example, given the true firing proportion r = 10 % and an inflation coefficient β = 1.5 , the estimated firing ratio is r × β = 15 % . The 85th quantile q indicates that 15% of predicted membrane potentials are greater than q. At the time t, if the value of the predicted membrane potential is over the threshold point q and is also a local maximum, then a predicted spike is established, which is formalized by lines 2 to 6.
Algorithm 1: Convert membrane potentials to spike trains.
Brainsci 12 01008 i001
The hyperparameter $\beta$ ranges from 1 to 1.5 for SRM and SSRM, and from 1 to 270 for RSRM, RSSRM, PCR, and RCR because of their significant oscillations after parameter reduction. It is introduced because the predicted membrane potentials may fluctuate substantially, making it difficult to distinguish such fluctuations from membrane potentials that correspond to spikes. By tuning $\beta$, candidate spiking potentials are captured, even though some strongly fluctuating membrane potentials are included as well; evaluation metrics such as the F1-score account for this phenomenon in Section 3.3. Overall, the hyperparameter $\beta$ is applied to capture all candidate spiking membrane potentials, and the cost of also including fluctuating ones is reflected in the evaluation metrics.
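A minimal sketch of Algorithm 1 as described above is given below: the threshold is the quantile determined by the inflated firing ratio $r \times \beta$, and a predicted spike is declared at local maxima above that threshold. Variable names are illustrative.

```python
# Convert predicted membrane potentials into a binary predicted spike train using a
# quantile threshold (inflated by beta) combined with a local-maximum test.
import numpy as np

def potentials_to_spikes(u_hat, true_spikes, beta=1.5):
    r = true_spikes.mean()                                 # true firing proportion
    q = np.quantile(u_hat, 1.0 - min(r * beta, 1.0))       # e.g. 85th quantile for r*beta = 15%
    pred = np.zeros_like(u_hat)
    for t in range(1, len(u_hat) - 1):
        is_local_max = u_hat[t] >= u_hat[t - 1] and u_hat[t] >= u_hat[t + 1]
        if u_hat[t] > q and is_local_max:
            pred[t] = 1.0
    return pred
```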
Algorithm 2 searches for the maximum tolerance of predicted spike trains, where the maximum tolerance is a reasonable value such that predicted spikes within it are treated as true spikes. The concept is introduced since it is common for the true spike time and the predicted spike time to differ by a few milliseconds [19]. The maximum tolerance is an auxiliary variable that improves the assessment of neuron model performance. Since the performance of each neuron model in the twenty firing behaviors differs, the associated maximum tolerances can vary considerably; for a given evaluation metric, the better the performance, the smaller the maximum tolerance. It is straightforward to transform the output vector of spike trains from Algorithm 1 into the input set of spike times for Algorithm 2, owing to the one-to-one relationship between spike trains consisting of 0 s and 1 s and spike times denoting the precise times of firings. $\tau_{\mathrm{max}}$ in line 1 is the upper bound of the maximum tolerance. For example, if $\hat{\mathcal{T}} = \{1, 10\}$ and $\mathcal{T} = \{3, 15\}$, then $\tau_{\mathrm{max}} = \max((10-3), (15-1)) = 14$.
Within iterations of the tolerance τ , a comparison is performed between the true spike time and the predicted spike time. When all elements in either set have been enumerated, indicating that the comparison is complete, the procedure terminates and further increments of τ have no effect. Such a τ is known as the maximum tolerance. To simulate specific firing behaviors, neuron models PCR and RCR may require large values of the maximum tolerance. Since an enormous maximum tolerance is not biologically reasonable and is inefficient in applications, the value of maximum tolerance for PCR and RCR is set to 30 ms by observing that the most considerable maximum tolerance among neuron models SRM, SSRM, RSRM, and RSSRM in twenty firing behaviors is around 30 ms. These pre-defined maximum tolerances are marked by asterisks in Table 2.
Algorithm 2: Find maximum tolerance.
Brainsci 12 01008 i002
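The following sketch is one plausible reading of Algorithm 2, offered for illustration only: the tolerance grows from zero, true spike times are greedily matched to unused predicted spike times within the current tolerance, and the smallest tolerance that completes the comparison is returned. The greedy matching rule and the search step are assumptions, since the pseudocode itself is not reproduced here.

```python
# One plausible reading of Algorithm 2: search for the smallest tolerance (in ms)
# at which the comparison of the two spike-time sets completes.
import numpy as np

def max_tolerance(true_times, pred_times, step=0.1):
    true_times, pred_times = sorted(true_times), sorted(pred_times)
    # Upper bound on the tolerance, as in the worked example in the text.
    tau_max = max(abs(t - p) for t in true_times for p in pred_times)
    for tau in np.arange(0.0, tau_max + step, step):
        unused = list(pred_times)
        matched = 0
        for t in true_times:
            hits = [p for p in unused if abs(p - t) <= tau]
            if hits:                                      # take the closest predicted spike
                unused.remove(min(hits, key=lambda p: abs(p - t)))
                matched += 1
        # Comparison is complete once every spike in either set has been enumerated.
        if matched == len(true_times) or not unused:
            return tau
    return tau_max
```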
Algorithm 3 takes two sets of the spike time and a maximum tolerance as inputs and returns a predicted spike train with the maximum tolerance. This algorithm is similar to Algorithm 2, with the exception that when a predicted spike time falls inside the maximum tolerance, the true spike time is substituted.
Algorithm 3: Predicted spike train with maximum tolerance.
Brainsci 12 01008 i003
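A minimal sketch of Algorithm 3 as described above follows: any predicted spike time that falls within the maximum tolerance of a true spike time is replaced by that true spike time before the binary spike train is rebuilt. The greedy nearest-match rule and the variable names are illustrative assumptions; spike times are treated as integer time-step indices here.

```python
# Build the predicted spike train with maximum tolerance by substituting true spike
# times for predicted spike times that fall within the tolerance.
import numpy as np

def spike_train_with_tolerance(true_times, pred_times, tol, n_steps):
    adjusted = []
    unused_true = sorted(true_times)
    for p in sorted(pred_times):
        hits = [t for t in unused_true if abs(t - p) <= tol]
        if hits:                         # substitute the closest true spike time
            t_star = min(hits, key=lambda t: abs(t - p))
            unused_true.remove(t_star)
            adjusted.append(t_star)
        else:
            adjusted.append(p)           # keep the original predicted spike time
    train = np.zeros(n_steps)
    train[np.asarray(np.round(adjusted), dtype=int)] = 1.0
    return train
```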

3.3. Evaluation

Neuron models are evaluated based on the evaluation metrics Root Mean Square Error (RMSE) [20], van Rossum distance (VRD) [40], and F1-score [41] after fixing the number of parameters. RMSE, the measure of membrane potentials, calculates the deviation between true membrane potentials derived from data of firing behaviors and predicted membrane potentials obtained from neuron models. It is defined as
$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (u_{i} - \hat{u}_{i})^{2}},$
in which $u_{i}$ is the true membrane potential at time $i$, $\hat{u}_{i}$ is the predicted membrane potential at time $i$, and $n$ is the total number of time steps in the simulation.
VRD measures the similarity of two spike trains, which calculates the Euclidean distance of modified spike trains with an exponential function. It has the explicit form [42]
$\mathrm{VRD} = \sum_{i} \sum_{j} e^{-|u_{i} - u_{j}|/\tau} + \sum_{i} \sum_{j} e^{-|v_{i} - v_{j}|/\tau} - 2 \sum_{i} \sum_{j} e^{-|u_{i} - v_{j}|/\tau},$
where u is the true spike train and v is the predicted spike train. Without loss of generality, the hyperparameter τ is set to be 1 for simpler calculation.
Since VRD compares a specific time of a spike train with all times of another (the third term in Equation (21)), it is unnecessary to use the maximum tolerance mentioned in Algorithms 2 and 3, and the predicted spike train is obtained by Algorithm 1. Neuron models with a smaller VRD are desirable.
F1-score, the measure of spike trains, computes the harmonic mean between precision and recall. Since predicted spike trains are converted from predicted membrane potentials by Algorithms 1–3, neuron models with higher F1-score and smaller maximum tolerance are preferred. F1-score has the form
$F_{1} = \frac{2 \times \mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}},$
in which precision is the proportion of predicted spikes that are true spikes, while recall indicates the proportion of true spikes correctly predicted by the neuron model. They are defined as
$\mathrm{precision} = \frac{TP}{TP + FP},$
$\mathrm{recall} = \frac{TP}{TP + FN},$
where $TP$ (True Positive) counts time steps at which a spike is fired and the neuron model predicts it correctly, $FP$ (False Positive) counts time steps at which no spike occurs but the neuron model fires, and $FN$ (False Negative) counts spikes generated by the biological neuron that the neuron model fails to fire.
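The following minimal sketch computes the three metrics as defined above, with the VRD following the explicit pairwise form in Equation (21) applied to the sets of spike times and $\tau = 1$; the function names are illustrative.

```python
# RMSE on membrane potentials, pairwise-exponential VRD on spike times, and F1 on
# binary spike trains; a sketch of the evaluation metrics used in Section 3.3.
import numpy as np

def rmse(u, u_hat):
    return np.sqrt(np.mean((np.asarray(u) - np.asarray(u_hat)) ** 2))

def vrd(true_times, pred_times, tau=1.0):
    u, v = np.asarray(true_times, float), np.asarray(pred_times, float)
    term = lambda a, b: np.sum(np.exp(-np.abs(a[:, None] - b[None, :]) / tau))
    return term(u, u) + term(v, v) - 2.0 * term(u, v)

def f1_score(true_train, pred_train):
    tp = np.sum((true_train == 1) & (pred_train == 1))
    fp = np.sum((true_train == 0) & (pred_train == 1))
    fn = np.sum((true_train == 1) & (pred_train == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```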

3.4. Model Comparison

The model comparison of the neuron models is shown in Table 3. The Izhikevich model, composed of a two-dimensional dynamical system of differential equations, is regarded as the ground truth, where parameters are fitted to the cortical neuron data and hyperparameters are specified. The Spike Response Model (SRM) has 9895 parameters on average over the twenty firing behaviors. In comparison, the Spectral Spike Response Model (SSRM) has an average of 29,659 parameters, which results from applying the Fourier basis function to the injected currents and the spike history, respectively. To obtain 100 parameters on average in the Regularized Spike Response Model (RSRM), a parameter reduction that prunes more than 99% of the parameters is performed on SRM using L1 regularization for each firing behavior. To compare the neuron models fairly, the number of parameters in the Regularized Spectral Spike Response Model (RSSRM), the Principal Component Regression (PCR), and the Raised Cosine Basis Function Regression (RCR) is restricted to be the same as in RSRM for each firing pattern. Details are shown in Table 2. Specifically, RSSRM applies a procedure similar to RSRM, where L1 regularization is conducted on SSRM and the hyperparameters are selected so that the number of parameters in RSSRM matches that in RSRM. PCR utilizes the first n principal components as new features [28], where n is the number of parameters in RSRM; the corresponding n parameters are obtained by numerical optimization. Because the dimensionality of the raised cosine basis is a hyperparameter, the number of parameters in RCR can be controlled directly.
Although SRM has significantly fewer parameters than SSRM, its overall performance is superior: SRM has a faster inference time, a lower RMSE, and a smaller maximum tolerance than SSRM, as well as a comparable VRD and F1-score, despite requiring slightly more memory for inference, i.e., for simulating firing behaviors. Both SRM and SSRM perform well on the vast majority of firing behaviors, as shown in Figure 2, Figure 3, Figure 4 and Figure 5. However, the parameter reduction of SRM is not robust. After pruning 99% of parameters using L1 regularization, the inference memory, RMSE, VRD, F1-score, and maximum tolerance of RSRM are inferior to those of RSSRM with the same number of parameters. Specifically, the inference memory, RMSE, and maximum tolerance of RSSRM are about half those of RSRM, while the VRD of RSSRM is 72,000 times lower and its F1-score 8 times higher than those of RSRM. Their difference in inference time is minor, as both require approximately 1 ms, which is far faster than the Izhikevich model.
In addition, the robustness of parameter reduction in RSSRM is demonstrated by comparing the changes in F1-score before and after parameter pruning. The F1-score of RSSRM decreases by only 0.05 after parameter reduction, while the F1-score of RSRM drops drastically. Furthermore, the difference in RMSE and VRD between RSSRM and SSRM is trivial compared to the difference between RSRM and SRM. Specifically, the RMSE of RSSRM and RSRM is approximately 5 and 20 times larger than that of SSRM and SRM, respectively, whereas the VRD of RSSRM and RSRM is 47 and 3,600,000 times larger, respectively. Moreover, the changes in inference memory and inference time are noteworthy. After parameter pruning, the memory usage and runtime of RSSRM during inference are 56 times lower and 116 times faster than those of SSRM, whereas RSRM has 41 times larger memory and a 72 times faster runtime than SRM.
Although PCR and RCR have better overall performance than RSRM, they cannot compete with RSSRM, which has the lowest RMSE, the lowest VRD, the highest F1-score, and the smallest maximum tolerance among these four neuron models with parameter reduction. The inadequate performance of PCR is further explained by its low explained variance: averaged over the twenty firing behaviors, the first 100 principal components of PCR explain only about 51% of the variance in the data, whereas in an ideal scenario the first few principal components should explain more than 80%. Since PCR assumes that the principal components are obtained by a linear combination of the original data, nonlinear methods are advocated. Accordingly, RSSRM and RCR, which obtain nonlinearity from basis functions, perform better than PCR and achieve the best and second-best performance among the neuron models with parameter reduction. However, considering the involvement of hyperparameters in constructing the raised cosine basis in RCR, RSSRM, which relies on the hyperparameter-free Fourier basis function, is preferable.
In addition to evaluating the performance of the neuron models averaged over the twenty firing behaviors, their capacity on different categories of spiking modes is also studied. The twenty spiking patterns are classified into two types based on the number of spikes. One class, shown in Table 4, is the one-spike firing behavior, which includes (b) phasic spiking, (i) spike latency, (j) subthreshold oscillations, (k) resonator, (l) integrator, (m) rebound spike, (o) threshold variability, (q) depolarizing after-potential, and (r) accommodation. These nine spiking modes each produce a single spike throughout the experiment. As illustrated in Table 5, the other class contains eleven spiking patterns that fire multiple spikes during the simulation. It consists of (a) tonic spiking, (c) tonic bursting, (d) phasic bursting, (e) mixed mode, (f) spike frequency adaptation, (g) class 1 excitable, (h) class 2 excitable, (n) rebound burst, (p) bistability, (s) inhibition-induced spiking, and (t) inhibition-induced bursting. Table 4 demonstrates that SRM is superior to SSRM in terms of the number of parameters, inference time, RMSE, and maximum tolerance. After parameter reduction, RSSRM beats RSRM in terms of inference memory, RMSE, VRD, and F1-score. With only 50 parameters, PCR has the best performance among the neuron models with parameter reduction: it has the fastest inference time, the smallest RMSE, the lowest VRD, the highest F1-score, and the smallest maximum tolerance. Although its memory usage during inference is greater than that of the other neuron models with parameter reduction, it is significantly lower than that of the Izhikevich model. PCR is therefore the most notable model for predicting one-spike firing behaviors, whereas RSSRM is the second-best model. In Table 5, it is challenging to distinguish between the performance of SRM and SSRM: although SRM has fewer parameters, a faster inference time, and a smaller RMSE, SSRM shows lower memory usage, a lower VRD, a higher F1-score, and a smaller maximum tolerance. When parameter reduction is considered, RSSRM has the lowest memory consumption, the lowest RMSE, the lowest VRD, the highest F1-score, and the smallest maximum tolerance among all neuron models. Furthermore, RCR is the second most prominent neuron model, though its maximum tolerance is more than twelve times that of RSSRM. Moreover, RSRM is the most unsatisfactory neuron model: compared with RSSRM, it has around twice the RMSE, more than seventy thousand times the VRD, one-ninth the F1-score, and more than seven times the maximum tolerance. Comparing Table 4 and Table 5, the Izhikevich model doubles its memory usage for inference from the one-spike firing behaviors to the multiple-spike firing patterns, whereas the other neuron models do not exhibit this increase. Moreover, RSSRM demonstrates a consistent inference time, whereas the inference time of the other neuron models increases dramatically.
Although Table 3, Table 4 and Table 5 demonstrate that the overall performance of RSSRM is prominent among neuron models with parameter reduction, it is worthwhile to measure the capability of neuron models for each firing behavior. Details are shown in Table 2.
First, SRM, containing fewer parameters, exhibits a faster inference time than SSRM in the twenty firing patterns, with the exception of (b) phasic spiking. Furthermore, the RMSE of SRM for the spiking behaviors (i) spike latency, (j) subthreshold oscillations, (k) resonator, (m) rebound spike, (n) rebound burst, and (o) threshold variability is smaller than that of SSRM. However, the VRD and F1-score of both SRM and SSRM are nearly all 0.0 and 1.0, respectively, across the twenty firing behaviors, indicating near-perfect spike train prediction. The maximum tolerance of SSRM for (j) subthreshold oscillations, (k) resonator, (n) rebound burst, and (o) threshold variability is substantially greater than that of SRM, which remains zero, indicating perfect matches between the predicted and true spike trains. A zero maximum tolerance is also obtained by SSRM for (i) spike latency and (m) rebound spike, owing to a moderate corresponding RMSE.
In addition, in contrast to the other firing behaviors, the RMSE of RSSRM for (j) subthreshold oscillations and (n) rebound burst is larger than that of RSRM. Specifically, the comparable RMSE of SSRM and RSSRM for (j) subthreshold oscillations indicates that the Fourier basis function cannot capture the variability of the membrane potentials, as depicted in Figure 4 and Figure 6. For (n) rebound burst, Figure 4 and Figure 6 reveal that neither SSRM nor RSSRM is able to reproduce the membrane potentials during spike firing precisely.
Furthermore, Table 2 shows that RSRM never attains a higher F1-score or a lower VRD than RSSRM, even though the maximum tolerance of RSRM for (a) tonic spiking, (b) phasic spiking, (f) spike frequency adaptation, (i) spike latency, (j) subthreshold oscillations, and (k) resonator is zero. The reason is indicated in Figure 7 and Figure 8, where RSRM is incapable of reproducing those firing behaviors precisely and instead produces a combination of step functions. In contrast, there are cases in which a considerable tolerance is required: to capture the firing behaviors (g) class 1 excitable, (h) class 2 excitable, (r) accommodation, and (s) inhibition-induced spiking, the maximum tolerance values for RSRM range from 10.3 to 30.3 ms.
Moreover, Table 2 illustrates that, among all neuron models, RSSRM has the lowest memory consumption during inference, whereas the Izhikevich model shows the highest memory usage for the twenty spiking patterns, excluding (k) resonator, (m) rebound spike, (o) threshold variability, and (q) depolarizing after-potential. Compared with SRM, SSRM requires less memory across the twenty firing behaviors despite a slower inference time. Specifically, the memory consumption of SSRM is lower for fifteen firing behaviors, (b) phasic spiking, (c) tonic bursting, (e) mixed mode, (f) spike frequency adaptation, (g) class 1 excitable, (h) class 2 excitable, (i) spike latency, (k) resonator, (m) rebound spike, (n) rebound burst, (o) threshold variability, (p) bistability, (q) depolarizing after-potential, (r) accommodation, and (t) inhibition-induced bursting, while SSRM has a faster inference time than SRM only for (b) phasic spiking. Among the neuron models with parameter reduction, RSRM has the second-lowest memory usage in the twenty firing patterns. Although PCR shows the highest memory consumption in only nine spiking modes, it consumes the largest memory on average, as shown in Table 3. A comparison of inference time among the neuron models with parameter reduction is trivial, as all of them take about 1 ms during inference, which is over thirty times faster than the Izhikevich model.
To compare the neuron models with parameter reduction, Table 6 ranks the performance of RSRM, RSSRM, PCR, and RCR across the twenty firing behaviors. RSSRM achieves the smallest RMSE for eleven spiking patterns, including (a) tonic spiking, (b) phasic spiking, (c) tonic bursting, (d) phasic bursting, (f) spike frequency adaptation, (g) class 1 excitable, (h) class 2 excitable, (i) spike latency, (p) bistability, (s) inhibition-induced spiking, and (t) inhibition-induced bursting, and the second-smallest RMSE for five firing modes, namely (e) mixed mode, (k) resonator, (l) integrator, (m) rebound spike, and (r) accommodation; in these cases, the difference between RSSRM and the best neuron model in terms of RMSE is less than 0.5. RSSRM produces the third-smallest RMSE for the remaining four spiking behaviors, and the difference between it and the second-best model is minor, except for (q) depolarizing after-potential, where the difference is around 0.6.
Taking spike train forecasting into account, RSSRM achieves the highest F1-score for ten firing modes, containing (a) tonic spiking, (b) phasic spiking, (c) tonic bursting, (d) phasic bursting, (e) mixed mode, (f) spike frequency adaptation, (h) class 2 excitable, (i) spike latency, (n) rebound burst, and (p) bistability, and the second-highest F1-score for six spiking patterns, which include (g) class 1 excitable, (k) resonator, (m) rebound spike, (q) depolarizing after-potential, (s) inhibition-induced spiking, and (t) inhibition-induced bursting. Specifically, RSSRM maintains a perfect F1-score for (g) class 1 excitable, (m) rebound spike, and (q) depolarizing after-potential despite a slightly larger maximum tolerance. RSSRM obtains the third-highest F1-score for the remaining four firing modes; although its F1-score is still perfect there, the required maximum tolerance is larger. Moreover, RSSRM accomplishes the lowest VRD across the twenty firing behaviors, with the exception of (d) phasic bursting, (k) resonator, (s) inhibition-induced spiking, and (t) inhibition-induced bursting, for which it obtains the second-lowest VRD. Although the rankings of VRD and F1-score for RSSRM fluctuate across eight firing modes, they are nearly identical when the influence of the maximum tolerance is removed.
Moreover, PCR obtains six of the smallest RMSE values, nine of the lowest VRD values, and six of the highest F1-scores among the twenty spiking behaviors, whereas RCR achieves two of the smallest RMSE values, twelve of the lowest VRD values, and six of the highest F1-scores. Specifically, RSSRM and RCR share the highest F1-score for (n) rebound burst, while PCR and RCR share the highest F1-score for (l) integrator. Although RSRM does not attain the highest F1-score for any firing behavior, it achieves the smallest RMSE for (j) subthreshold oscillations, though the corresponding F1-score is the lowest.
In Table 6, RSSRM achieves the smallest RMSE, the lowest VRD, and the highest F1-score for eleven, sixteen, and ten firing behaviors, respectively, outperforming the other neuron models with parameter reduction.
Corresponding rankings support this observation, where the average RMSE, VRD, and F1-score rankings of RSSRM are 1.65, 1.2, and 1.7, respectively. The superior performance of RSSRM is attributed to its modeling strategy: under the Fourier basis function, only one time lag is imposed, so the model takes maximal advantage of the data, as shown in Figure 6 and Figure 9. Constructing RSRM, in contrast, requires an enormous number of time lags, which wastes a substantial quantity of data; Table 1, Figure 7 and Figure 8 present this phenomenon. With the second-lowest VRD and the second-highest F1-score, RCR shows its potential in spike train prediction at the expense of membrane potential forecasting. Figure 10 shows its prominent spike train prediction, whereas Figure 11 demonstrates that its predicted membrane potentials for firing behaviors such as (c) tonic bursting, (d) phasic bursting, (h) class 2 excitable, and (p) bistability are dramatically different from those of the other neuron models, with abnormal downward vertical lines representing spikes. The performance of PCR is extreme, since its rankings of RMSE, VRD, and F1-score for the majority of firing patterns are either first or third. Figure 12 and Figure 13 illustrate that substantial fluctuations in the subthreshold regime contribute to its undesirable RMSE. Nevertheless, its performance exceeds expectations, considering that its modeling does not include the temporal structure.

4. Discussion

In this article, in order to overcome the challenges of hyperparameter optimization and overparameterization, which impede the deployment of neuron models in different biologically realistic tasks, the Regularized Spectral Spike Response Model (RSSRM) is proposed. This model avoids hyperparameter selection through data-driven parameter estimation and addresses overparameterization through regularization.
A comprehensive comparison of the neuron models with parameter reduction is conducted, including the Regularized Spike Response Model (RSRM), the Regularized Spectral Spike Response Model (RSSRM), the Principal Component Regression (PCR), and the Raised Cosine Basis Function Regression (RCR). Their predictability of membrane potentials and spike trains is evaluated based on the Root Mean Square Error (RMSE), the van Rossum distance (VRD), the F1-score, and the maximum tolerance. Averaged over the twenty firing behaviors, RSSRM with 100 parameters achieves the best performance, as demonstrated by the smallest RMSE (5.632), the lowest VRD (47.219), the highest F1-score (0.95), and the smallest maximum tolerance (1.4 ms). RSSRM with 141 parameters achieves the lowest RMSE (6.086), the lowest VRD (85.394), the highest F1-score (0.954), and the smallest maximum tolerance (0.997 ms) for the class of multiple-spike firing behaviors, whereas RSSRM with 50 parameters is the second-best neuron model for the class of one-spike firing behaviors, obtaining the second-lowest RMSE (5.078), the second-lowest VRD (0.562), the second-highest F1-score (0.944), and the fourth-smallest maximum tolerance (1.989 ms). When the twenty firing behaviors are examined independently, RSSRM demonstrates superior performance in membrane potential prediction and spike train forecasting for the majority of spiking patterns. After ranking the capacity of the neuron models with parameter reduction for each firing behavior, RSSRM achieves the top position on average, with RMSE, VRD, and F1-score rankings of 1.65, 1.2, and 1.7, respectively. In addition, RSSRM achieves the lowest memory consumption (<10 KB) and an approximately 1 ms runtime during inference among the neuron models, which is 248 times lower and 25 times faster than the Izhikevich model, as shown in Table 3.
However, there are some limitations to this work. First, we only evaluate neuron models with L1 regularization and do not consider other regularization methods such as the Elastic Net and SCAD. Second, our simulated data of twenty firing behaviors are generated from the Izhikevich model; it is worthwhile to evaluate the performance of the neuron models using data recorded from biological neurons. Third, the performance of biological neural networks composed of RSSRM neurons is not evaluated, since the involvement of learning rules and network structures potentially complicates the analysis. These problems will be investigated in future research.

Author Contributions

Conceptualization, Y.Z.; methodology, Y.Z., W.B., D.H., L.T., Z.Y., L.Y. and D.S.; software, Y.Z.; validation, Y.Z., W.B., D.H. and L.T.; formal analysis, Y.Z.; investigation, Y.Z.; resources, Y.Z.; data curation, Y.Z.; writing—original draft preparation, Y.Z., W.B., D.H. and L.T.; writing—review and editing, Y.Z., W.B., D.H., L.T., Z.Y., L.Y. and D.S.; visualization, Y.Z., W.B., D.H. and L.T.; supervision, D.S.; project administration, Y.Z.; funding acquisition, D.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by “Science and technology innovation 2030—new generation of artificial intelligence” project, grant number 2020AAA0109100.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We wish to acknowledge the help provided by the technical and support staff in the Institute of Microelectronics of the Chinese Academy of Sciences, the Nanjing Institute of Intelligent Technology, and the Nanjing Qilin Administrative Committee.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SRM	Spike Response Model
RSRM	Regularized Spike Response Model
SSRM	Spectral Spike Response Model
RSSRM	Regularized Spectral Spike Response Model
PCR	Principal Component Regression
RCR	Raised Cosine Regression
RMSE	Root Mean Square Error
VRD	van Rossum Distance

References

  1. Hodgkin, A.L.; Huxley, A.F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500–544. [Google Scholar] [CrossRef]
  2. FitzHugh, R. Impulses and physiological states in theoretical models of nerve membrane. Biophys. J. 1961, 1, 445–466. [Google Scholar] [CrossRef]
  3. Morris, C.; Lecar, H. Voltage oscillations in the barnacle giant muscle fiber. Biophys. J. 1981, 35, 193–213. [Google Scholar] [CrossRef]
  4. Lapicque, L. Quantitative investigations of electrical nerve excitation treated as polarization. J. Physiol. Pathol. Gen. 1907, 9, 620–635. [Google Scholar]
  5. Ermentrout, G.B.; Kopell, N. Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J. Appl. Math. 1986, 46, 233–253. [Google Scholar] [CrossRef]
  6. Fourcaud-Trocmé, N.; Hansel, D.; Vreeswijk, C.V.; Brunel, N. How spike generation mechanisms determine the neuronal response to fluctuating inputs. J. Neurosci. 2003, 23, 11628–11640. [Google Scholar] [CrossRef] [PubMed]
  7. Izhikevich, E.M. Simple model of spiking neurons. IEEE Trans. Neural Netw. 2003, 14, 1569–1572. [Google Scholar] [CrossRef]
  8. Brillinger, D.R.; Villa, A. Examples of the investigation of neural information processing by point process analysis. Adv. Methods Physiol. Syst. Model. 1994, 3, 111–127. [Google Scholar]
  9. Paninski, L. Maximum likelihood estimation of cascade point-process neural encoding models. Netw. Comput. Neural Syst. 2004, 15, 243–262. [Google Scholar] [CrossRef]
  10. Mensi, S.; Naud, R.; Gerstner, W. From stochastic nonlinear integrate-and-Fire to generalized linear models. In Proceedings of the NeurIPS, Granada, Spain, 12–14 December 2011. [Google Scholar]
  11. Beniaguev, D.; Segev, I.; London, M. Single cortical neurons as deep artificial neural networks. Neuron 2021, 109, 2727–2739.e3. [Google Scholar] [CrossRef]
Figure 1. Twenty firing behaviors generated by the Izhikevich Model, where the y-axis is the membrane potential (mV) and the x-axis is time (ms).
Figure 2. Membrane potentials of SRM (blue) vs. Izhikevich Model (green), where the y-axis is the membrane potential (mV) and the x-axis is time (ms).
Figure 3. Spike trains of Izhikevich Model (black) vs. unmodified SRM (green) vs. modified SRM by maximum tolerance (blue), where the x-axis is time (ms).
Figure 4. Membrane potentials of SSRM (blue) vs. Izhikevich Model (green), where the y-axis is the membrane potential (mV) and the x-axis is time (ms).
Figure 5. Spike trains of Izhikevich Model (black) vs. unmodified SSRM (green) vs. modified SSRM by maximum tolerance (blue), where the x-axis is time (ms).
Figure 6. Membrane potentials of RSSRM, where the y-axis is the membrane potential (mV) and the x-axis is time (ms).
Figure 7. Membrane potentials of RSRM, where the y-axis is the membrane potential (mV) and the x-axis is time (ms).
Figure 8. Spike trains of Izhikevich Model (black) vs. unmodified RSRM (green) vs. modified RSRM by maximum tolerance (blue), where the x-axis is time (ms).
Figure 9. Spike trains of Izhikevich Model (black) vs. unmodified RSSRM (green) vs. modified RSSRM by maximum tolerance (blue), where the x-axis is time (ms).
Figure 10. Spike trains of Izhikevich Model (black) vs. unmodified RCR (green) vs. modified RCR by maximum tolerance (blue), where the x-axis is time (ms).
Figure 11. Membrane potentials of RCR, where the y-axis is the membrane potential (mV) and the x-axis is time (ms).
Figure 12. Membrane potentials of PCR, where the y-axis is the membrane potential (mV) and the x-axis is time (ms).
Figure 13. Spike trains of Izhikevich Model (black) vs. unmodified PCR (green) vs. modified PCR by maximum tolerance (blue), where the x-axis is time (ms).
Table 1. Attributes of neuron models.

Model | Independent Variables | Transformation | Regularization
SRM | $I_t, I_{t-1}, \ldots, I_{t-J}, S_{t-1}, \ldots, S_{t-K}$ | / | /
SSRM | $I_t, S_{t-1}$ | Fourier basis function | /
RSRM | $I_t, I_{t-1}, \ldots, I_{t-J}, S_{t-1}, \ldots, S_{t-K}$ | / | L1
RSSRM | $I_t, S_{t-1}$ | Fourier basis function | L1
PCR | $I_t, I_{t-1}, \ldots, I_{t-J}, S_{t-1}, \ldots, S_{t-K}$ | Principal component analysis | /
RCR | $I_t, S_{t-1}$ | Raised cosine basis function | /
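Table 1 summarizes the inputs, basis transformations, and penalties that distinguish the models. As an illustration of how a spectral, L1-regularized design of this kind can be assembled, the sketch below builds Fourier-basis features from the input current and from the time since the last spike (standing in for $S_{t-1}$) and fits them with a lasso penalty. The basis size, helper names, penalty strength, and the use of scikit-learn's Lasso are assumptions made for this example, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): an RSSRM-like design in
# which Fourier-basis features of the input current I_t and of the time since the
# last spike are fitted to the membrane potential with an L1 (lasso) penalty,
# so that most coefficients can be pruned to zero.
import numpy as np
from sklearn.linear_model import Lasso


def fourier_features(x, n_harmonics=25, period=None):
    """Expand a 1-D signal onto sine/cosine basis functions (basis size assumed)."""
    x = np.asarray(x, dtype=float)
    if period is None:
        period = x.max() - x.min() + 1e-9
    cols = []
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2.0 * np.pi * k * x / period))
        cols.append(np.cos(2.0 * np.pi * k * x / period))
    return np.stack(cols, axis=1)


def fit_rssrm_like(current, last_spike_age, membrane_potential, alpha=1e-3):
    """Fit the membrane potential from spectral features and report kept weights."""
    X = np.hstack([fourier_features(current), fourier_features(last_spike_age)])
    model = Lasso(alpha=alpha, max_iter=50_000)
    model.fit(X, membrane_potential)
    n_nonzero = int(np.count_nonzero(model.coef_))  # parameters surviving the L1 penalty
    return model, n_nonzero
```

Sweeping the penalty strength trades fitting accuracy against the number of surviving coefficients, which is the kind of parameter reduction the Regularization column of Table 1 refers to.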
Table 2. The performance on twenty firing behaviors.

(a) Tonic Spiking
Model | No. Params | Inference Memory (MB) 1 | Inference Time (s) | RMSE | VRD 2 | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.42 | 0.034 | 0 | 0 | 1 (0)
SRM | 10,000 | 0.52 | 0.056 | 0.451 | 1.264 | 0.980 (0.02)
SSRM | 39,998 | 0.56 | 0.252 | 6.93 × 10^−7 | 0 | 1 (0)
RSRM | 95 | 0.05 | 0.0007 | 15.092 | 7.81 × 10^6 | 0.006 (0)
RSSRM | 95 | 0 | 0.0008 | 8.844 | 45.513 | 0.857 (0.55)
PCR | 95 | 0.77 | 0.0006 | 12.850 | 3.92 × 10^5 | 0.079 (30 *)
RCR | 95 | 0.48 | 0.0008 | 12.472 | 126.424 | 0.621 (30 *)

(b) Phasic Spiking
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.25 | 0.033 | 0 | 0 | 1 (0)
SRM | 11,000 | 0.77 | 0.056 | 0.084 | 0 | 1 (0)
SSRM | 17,998 | 0 | 0.052 | 1.14 × 10^−6 | 0 | 1 (0)
RSRM | 109 | 0.08 | 0.0007 | 8.910 | 1.12 × 10^5 | 0.007 (0)
RSSRM | 108 | 0 | 0.0014 | 2.380 | 0 | 1 (0.25)
PCR | 109 | 0.51 | 0.0007 | 7.937 | 0 | 1 (0.67)
RCR | 109 | 1.11 | 0.0034 | 4.084 | 0 | 1 (0.56)

(c) Tonic Bursting
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.19 | 0.032 | 0 | 0 | 1 (0)
SRM | 10,000 | 0.95 | 0.052 | 0.033 | 0 | 1 (0)
SSRM | 37,998 | 0.59 | 0.225 | 3.85 × 10^−7 | 0 | 1 (0)
RSRM | 100 | 0.05 | 0.0012 | 17.184 | 9.17 × 10^6 | 0.005 (1.5)
RSSRM | 99 | 0 | 0.0009 | 7.828 | 0 | 1 (0.23)
PCR | 100 | 0.95 | 0.0006 | 12.190 | 7.22 × 10^4 | 0.135 (30 *)
RCR | 100 | 0.31 | 0.0015 | 8.677 | 5.057 | 0.773 (30 *)

(d) Phasic Bursting
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.63 | 0.031 | 0 | 0 | 1 (0)
SRM | 9832 | 0.83 | 0.0631 | 0.281 | 0 | 1 (0)
SSRM | 39,998 | 0.86 | 0.2568 | 5.59 × 10^−5 | 0 | 1 (0)
RSRM | 101 | 0.04 | 0.0006 | 13.949 | 3.29 × 10^6 | 0.007 (0.01)
RSSRM | 100 | 0 | 0.0008 | 6.558 | 1.264 | 0.957 (0.43)
PCR | 101 | 1.3 | 0.0004 | 12.623 | 2.41 × 10^4 | 0.075 (30 *)
RCR | 101 | 0.55 | 0.0008 | 7.324 | 0 | 0.727 (30 *)

(e) Mixed Mode
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.38 | 0.032 | 0 | 0 | 1 (0)
SRM | 14,000 | 0.68 | 0.072 | 0.087 | 0 | 1 (0)
SSRM | 39,998 | 0.64 | 0.259 | 3.39 × 10^−7 | 0 | 1 (0)
RSRM | 137 | 0 | 0.0007 | 10.016 | 1.92 × 10^6 | 0.005 (0.33)
RSSRM | 136 | 0 | 0.001 | 4.095 | 0 | 1 (0.26)
PCR | 136 | 0.9 | 0.0007 | 7.331 | 1.07 × 10^4 | 0.077 (30 *)
RCR | 136 | 0.84 | 0.0013 | 3.969 | 1.264 | 0.923 (0.62)

(f) Spike Frequency Adaptation
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.23 | 0.031 | 0 | 0 | 1 (0)
SRM | 12,000 | 0.87 | 0.057 | 0.054 | 0 | 1 (0)
SSRM | 39,998 | 0.46 | 0.257 | 2.08 × 10^−8 | 0 | 1 (0)
RSRM | 127 | 0.05 | 0.0008 | 10.677 | 1.98 × 10^7 | 0.003 (0)
RSSRM | 127 | 0 | 0.001 | 5.309 | 0 | 1.000 (0.4)
PCR | 127 | 0.88 | 0.0007 | 8.888 | 1.07 × 10^4 | 0.111 (30 *)
RCR | 127 | 0.95 | 0.001 | 5.486 | 20.228 | 0.8 (1.23)

(g) Class 1 Excitable
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.6 | 0.039 | 0 | 0 | 1 (0)
SRM | 19,284 | 0.97 | 0.069 | 0.299 | 0 | 1 (0)
SSRM | 39,998 | 0.95 | 0.252 | 1.67 × 10^−7 | 0 | 1 (0)
RSRM | 200 | 0.06 | 0.0011 | 8.010 | 247.791 | 0.125 (30.3)
RSSRM | 199 | 0 | 0.0014 | 3.919 | 0 | 1 (1.4)
PCR | 200 | 1.28 | 0.0007 | 7.674 | 2.92 × 10^4 | 0.099 (30 *)
RCR | 200 | 1.28 | 0.0015 | 4.850 | 0 | 1 (0.2)

(h) Class 2 Excitable
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.04 | 0.031 | 0 | 0 | 1 (0)
SRM | 19,534 | 0.86 | 0.068 | 1.645 | 0 | 1 (0.1)
SSRM | 39,998 | 0.58 | 0.259 | 0.0002 | 0 | 1 (0)
RSRM | 201 | 0.02 | 0.0008 | 10.298 | 668.784 | 0.768 (21.1)
RSSRM | 201 | 0 | 0.0014 | 5.709 | 31.606 | 0.938 (0.3)
PCR | 201 | 0.18 | 0.0007 | 10.438 | 1.19 × 10^6 | 0.0713 (9.5)
RCR | 201 | 0.81 | 0.0015 | 7.113 | 1461.463 | 0.19 (36)

(i) Spike Latency
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.12 | 0.031 | 0 | 0 | 1 (0)
SRM | 10,001 | 0.97 | 0.047 | 0.044 | 0 | 1 (0)
SSRM | 39,998 | 0.06 | 0.26 | 1.37 | 0 | 1 (0)
RSRM | 102 | 0.02 | 0.0009 | 2.651 | 8091 | 0.024 (0)
RSSRM | 102 | 0 | 0.0039 | 1.860 | 0 | 1 (0.4)
PCR | 102 | 0.9 | 0.0007 | 1.946 | 0 | 1 (1.4)
RCR | 102 | 1.13 | 0.0012 | 1.937 | 0 | 1 (1.5)

(j) Subthreshold Oscillations
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 2.36 | 0.024 | 0 | 0 | 1 (0)
SRM | 10,001 | 0.52 | 0.034 | 0.228 | 0 | 1 (0)
SSRM | 30,198 | 0.91 | 0.143 | 4.054 | 0 | 1 (0.41)
RSRM | 97 | 0.01 | 0.0005 | 1.753 | 557.530 | 0.087 (0)
RSSRM | 96 | 0 | 0.0006 | 4.053 | 0 | 1 (0.43)
PCR | 97 | 0.72 | 0.0006 | 1.995 | 0 | 1 (0.23)
RCR | 97 | 0.88 | 0.0003 | 4.054 | 0 | 1 (0.41)

(k) Resonator
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 0.63 | 0.005 | 0 | 0 | 1 (0)
SRM | 1043 | 0.87 | 0.004 | 0.382 | 0 | 1 (0)
SSRM | 4730 | 0.4 | 0.004 | 2.292 | 0 | 1 (11)
RSRM | 10 | 0 | 0.0001 | 3.687 | 182 | 0.143 (0)
RSSRM | 9 | 0 | 0.0003 | 2.308 | 5.057 | 0.5 (12.5)
PCR | 10 | 0.95 | 0.0001 | 3.548 | 0 | 1 (1)
RCR | 10 | 0.58 | 0.0001 | 2.299 | 11.378 | 0.4 (1)

(l) Integrator
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 1.03 | 0.008 | 0 | 0 | 1 (0)
SRM | 2001 | 0.52 | 0.003 | 3.57 | 0 | 1.000 (0)
SSRM | 10,598 | 0.56 | 0.017 | 3.16 × 10^−7 | 0 | 1 (0)
RSRM | 20 | 0.01 | 0.0001 | 11.655 | 7.46 × 10^4 | 0.008 (0.23)
RSSRM | 20 | 0 | 0.0004 | 9.171 | 0 | 1 (0.34)
PCR | 20 | 0.17 | 0.0001 | 4.394 | 0 | 1 (0.06)
RCR | 20 | 0.13 | 0.0001 | 9.205 | 0 | 1 (0.06)

(m) Rebound Spike
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 0.71 | 0.01 | 0 | 0 | 1 (0)
SRM | 2159 | 1.04 | 0.002 | 0.147 | 0 | 1 (0)
SSRM | 6998 | 0.95 | 0.007 | 0.928 | 0 | 1 (0)
RSRM | 22 | 0.03 | 0.0001 | 12.432 | 2.88 × 10^5 | 0.004 (1)
RSSRM | 22 | 0 | 0.0001 | 6.171 | 0 | 1 (0.51)
PCR | 22 | 1.67 | 0.0004 | 5.616 | 0 | 1 (0.3)
RCR | 22 | 0.27 | 0.0001 | 6.613 | 0 | 1 (0.64)

(n) Rebound Burst
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 2.09 | 0.031 | 0 | 0 | 1 (0)
SRM | 10,001 | 0.86 | 0.063 | 0.009 | 0 | 1 (0)
SSRM | 49,998 | 0.58 | 0.405 | 5.963 | 0 | 1 (0.2)
RSRM | 105 | 0.01 | 0.0008 | 5.963 | 4.22 × 10^5 | 0.049 (6.9)
RSSRM | 105 | 0 | 0.001 | 5.977 | 0 | 1 (0.2)
PCR | 105 | 0.8 | 0.0008 | 4.039 | 2.44 × 10^4 | 0.191 (1.8)
RCR | 105 | 0.35 | 0.0018 | 5.963 | 0 | 1 (0.2)

(o) Threshold Variability
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 0.76 | 0.01 | 0 | 0 | 1 (0)
SRM | 2946 | 1.06 | 0.005 | 4.233 | 0 | 1 (0)
SSRM | 12,398 | 0.7 | 0.0242 | 6.512 | 0 | 1 (0.54)
RSRM | 30 | 0 | 0.0002 | 8.765 | 3.7 × 10^5 | 0.004 (0.2)
RSSRM | 30 | 0 | 0.0002 | 6.521 | 0 | 1 (0.61)
PCR | 30 | 0.16 | 0.0001 | 5.036 | 0 | 1 (0.31)
RCR | 30 | 1.03 | 0.0002 | 6.518 | 0 | 1 (0.54)

(p) Bistability
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.47 | 0.031 | 0 | 0 | 1 (0)
SRM | 16,001 | 1.12 | 0.061 | 0.033 | 0 | 1 (0)
SSRM | 39,998 | 0.24 | 0.25 | 4.37 × 10^−9 | 0 | 1 (0)
RSRM | 172 | 0 | 0.0008 | 19.727 | 365.366 | 0.154 (4.2)
RSSRM | 172 | 0 | 0.0051 | 10.341 | 5.057 | 0.909 (0.4)
PCR | 172 | 1.59 | 0.0016 | 15.401 | 728.203 | 0.49 (3.3)
RCR | 172 | 1.05 | 0.0013 | 15.573 | 80.911 | 0.5 (7.2)

(q) Depolarizing After-potential
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 0.28 | 0.004 | 0 | 0 | 1 (0)
SRM | 1001 | 0.86 | 0.001 | 0.09 | 0 | 1 (0)
SSRM | 3290 | 0.16 | 0.002 | 6.34 × 10^−7 | 0 | 1 (0)
RSRM | 10 | 0.01 | 0.0005 | 18.203 | 2.34 × 10^4 | 0.015 (0.88)
RSSRM | 10 | 0 | 0.0005 | 10.802 | 0 | 1 (0.26)
PCR | 10 | 0.18 | 0.0005 | 6.735 | 0 | 1 (0.22)
RCR | 10 | 0.2 | 0.0001 | 10.269 | 1.264 | 0.667 (0.22)

(r) Accommodation
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 1.79 | 0.016 | 0 | 0 | 1 (0)
SRM | 5103 | 0.84 | 0.014 | 0.06 | 0 | 1 (0)
SSRM | 18,998 | 0.71 | 0.057 | 3.59 × 10^−9 | 0 | 1 (0)
RSRM | 51 | 0 | 0.0002 | 2.750 | 0 | 1 (10.3)
RSSRM | 51 | 0 | 0.0003 | 2.438 | 0 | 1 (2.6)
PCR | 51 | 1.46 | 0.0002 | 1.761 | 0 | 1 (0.3)
RCR | 51 | 0.35 | 0.0003 | 3.050 | 0 | 1 (0.2)

(s) Inhibition-induced Spiking
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.35 | 0.031 | 0 | 0 | 1 (0)
SRM | 16,001 | 0.6 | 0.062 | 0.004 | 0 | 1 (0)
SSRM | 39,998 | 0.7 | 0.25 | 1.08 × 10^−9 | 0 | 1 (0)
RSRM | 160 | 0 | 0.0008 | 6.734 | 1.88 × 10^4 | 0.013 (10.3)
RSSRM | 160 | 0 | 0.0011 | 3.067 | 1.264 | 0.963 (0.9)
PCR | 160 | 0.97 | 0.0007 | 5.495 | 7302 | 0.233 (16.2)
RCR | 160 | 1.48 | 0.0012 | 5.237 | 0 | 1 (8.5)

(t) Inhibition-induced Bursting
Model | No. Params | Inference Memory (MB) | Inference Time (s) | RMSE | VRD | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.21 | 0.031 | 0 | 0 | 1 (0)
SRM | 16,001 | 0.86 | 0.065 | 0.007 | 0 | 1 (0)
SSRM | 39,998 | 0.68 | 0.252 | 2.44 × 10^−9 | 0 | 1 (0)
RSRM | 156 | 0 | 0.0007 | 14.701 | 2.86 × 10^7 | 0.035 (1.8)
RSSRM | 155 | 0 | 0.0011 | 5.294 | 854 | 0.869 (3.2)
PCR | 155 | 0.82 | 0.0007 | 9.400 | 1.31 × 10^5 | 0.359 (8.5)
RCR | 155 | 1.52 | 0.0014 | 11.965 | 0 | 1 (0.2)
1 An inference memory (the memory consumption during inference) of 0 MB indicates that the actual memory usage is smaller than 10 KB. 2 A VRD of 0 indicates that the actual VRD is tiny and close to 0; such rounding may come from Equation (21), which approximates the exact VRD involving kernels and integrals, and from numerical errors smaller than the machine epsilon. * Pre-defined maximum tolerances, chosen with biological plausibility and applicability in mind.
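Footnote 2 refers to Equation (21), which approximates the exact van Rossum distance (VRD) between spike trains. As a concrete reference point, the sketch below gives a minimal discrete approximation of the kernel-based VRD, with an assumed exponential kernel, time constant, and step size; it illustrates the quantity being approximated and is not a reproduction of Equation (21).

```python
# Illustrative sketch (not Equation (21) itself): a discrete approximation of the
# van Rossum distance (VRD) between two spike trains, using a causal exponential
# kernel; the time constant tau and the time step dt are assumed values.
import numpy as np


def van_rossum_distance(spikes_a, spikes_b, t_max, tau=10.0, dt=0.1):
    """Filter each spike train with exp(-t/tau) and integrate the squared difference."""
    t = np.arange(0.0, t_max, dt)

    def filtered(spike_times):
        trace = np.zeros_like(t)
        for s in spike_times:
            mask = t >= s
            trace[mask] += np.exp(-(t[mask] - s) / tau)
        return trace

    diff = filtered(np.asarray(spikes_a)) - filtered(np.asarray(spikes_b))
    return np.sqrt(np.sum(diff ** 2) * dt / tau)


# Identical trains give 0; one missing spike gives roughly sqrt(1/2) ≈ 0.7.
print(van_rossum_distance([10.0, 50.0], [10.0, 50.0], t_max=200.0))
print(van_rossum_distance([10.0, 50.0], [10.0], t_max=200.0))
```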
Table 3. Average performance on twenty firing behaviors.

Model | No. Params | Inference Memory (MB) 1 | Inference Time (s) | RMSE | VRD 2 | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 2.48 | 0.025 | 0 | 0 | 1 (0)
SRM | 9895 | 0.83 | 0.043 | 0.587 | 0.063 | 0.999 (0.051)
SSRM | 29,659 | 0.56 | 0.174 | 1.056 | 0 | 1 (1.056)
RSRM | 100 | 0.02 | 0.0006 | 10.158 | 3.6 × 10^6 | 0.123 (4.453)
RSSRM | 100 | 0 | 0.0015 | 5.632 | 47.219 | 0.95 (1.444)
PCR | 100 | 0.86 | 0.0006 | 7.265 | 9.5 × 10^4 | 0.546 (11.19)
RCR | 100 | 0.77 | 0.001 | 6.833 | 85.399 | 0.83 (7.049)
1 An inference memory (the memory consumption during inference) of 0 MB indicates that the actual memory usage is smaller than 10 KB. 2 A VRD of 0 indicates that the actual VRD is tiny and close to 0; such rounding may come from Equation (21), which approximates the exact VRD involving kernels and integrals, and from numerical errors smaller than the machine epsilon.
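The F1-scores in Tables 2-5 are reported together with the maximum timing tolerance at which they are achieved. One plausible way to compute such a tolerance-based F1-score is sketched below: a predicted spike counts as a true positive when it falls within plus or minus the tolerance of a not-yet-matched reference spike. The greedy matching rule and function names are assumptions for illustration, not necessarily the authors' exact procedure.

```python
# Illustrative sketch (assumed matching rule): precision, recall, and F1 for a
# predicted spike train, where a prediction is a true positive if it lies within
# +/- tolerance_ms of an unmatched reference spike.
def spike_f1(reference, predicted, tolerance_ms):
    reference = sorted(reference)
    predicted = sorted(predicted)
    matched = [False] * len(reference)
    true_pos = 0
    for p in predicted:
        for i, r in enumerate(reference):
            if not matched[i] and abs(p - r) <= tolerance_ms:
                matched[i] = True
                true_pos += 1
                break
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(reference) if reference else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# One of three spikes shifted by 2 ms: F1 depends on the allowed tolerance.
print(spike_f1([10, 50, 90], [10, 52, 90], tolerance_ms=1.4))  # ≈ 0.67
print(spike_f1([10, 50, 90], [10, 52, 90], tolerance_ms=3.0))  # = 1.0
```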
Table 4. Average performance on one-spike firing behaviors.

Model | No. Params | Inference Memory (MB) 1 | Inference Time (s) | RMSE | VRD 2 | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 1.55 | 0.016 | 0 | 0 | 1 (0)
SRM | 5028 | 0.83 | 0.018 | 0.982 | 0 | 1 (0)
SSRM | 16,134 | 0.49 | 0.063 | 1.685 | 0 | 1 (1.328)
RSRM | 50 | 0.02 | 0.0004 | 7.867 | 9.8 × 10^4 | 0.144 (1.401)
RSSRM | 50 | 0 | 0.0015 | 5.078 | 0.562 | 0.944 (1.989)
PCR | 50 | 0.75 | 0.0004 | 4.330 | 0 | 1 (0.499)
RCR | 50 | 0.63 | 0.0006 | 5.336 | 1.405 | 0.896 (0.57)
1 An inference memory (the memory consumption during inference) of 0 MB indicates that the actual memory usage is smaller than 10 KB. 2 A VRD of 0 indicates that the actual VRD is tiny and close to 0; such rounding may come from Equation (21), which approximates the exact VRD involving kernels and integrals, and from numerical errors smaller than the machine epsilon.
Table 5. Average performance on multiple-spike firing behaviors.

Model | No. Params | Inference Memory (MB) 1 | Inference Time (s) | RMSE | VRD 2 | F1-Score (Maximum Tolerance (ms))
Izhikevich | / | 3.24 | 0.032 | 0 | 0 | 1 (0)
SRM | 13,878 | 0.83 | 0.062 | 0.264 | 0.115 | 0.998 (0.093)
SSRM | 40,725 | 0.62 | 0.265 | 0.542 | 0 | 1 (0.018)
RSRM | 141 | 0.03 | 0.0008 | 12.032 | 6.5 × 10^6 | 0.106 (6.95)
RSSRM | 141 | 0 | 0.0014 | 6.086 | 85.394 | 0.954 (0.997)
PCR | 141 | 0.95 | 0.0008 | 9.667 | 1.7 × 10^5 | 0.175 (19.936)
RCR | 141 | 0.87 | 0.0013 | 8.057 | 154.123 | 0.776 (12.35)
1 An inference memory (the memory consumption during inference) of 0 MB indicates that the actual memory usage is smaller than 10 KB. 2 A VRD of 0 indicates that the actual VRD is tiny and close to 0; such rounding may come from Equation (21), which approximates the exact VRD involving kernels and integrals, and from numerical errors smaller than the machine epsilon.
Table 6. Ranked performance on twenty firing behaviors. Each cell lists the ranks of RSRM / RSSRM / PCR / RCR (1 = best).

Firing Behavior | RMSE (RSRM/RSSRM/PCR/RCR) | VRD (RSRM/RSSRM/PCR/RCR) | F1-Score (RSRM/RSSRM/PCR/RCR)
(a) Tonic Spiking | 4/1/3/2 | 4/1/3/2 | 4/1/3/2
(b) Phasic Spiking | 4/1/3/2 | 4/1/1/1 | 4/1/3/2
(c) Tonic Bursting | 4/1/3/2 | 4/1/3/2 | 4/1/3/2
(d) Phasic Bursting | 4/1/3/2 | 4/2/3/1 | 4/1/3/2
(e) Mixed Mode | 4/2/3/1 | 4/1/3/2 | 4/1/3/2
(f) Spike Frequency Adaptation | 4/1/3/2 | 4/1/3/2 | 4/1/3/2
(g) Class 1 Excitable | 4/1/3/2 | 3/1/4/1 | 3/2/4/1
(h) Class 2 Excitable | 3/1/4/2 | 2/1/4/3 | 2/1/4/3
(i) Spike Latency | 4/1/3/2 | 4/1/1/1 | 4/1/2/3
(j) Subthreshold Oscillations | 1/3/2/4 | 4/1/1/1 | 4/3/1/2
(k) Resonator | 4/2/3/1 | 4/2/1/3 | 4/2/1/3
(l) Integrator | 4/2/1/3 | 4/1/1/1 | 4/3/1/1
(m) Rebound Spike | 4/2/1/3 | 4/1/1/1 | 4/2/1/3
(n) Rebound Burst | 4/3/1/2 | 4/1/3/1 | 4/1/3/1
(o) Threshold Variability | 4/3/1/2 | 4/1/1/1 | 4/3/1/2
(p) Bistability | 4/1/2/3 | 3/1/4/2 | 4/1/3/2
(q) Depolarizing After-potential | 4/3/1/2 | 4/1/1/2 | 4/2/1/3
(r) Accommodation | 3/2/1/4 | 1/1/1/1 | 4/3/2/1
(s) Inhibition-induced Spiking | 4/1/3/2 | 4/2/3/1 | 4/2/3/1
(t) Inhibition-induced Bursting | 4/1/2/3 | 4/2/3/1 | 4/2/3/1
Mean | 3.75/1.65/2.30/2.30 | 3.65/1.20/2.25/1.50 | 3.85/1.70/2.40/1.95
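The ranks in Table 6 order the four reduced-parameter models per firing behavior and per metric, and the final row averages each column. A plausible reconstruction is sketched below: lower RMSE and VRD rank better, higher F1 ranks better, and tied values share the best rank (as in row (b), where three models tie at rank 1 for VRD). The published table appears to break ties among perfect F1-scores by the required tolerance (e.g., row (r)), which this sketch does not model; the scipy call and the example values from sub-table (a) are used purely for illustration.

```python
# Illustrative sketch (an assumed reconstruction, not the authors' script) of how
# per-behavior ranks like those in Table 6 can be derived for RSRM, RSSRM, PCR, RCR.
import numpy as np
from scipy.stats import rankdata


def rank_models(rmse, vrd, f1):
    """Rank the four reduced models under each metric (1 = best); ties share the best rank."""
    return {
        "RMSE": rankdata(rmse, method="min").astype(int),
        "VRD": rankdata(vrd, method="min").astype(int),
        "F1": rankdata(-np.asarray(f1, dtype=float), method="min").astype(int),  # higher F1 is better
    }


# Values for (a) Tonic Spiking (RSRM, RSSRM, PCR, RCR) taken from Table 2.
ranks = rank_models(
    rmse=[15.092, 8.844, 12.850, 12.472],
    vrd=[7.81e6, 45.513, 3.92e5, 126.424],
    f1=[0.006, 0.857, 0.079, 0.621],
)
print(ranks)  # each metric yields ranks 4, 1, 3, 2, matching row (a) of Table 6
```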