Rethinking the Role of Normalization and Residual Blocks for Spiking Neural Networks

Biologically inspired spiking neural networks (SNNs) are widely used to realize ultra-low-power energy consumption. However, deep SNNs are not easy to train due to the excessive firing of spiking neurons in the hidden layers. To tackle this problem, we propose a novel but simple normalization technique called postsynaptic potential normalization. This normalization removes the subtraction term from the standard normalization and uses the second raw moment instead of the variance as the division term. By applying this simple normalization to the postsynaptic potential, spike firing can be controlled and training can proceed appropriately. The experimental results show that SNNs with our normalization outperformed models using other normalizations. Furthermore, with pre-activation residual blocks, the proposed model can be trained with more than 100 layers without other special techniques dedicated to SNNs.


Introduction
Recently, spiking neural networks (SNNs) [1] have attracted substantial attention due to their ultra-low power consumption and high compatibility with hardware such as neuromorphic chips [2,3] and field-programmable gate arrays (FPGAs) [4]. In addition, SNNs are biologically more plausible than artificial neural networks (ANNs) because their neurons communicate with each other through spatio-temporal binary events (spike trains), similar to biological neural networks (BNNs). However, SNNs are difficult to train since spike trains are non-differentiable.
Several researchers have focused on the surrogate gradient to efficiently train SNNs [5][6][7][8][9]. The surrogate gradient is an approximation of the true gradient and is applied to the backpropagation (BP) algorithm [10]. Recent studies have successfully trained deep SNNs using this method [11]. However, it is still challenging to train deepened models due to the increasing difficulty of controlling spike firing.
To control spike firing properly, we propose a novel and simple normalization: postsynaptic potential normalization. Contrary to the standard batch/layer normalizations, our normalization removes the subtraction term from the standard normalization and uses the second raw moment instead of the variance as the division term. By applying this simple normalization to the postsynaptic potential (PSP), we can automatically control the firing threshold of the membrane potential and hence the spike firing. The experimental results on neuromorphic-MNIST (N-MNIST) [12] and Fashion-MNIST (F-MNIST) [13] show that SNNs with our normalization outperform models using other normalizations. We also show that the proposed method can train an SNN consisting of more than 100 layers without other special techniques dedicated to SNNs.
The contributions of this study are summarized as follows.
• We propose a novel and simple normalization technique based on the firing rate. The experimental results show that the proposed model can simultaneously achieve high classification accuracy and a low firing rate;
• We trained deep SNNs based on the pre-activation residual blocks [14]. Consequently, we successfully obtained a model with more than 100 layers without other special techniques dedicated to SNNs.
The remainder of the paper is organized as follows. Sections 2-4 describe the related work, the SNN used in this paper, and our normalization technique. Section 5 presents the experimental results. Finally, Section 6 presents the conclusion and future work.

Spiking Neuron
An SNN consists of spiking neurons that model the behavior of biological neurons and handle the firing timing of spikes. Owing to differences in approximation, several spiking neuron models have been proposed, such as the integrate-and-fire (IF) [15], leaky-integrate-and-fire (LIF) [16], Izhikevich [17], and Hodgkin-Huxley [18] models. In this study, we adopt the spike response model (SRM) [19] to deal with the refractory period (Section 3).
The refractory period is an essential function of biological neurons for suppressing spike firing. Spike firing occurs when the neuron's membrane potential exceeds the firing threshold. From a biological perspective, the membrane potential is computed from the PSP, which represents electrical signals converted from chemical signals. These behaviors are represented within the chemical synapse model shown in Figure 1a [20]. The SRM approximates this synaptic model more closely than the IF/LIF neurons widely used in SNNs.

Training of Spiking Neural Networks
It is well known that SNNs are difficult to train due to non-differentiable spike trains. Researchers are working on this problem, and their solutions can be divided into two approaches: first, ANN-SNN conversion [22][23][24][25], and second, the use of surrogate gradients [5][6][7][8][9]. The ANN-SNN conversion method reuses trained ANN parameters in an SNN, so sophisticated, state-of-the-art ANN models can be leveraged. However, this conversion approach requires many time steps during inference, which increases power consumption. In contrast, the surrogate gradient is used to directly train SNNs by approximating the gradient of the non-differentiable spiking neurons. We adopt the surrogate gradient approach since the resulting models require far fewer inference time steps than ANN-SNN conversion models [26].

Normalization
One of the techniques that have contributed to the success of ANNs is Batch Normalization (BN) [27]. BN is used to reduce the internal covariate shift, leading to a smooth landscape [28] while corresponding to the homeostatic plasticity mechanism of BNNs [29]. Using a mini-batch, BN computes the sample mean and standard deviation (STD). Meanwhile, several variants have been proposed to compute the sample mean and STD, such as Layer Normalization (LN) [30], Instance Normalization (IN) [31], and Group Normalization (GN) [32]. In particular, LN is effective at stabilizing the hidden state dynamics in recurrent neural networks for time-series processing [30].
Several normalization methods have also been proposed in the field of SNNs, such as threshold-dependent BN (tdBN) [33] and BN through time (BNTT) [34]. tdBN incorporates the firing threshold into BN, whereas BNTT computes BN at each time step. Furthermore, some studies used BN as is [35]. These studies applied the normalization to the membrane potential. In contrast, our method is applied to the PSP, as shown in Figure 2b, to simplify the normalization form (Section 4).

Spiking Neural Networks Based on the Spike Response Model
In this section, we describe the SNN used in this study. Our SNN is constructed using SRM [19]; it uses SLAYER [6] as the surrogate gradient function to train the SRM.

Spike Response Model
We adopt the SRM as the spiking neuron model [19]. The SRM combines the effects of the incoming spikes arriving at a spiking neuron and fires a spike when the membrane potential u(t) (t = 1, 2, · · · , T) reaches the firing threshold. Figure 1b,c illustrate the behavior of this model. The equations are given as follows:

u_i(t) = Σ_j w_{i,j} (ε * s_j)(t) + (ν * s_i)(t),  (1)
s_i(t) = f_s(u_i(t) − θ),  (2)

where w_{i,j} is the synaptic weight from the presynaptic neuron j to the postsynaptic neuron i, s_j(t) is the spike train input from the presynaptic neuron j, s_i(t) is the output spike train of the postsynaptic neuron i, * is a temporal convolution operator, and θ is a threshold used to control spike generation. f_s is the Heaviside step function, which fires a spike when the membrane potential u_i(t) exceeds the firing threshold θ, as shown in Figure 3. In addition, ε(·) and ν(·) are the spike response and refractory kernels, formulated using the exponential function as follows (for t ≥ 0, and zero otherwise):

ε(t) = (t/τ_s) exp(1 − t/τ_s),
ν(t) = −2θ (t/τ_r) exp(1 − t/τ_r),

where τ_s and τ_r are the time constants of the spike response and refractory kernels, respectively. Note that (ε * s_j)(t) represents the PSP. After firing, the postsynaptic neuron enters the refractory period and cannot fire until its membrane potential returns to its resting potential.

The main role of the refractory period is to suppress the firing rate at a given spike interval. If the spike interval T is constant, the firing rate without the refractory period is 1/T. If the refractory period r is taken into account, the firing rate becomes 1/(T + r). Therefore, the firing rate decreases as the refractory period increases, as shown in Figure 4. In SNNs, the firing rate is proportional to the computational cost; thus, using the refractory period both ensures biological plausibility and reduces computational cost.
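As a rough illustration, the SRM dynamics above can be simulated directly. The SLAYER-style kernel shapes follow this section; the time constants, synaptic weight, and input spike train below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

T = 100                    # number of time steps
tau_s, tau_r = 5.0, 5.0    # time constants of the response / refractory kernels
theta = 1.0                # firing threshold

t = np.arange(T, dtype=float)
eps = (t / tau_s) * np.exp(1 - t / tau_s)               # spike response kernel
nu = -2 * theta * (t / tau_r) * np.exp(1 - t / tau_r)   # refractory kernel (non-positive)

rng = np.random.default_rng(0)
s_in = (rng.random(T) < 0.2).astype(float)   # presynaptic spike train (toy input)
w = 0.8                                      # synaptic weight (toy value)

psp = np.convolve(s_in, eps)[:T]             # PSP = (eps * s_j)(t)
s_out = np.zeros(T)
u = np.zeros(T)
for step in range(T):
    # membrane potential: weighted PSP plus refractory feedback of own past spikes
    refr = np.convolve(s_out, nu)[:T][step]
    u[step] = w * psp[step] + refr
    s_out[step] = float(u[step] >= theta)    # Heaviside firing function f_s

print("output spikes:", int(s_out.sum()))
```

Because the refractory kernel is non-positive, the membrane potential with refractory feedback never exceeds the one without it, so the output spike count can only decrease, matching the 1/(T + r) argument above.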

Multiple Layers Spike Response Model
By using Equations (1) and (2), an SNN with multiple layers can be described as follows:

a^(l)(t) = (ε * s^(l))(t),
u^(l+1)(t) = W^(l) a^(l)(t) + (ν * s^(l+1))(t),
s^(l+1)(t) = f_s(u^(l+1)(t) − θ),

where a^(l)(t) ∈ ℝ_{≥0}^{C×W×H} and s^(l)(t) ∈ {0, 1}^{C×W×H} are the PSP and input spike tensor at time step t; C is the number of channels; and W and H are the width and height of the input spike tensor, respectively. Since a^(l)(t) does not take values less than zero, we consider an excitatory neuron. Furthermore, W^(l) ∈ ℝ^M is the weight matrix representing the synaptic strengths between the spiking neurons in layers l and l + 1; M is the number of neurons in the (l + 1)-th layer.
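The layered computation can be sketched as a toy fully connected forward pass: each layer convolves incoming spikes with the response kernel to obtain the PSP, multiplies by the weight matrix, and thresholds. The layer sizes and weights are illustrative, and the refractory term is omitted here for brevity.

```python
import numpy as np

T, tau_s, theta = 50, 5.0, 0.5
t = np.arange(T, dtype=float)
eps = (t / tau_s) * np.exp(1 - t / tau_s)   # spike response kernel

def layer_forward(s, W):
    # s: (num_in, T) spike trains; W: (num_out, num_in) synaptic weights
    psp = np.stack([np.convolve(row, eps)[:T] for row in s])  # a^(l)(t) >= 0
    u = W @ psp                                               # membrane potential
    return (u >= theta).astype(float)                         # f_s (Heaviside)

rng = np.random.default_rng(0)
s0 = (rng.random((6, T)) < 0.3).astype(float)   # input spike tensor (toy)
W1 = rng.normal(0, 0.3, (4, 6))                 # layer 1 weights (toy)
W2 = rng.normal(0, 0.3, (2, 4))                 # layer 2 weights (toy)
s2 = layer_forward(layer_forward(s0, W1), W2)
print(s2.shape)
```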

Deep SNNs by Pre-Activation Blocks
A deep neural network is essential to recognize complex input patterns. In particular, ResNet is widely used in ANNs [14,36], and its use in SNNs is expanding.
ResNet's residual blocks are divided into the post-activation and pre-activation types, as follows (Figure 5):

h^(k+1) = F(h^(k) + G(h^(k)))  (post-activation),
h^(k+1) = h^(k) + G(h^(k))  (pre-activation),

where h^(k) and h^(k+1) are the input and output of the (k + 1)-th block, respectively. G represents the residual function, corresponding to "Conv-Func-Conv" (post-activation) and "Func-Conv-Func-Conv" (pre-activation) in Figure 5; F represents the Func layer ("Spike-PSP-Norm"). Note that the refractory period is used in F. In the experimental section, we compare these blocks and show that deep SNNs can be trained using the pre-activation residual blocks. This result shows that identity mapping is an essential tool for training deep SNNs, as it is for ANNs [14].
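The structural difference between the two block types can be sketched with toy placeholders: `conv` and `func` below stand in for the paper's Conv and Func ("Spike-PSP-Norm") layers and are not the actual implementation. The point is that the pre-activation form leaves the identity path untouched by any nonlinearity.

```python
import numpy as np

def conv(x, w=0.5):
    # placeholder linear layer (stands in for a convolution)
    return w * x

def func(x):
    # placeholder Func layer (a simple nonnegative nonlinearity here)
    return np.maximum(x, 0.0)

def post_activation_block(h):
    # h_{k+1} = F(h_k + G(h_k)),  G = Conv-Func-Conv
    g = conv(func(conv(h)))
    return func(h + g)

def pre_activation_block(h):
    # h_{k+1} = h_k + G(h_k),  G = Func-Conv-Func-Conv
    g = conv(func(conv(func(h))))
    return h + g   # the identity path passes through unmodified

h = np.array([-1.0, 0.5, 2.0])
print(post_activation_block(h))
print(pre_activation_block(h))
```

In the post-activation form, the output is forced through `func`, which distorts the identity signal; in the pre-activation form, gradients and activations flow through the skip connection unchanged, which is the identity-mapping property discussed above.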

Surrogate-Gradient
We use SLAYER [6] as the surrogate gradient algorithm to train the multi-layer SNN. In SLAYER, the derivative of the spike activation function f_s in the l + 1 layer is approximated as follows (Figure 1d):

∂f_s/∂u^(l+1)(t) ≈ α exp(−β |u^(l+1)(t) − θ|),

where α and β are hyperparameters that adjust the peak value and sharpness of the surrogate gradient, and θ ∈ ℝ^M is the firing threshold. SLAYER can be used to train the SRM as described in [6].
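A minimal sketch of this idea: the non-differentiable Heaviside function is used in the forward pass, while α exp(−β|u − θ|) replaces its derivative in the backward pass. The α, β, and θ values below are illustrative, not the paper's settings.

```python
import numpy as np

theta, alpha, beta = 1.0, 1.0, 5.0   # illustrative hyperparameters

def spike_forward(u):
    # forward pass: non-differentiable Heaviside spike function f_s
    return (u >= theta).astype(float)

def spike_surrogate_grad(u):
    # backward pass: smooth surrogate for df_s/du, peaked at u = theta
    return alpha * np.exp(-beta * np.abs(u - theta))

u = np.linspace(-1.0, 3.0, 9)
print(spike_forward(u))
print(spike_surrogate_grad(u))
```

Because the surrogate is strictly positive everywhere, gradient signal reaches neurons whose membrane potential is far from the threshold, which is what makes backpropagation through spike trains feasible.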

Normalization of Postsynaptic Potential
In this section, we explain the derivation of our normalization, which is called postsynaptic-potential normalization, as shown in Figure 6a.
As the SNN becomes deeper, it becomes more difficult to control spike firing properly (Figure 6b,c). To tackle this problem, we first introduce the following typical normalization of the PSP:
â^(l) = γ ⊙ (a^(l) − E_X[a^(l)]) / √(V_X[a^(l)] + λ) + ξ,  (12)

where γ and ξ are trainable parameters, and the operator ⊙ denotes the Hadamard product. Each of the statistics E_X[a^(l)] and V_X[a^(l)] is approximated as follows:

E_X[a_i^(l)] = (1/|X|) Σ_{x=1}^{|X|} a_i^(l)(x),
V_X[a_i^(l)] = (1/|X|) Σ_{x=1}^{|X|} (a_i^(l)(x) − E_X[a_i^(l)])²,

where a_i^(l)(x) represents the x-th variable required to compute these statistics for the i-th variable of a^(l) ∈ ℝ_{≥0}^{C×W×H×N×T} (N is the mini-batch size), and X depends on which dimensions are summed over. For example, computing these equations as in BN gives X = W × H × N × T, and computing them as in LN gives X = W × H × C × T. Note that normalizing the PSP means that the normalization is inserted before the convolution or fully connected layer. This position differs from that of the other normalizations, which are applied to the membrane potential [33][34][35].
As shown in Equation (12), â^(l)(t) may take negative values. However, â^(l)(t) < 0 is not valid since the neurons of SLAYER represent excitatory neurons. This clearly arises from the trainable parameter ξ and the shift term E_X[a^(l)]. Thus, we modify Equation (12) as follows:

â^(l)(t) = γ ⊙ a^(l)(t) / √(V_X[a^(l)] + λ).

Next, we consider the case when û^(l+1)(t) reaches the firing threshold θ.
û^(l+1)(t) = Ŵ^(l) a^(l)(t) / √(V_X[a^(l)] + λ) + (ν * s^(l+1))(t) = θ.  (17)

Here, we have merged the trainable parameter γ and the weight matrix W^(l) into Ŵ^(l). This merging is possible because the normalization is performed before multiplication by W^(l). Then, we rearrange Equation (17) as follows:

Ŵ^(l) a^(l)(t) = √(V_X[a^(l)] + λ) ⊙ (θ − (ν * s^(l+1))(t)) ≡ θ̂(t).  (18)

Equation (18) shows that the firing threshold varies dynamically, as shown in Figure 3, which is consistent with the activity of cortical neurons in the human brain [37][38][39][40]. The refractory term (ν * s^(l+1))(t) shifts θ̂, while the factor √(V_X[a^(l)] + λ) scales it.

Next, we focus on the scale factor √(V_X[a^(l)] + λ). As shown in Equation (18), the firing threshold θ̂ becomes larger as the variance (second central moment) V_X[a^(l)] increases. However, considering the behavior of the membrane potential, θ̂ should become larger when the value of the PSP (not its variance) increases. Thus, we modify the equation as follows.
θ̂(t) = √(E_X[(a^(l))²] + λ) ⊙ (θ − (ν * s^(l+1))(t)),

where E_X[(a^(l))²] represents the second raw moment, consisting of the following variable:

E_X[(a_i^(l))²] = (1/|X|) Σ_{x=1}^{|X|} (a_i^(l)(x))².

By using this equation, we do not have to compute the mean beforehand, in contrast to using the variance.
In addition to E_X[(a^(l))²], the scale factor contains a hyperparameter λ. λ is usually set to a small constant, e.g., λ = 10⁻³, because it ensures numerical stability. Figure 7 shows the relationship between E_X[(a_i^(l))²] and θ̂ when changing θ and λ. As shown in this figure, θ̂ monotonically decreases as E_X[(a_i^(l))²] decreases. In particular, θ̂ is close to zero when λ is sufficiently small, regardless of the initial threshold θ. θ̂ ≈ 0 means that spikes fire at all times, even if the membrane potential is significantly small, making it difficult to train a proper model. Thus, we set a relatively large value (λ = 0.1) as the default.
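Putting the pieces together, the normalization itself reduces to a division by the square root of the second raw moment plus λ, with no mean subtraction. The following sketch uses the tensor layout (C, W, H, N, T) from this section and random nonnegative data as a stand-in for real PSPs; the axis tuples illustrate the PSP-BN and PSP-LN choices of X.

```python
import numpy as np

lam = 0.1  # numerical-stability constant (the default value from the text)

def psp_norm(a, axes, lam=lam):
    # second raw moment E_X[a^2] over the normalization axes X
    second_moment = np.mean(a ** 2, axis=axes, keepdims=True)
    # no subtraction term: nonnegative PSPs stay nonnegative
    return a / np.sqrt(second_moment + lam)

rng = np.random.default_rng(0)
a = rng.random((4, 8, 8, 16, 10))        # (C, W, H, N, T), toy nonnegative PSP

a_bn = psp_norm(a, axes=(1, 2, 3, 4))    # PSP-BN: X = W x H x N x T (per channel)
a_ln = psp_norm(a, axes=(0, 1, 2, 4))    # PSP-LN: X = W x H x C x T (per sample)
print(a_bn.shape, a_ln.shape)
```

Since the shift term is removed, the output keeps the sign (and hence the excitatory nature) of the input PSP, which is the property the derivation above requires.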

Experiments
In this section, we evaluate two PSP normalizations: BN (the most common normalization) and LN (which is effective in time-series processing, as in SNNs). We call them PSP-BN (X = W × H × N × T) and PSP-LN (X = W × H × C × T).

Experimental Setup
We evaluated PSP-BN and PSP-LN on spatio-temporal event and static image datasets: N-MNIST [12] and F-MNIST [13]. N/F-MNIST are widely used datasets containing 60 K training and 10 K test samples with 10 classes. Each sample is a 34 × 34 stream of 30,000 events (N-MNIST) or a 28 × 28 pixel image (F-MNIST). We partitioned the 60 K data into 54 K training and 6 K validation samples. We also resized the F-MNIST images from 28 × 28 to 34 × 34 to achieve higher accuracy.
We evaluated the performance of several spiking convolutional neural network models, such as a 14-convolutional-layer network, on N/F-MNIST. We also used deeper models, such as ResNet-106, on both N-MNIST and F-MNIST.
We used the hyperparameters shown in Table 1 in all experiments, implemented in PyTorch. We used the default initialization of PyTorch and report the best accuracies of all models. All experiments were conducted on a single Tesla V100 GPU. Owing to this computational resource limitation, and because SLAYER requires a significant amount of time to train, we randomly sampled 6 K of the training data of each dataset per epoch.

Effectiveness of Postsynaptic Potential Normalization
We first evaluate the effectiveness of our normalizations. Table 2 presents the accuracies of PSP-BN, PSP-LN, and other approaches. Note that we place our normalization before the convolution, as described in Section 4, which differs from the position proposed in previous studies [33][34][35]. The table illustrates that PSP-BN and PSP-LN achieve higher accuracies on both datasets than the other approaches.
We also investigate the effect of the proposed method on the firing rate. Figures 8 and 9 show the firing rates of each method. As shown in Figure 8, our normalized models can suppress the firing rate in most layers compared to the unnormalized model. Furthermore, Figure 9 and Table 2 show that our normalized models can simultaneously achieve high classification accuracy and low firing rate compared to other normalizations. These results verify the effectiveness of our normalizations.
Then, we also analyze the training and inference times of the proposed method. Figure 10 shows the computational cost of BN, PSP-BN, and PSP-LN. The training times of PSP-BN and PSP-LN are shorter than that of BN because our normalization method does not require trainable parameters (γ and ξ), as shown in Equation (17). The training times of PSP-BN and PSP-LN are almost the same because they differ only in X. In addition, there is no significant difference in inference time between the normalizations. These results show that our normalization is suitable for training SNNs.

Table 2. Accuracies on N-MNIST and F-MNIST obtained with different methods. PSP-BN and PSP-LN are our normalization methods, and None is the model without normalization. Here, "c", "n", and "o" represent the convolution, normalization, and output neurons, respectively. In addition, each layer and spatial dimension in the network are separated by "-" and "×".

Performance Evaluation of Deep SNNs by Residual Modules
Finally, we evaluate the performance of SNNs using the residual blocks. Table 3 shows the performance of SNNs using the pre-activation and post-activation residual blocks. As shown in this table, accuracy is substantially improved by using the pre-activation residual blocks. This result shows that the post-activation blocks employed in previous studies without a refractory period [5,11,33] are unsuitable for SNNs with a refractory period. Thus, while ensuring biological plausibility via the refractory period, we can obtain deep SNNs of more than 100 layers using our normalizations and pre-activation residual blocks.

Table 3. Performance comparison using post-activation and pre-activation residual blocks. We use ResNet-106 on the N-MNIST and F-MNIST datasets.

Discussion and Conclusions
In this study, we proposed an appropriate normalization method for SNNs. The proposed normalization removes the subtraction term from the standard normalization and uses the second raw moment as the denominator. By inserting this simple normalization before the convolutional layer, our normalized models outperformed models based on existing normalizations such as BN, BNTT, and tdBN. Furthermore, our proposed model with pre-activation residual blocks can be trained with more than 100 layers without any other special techniques dedicated to SNNs.
Besides the type of normalization, some papers have pointed out that tuning the hyperparameters τ_s and λ is essential for high accuracy [41,42]. Investigating this aspect, we found that PSP-BN is sensitive to λ, whereas PSP-LN is robust to both τ_s and λ (Figures 11 and 12). These results imply that the effectiveness of hyperparameter tuning depends on X. We will conduct a more detailed analysis in this regard in the future, and we will also analyze the effect on other datasets and networks. Furthermore, we aim to extend postsynaptic potential normalization based on tdBN to develop normalization techniques that are robust to the thresholds in spiking neurons.

Conflicts of Interest:
The authors declare no conflict of interest.