The Influence of the Number of Spiking Neurons on Synaptic Plasticity

The main advantages of spiking neural networks are their high biological plausibility and their fast response due to spiking behaviour. The response time decreases significantly in hardware implementations of SNNs because the neurons operate in parallel. Compared with traditional computational neural networks, SNNs use fewer neurons, which also reduces their cost. Another critical characteristic of SNNs is their ability to learn by event association, which is determined mainly by postsynaptic mechanisms such as long-term potentiation (LTP). However, in some conditions, presynaptic plasticity determined by post-tetanic potentiation (PTP) occurs due to the fast activation of presynaptic neurons. This violates the Hebbian learning rules, which are specific to postsynaptic plasticity. Hebbian learning improves the ability of an SNN to discriminate the neural paths trained by the temporal association of events, which is the key element of learning in the brain. This paper quantifies the efficiency of Hebbian learning as the ratio between the LTP and PTP effects on the synaptic weights. On the basis of this new idea, this work evaluates for the first time the influence of the number of neurons on the LTP/PTP ratio and, consequently, on the Hebbian learning efficiency. The evaluation was performed by simulating a neuron model that was successfully tested in control applications. The results show that the firing rate of the postsynaptic neuron (post) depends on the number of presynaptic neurons (pres), which increases the effect of LTP on the synaptic potentiation. When post activates at a requested rate, the learning efficiency varies inversely with the number of pres, reaching its maximum when no more than two pres are used. In addition, Hebbian learning is more efficient at lower presynaptic firing rates that are divisors of the target frequency of post. This study concluded that, when the electronic neurons model presynaptic plasticity in addition to LTP, the efficiency of Hebbian learning is higher when fewer neurons are used. This result strengthens the observations of our previous research, where an SNN with a reduced number of neurons could successfully learn to control the motion of robotic fingers.


Introduction
Spiking neural networks (SNNs) benefit from biological plausibility, fast response, high reliability, and low power consumption when implemented in hardware. An SNN operates using spikes, which are the effect of neuronal activation that occurs when a given threshold is exceeded; this provides the SNN with sensitivity to the occurrence of events [1,2]. Thus, one of the critical advantages of SNNs over traditional convolutional neural networks is the introduction of time into information processing. Another characteristic of SNNs is their ability to learn, which is also time-related, being based on the relative occurrence of events.

Long-Term Plasticity
The main mechanism that determines learning is long-term potentiation (LTP), which strengthens the synapses when the presynaptic neuron (pre) activates before the stimulated postsynaptic neuron (post). The reversed order of post and pre activation reduces the synaptic weights via long-term depression (LTD) [3]. The amplitude of the synaptic change due to pre-post and post-pre pairs depends strongly on the temporal difference between the activation of pre and post, respectively [4]. A detailed study of biological neurons in vitro showed that the LTP and LTD windows are asymmetric, causing LTP to dominate LTD [5], implying that the resultant effect is LTP. Indeed, the essential mechanism of learning in the hippocampus is LTP, as stated in neuroscience [6]. LTP is also the basic element of Hebbian learning, which determines the potentiation of weak synapses when these are paired with strong synapses that activate postsynaptic neurons [7]. This implies that Hebb's rules are critical for learning in the human brain, and they are the foundation of the most biologically plausible supervised SNN learning algorithms [8,9].
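To illustrate the asymmetry of the LTP/LTD windows mentioned above, the following Python sketch evaluates a standard exponential spike-timing-dependent plasticity window. The amplitudes and time constants are illustrative assumptions, not values taken from the cited studies.

```python
import numpy as np

# Illustrative STDP window: weight change as a function of the timing
# difference dt = t_post - t_pre. The amplitudes and time constants are
# assumptions chosen only to show the LTP/LTD asymmetry described above.
A_LTP, TAU_LTP = 1.0, 20e-3   # potentiation branch (pre before post)
A_LTD, TAU_LTD = 0.5, 20e-3   # depression branch (post before pre)

def stdp_dw(dt):
    """Synaptic weight change for a single pre/post pairing."""
    if dt >= 0:                            # pre fires before post -> LTP
        return A_LTP * np.exp(-dt / TAU_LTP)
    return -A_LTD * np.exp(dt / TAU_LTD)   # post fires before pre -> LTD

# Because the LTP branch is larger, random pairings yield a net
# potentiation, i.e. the resultant effect is LTP.
dts = np.random.uniform(-0.1, 0.1, 10_000)
print(np.mean([stdp_dw(dt) for dt in dts]))   # positive on average
```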

Hebbian Learning in Artificial Systems
Besides the biological importance of Hebbian learning, these rules are used in artificial systems, mainly for training competitive networks [8] and for storing memories in Hopfield neural networks [10,11]. Hebb's rule strengthens neural paths that show temporal correlations between pre and post activation. This implies that each neuron tends to pick out its own cluster of neurons whose activation is correlated in time, by potentiating the synapses that contribute to the activation of the postsynaptic neuron [12,13]. In this case, each neuron competes to respond to a subset of inputs, matching the principles of competitive learning [8]. In addition, recent research showed that Hebbian learning is suitable for training SNNs of high biological plausibility to control robotic fingers using external forces, mimicking the principles of physical guidance [14]. Here, the effect of the strong synapses that are driven by sensors was associated with the effect of weak synapses driven by a command signal [15].
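As a minimal sketch of the correlation-driven potentiation described above (a generic rate-based Hebbian update, not the circuit-level rule analysed later in this paper), synapses are strengthened only where presynaptic and postsynaptic activity coincide:

```python
import numpy as np

def hebbian_step(w, pre, post, lr=0.01, w_max=1.0):
    """Generic Hebbian update: each synapse w[i, j] grows with the product
    of postsynaptic activity post[i] and presynaptic activity pre[j], so
    only temporally correlated paths are strengthened (local rule)."""
    w = w + lr * np.outer(post, pre)
    return np.clip(w, 0.0, w_max)   # keep the weights bounded

# Example: two inputs, one output; only the co-active input is potentiated.
w = np.zeros((1, 2))
for _ in range(100):
    pre = np.array([1.0, 0.0])      # first input active, second silent
    post = np.array([1.0])          # postsynaptic neuron fires
    w = hebbian_step(w, pre, post)
print(w)   # first synapse potentiated, second unchanged
```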
Supervised learning based on gradient descent is more powerful than Hebbian learning [8] in computational applications. However, these error-correcting learning rules are not suitable for bioinspired SNNs because the explicit adjustment of the synaptic weights is typically not feasible. Therefore, for adaptive systems of high biological plausibility, Hebb's rules are more suitable for training the synapses in an unsupervised manner, as they are trained in the brain.
The repetitive activation of pres increases synaptic efficacy independently of post activity through presynaptic elements of learning such as post-tetanic potentiation (PTP) [16], which can last from a few seconds to minutes [17]. This synaptic potentiation represents an increase in the quantity of mediator released from the presynaptic membrane during pre activation [16,18]. PTP influences the motor learning specific to Purkinje cells, which plays a fundamental role in motor control [19]. Taking into account that this type of presynaptic long-term plasticity occurs in the absence of postsynaptic activity, the Hebbian learning mechanisms are altered by PTP [18].

The Number of Neurons in SNNs
Each neuron is a complex cell comprising multiple interacting parts and small chambers containing molecules, ions, and proteins. The human brain is composed of on the order of 10^11 neurons connected by about 10^15 synapses. Creating mathematical and computational models would be an efficient route towards understanding the functions of the brain but, even with an exponential increase in computational power, this does not seem achievable in the near future. Even if it could be achieved, the resulting simulation may be as complex as the brain itself. Hence, there is a need for tractable methods that reduce the complexity while preserving the functionality of the neural system. The size effect in SNNs has been approached in several ways. A statistical physics formalism based on the many-body problem was used to derive the fluctuation and correlation effects in finite networks of N neurons as a perturbation expansion in 1/N around the mean-field limit N → ∞ [20]. Another method used to optimise the size and resilience of SNNs is empirical analysis using evolutionary algorithms. Thus, smaller networks may be generated by using a multiobjective fitness function that incorporates a penalty for the number of neurons when evaluating every network in a population [21].
In addition, research on computational neural networks showed that, for classification problems, SNNs use fewer neurons than the second generation of artificial neural networks (ANNs) does [22]. The hardware implementation of SNNs also demonstrated their efficacy in modelling conditional reflex formation [23] and in controlling the contraction of artificial muscles composed of shape memory alloy (SMA). In the latter applications, SNNs with only a few excitatory and inhibitory neurons have been able to control the force [24,25] and learn the motion [14,15] of anthropomorphic fingers. Moreover, using fewer neurons is important for reducing the cost and increasing the reliability of the hardware implementation of SNNs.
Analysing the Hebbian learning efficiency of adaptive SNNs provides a useful tool for reducing the size of experimental networks and minimising the simulation time while preserving their bioinspired features.

The Goal and Motivation of the Current Research
The presynaptic long-term plasticity determined by PTP reduces the efficiency of Hebbian learning, which is determined by LTP. The Hebbian mechanism is critical for making the neural network respond to concurrent events by potentiating the untrained synapses when they are activated together with the trained neural paths. PTP, in contrast, potentiates synapses in the absence of a postsynaptic response, meaning that this causality is broken.
Considering these aspects, the goal of this paper is to determine in which conditions the effect of LTP over PTP is maximised, increasing the efficacy of Hebbian learning. Typically, fewer neurons must fire at a higher rate, or have more strongly potentiated synapses, to activate post above the preset rates. Reducing the number of neurons can therefore increase both the firing rate of the pres and the synaptic weights that are necessary to reach the requested frequencies of post.
At certain firing rates and synaptic weights, the ratio between the LTP and PTP rates can be higher, implying that associative learning is more efficient. Considering that dW_LTP represents the maximal contribution of LTP and dW_PTP the effect of PTP on the synaptic weight during the training period t_L, there is a maximal ratio r_W^MAX of dW_LTP^rMAX to dW_PTP^rMAX. In this work, we consider that the maximal efficiency of Hebbian learning corresponds to r_W^MAX and, hence, to dW_LTP^rMAX. If a target frequency f_POST is requested for the postsynaptic neuron, then a minimal number of untrained presynaptic neurons n_UNT with weight dW_LTP^rMAX can be activated to reach f_POST. Therefore, in the ideal case when LTP is maximal, n_UNT depends on the functions that describe the weight variation by PTP and LTP, and on f_POST.
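A numerical sketch of the quantity introduced above: given models of the weight change produced by LTP and PTP over a training period, the learning-efficiency ratio r_W and its maximum can be located. The functional forms and constants below are placeholder assumptions, not the circuit equations derived in Section 2.

```python
import numpy as np

# Placeholder models of the weight change accumulated during a training
# period t_L; the saturating forms below are assumptions for illustration,
# not the circuit analysis of Section 2.
def dW_LTP(t_L, f_post):
    # needs repeated pre-post pairings before it saturates
    return (1.0 - np.exp(-f_post * t_L / 50.0)) ** 2

def dW_PTP(t_L, f_pre):
    # accumulates with every presynaptic spike
    return 1.0 - np.exp(-f_pre * t_L / 200.0)

def r_W(t_L, f_pre, f_post):
    """Hebbian learning efficiency: ratio of the LTP to the PTP effect."""
    return dW_LTP(t_L, f_post) / dW_PTP(t_L, f_pre)

# Sweeping the training period locates r_W^MAX and the corresponding
# dW_LTP^rMAX used in the text.
t = np.linspace(0.01, 2.0, 200)
ratios = r_W(t, f_pre=50.0, f_post=100.0)
print(t[np.argmax(ratios)], ratios.max())
```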
Starting from these ideas, the contribution of this work is twofold: (i) the quantification of Hebbian learning efficiency as the ratio between the LTP and PTP effects; (ii) the evaluation of the influence of the number of neurons on the efficiency of Hebbian learning, focusing on an SNN with a reduced number of neurons (fewer than 20 per area).
As presented in Sections 1.2 and 1.3, there are several comprehensive studies related to Hebbian learning or focused on the influence of the number of neurons on the performance of adaptive SNNs. However, there are no studies that overlap these two research directions in systems of high biological plausibility.
The rest of the paper is organised as follows: Section 2 presents the general structure of the neural network and the experimental phases focusing on the proposed neuron model, and on the implementation of PTP and LTP mechanisms. The experimental results along with the details for each measured item are presented in Section 3. The paper ends with Section 4, which discusses the results, focusing on the biological plausibility of the used model, and presents some considerations for future research.

Materials and Methods
The SNN is based on a neuron model of high biological plausibility [14]. Although this electronic neuron was implemented and tested in PCB hardware, the analysis presented in this work is based on SPICE simulations of the electronic circuit.

The Model of the Artificial Neuron
An artificial neuron includes a SOMA and one or more synapses. The electronic SOMA models elements related to information processing, such as the temporal integration of incoming stimuli, the detection of the activation threshold, and a refractory period. The electronic synapses model presynaptic elements of learning such as PTP, as well as the postsynaptic plasticity that determines Hebbian learning via long-term potentiation (LTP). In addition, a synapse stores the synaptic weight using a capacitor that can be charged or discharged in real time using cheap circuits [26,27]. Electronic synapses generate excitatory or inhibitory spikes to feed the corresponding posts. Figure 1 shows the main elements related to learning that are included in the neuronal schematic, which is detailed in Appendix A [14]. The neuron detects the activation threshold using transistor T_M and, during activation, T_S generates a spike of variable energy E_SPK that depends on the synaptic weight w_s. In this work, we refer to w_s as the voltage V_W read across the capacitor C_L shown in Figure 1. The synapse is potentiated by PTP, which is modelled by the discharge of C_L when the neuron activates, and by LTP, which alters the charge in the capacitor during the activation of post. The potential V_W determines the duration of the spike generated at OUT, modelling the effect of the synaptic weight on the postsynaptic activity. The spike duration t_SPK is determined by V_W because, during SOMA activation, transistor T_S is open as long as V_U (which is proportional to V_W) is below the emitter-base voltage of T_S. The variation in V_U is given by Equation (1). The initial potential V_U0 is calculated using Equation (2) for the cut-off regime and Equation (3) for the saturated regime of transistor T_S, in which the forward and emitter-base voltages of T_S appear as parameters. Similarly, after SOMA inactivation, V_U is restored to V_DD according to Equation (4), where V_UI is the initial value of V_U when the SOMA inactivates and C_U starts charging.
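The dependence of the spike duration on the stored weight voltage can be sketched numerically. The exact relations are Equations (1)-(4); the code below assumes, for illustration only, an exponential RC charge of V_U toward V_DD and placeholder component values, so it is a stand-in rather than the circuit's actual transfer function.

```python
import numpy as np

# Sketch of how the spike duration follows the weight voltage V_W.
# Assumptions (not from the paper): V_U starts proportional to V_W and
# charges exponentially toward V_DD; component values are placeholders.
V_DD, V_EB = 5.0, 0.65        # supply rail and emitter-base threshold of T_S
R_U, C_U = 100e3, 10e-9       # assumed charging resistor and capacitor

def t_spk(v_w, k=0.3):
    """Time for V_U (initially k*V_W) to reach V_EB, i.e. how long T_S
    stays open and the output spike lasts. Lower V_W -> longer spike,
    which models a more potentiated synapse."""
    v_u0 = k * v_w
    tau = R_U * C_U
    return tau * np.log((V_DD - v_u0) / (V_DD - V_EB))

for v_w in (0.2, 0.7, 1.6):   # the V_W range used later in the paper
    print(v_w, t_spk(v_w))
```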

Model for PTP and LTP
The activation of the neuron, which lasts t_SPK = 44 µs, reverses the polarity of C_L, which is discharged by the amount given by Equation (5). Equation (5) models the potentiation of the synapse by PTP. To express LTP, we should consider the charge variation in C_L when it is discharged through C_A, followed by a reset of the charge in C_A during post activation.
During neuronal activation, the potential in capacitor C_L varies according to Equation (6), where the equivalent capacitance is given by Equation (7). Considering that ∆V_A and ∆V_C represent the variation in potential in C_A and C_L, respectively, we denote their ratio as in Equation (8). Thus, the variation in the potential in C_L that represents the temporary potentiation of the synapse is given by Equation (9). During the neuronal idle state after activation, the resultant variation of the potential in C_L and C_A is given by Equation (10), where t_W = t_post − t_pre is the time window between the moments of neuronal activation. C_A discharges into C_L until the potentials in both capacitors reach equilibrium. This variation restores the synaptic weight to the value it had before the activation of the presynaptic neuron. If the postsynaptic neuron fires during the restoration of the synaptic weight, capacitor C_A is discharged at a significantly higher rate until equilibrium is reached. Taking into account that R_L << R_A + R_M, the potential variation in C_L during post activation is negligible. This implies that the variation in the potential in C_L that models the weight variation by LTP is given by Equation (11). ∆V_S decreases according to Equation (8), implying that ∆V_LTP depends on the time window t_W.
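The mechanism can be sketched as a simple charge-sharing model between C_L and C_A: each presynaptic spike produces a small permanent discharge of C_L (PTP), while the temporary change ∆V_S shared with C_A decays back unless post fires soon enough to consolidate it (LTP). The component values, the discharge path, and the exponential relaxation are assumptions for illustration; the exact relations are Equations (5)-(11).

```python
import numpy as np

# Charge-sharing sketch of the two potentiation mechanisms. Component
# values, the discharge path and the exponential relaxation are assumed.
C_L, C_A = 100e-9, 47e-9                 # weight and auxiliary capacitors (assumed)
R_RESTORE = 1e6                          # leak path restoring the weight (assumed)
TAU = R_RESTORE * (C_L * C_A) / (C_L + C_A)
T_SPK = 44e-6                            # spike duration from the text

def dV_PTP(v_w, r_dis=50e3):
    """Per-spike discharge of C_L during activation (PTP). Because a lower
    V_W models a stronger synapse, PTP moves the weight voltage downwards."""
    return v_w * (1.0 - np.exp(-T_SPK / (r_dis * C_L)))

def dV_LTP(dV_S, t_W):
    """Portion of the temporary change dV_S still present when post fires
    t_W seconds after pre; C_A slowly restores the weight, so later post
    activation consolidates less of the change (smaller LTP)."""
    return dV_S * np.exp(-t_W / TAU)

for t_w in (1e-3, 10e-3, 50e-3):         # shorter pre-post windows keep more LTP
    print(t_w, dV_PTP(0.7), dV_LTP(0.2, t_w))
```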
For this neuron design, w_s varies in the opposite direction with V_W, which spans the range [0.2 V, 1.6 V]. This implies that a lower V_W models a higher synaptic potentiation. To simplify the presentation, in this research we refer to the variation dW in the voltage across C_L that occurs during potentiation.
Therefore, the experiments presented in the following section focus on evaluating how the number of neurons affects the learning efficiency.

The Structure of the SNN
The synaptic configuration includes two presynaptic neural areas, preNA_T and preNA_UNT, which include n_T and n_UNT neurons, respectively. The pres included in these neural areas connect to only one post, as in Figure 2a. To allow weight variation by LTP, at the beginning of each experiment, the synapses S_T between preNA_T and post were fully potentiated so that they could activate post, while the weights of the synapses S_UNT driven by preNA_UNT were minimal. The SNN included additional neurons Pre_AUX and Post_AUX for the evaluation of the potentiation by PTP of S_AUX, which had the same value as S_UNT. This allowed us to compare the PTP and LTP effects in similar conditions. As shown in Figure 2b, the neurons in each presynaptic area were activated by constant potentials V_1, ..., V_N or by pulses, as detailed in the sequel. To model the variability of the electronic components, the input resistors R_1, R_2, and R_N, shown in Figure 2b, were set within a 10% interval, which varied the firing rate of the pres slightly. The firing rate of the postsynaptic neuron f_POST and the variations in the synaptic weights dW_LTP and dW_PTP due to LTP and PTP, respectively, were determined via measurements on the simulated electronic signals [28,29]. The input voltages for the activation of the pres were set to several values in order to activate the neurons in the range used during our previous experiments [14]. In order to highlight the influence of the number of neurons on the Hebbian learning efficiency, the initial synaptic weights were minimal, extending their variation range.
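A small configuration sketch of this structure (the nominal resistor value, the weight-voltage conventions, and the class itself are illustrative assumptions, not part of the published circuit):

```python
import numpy as np
from dataclasses import dataclass

rng = np.random.default_rng(0)

@dataclass
class PresynapticArea:
    name: str
    n_neurons: int
    v_w_init: float   # initial weight voltage V_W (lower = more potentiated)

    def input_resistors(self, r_nominal=100e3, spread=0.10):
        """Nominal input resistors varied within +/-10% to model component
        variability, which slightly spreads the presynaptic firing rates."""
        return r_nominal * rng.uniform(1 - spread, 1 + spread, self.n_neurons)

# Trained area starts fully potentiated; untrained area starts with minimal
# weights so that LTP has the widest possible variation range.
pre_T   = PresynapticArea("preNA_T",   n_neurons=1, v_w_init=0.2)
pre_UNT = PresynapticArea("preNA_UNT", n_neurons=3, v_w_init=1.6)
print(pre_UNT.input_resistors())
```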

Experimental Phases
The experiments started with a preliminary phase in which we determined dW_LTP and dW_PTP for a single spike when f_UNT took several values, as well as the variation in f_POST with the number n_UNT. Following these preliminary measurements, we evaluated the efficiency of Hebbian learning by calculating the ratio r_W = dW_LTP / dW_PTP during several phases, as follows.
Phase 1. The value of dW_LTP was determined when n_T and n_UNT in the neural areas preNA_T and preNA_UNT, respectively, varied independently or simultaneously. These results were compared with the effect of PTP when only the neurons in the untrained area preNA_UNT were activated. The variation dW_LTP in the synaptic weight included the potentiation due to LTP, determined by pre-post pair activation, and the potentiation due to PTP, which occurred with each pre action potential.
Typically, the frequency of post can be controlled within certain limits by adjusting the firing rate of the pres, independently of the number of neurons. In order to simplify the SNN structure during Phases 2 and 3, preNA_T included one neuron.
Phase 2. Next, we determined the variation in dW_LTP and dW_PTP when the synapses in preNA_UNT were trained until they were able to activate post in the absence of preNA_T activity.
Phase 3. For the last phase of the experiments, we considered a fixed frequency f_M of the output neuron that matched the firing rate of the output neurons that actuated the robotic joints in our previous experiments [14]. Thus, the SNN was trained until the firing rate of post reached f_M = 100 Hz when stimulated only by preNA_UNT, independently of preNA_T. In order to extract the contribution of PTP to dW_LTP, neuron Pre_AUX was activated at the same rate as preNA_UNT, and dW_PTP was measured.
For the untrained pres, we set different frequencies that were not divisors of the firing rate of post, mimicking a less favourable scenario of neuronal activation. In this case, the time interval between pre and post activation varied randomly, increasing the diversity of the weight gain per action potential of post. In a favourable scenario, the frequency of the pre is a divisor of the firing rate of post, which improves the weight gain via the synchronisation of neuronal activation.
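The two scenarios can be checked with a short, idealised sketch (periodic spike trains with an arbitrary presynaptic phase; this simplification is an assumption, not the SPICE simulation used in the paper): when f_pre divides f_post, every pre spike is followed by a post spike at the same delay, whereas non-divisor rates spread the pre-post intervals.

```python
import numpy as np

rng = np.random.default_rng(1)

def pre_post_intervals(f_pre, f_post, duration=2.0):
    """Time from each pre spike to the next post spike for idealised
    periodic trains with an arbitrary pre phase. When f_pre divides f_post
    the intervals repeat identically (phase-locked, favourable scenario);
    otherwise they spread over the post period (less favourable)."""
    phase = rng.uniform(0, 1.0 / f_pre)
    pre_times = np.arange(phase, duration, 1.0 / f_pre)
    post_times = np.arange(0.0, duration, 1.0 / f_post)
    idx = np.searchsorted(post_times, pre_times)
    keep = idx < len(post_times)
    return post_times[idx[keep]] - pre_times[keep]

for f_unt in (25.0, 75.0):   # 25 Hz divides 100 Hz, 75 Hz does not
    print(f_unt, np.unique(np.round(pre_post_intervals(f_unt, 100.0), 4)))
```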

Results
The results obtained during the experimental phases described above are presented here.

Preliminary Phase
In order to assess the influence of the electronics on the synaptic potentiation during a single spike, we determined the weight variation by PTP for several values of V_IN when pre activated once. As presented in Figure 3a, PTP decreased as f_UNT increased. However, over the long term, this variation was compensated for by the number of spikes per time unit, which increased at a higher rate with f_UNT. A similar evaluation was performed for LTP when post was activated by the trained neurons at t = 0.002 s after the activation of the untrained pre. In this case, the influence of LTP presented in Figure 3b was extracted from the measured dW by subtracting the PTP effect shown in Figure 3a. Typically, the output frequency of an SNN depends on the number of pres that stimulate post, as shown in Figure 4a. Starting from this observation, we determined dW_LTP after 2 s of training for different numbers of pres in preNA_UNT and preNA_T when f_UNT = 75 Hz and f_T = 100 Hz. Figure 4b-d show that dW_LTP depended on the number of pres following different patterns for the trained and untrained neurons. In addition, the learning rate by LTP increased with the number of pres, mainly due to the higher values of f_POST determined by the activation of more pres.

The Efficiency of Hebbian Learning
The variation in the ratio r_W with the synaptic weight voltage V_W is presented in Figure 5a. This represents the ideal case when LTP is maximal, which was obtained by activating the postsynaptic neuron shortly after the untrained pre. r_W was maximal for a specific weight that was far from the limits of the variation interval.
The weight variation dW for different firing rates of pre was determined for both PTP and LTP when the neurons activated for a fixed period of time t = 2 s. The data plotted in Figure 5b show that the ratio r_W decreased significantly when the frequency of the untrained pres was above 50 Hz. Thus, taking into account that r_W was almost stable for a single neuron in preNA_UNT, for the next experimental phase we evaluated the influence of the number of neurons on r_W for a fixed activation frequency f_UNT = 50 Hz.
In this setup, the SNN was trained until the first activation of post by the synapse that was potentiated by LTP. The PTP level for the synapse S_AUX was determined via the activation of the auxiliary neuron Pre_AUX (see the SNN structure in Figure 2a) at the same frequency as that of the untrained pres. In order to determine whether r_W had a similar variation for another frequency of the neurons in preNA_UNT, we performed similar measurements for f_UNT = 75 Hz. As presented in Figure 6, the variation in r_W showed that the best learning efficiency was obtained for n_UNT = 1 neuron when f_UNT = 75 Hz and for n_UNT = 3 neurons when f_UNT = 50 Hz. The different numbers of pres indicated that f_UNT influenced the optimal number of neurons for the best learning efficiency when the neural paths were trained until the first activation of post by preNA_UNT independent of preNA_T. The next experimental phase evaluated r_W and the duration t_L of the training process when the firing rate of the output neuron reached f_M = 100 Hz, while the untrained pres in the area preNA_UNT activated at several firing rates in a set that included divisors of f_M.
The plots in Figure 7a,b show that the weight variations dW_PTP and dW_LTP decreased when the number of untrained pres n_UNT increased. Typically, dW_PTP is proportional to t_L, implying that the SNN learns faster when more neurons activate post. The improved value of t_L was determined by the lower weights that were necessary to activate post at the requested firing rate. In order to compare dW_PTP and dW_LTP, we determined the w_s that were potentiated by PTP when the neuron Pre_AUX was activated at the same rate f_UNT as the neurons in the area preNA_UNT. Figure 8a shows the variation in r_W for several firing rates f_UNT that were not divisors of f_M. In this case, the local maximum r_W^local was obtained for a single neuron per area (n_UNT = 1). Taking into account that LTP may be more efficient when f_UNT is a divisor of f_M due to the synchronisation of the pre-post neurons, we evaluated the weight variation for f_UNT = 25, 33.3, and 50 Hz. In order to eliminate the variation of PTP with the continuous input voltage of the neurons, the pres were activated with digital pulses of the same amplitude generated at rate f_UNT. The results presented in Figure 8b show that the maximal r_W^local was obtained for n_UNT = 2 neurons. The best learning efficiency r_W^MAX was obtained when f_UNT = 25 Hz and the untrained presynaptic area included two neurons.
Typically, the weight variations produced by LTP and PTP follow different patterns as the training duration t_L increases. The difference between the two functions implies that there is a value t_L^max for which the ratio between LTP and PTP is maximal. This value corresponds to a synaptic weight w_LTP obtained by LTP and, consequently, to a potential V_W. Typically, there is a minimal number of pres n_UNT firing at a fixed frequency that are able to activate post when the weight is w_LTP. In our work, the best r_W = 4.28 for n_UNT = 2 neurons corresponded to the potential V_W = 0.7 V in the weight capacitor.
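A short sketch of how such a sweep can be summarised (the function and variable names are illustrative; the measurement vectors would come from the simulated signals, so none are fabricated here):

```python
import numpy as np

def best_learning_efficiency(n_unt, dW_ltp, dW_ptp):
    """Given the measured weight changes for each tested number of untrained
    pres, return the n_UNT and the maximal ratio r_W = dW_LTP / dW_PTP."""
    r_w = np.asarray(dW_ltp) / np.asarray(dW_ptp)
    i = int(np.argmax(r_w))
    return n_unt[i], float(r_w[i])

# Usage: feed the per-n_UNT measurements extracted from the simulations;
# for the setup described in the text, such a sweep identifies r_W = 4.28
# at n_UNT = 2 (corresponding to V_W = 0.7 V).
```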

Discussion and Conclusions
At the synaptic level, the neural paths in the brain are trained by associative or Hebbian learning, which is based on long-term potentiation, the postsynaptic element of learning. From a biological point of view, presynaptic long-term plasticity violates the Hebbian learning rules that depend on postsynaptic activity. Previous research on SNNs showed that control systems can use a reduced number of electronic neurons, while in classification tasks SNNs use fewer neurons than traditional CNNs do. Starting from these ideas, this work focused on evaluating the influence of the number of neurons on the efficiency of Hebbian learning, characterised as the ratio of the LTP and PTP effects on the synaptic weights. A higher ratio increases the effect of LTP and, consequently, the power of the SNN to discriminate the neural paths that are trained by associative learning from the paths where only presynaptic plasticity occurs.

The simulation results showed that, although LTP depends mainly on the frequency of the postsynaptic neurons, the number of neurons affects the Hebbian learning efficiency when the posts must reach a predefined frequency. In this case, the best LTP/PTP ratio was obtained when the frequency of the untrained pres was the lowest divisor of the target frequency of post. The efficiency of Hebbian learning reached a maximum for two pres and decreased as the number of pres increased. Taking into account that the LTP/PTP ratio was better for a certain number of neurons, we can deduce that certain synaptic weights result in a better Hebbian learning efficiency. Indeed, the position of the maximal r_W inside the variation interval (Figure 8b) matched the variation in r_W in the ideal case presented in Figure 5a. This implies that the minimal number of neurons that were necessary to activate post at the requested firing rate was related to the synaptic weight. In conclusion, previous research showed that electronic SNNs with a reduced number of neurons can be trained efficiently by Hebbian learning, while the current research strengthens this idea, showing that fewer neurons improve associative learning. This could reduce the cost and improve the reliability of the hardware implementation of SNNs.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

SNN: spiking neural network
ANN: artificial neural network
CNN: convolutional neural network
LTP: long-term potentiation
LTD: long-term depression
PTP: post-tetanic potentiation
SMA: shape memory alloy
PCB: printed circuit board

Appendix A

Figure A1 presents the schematic circuit, including the parametric values, of the electronic neuron that was implemented in PCB hardware [28]. The neuron includes one electronic soma (SOMA) and one or more electronic synapses (SYN). The SOMA detects the neuronal activation threshold using transistor T_1 and activates the SYNs. The SOMA of the postsynaptic neurons that are stimulated by excitatory or inhibitory synapses includes an integrator of the input activity. When the SOMA activates the connected SYNs, S_OUT generates pulses at their output N_OUT, whose energy depends on the charge stored in the weight capacitor C_L.