Article

Hardware Implementation of an Approximate Simplified Piecewise Linear Spiking Neuron

1
Faculty of Electronics and Information Engineering, Harbin Institute of Technology, Shenzhen 518055, China
2
Sino-German School, Shenzhen Institute of Information Technology, Shenzhen 518055, China
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(12), 2628; https://doi.org/10.3390/electronics12122628
Submission received: 4 May 2023 / Revised: 5 June 2023 / Accepted: 9 June 2023 / Published: 11 June 2023
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))

Abstract

Artificial intelligence has revolutionized image and speech recognition, but the neural network fitting method has limitations. Neuromorphic chips that mimic biological neurons can better simulate the brain’s information processing mechanism. As the basic computing component of new neuromorphic networks, the design and implementation of new neural computing units is of great significance; however, complex dynamical features come with a high computational cost. Approximate computing, which has unique advantages in optimizing the computational cost of neural networks, can solve this problem. This paper proposes a hardware implementation of an approximate spiking neuron structure, based on a simplified piecewise linear (SPWL) model, to optimize power consumption and area. The proposed structure can achieve five major neuronal spike generation patterns. The proposed design was synthesized and compared to similar designs, to evaluate its potential advantages and limitations. The results showed that the approximate spiking neuron had the lowest computational cost and the fastest computation speed. A typical spiking neural network was constructed, to test the usability of the SPWL model. The results showed that the proposed approximate spiking neuron could work normally in the spiking neural network, and achieved an accuracy of 94% on the MNIST dataset.

1. Introduction

Artificial intelligence based on deep neural networks has been widely applied in image and speech recognition in recent years. Many groundbreaking advances have been made through the joint efforts of academia and industry; however, there are significant development bottlenecks for artificial intelligence based on the neural network fitting idea [1]. It is necessary to study new design and architectural principles. A possible research direction is to draw inspiration from biology, as the biological nervous system has high capacity, low power consumption, parallel processing, and self-learning characteristics. For example, the most advanced biological data processor, the human brain, is composed of a large number of neurons and synapses, with over 100 billion neurons and approximately $10^{15}$ synapses. Compared to traditional computers, the human brain is undoubtedly a more powerful intelligent platform, capable of adapting to complex and unfamiliar scenarios, mastering new skills, and making decisions [2]. In addition, the human brain is more efficient and reliable, achieving low power consumption and robustness while performing the above functions.
Inspired by the biological processor, neuromorphic chips have become one of the most popular directions in artificial intelligence. In neuromorphic chips, the function of neurons is no longer simple linear multiplication and addition. On the one hand, the neuron model must be close to the behavioral patterns of biological neurons, to simulate the biological nervous system’s reaction mechanism and working principle. On the other hand, according to the requirements of circuits and algorithms, the neuron model must be simple, to achieve the corresponding functions, making large-scale integration and production possible.
Compared to traditional artificial neurons, the output of a spiking neuron is a spiking signal, rather than a continuous activation value. A spiking neuron is a time-based neuron, whose activation is achieved by producing a series of spiking signals within a certain time window. In the biological nervous system, spiking neurons are one of the brain’s most basic types of neuron, and form the foundation of information transmission and processing. The model of spiking neurons is closer to the operating mode of biological neurons, so it can better simulate the information processing mechanism of the brain. Spiking neurons have unique properties, such as temporal coding, which make them advantageous in certain application areas. With the cross-development of neuroscience and computer science, spiking neurons are increasingly being applied in neural computing and machine learning. In a spiking neural network, more flexible and efficient information processing and learning can be achieved by constructing a structure with multiple layers of spiking neurons [3].
However, the mainstream spiking neuron models have a common problem: balancing computation characteristics and computation costs is difficult. Models more in line with biological characteristics often have more complex parameters and higher computational costs, making it difficult to accept the cost of network structures based on these neurons [4]. One way to solve this problem is through approximate computing methods, because spiking neuron models driven by biological characteristics often have strong error tolerance. Sacrificing some computational accuracy can significantly optimize neuron computational costs, without affecting the expression of biological characteristics or the overall performance of spiking neural networks.
This paper proposes a simplified hardware structure for the piecewise linear spiking neuron, which reduces the computational cost of the spiking neuron to some extent, while preserving its computational characteristics.
The rest of this article is arranged as follows: Section 2 introduces the simplified piecewise linear model used in the design; Section 3 describes the hardware structure of the approximate spiking neuron; Section 4 presents the simulated and synthesized results of the proposed structure, and describes the construction of a typical spiking neural network to verify its viability; Section 5 discusses related work, and summarizes the existing designs of spiking neurons; Section 6 presents the conclusion.

2. Simplified Piecewise Linear Model

2.1. Original Piecewise Linear Spiking Neuron

The piecewise linear spiking neuron model [5] combines the Hodgkin–Huxley and the Integrate-and-Fire models. By analyzing the phase plane of the H–H neuron model, the V-shaped curve that approximates the zero line of the membrane potential is replaced by two straight lines. By contrast, the zero line of the recovery variable is still represented by a straight line. The advantage of this model is that it can qualitatively describe the neural excitation system, using bifurcation theory, and quantitatively analyze neuron behavior, using the analysis of state variables. Therefore, the piecewise linear spiking neuron has many neuron characteristics not possessed by IF-type neurons. The membrane potential update expression of the piecewise linear spiking neuron model is provided by Equation (1):
$$\tau_m \frac{dV}{dt} = -(V - V_{\text{rest}}) + g_s\,(V - V_{\text{thresh}})_+ - U + I, \qquad \tau_r \frac{dU}{dt} = k\,(U - V_{\text{rest}}) - U.$$
When the membrane potential $V$ of the neuron is greater than or equal to the spike peak voltage $V_{\text{peak}}$, the corresponding auxiliary reset mechanism is provided by Equation (2):
$$V \leftarrow V_{\text{reset}}, \qquad U \leftarrow U + U_{\text{reset}}.$$
In the equation, $(x)_+$ denotes the linear rectification function, where $(x)_+ = x$ when $x > 0$ and $(x)_+ = 0$ otherwise. Table 1 shows the meaning of the parameters in the piecewise linear spiking neuron model.
Piecewise linear spiking neuron models can express more diverse neural computation features than IF-type linear models. The dynamic analysis method of this model is simple, and its neural performance can be quantitatively analyzed, using the analysis formula of state variables. Although the piecewise linear model lacks the biological interpretability of physiological models like Hodgkin–Huxley, it still possesses similar neural dynamic characteristics.
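The dynamics of Equations (1) and (2) can be sketched with a simple forward-Euler simulation. The function name `simulate_pwl` and all parameter values below are illustrative placeholders, not the settings used in the paper:

```python
import numpy as np

def simulate_pwl(I, dt=0.1, tau_m=10.0, tau_r=80.0, g_s=2.0, k=0.5,
                 V_rest=-60.0, V_thresh=-40.0, V_peak=30.0,
                 V_reset=-55.0, U_reset=5.0):
    """Forward-Euler sketch of the piecewise linear (PWL) spiking neuron
    of Equations (1)-(2). Parameter values are illustrative only."""
    relu = lambda x: max(x, 0.0)          # the (x)_+ rectifier
    V, U = V_rest, 0.0
    Vs, spikes = [], []
    for n, i_n in enumerate(I):
        dV = (-(V - V_rest) + g_s * relu(V - V_thresh) - U + i_n) / tau_m
        dU = (k * (U - V_rest) - U) / tau_r
        V += dt * dV
        U += dt * dU
        if V >= V_peak:                   # auxiliary reset, Equation (2)
            V = V_reset
            U += U_reset
            spikes.append(n)
        Vs.append(V)
    return np.array(Vs), spikes
```

Driving such a model with a constant suprathreshold current produces a spike train whose interspike intervals lengthen as the recovery variable $U$ accumulates, i.e., spike frequency adaptation.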

2.2. Simplified Piecewise Linear Neuron Model

The piecewise linear spiking neuron model has many model parameters, which allows it to retain many biological characteristics, but also results in significant computational complexity. In this paper, considering the computational cost, we propose a simplified version of the original piecewise linear spiking neuron model, called the simplified piecewise linear spiking neuron model (SPWL).
In the simplified piecewise linear spiking neuron model, we set $V_{\text{rest}}$ to a constant 0. Fixing this parameter at zero allowed for a certain degree of simplification of the complex membrane potential expression. The updated membrane potential expression obtained by this modification is shown in Equation (3):
$$\tau_m \frac{dV}{dt} = -V + g_s\,(V - V_{\text{thresh}})_+ - U + I, \qquad \frac{dU}{dt} = K_t\,U,$$
in which $K_t = (k - 1)/\tau_r$, and the reset expression stays the same as Equation (2).
By adopting this approach, we were able to reconstruct the piecewise linear spiking neuron model, eliminating two subtraction operations and one multiplication. This modification reduced the computational complexity of the model, lowering the computational cost of the neuron, and making it better suited to large-scale neural network construction.

2.3. Verification of Dynamic Computational Characteristics

The nervous system of living organisms is highly complex, with neurons being the basic building blocks of the system. Neuronal morphology is critical for the functioning of neuronal spikes, including the reliability of spiking and its propagation. In fact, the diversity of neuronal morphology and working modes is an important basis for their functional diversity, allowing neurons to be classified into different types, according to their firing patterns [6,7]. Currently, six different types of neurons have been discovered in biology: regular spiking (RS) neurons, intrinsically bursting (IB) neurons, chattering (CH) neurons, fast spiking (FS) neurons, low-threshold spiking (LTS) neurons, and late spiking (LS) neurons. The discharge and spike generation patterns of these neurons differ, and can express different computational characteristics.
For the original piecewise linear spiking neuron (PWL) model, adjusting the neuron’s parameters can achieve all six different types of neuronal working modes. The improved simplified piecewise linear spiking neuron (SPWL) model can correctly express the computational characteristics of the other five types of neurons, except for CH neurons, by selecting different parameter settings. In this section, we will verify the neuronal dynamic computational characteristics of the proposed simplified spiking neuron model, to demonstrate its usability.
RS neurons are a common type of excitatory neuron, characterized by firing single action potentials at a certain frequency when the membrane potential exceeds the threshold. The neuronal cell body and axon of RS neurons usually have multiple dendrites, which can receive input signals from other neurons, and converge them onto the neuronal axon initial segment [8]. The axon of the RS neuron then transmits the processed signals to other neurons. RS neurons usually have a stable resting potential and a high threshold, requiring sufficient stimulation to trigger action potential firing. They are widely present in brain regions such as the cerebral cortex and hippocampus, and are an important part of brain information processing. In addition, RS neurons also exhibit spike frequency adaptation characteristics, showing Class 1 excitability when the input current gradually decreases and the frequency of the firing neuron decreases.
As shown in Figure 1, two different types of inputs were used during testing. When a stable level signal was inputted, the neuron continuously fired spikes, with the firing frequency gradually decreasing in the initial stage, exhibiting the feature of spike frequency adaptation. When a gradually increasing signal was inputted, the firing frequency of the spikes increased with the increasing signal strength, exhibiting the feature of Class 1 excitability of the neuron. Both the SPWL and PWL models showed similar computational characteristics.
IB neurons are a special type of neuron that exhibits a cluster discharge phase followed by continuous spiking when subjected to sufficiently strong electrical stimulation. This type of neuron is widely present in the cerebral cortex and hippocampus, and has been shown to play an important role in learning and memory. Compared to RS neurons, IB neurons have a lower firing frequency, but the spike clusters they generate can produce stronger effects, making them important in information processing. In addition, IB neurons can adjust their electrical activity states, by regulating their membrane potentials to adapt to different environments and needs. As shown in Figure 2, when the input I remained constant, a burst spike cluster was released first, followed by a normal single spike. The SPWL model could also successfully express the neural characteristics of IB neurons.
FS neurons are a highly active inhibitory neuron type found in the cerebral cortex, cerebellum, and hippocampus [9,10]. They are characterized by extremely short action potential duration and high-frequency spike discharges, and they play important roles in neural regulation, learning, and memory processes. The morphological features of FS neurons include axons with multiple short branches and extensive dendrites, allowing these neurons to rapidly transmit signals and receive input from other neurons. In addition, FS neurons typically have high thresholds and fast-rising action potentials, enabling them to generate high-frequency spike discharges in a short period of time. This fast signal transmission characteristic means that FS neurons play an important inhibitory role in the cerebral cortex. The Class 2 excitability property of FS neurons can help neurons respond more quickly to stimuli, and perform effective inhibition in a short period of time: this is very useful for processing rapidly changing input signals, such as visual and auditory signals.
Comparison of the simulation results for FS neuron characteristics is shown in Figure 3. In the PWL model, the neuron continuously released spikes at a constant frequency when the input was a stable level signal. When the input was a gradually increasing current signal, the spike firing frequency remained constant with the increased signal strength, after the spike firing state stabilized. According to the comparison results, the proposed SPWL model can express similar computational characteristics, although its spike frequency at the initial stage is higher when the input is a level signal.
LTS neurons are a special type of neuron, with a low spike excitation threshold and rebound spikes, which can generate higher-frequency spike sequences. Although they have obvious spike frequency adaptation characteristics, they play an important role in neuronal networks. LTS neurons can generate long-lasting electrical activity, which is believed to be important in information processing and storage. In addition, LTS neurons can also synaptically connect with other neurons, and can regulate the synchronization and oscillation of neuronal networks. The simulation comparison results for LTS neurons are shown in Figure 4. The comparison results show that the SPWL model can express similar computational characteristics as the PWL model.
LS neurons are a special type of neuron, with delayed synaptic response characteristics in their postsynaptic potentials: this means that they have more precise and complex responses to certain stimuli, affecting some cognitive and perceptual processes. LS neurons are mainly present in areas such as the prefrontal cortex, the parietal lobe, and the occipital lobe of the cerebral cortex. The firing pattern of LS neurons is unique, with a slow membrane potential increase when the input current is small, finally firing an action potential. When the input current is large, the action potential can be fired more quickly, but requires a longer response time. This type of neuron has a clear subthreshold oscillation characteristic within the response time, and the firing frequency is much lower than that of FS neurons.
The simulation test results for the neural characteristics of LS neurons are shown in Figure 5. A significant spike generation delay could be observed when the input was a small amplitude level signal. By contrast, the spike generation frequency was higher for a strong level signal, and the delay effect was smaller, but there was still a significant response time. According to the comparison, the PWL neuron can also correctly express the computational characteristics of LS neurons, but with a smaller spike generation delay.
Neurons are the basic functional units of the nervous system, and are the core of neural signal transmission and processing. The proposed SPWL model could correctly express the computational characteristics of the five main types of neurons, including the two excitatory types (RS and IB neurons) and the three inhibitory types (FS, LTS, and LS neurons). This validates the richness of the proposed SPWL model, in terms of simulating the neural dynamics of different types of neurons.

3. Hardware Implementation of the Approximate SPWL Model

This design’s approximate spiking neural units were based on the SPWL model proposed earlier. In the hardware implementation, modifying the membrane potential update expression was necessary. We simplified the update expression to the minimum time unit, which was the computational clock of the neuron. As a result, the membrane potential update equation could be converted into a differential equation system, as shown in Equation (4):
$$V_{n+1} = V_n + G_s\,(V_n - V_{\text{thresh}})_+ + \tau_m^{-1}(I - U_n - V_n), \qquad U_{n+1} = K_t'\,U_n,$$
in which $G_s = g_s/\tau_m$ and $K_t' = K_t + 1$. The benefit of parameter merging was that it allowed the new parameters to be pre-stored, which reduced the computational cost of the neuron’s calculation process.
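As a sketch, the per-cycle update of Equation (4) can be expressed with the merged constants computed once up front, mirroring how the hardware pre-stores them in the parameter bank. The factory name `make_spwl_step` and the default values are illustrative assumptions, not the paper's settings:

```python
def make_spwl_step(g_s=2.0, tau_m=10.0, k=0.5, tau_r=80.0, V_thresh=-40.0):
    """Build the per-cycle SPWL update of Equation (4) with the merged
    constants pre-computed once, as the hardware pre-stores them."""
    G_s = g_s / tau_m               # merged slope constant
    K_t = (k - 1.0) / tau_r         # decay constant from Equation (3)
    K_t1 = K_t + 1.0                # merged one-step decay factor
    inv_tau_m = 1.0 / tau_m

    def step(V, U, I):
        relu = max(V - V_thresh, 0.0)           # the (x)_+ rectifier
        V_next = V + G_s * relu + inv_tau_m * (I - U - V)
        U_next = K_t1 * U                       # single multiply per cycle
        return V_next, U_next

    return step
```

With the constants folded in, each cycle costs one rectification, a small fixed number of multiplications, and additions, which is what makes the model attractive for per-neuron hardware replication.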
In the hardware implementation, the SPWL neurons are computed using fixed-point values. Although floating-point values are more accurate, they require additional decoding and encoding processes, which result in higher computation latency, design complexity, and circuit area cost, whereas neuronal computation and characteristic expression do not require high computational precision, and have a strong tolerance for calculation errors. On the other hand, actual neural network operations require high computational efficiency. Fixed-point calculations are faster and easier to implement in hardware, and when conditions allow, the effective precision of the computed data can be represented with a shorter bit width in the circuit design, thereby reducing the circuit area cost. Analysis of the neuronal algorithm shows that the computation operates on small-scale numerical values. The data format is therefore set to a total bit width of 19 bits, comprising 1 sign bit, 10 integer bits, and 8 fractional bits.
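The chosen signed format (1 sign bit, 10 integer bits, 8 fractional bits, i.e., Q10.8) can be modeled in software as below; `to_fixed` and `to_float` are hypothetical helper names introduced here for illustration:

```python
FRAC_BITS = 8
TOTAL_BITS = 19                      # 1 sign + 10 integer + 8 fractional
SCALE = 1 << FRAC_BITS               # 256: one unit in the last place is 1/256

def to_fixed(x):
    """Quantize a real value to the signed Q10.8 format used in the
    design, saturating at the representable range."""
    q = int(round(x * SCALE))
    lo, hi = -(1 << (TOTAL_BITS - 1)), (1 << (TOTAL_BITS - 1)) - 1
    return max(lo, min(hi, q))

def to_float(q):
    """Interpret a Q10.8 integer back as a real value."""
    return q / SCALE
```

The quantization step is $2^{-8} \approx 0.0039$ and the representable range is roughly $\pm 1024$, which comfortably covers membrane potential values on the scale used by the model.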
The spiking neural unit structure designed for this paper was based on the SPWL model proposed, which was divided into three main modules:
  • The V update module, which was used to calculate the change in membrane potential V. This module received input and feedback signals, and calculated the V change by combining the membrane potential update expression. The output of this module served as input to the spike generation logic;
  • The U update module, which was used to calculate the recovery variable U of the membrane potential. The output of this module served as one of the inputs to the V update module;
  • Spike generation logic, which was used to generate and recover spike signals. This module received the output signal of the V update module, and generated spike signals based on the set threshold; at the same time, it decided whether to adjust the threshold, based on control signals.
The structure of the proposed spiking neuron is shown in Figure 6. In piecewise linear spiking neurons, the main external inputs are divided into two categories. The first category is a parameter group that controls the neural unit’s operating mode, provided by the parameter bank, which can be adjusted in real time according to the required neural unit’s operating mode. The other category is the operational variables, mainly the neuronal input current I, while the main output of the neuron is the generation of spikes. The membrane potential V and the recovery variable U are internal variables of the neuron, stored in specific registers, and provided for computation in the next cycle.
In the SPWL model, the U update only required one multiplication, so only one multiplier was needed in the hardware implementation, and therefore it will not be discussed in detail here. The hardware construction of the V update module is shown in Figure 7: this module was mainly responsible for the numerical iteration of V n . The current value of V was input into two subtractors for evaluation. The two calculation results were then sent to a linear rectifier operation unit and a first-stage adder merge unit. The adder merge result was then multiplied by the time parameter, and was then merged, to obtain the corresponding V n + 1 for the current iteration.
In this design, we used a segmented carry prediction adder (SCPA) [11] and a Pwl-Mit multiplier [12], as proposed earlier, to approximate the main calculations in the neuron, so as to further decrease the computation cost of the SPWL model.
The Pwl-Mit multiplier is a hardware implementation of an approximate multiplication algorithm based on the Mitchell logarithmic approximation method, using a piecewise linear function. It reduces computational costs and improves efficiency for error-tolerant multimedia applications. The design uses data truncation techniques, and achieves better statistical performance than existing multipliers.
The SCPA splits long carry chains into shorter chains for parallel computation, allowing for flexible parameter tuning, to achieve different performance levels. By adjusting the size of the blocks and the prediction depth of each sub-additive, different design goals can be achieved, based on specific performance requirements.
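For orientation, the baseline Mitchell logarithmic approximation that underlies the Pwl-Mit multiplier can be sketched as follows. This is the plain Mitchell scheme for positive integers, not the piecewise linear correction or data truncation of the actual Pwl-Mit design [12]:

```python
def mitchell_mul(a, b):
    """Mitchell's logarithmic approximate multiplication of two positive
    integers: log2(x) for x = 2^k * (1 + f) is approximated by k + f, so
    a product becomes an addition in the log domain."""
    if a == 0 or b == 0:
        return 0
    k1, k2 = a.bit_length() - 1, b.bit_length() - 1   # integer log2
    f1 = a / (1 << k1) - 1.0                          # mantissa in [0, 1)
    f2 = b / (1 << k2) - 1.0
    f = f1 + f2
    if f < 1.0:                       # antilog without mantissa overflow
        approx = (1.0 + f) * (1 << (k1 + k2))
    else:                             # mantissa overflow: extra shift
        approx = f * (1 << (k1 + k2 + 1))
    return int(approx)
```

Mitchell's basic scheme underestimates products by at most about 11%; the piecewise linear refinement in the Pwl-Mit design reduces this error while keeping the multiplier-free structure.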

4. Results and Analysis

4.1. Hardware Synthesis and Test

For this section, we tested and evaluated the hardware performance of the proposed approximate spiking neuron. We performed hardware synthesis testing on the proposed neuron according to the TSMC 65 nm process, using a Synopsys Design Compiler. We validated the spike generation characteristics of the approximate spiking neuron, and the results are shown in Figure 8. The simulation results demonstrate that the spike generation characteristics of the actual circuit are consistent with the theoretical model.
To evaluate the hardware cost of the approximate spiking neuron, we compared the synthesis results to several other commonly used spiking neurons. The comparison results are shown in Table 2.
In the table, PWL represents the original piecewise linear spiking neuron, and SPWL-approx represents the approximate SPWL neuron using approximation techniques. According to the results in the table, the Quartic neuron has the highest computational cost and the longest computation latency, due to its fourth power term in the membrane potential equation. The SPWL-approx neuron has the smallest computational cost and the shortest computation latency. After applying approximation techniques to the piecewise linear spiking neuron, the area of the SPWL neuron decreased by approximately 57%, power consumption reduced to 21% of the original PWL neuron model, and computation latency was reduced by over 60%.

4.2. Application Test for the SPWL Neuron

Neural networks have high requirements for computational efficiency, and approximate computing techniques are an effective method of improving computational efficiency, which can be applied to spiking neurons. Compared to traditional exact computing methods, approximate computing techniques can reduce computational costs, hardware area, and power consumption with a certain sacrifice in accuracy, achieving more efficient computation. For this section, we constructed a simple spiking neural network, to test the practical usability of the proposed approximate SPWL neuron. The test dataset used was the classic MNIST dataset in image recognition.
To validate the usability of the proposed approximate spiking neural unit, we constructed a spiking neural network, as shown in Figure 9, for testing the SPWL neuron. The designed neural network consisted of three parts: the input layer; the hidden layer; and the classifier serving as the output layer. As the image size in the MNIST dataset was 28 × 28 pixels, the input layer consisted of 784 neurons. In the input layer, each neuron received a Poisson spike sequence, and the frequency of each spike train was proportional to the corresponding pixel’s grayscale value.
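The Poisson rate coding of the input layer can be sketched as below; the function name `poisson_encode` and the `duration` and `max_rate` values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def poisson_encode(image, duration=100, max_rate=0.2, rng=None):
    """Rate-code a grayscale image (values in [0, 255]) into Poisson
    spike trains: each of the 784 input neurons fires in each time step
    with a probability proportional to its pixel intensity."""
    if rng is None:
        rng = np.random.default_rng(0)
    pixels = np.asarray(image, dtype=float).reshape(-1) / 255.0
    prob = pixels * max_rate                    # firing prob per time step
    # Boolean raster: rows are time steps, columns are input neurons.
    return rng.random((duration, pixels.size)) < prob
```

A fully dark pixel never fires, while a fully bright pixel fires at the maximum rate, so the spike raster carries the image intensity in its firing frequencies.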
The hidden layer consisted of one layer of excitatory neurons and one layer of inhibitory neurons, using an E–I-balanced spiking neural network structure. In an E–I-balanced network, the number of excitatory (E) and inhibitory (I) neurons is equal, and the connections between them are excitatory and inhibitory. The structure of an E–I balanced network is similar to that of local neural circuits in the brain cortex, which play an important role in information processing. In an E–I-balanced network, the interaction between excitatory and inhibitory neurons can produce a dynamic balance state, which can maintain the stability of the network, and achieve information processing.
In this design, 128 excitatory neurons and 128 inhibitory neurons were used to construct the hidden layers. The excitatory and inhibitory neurons were connected in a one-to-one pattern, meaning each spike from an excitatory neuron would directly trigger the corresponding inhibitory neuron to fire spikes. Each inhibitory neuron was connected to all the excitatory neurons, except the one directly connected. The synaptic learning between neurons used the Spike-Timing-Dependent Plasticity (STDP) mechanism [16,17] as Equation (5):
$$\Delta w = \sum_{\text{pre}} \sum_{\text{post}} g_{\max}\,W(\Delta t),$$
in which
$$W(\Delta t) = \begin{cases} A_+ \exp(\Delta t / \tau_+) & \text{if } \Delta t < 0 \\ -A_- \exp(-\Delta t / \tau_-) & \text{if } \Delta t \ge 0, \end{cases}$$
with a presynaptic spike occurring at time $t_{\text{pre}}$ and a postsynaptic spike at time $t_{\text{post}}$. In the above formula, $\Delta t = t_{\text{pre}} - t_{\text{post}}$, and $\tau_+$ and $\tau_-$ are the time constants of the STDP update function, which determined the ranges of presynaptic-to-postsynaptic interspike intervals over which synaptic strengthening and weakening occurred. $A_+$ and $A_-$ determined the maximum amounts of synaptic modification that occurred when $\Delta t$ was close to zero, and $g_{\max}$ was a parameter limiting $\Delta w$, so that it would fall between 0 and $g_{\max}$.
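A minimal sketch of the STDP window $W(\Delta t)$ of Equation (6) follows; the constants are illustrative placeholders, as the paper does not state its parameter values here:

```python
import math

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """STDP weight-change window W(dt) of Equation (6), with
    dt = t_pre - t_post. Pre-before-post pairings (dt < 0) potentiate
    the synapse; post-before-pre pairings (dt >= 0) depress it.
    Constants are illustrative, not the paper's values."""
    if dt < 0:
        return A_plus * math.exp(dt / tau_plus)      # potentiation branch
    return -A_minus * math.exp(-dt / tau_minus)      # depression branch
```

In training, the resulting $\Delta w$ would additionally be clipped so that each weight stays within $[0, g_{\max}]$, as described above.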
As the network’s main function was to recognize digits from 0 to 9, a random forest was used as the classifier for the output layer. A random forest is an ensemble learning method based on decision trees, which improve the model’s predictive accuracy and generalization ability, by building multiple decision trees during training, and integrating their results. The random forest classifier has high predictive accuracy and generalization ability, can handle high-dimensional feature data, and is less susceptible to outliers, noise, and missing data.
We trained and tested the proposed network structure on the MNIST dataset, and observed the working states of the neurons during the training process. Figure 10 shows the membrane potential updates of an excitatory–inhibitory neuron pair in the neural network. The red curve represents the membrane potential of the excitatory neuron, and the blue curve represents the membrane potential of the inhibitory neuron. The results demonstrate that the proposed SPWL neuron can perform normal membrane potential updates.
Figure 11 shows the spike generation of the spiking neural network during its working state. The horizontal axis represents time, and the vertical axis represents the index of the spiking neurons. The red color represents the excitatory neurons in the input, hidden, and output layers, while the blue represents the inhibitory neurons in the hidden layer. The results demonstrate that the proposed approximate SPWL neuron structure can perform spike generation normally in the spiking neural network structure.
The neural network was trained on a subset of 60,000 images from the MNIST dataset, and another 10,000 images were used as the test dataset. Table 3 shows the precision, recall, and F1-score, which are commonly used metrics for evaluating the performance of a classification model. ’Precision’ measures the proportion of true positive predictions out of all positive predictions for a particular class, while ’recall’ measures the proportion of true positive predictions out of all actual positive instances in the dataset. The F1-score is the harmonic mean of precision and recall, and provides an overall measure of a model’s accuracy for a given class.
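These per-class metrics can be computed directly from a confusion matrix laid out as in the paper's convention (rows are actual classes, columns are predictions); `per_class_metrics` is a hypothetical helper name for illustration:

```python
import numpy as np

def per_class_metrics(conf):
    """Per-class precision, recall, and F1 from a confusion matrix whose
    (i, j)th entry counts instances of actual class i predicted as j."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                  # correctly classified per class
    precision = tp / conf.sum(axis=0)   # column sum: all predicted as j
    recall = tp / conf.sum(axis=1)      # row sum: all actually class i
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Averaging the per-class F1 values (macro-averaging) yields a single summary score comparable to the 0.94 reported for the network.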
It can be seen that the model performed well on this classification problem, with high precision, recall, and F1 scores for each class, and an average F1 score of 0.94. Additionally, the sizes of each class in the dataset were relatively balanced, indicating that the model performed similarly across different classes.
Figure 12 is the confusion matrix of the spiking neuron network, where each row represents the actual class, and each column represents the predicted class. The ( i , j )th entry of the matrix shows the number of instances where an image belonging to class i was classified as class j. The diagonal entries represent the number of correctly classified instances for each class, and the off-diagonal entries represent the misclassifications.
From the confusion matrix, we can see that the model appeared to be performing fairly well, with high values on the diagonal, and relatively low values off-diagonal. The model correctly predicted most of the digits, but it was not perfect, and did make some errors, particularly between certain digits, such as 4 and 9, because the shape of some digit pairs can be quite similar, especially when they are handwritten: this similarity in shape can make it difficult for a neural network to distinguish between them.
Table 4 provides a performance comparison of different neuron models on the whole MNIST dataset. The network structure was preserved, to test the performance of the different models. The table compares the precision, recall, and F1-score of five neuron models: FitzHugh–Nagumo, Izhikevich, Quartic, original PWL, and SPWL-approx. The SPWL-approx model had the highest precision, recall, and F1-score, indicating that it was the most accurate and reliable model among the five. The original PWL model and the Izhikevich model also performed well, with high precision, recall, and F1-scores. The FitzHugh–Nagumo and Quartic models had lower precision, recall, and F1-scores with the given neuron parameters; they might have performed better with adjusted neuron parameters and network structure.

5. Discussion

Over time, neuroscientists have studied increasingly complex neuron models, including models that can simulate the action potential of neurons, such as the Hodgkin–Huxley model [18], the FitzHugh–Nagumo model [13], and others [19,20,21]; however, such models are computationally complex, which makes it challenging to build large network structures from them. Many neuron models based on the integrate-and-fire concept have therefore been proposed [14,22,23,24,25]. These models have simple structures and low computational costs, and are suitable for large-scale applications; however, the cost of this simplification is the loss of most of the neuron's dynamical characteristics. In recent years, researchers have been designing new types of spiking neuron models based on these classical models, to improve computational efficiency and reduce hardware resource consumption [26,27,28,29].
A recent study utilized a well-validated model of Purkinje cells (PCs) [30], to investigate how these cells process inputs from parallel fibers (PFs) in the cerebellum. The study revealed that increasing input intensity causes PCs to transition from linear rate-coding to burst-pause timing-coding, which is facilitated by localized dendritic spikes. The properties of these dendritic spikes were validated against experimental data, enabling the elucidation of spiking mechanisms and the prediction of spiking thresholds, both with and without inhibition. The study also challenged the traditional view of PCs as linear point neurons, demonstrating that both linear and burst-pause computations are performed using individual dendritic branches as computational units. The study further predicted that dendritic spike thresholds can be regulated by factors such as voltage state, compartmentalized channel modulation, between-branch interaction, and synaptic inhibition, which can expand the dynamic range of linear and burst-pause computations.
A novel implementation of the adaptive exponential (AdEx) neuron model has been proposed [31], using a complete matching method called Power-2-Based AdEx Model (PBAM). This model can reproduce different spiking behaviors similar to biological neurons, with lower implementation costs compared to previous works. The PBAM neuron was validated by physically realizing it on FPGA, demonstrating high similarity to the original model, high computational performance, and lower hardware cost. The proposed model operates at higher frequencies and has higher performance compared to similar works, making it an ideal candidate for large-scale neuromorphic and biologically inspired neural network implementations targeting low-cost hardware platforms.
A COordinate Rotation DIgital Computer (CORDIC)-based Adaptive Exponential Integrate-and-Fire (AdEx) neuron [32] was proposed for efficient large-scale biological neural network implementation. The proposed model accurately reproduced the signaling, dynamical behavior, and bifurcation pattern of the original model. The model was hardware synthesized, and implemented on FPGA, demonstrating similar neuronal behaviors to the original model. The hardware device utilization and speed confirmed the efficiency of the realized hardware, compared to previous works.
Paper [33] proposed a set of models for biological spiking neurons that were efficiently implementable on digital platforms, targeting low-cost hardware implementation. The proposed models accurately reproduced different biological behaviors, and were investigated in terms of digital implementation feasibility and costs. Hardware synthesis and physical implementations on a field-programmable gate array demonstrated that the proposed models could produce biological behavior of different types of neurons with higher performance and considerably lower implementation costs, compared to the original model.
A dendritic fractal model was proposed [34] for quantifying the dendritic morphological effects of neurons, which can affect their information-processing capabilities. To realize this model, multiple analog fractional-order circuits (AFCs) were designed to match their extended structures and parameters with dendritic features. The proposed AFCs were then introduced into fractional leaky integrate-and-fire (FLIF) neuron circuits, to demonstrate the multiple timescale dynamics of spiking patterns similar to biological neurons. The proposed model enhanced the degree of mimicry of neuron models, and provided a more accurate model for understanding neural computation and cognition mechanisms.
Researchers presented modifications to the Izhikevich neuron model to obtain a simple, low-area digital hardware implementation with low computational intensity [35]. The implemented neuron circuit only required one input parameter to replicate all the cortical neuron behaviors described by Izhikevich, making large, highly parallel digital spiking neural networks feasible. The model required fewer external parameter changes to exhibit diverse neuron behaviors. The alterations made to create this novel model were presented in detail, followed by a performance comparison to the original Izhikevich neuron.
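For context, the original Izhikevich model that these hardware-oriented variants start from can be simulated with a few lines of Euler integration. The regular-spiking parameters (a, b, c, d) below are the standard ones from [14]; the step size and simulation length are illustrative choices:

```python
def izhikevich(I, T=200.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler simulation of the Izhikevich neuron; the defaults give
    regular-spiking (RS) behavior. Returns the spike times in ms."""
    v, u = c, b * c                  # start at the resting state
    spike_times, t = [], 0.0
    while t < T:
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike cutoff: record, then reset
            spike_times.append(t)
            v, u = c, u + d
        t += dt
    return spike_times
```

The quadratic term in the voltage update is exactly the operation that multiplierless designs such as [35,36] replace with shift-and-add arithmetic.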
Paper [36] proposed a low-cost, high-throughput digital hardware design for the Izhikevich neuron model used in neuromorphic computing. The design only required addition, subtraction, and logic shift operations, resulting in low hardware costs. The proposed design could reproduce different spiking patterns with high throughput, and achieved the lowest slices and LUTs utilization, compared to existing designs. The proposed design could enable the development of more efficient and scalable digital spiking neural networks for neuromorphic computing, leading to better understanding of brain function and advanced artificial intelligence systems.
A simplified version of the Hodgkin–Huxley neuron model was proposed [37], which is a widely used mathematical description of biological neurons. The complex nonlinear dynamics of the original model were substituted with linear ones, and a digital circuit for the proposed linear model was designed for implementation on a low-cost hardware platform, such as a field-programmable gate array (FPGA). The proposed digital circuit successfully replicated the essential characteristics of spiking responses and ionic currents in a biologically plausible model of neural activities, compared to earlier circuits. This new digital design has potential applications in developing neuro-inspired chips, and exploring the brain’s functionality in information processing.
Hassan [38] proposed the use of approximate multipliers in the hardware implementation of the Izhikevich spiking neuron model. The accuracy of the model was investigated by calculating various types of errors on a single neuron, and this analysis showed that the proposed model followed the original model, and that it reproduced the same firing patterns as the original. The network behavior was also studied, and proved that the model had the same activity patterns as the original one. Moreover, the proposed neuron exhibited better accuracy than the piecewise linear approximation of the Izhikevich model.
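The approximate multipliers in [38] (and the piecewise linear Mitchell multiplier we build on [12]) are specific hardware designs; as a generic illustration of the logarithm-based idea behind such units, the classic Mitchell algorithm below replaces multiplication with leading-one detection, an addition, and shifts, underestimating the exact product by up to roughly 11%:

```python
def mitchell_mul(a, b):
    """Mitchell's approximate multiplier for non-negative integers:
    log2(1 + f) is approximated by f, so the product reduces to
    leading-one detection, one add, and shifts."""
    if a == 0 or b == 0:
        return 0
    k1, k2 = a.bit_length() - 1, b.bit_length() - 1  # leading-one positions
    f1 = (a - (1 << k1)) / (1 << k1)                 # mantissa fractions in [0, 1)
    f2 = (b - (1 << k2)) / (1 << k2)
    s = f1 + f2
    if s < 1:                                        # no mantissa carry
        return int((1 << (k1 + k2)) * (1 + s))
    return int((1 << (k1 + k2 + 1)) * s)             # carry into the exponent
```

For example, mitchell_mul(12, 12) returns 128 against an exact 144 (the worst-case underestimate), while powers of two multiply exactly.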
The advantages of the reviewed works include reduced complexity, improved computational efficiency, and lower hardware costs compared to previous models. These advantages could enable the development of more efficient and scalable digital spiking neural networks for neuromorphic computing, leading to better understanding of brain function and advanced artificial intelligence systems. However, the limitations of these works include the trade-off between computational efficiency and biological accuracy, the loss of most of the neuron’s characteristics in some simplified models, and the need for further validation and testing on different hardware platforms.
The SPWL model we propose achieves a good balance between the computational characteristics and the computational cost of neurons. By retaining the biological characteristics of neurons, we have reduced the computational cost, making it suitable for constructing spiking neural networks.
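To make this balance concrete in software terms, a two-variable piecewise linear neuron can be stepped with plain Euler integration, each branch costing only a few additions and multiplications per step. The branch equations and constants below are an illustrative leaky-integration/linear-upstroke scheme written with the parameter names of Table 1; they are a sketch of this model family, not the exact SPWL equations of this paper:

```python
def pwl_neuron(I, steps=2000, dt=0.1, tau_m=10.0, v_rest=0.0,
               v_thresh=1.0, v_peak=2.0, g_s=2.0, tau_r=50.0,
               k=0.2, u_reset=0.1):
    """Euler integration of a generic two-variable piecewise linear
    spiking neuron (illustrative dynamics, not the exact SPWL model)."""
    v, u = v_rest, 0.0
    spike_times = []
    for n in range(steps):
        if v < v_thresh:   # subthreshold branch: leak toward V_rest
            dv = (-(v - v_rest) - k * u + I) / tau_m
        else:              # suprathreshold branch: linear spike upstroke
            dv = (g_s * (v - v_thresh) + I) / tau_m
        du = (k * (v - v_rest) - u) / tau_r  # slow negative feedback via U
        v += dt * dv
        u += dt * du
        if v >= v_peak:    # spike: reset V, kick the recovery variable
            v = v_rest
            u += u_reset
            spike_times.append(n * dt)
    return spike_times
```

Because every branch is linear in V and U, a hardware datapath needs only adders, shifters, and (approximate) multipliers, which is what keeps the synthesis cost in Table 2 low.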

6. Conclusions

In this paper, we proposed a simplified design for the piecewise linear spiking neuron model and combined it with approximate computing techniques to develop a new approximate spiking neuron. The proposed neuron structure can reproduce the firing characteristics of five major neuron types, depending on its parameters. The resource consumption analysis showed that, compared to the Quartic spiking neuron model, the approximate piecewise linear spiking neuron saves over 80% of the area cost and more than 95% of the computational delay, while requiring only about 10% of the power. Compared to the original piecewise linear spiking neuron under exact computing conditions, the approximate design reduces the area by approximately 57%, cuts power consumption to 21% of the original, and shortens the computational delay by more than 60%. A spiking neural network was constructed to validate the reliability of the proposed spiking neuron, using the MNIST dataset as the training and testing benchmark. The final recognition accuracy was approximately 94%, demonstrating the practical potential of the proposed approximate spiking neuron model.

Author Contributions

Conceptualization, H.L. and M.W.; Methodology, H.L.; Software, H.L.; Resources, L.Y.; Writing—original draft, H.L.; Supervision, M.L.; Project administration, M.W. and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a school-level scientific research project (SZIIT2019KJ026), the Shenzhen Science and Technology Plan JSGG20201102155600001, and the Basic Research Discipline Layout Project of Shenzhen under Grant 2020B1515120004, Grant JCYJ20180507182241622, and Grant JCYJ20180503182125190.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Haghiri, S.; Ahmadi, A.; Saif, M. VLSI Implementable Neuron-Astrocyte Control Mechanism. Neurocomputing 2016, 214, 280–296.
  2. Goaillard, J.M.; Moubarak, E.; Tapia, M.; Tell, F. Diversity of Axonal and Dendritic Contributions to Neuronal Output. Front. Cell. Neurosci. 2019, 13, 570.
  3. Kirch, C.; Gollo, L.L. Spatially Resolved Dendritic Integration: Towards a Functional Classification of Neurons. PeerJ 2020, 8, e10250.
  4. Wang, Y.; Liu, S.C. Active Processing of Spatio-Temporal Input Patterns in Silicon Dendrites. IEEE Trans. Biomed. Circuits Syst. 2013, 7, 307–318.
  5. Xiang-hong, L.I.N.; Tian-wen, Z. Dynamical Properties of Piecewise Linear Spiking Neuron Model. Acta Electronica Sin. 2009, 37, 1270.
  6. Zang, Y.; Marder, E. Interactions among Diameter, Myelination, and the Na/K Pump Affect Axonal Resilience to High-Frequency Spiking. Proc. Natl. Acad. Sci. USA 2021, 118, e2105795118.
  7. Zang, Y.; Marder, E. Neuronal Morphology Enhances Robustness to Perturbations of Channel Densities. Proc. Natl. Acad. Sci. USA 2023, 120, e2219049120.
  8. Zang, Y.; Dieudonné, S.; Schutter, E.D. Voltage- and Branch-Specific Climbing Fiber Responses in Purkinje Cells. Cell Rep. 2018, 24, 1536–1549.
  9. Hu, H.; Gan, J.; Jonas, P. Fast-Spiking, Parvalbumin+ GABAergic Interneurons: From Cellular Design to Microcircuit Function. Science 2014, 345, 1255263.
  10. Zang, Y.; Hong, S.; Schutter, E.D. Firing Rate-Dependent Phase Responses of Purkinje Cells Support Transient Oscillations. Elife 2020, 9, e60692.
  11. Liu, H.; Wang, M.J.; Liu, M. SCPA: A Segmented Carry Prediction Approximate Adder Structure. IEICE Electron. Express 2021, 18, 20210335.
  12. Liu, H.; Wang, M.; Yao, L.; Liu, M. A Piecewise Linear Mitchell Algorithm-Based Approximate Multiplier. Electronics 2022, 11, 1913.
  13. FitzHugh, R. Impulses and Physiological States in Theoretical Models of Nerve Membrane. Biophys. J. 1961, 1, 445–466.
  14. Izhikevich, E. Simple Model of Spiking Neurons. IEEE Trans. Neural Netw. 2003, 14, 1569–1572.
  15. Grassia, F.; Levi, T.; Kohno, T.; Saïghi, S. Silicon Neuron: Digital Hardware Implementation of the Quartic Model. Artif. Life Robot. 2014, 19, 215–219.
  16. Song, S.; Miller, K.D.; Abbott, L.F. Competitive Hebbian Learning through Spike-Timing-Dependent Synaptic Plasticity. Nat. Neurosci. 2000, 3, 919–926.
  17. Song, S.; Abbott, L.F. Cortical Development and Remapping through Spike Timing-Dependent Plasticity. Neuron 2001, 32, 339–350.
  18. Hodgkin, A.L.; Huxley, A.F. A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve. J. Physiol. 1952, 117, 500–544.
  19. Connor, J.A.; Walter, D.; McKown, R. Neural Repetitive Firing: Modifications of the Hodgkin-Huxley Axon Suggested by Experimental Results from Crustacean Axons. Biophys. J. 1977, 18, 81–102.
  20. Morris, C.; Lecar, H. Voltage Oscillations in the Barnacle Giant Muscle Fiber. Biophys. J. 1981, 35, 193–213.
  21. Hindmarsh, J.L.; Rose, R.M. A Model of Neuronal Bursting Using Three Coupled First Order Differential Equations. Proc. R. Soc. Lond. B Biol. Sci. 1984, 221, 87–102.
  22. Stein, R.B. A Theoretical Analysis of Neuronal Variability. Biophys. J. 1965, 5, 173–194.
  23. Ermentrout, G.B.; Kopell, N. Parabolic Bursting in an Excitable System Coupled with a Slow Oscillation. SIAM J. Appl. Math. 1986, 46, 233–253.
  24. Fourcaud-Trocmé, N.; Hansel, D.; van Vreeswijk, C.; Brunel, N. How Spike Generation Mechanisms Determine the Neuronal Response to Fluctuating Inputs. J. Neurosci. 2003, 23, 11628–11640.
  25. Brette, R.; Gerstner, W. Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. J. Neurophysiol. 2005, 94, 3637–3642.
  26. Nouri, M.; Hayati, M.; Serrano-Gotarredona, T.; Abbott, D. A Digital Neuromorphic Realization of the 2-D Wilson Neuron Model. IEEE Trans. Circuits Syst. II Express Briefs 2019, 66, 136–140.
  27. Haghiri, S.; Zahedi, A.; Naderi, A.; Ahmadi, A. Multiplierless Implementation of Noisy Izhikevich Neuron With Low-Cost Digital Design. IEEE Trans. Biomed. Circuits Syst. 2018, 12, 1422–1430.
  28. Grassia, F.; Kohno, T.; Levi, T. Digital Hardware Implementation of a Stochastic Two-Dimensional Neuron Model. J. Physiol.-Paris 2016, 110, 409–416.
  29. Soleimani, H.; Drakakise, E.M. An Efficient and Reconfigurable Synchronous Neuron Model. IEEE Trans. Circuits Syst. II Express Briefs 2018, 65, 91–95.
  30. Zang, Y.; Schutter, E.D. The Cellular Electrophysiological Properties Underlying Multiplexed Coding in Purkinje Cells. J. Neurosci. 2021, 41, 1850–1863.
  31. Haghiri, S.; Ahmadi, A. A Novel Digital Realization of AdEx Neuron Model. IEEE Trans. Circuits Syst. II 2020, 67, 1444–1448.
  32. Heidarpour, M.; Ahmadi, A.; Rashidzadeh, R. A CORDIC Based Digital Hardware For Adaptive Exponential Integrate and Fire Neuron. IEEE Trans. Circuits Syst. I 2016, 63, 1986–1996.
  33. Gomar, S.; Ahmadi, A. Digital Multiplierless Implementation of Biological Adaptive-Exponential Neuron Model. IEEE Trans. Circuits Syst. I Regul. Pap. 2014, 61, 1206–1219.
  34. Deng, Y.; Liu, B.; Huang, Z.; Liu, X.; He, S.; Li, Q.; Guo, D. Fractional Spiking Neuron: Fractional Leaky Integrate-and-Fire Circuit Described with Dendritic Fractal Model. IEEE Trans. Biomed. Circuits Syst. 2022, 16, 1375–1386.
  35. Leigh, A.J.; Mirhassani, M.; Muscedere, R. An Efficient Spiking Neuron Hardware System Based on the Hardware-Oriented Modified Izhikevich Neuron (HOMIN) Model. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 3377–3381.
  36. Pu, J.; Goh, W.L.; Nambiar, V.P.; Chong, Y.S.; Do, A.T. A Low-Cost High-Throughput Digital Design of Biorealistic Spiking Neuron. IEEE Trans. Circuits Syst. II Express Briefs 2021, 68, 1398–1402.
  37. Amiri, M.; Nazari, S.; Faez, K. Digital Realization of the Proposed Linear Model of the Hodgkin-Huxley Neuron. Int. J. Circ. Theor. Appl. 2019, 47, 483–497.
  38. Hassan, S.; Salama, K.N.; Mostafa, H. An Approximate Multiplier Based Hardware Implementation of the Izhikevich Model. In Proceedings of the 2018 IEEE 61st International Midwest Symposium on Circuits and Systems (MWSCAS), Windsor, ON, Canada, 5–8 August 2018; pp. 492–495.
Figure 1. RS neuron simulation result: (a) PWL model; (b) SPWL model. Red lines show the input ramp signal and its response; blue lines show the input level signal and its response.
Figure 2. IB neuron simulation result: (a) PWL model; (b) SPWL model. Blue lines show the input level signal and its response.
Figure 3. FS neuron simulation result: (a) PWL model; (b) SPWL model. Red lines show the input ramp signal and its response; blue lines show the input level signal and its response.
Figure 4. LTS neuron simulation result: (a) PWL model; (b) SPWL model. Blue lines show the input level signal and its response.
Figure 5. LS neuron simulation result: (a) PWL model; (b) SPWL model. Red lines show the input ramp signal and its response; blue lines show the input level signal and its response.
Figure 6. SPWL neuron top-level structure.
Figure 7. V update module structure.
Figure 8. Approximate SPWL neuron firing characteristics: (a) LTS neuron; (b) IB neuron; (c) RS neuron; (d) FS neuron; (e) LS neuron. Red lines show the input ramp signal and its spike response; blue lines show the input level signal and its spike response.
Figure 9. Schematic diagram of the spiking neural network for testing.
Figure 10. Membrane potential of approximate SPWL neuron.
Figure 11. Spike generation of the SNN.
Figure 12. Confusion matrix for the MNIST dataset.
Table 1. Parameter meaning of the piecewise linear neuron model.

V        : Membrane potential of the neuron.
U        : Recovery variable; represents the activated K+ current or inactivated Na+ current, and provides negative-feedback regulation of V.
I        : Synaptic input current or external injection current.
τ_m      : Membrane time constant, which must satisfy τ_m > 0.
V_rest   : Resting potential; together with τ_m it describes the leakage of the neuron.
V_thresh : Spike-generation threshold value.
g_s      : Spike conductance, expressed in units of the leakage conductance (g_s > 1).
τ_r      : Recovery time constant.
k        : Coupling strength between U and V.
V_peak   : Spike peak value.
U_reset  : Difference of the ion current before and after a spike.
Table 2. Similar spiking neuron synthesis result comparison.

Neuron                 Area (μm²)   Power (mW)   Delay (ns)
FitzHugh–Nagumo [13]   15,220.4     6.09         25.43
Izhikevich [14]        12,367.2     5.82         24.27
Quartic [15]           24,043.2     11.99        48.81
Original PWL [5]       9226.4       4.25         4.04
SPWL-approx            3999.6       0.90         1.55
Table 3. Classification report for the SNN.

Label     Precision   Recall   F1-Score   Datasize
0         0.97        0.97     0.97       980
1         0.97        0.97     0.97       1135
2         0.94        0.94     0.94       1032
3         0.93        0.94     0.94       1010
4         0.94        0.92     0.93       982
5         0.91        0.89     0.90       892
6         0.94        0.94     0.94       958
7         0.94        0.93     0.93       1028
8         0.90        0.91     0.91       974
9         0.92        0.94     0.93       1009
average   0.94        0.94     0.94       1000
Table 4. Performance comparison of neuron models on the MNIST dataset.

Neuron                 Precision   Recall   F1-Score
FitzHugh–Nagumo [13]   0.69        0.65     0.65
Izhikevich [14]        0.91        0.91     0.91
Quartic [15]           0.84        0.83     0.83
Original PWL [5]       0.93        0.92     0.92
SPWL-approx            0.94        0.94     0.94
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

