4.1. Basic SNN Binary Logical Operation Modules
According to the designed LogicSNN in
Section 3.2 and
Section 3.3, binary logical operation modules are built and trained.
Table 1 is the truth table of the six binary logical operations (AND, OR, NAND, NOR, XOR, XNOR). A and B represent the binary logic inputs, and each output column gives the result of the corresponding logic operation, logic ∈ {AND, OR, NAND, NOR, XOR, XNOR}.
The training data need to be prepared before training and mainly consist of two types of signals: the input signals of the input layer and the guidance signals of the teacher layer.
Groups of input signals for the input layer are randomly generated from the four binary logic input patterns 00, 01, 10, and 11, and are encoded according to the encoding method defined in
Section 3.1. Successive groups of input patterns are separated by a fixed interval. The guidance signal of the teacher layer is determined by the output that each logical operation assigns to the four input patterns in
Table 1. For example, for logic AND, when the input signal of the input layer is 00, the theoretical output should be 0, so the corresponding teacher neurons emit spikes that depolarize output Neuron 0 and hyperpolarize output Neuron 1. Under the action of these two signals, the synapses between the pattern layer and the output layer are correctly strengthened by LTP, induced by the intra-group signals, and weakened by LTD, induced by the inter-group signals, so as to learn the corresponding logic function.
It is worth noting that the outputs of the four logical operations AND, OR, NAND, and NOR are unbalanced: if the input patterns are generated in a purely random manner, one output value occurs with probability 25% and the other with 75%. The patterns that produce the less probable output receive fewer spike pairs during training, so their convergence slows down or they may even fail to converge. Therefore, the input patterns are generated according to the principle of "output balance".
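For illustration only (the generator used in the experiments is not reproduced here), the following Python sketch shows one simple way to draw input patterns for the AND module under the output-balance principle; the group count and random seed are arbitrary placeholders.

```python
import random

# Truth table of AND: input pattern -> expected output
AND_TABLE = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def balanced_patterns(n_groups, table, rng=random.Random(0)):
    """Draw input patterns so that outputs 0 and 1 each appear in half of the groups."""
    by_output = {0: [p for p, y in table.items() if y == 0],
                 1: [p for p, y in table.items() if y == 1]}
    patterns = []
    for i in range(n_groups):
        target = i % 2                      # alternate the desired output value
        patterns.append(rng.choice(by_output[target]))
    rng.shuffle(patterns)
    return patterns

if __name__ == "__main__":
    pats = balanced_patterns(1400, AND_TABLE)
    outputs = [AND_TABLE[p] for p in pats]
    print("output-1 ratio:", sum(outputs) / len(outputs))   # ~0.5 instead of 0.25
```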
Figure 4 shows part of the training data generated for the logical module AND; for a clearer display, only the spikes of the first 100 ms are shown.
The difference between different logical operation modules lies in the synaptic weights between the pattern layer and the output layer. All learnable synaptic weights are initialized to the same value:

$$w_{\text{init}} = \frac{w_{\max}}{N_{\text{fan-in}}},$$

where $w_{\max}$ is the maximum value of the synaptic weight and $N_{\text{fan-in}}$ is the fan-in, i.e., the number of synapses that each output layer neuron receives from pattern layer neurons. This initialization makes all learnable synapses initially equal.
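A minimal sketch of this initialization, with placeholder values for the weight bound and layer sizes (these are not the values of Table 2):

```python
import numpy as np

w_max = 1.0                   # maximum synaptic weight (placeholder value)
n_pattern, n_output = 4, 2    # pattern layer and output layer sizes, as in a binary logic module
n_fan_in = n_pattern          # synapses received by each output layer neuron

# All learnable pattern->output synapses start at the same value w_max / n_fan_in
weights = np.full((n_pattern, n_output), w_max / n_fan_in)
print(weights)
```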
The parameter values used in training are shown in
Table 2. The three neuron-related parameters follow the values of biological neurons and of the LIF neuron model, appropriately adjusted. The six synapse-related parameters, one of which is the learning rate, follow the classic STDP settings, and the remaining three module-construction parameters are chosen through experiments.
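For reference, the following Python sketch illustrates a generic pair-based STDP update of the kind described above, with the weight clipped to [0, w_max]. The parameter names and values are placeholders rather than the settings of Table 2, and the association of intra-group pairs with LTP and inter-group pairs with LTD follows the training procedure described earlier.

```python
import math

# Illustrative STDP parameters; placeholder names and values, not those of Table 2
A_PLUS, A_MINUS = 0.05, 0.055    # LTP / LTD amplitudes (learning rates)
TAU_PLUS = TAU_MINUS = 20.0      # STDP time constants in ms
W_MAX = 1.0                      # maximum synaptic weight

def stdp_update(w, t_pre, t_post):
    """Pair-based STDP: pre-before-post induces LTP, post-before-pre induces LTD."""
    dt = t_post - t_pre
    if dt > 0:   # causal (intra-group) spike pair: potentiation
        w += A_PLUS * math.exp(-dt / TAU_PLUS)
    else:        # anti-causal (inter-group) spike pair: depression
        w -= A_MINUS * math.exp(dt / TAU_MINUS)
    return min(max(w, 0.0), W_MAX)   # keep the weight within [0, w_max]

w = 0.25                                       # e.g., w_max / fan-in from the initialization above
w = stdp_update(w, t_pre=10.0, t_post=12.0)    # LTP: weight increases
w = stdp_update(w, t_pre=30.0, t_post=25.0)    # LTD: weight decreases
print(round(w, 4))
```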
Figure 5 shows the change of synaptic weights between the pattern layer and output layer during training.
Figure 5a–f shows the graphs of the weight changes for the six kinds of SNN logical operations: AND, OR, NAND, NOR, XOR, and XNOR. For the first four logical operations, the corresponding training parameters (Table 2) are set to 4 and to 1400 groups of input patterns, and the training time is 7000 ms (i.e., 5 ms per group); for the last two, the number of groups is set to 2000 and the training time is 10,000 ms. After training, the synaptic weights of all six modules converge.
Figure 6 shows the test results of the SNN logical operation module AND. In order to show the state of each neuron more clearly, the legend colors differ from those in
Figure 2, and a richer palette is used. The test spike inputs consist of the four input patterns 00, 01, 10, and 11 of the truth table, which appear in
Figure 6a as the input layer neuron index pairs (0, 2), (0, 3), (1, 2), and (1, 3). The four input patterns cause pattern layer Neurons 0, 1, 2, and 3 to spike, respectively, in
Figure 6b,c. It can be seen from
Figure 6d,e that after training, the SNN logical operation module AND converges and its outputs are 0, 0, 0, 1, consistent with the expected outputs of the truth table. The trained module therefore exhibits the intended logic function and meets the design requirements. The dashed box highlights the fourth case of the truth table, input 11 and output 1, mapped onto the network structure for display. The other five logical operation modules also meet the design requirements after training; the input layer spikes and output layer spikes of each module are displayed in
Figure 7, in the same form and colors as
Figure 6a,d. The results of the six SNN logical operation modules shown in
Figure 6 and
Figure 7 all match their truth tables given in
Table 1.
To facilitate the representation of the basic SNN logical operation modules, the internationally used logic gate symbols are appropriately modified and combined with a spike symbol. The resulting icons of the basic SNN logical operation modules are shown in
Figure 8.
4.2. Combinational Logic Networks
Based on the unified paradigm LogicSNN, the SNN logical operation modules are designed from the outset as "building blocks". One of their characteristics is that multiple modules can easily be cascaded to build large-scale networks and achieve more complex functions. In this section, imitating the combinational logic of digital circuits, the basic SNN logical operation modules built in
Section 4.1 are used to construct combinational logic networks, and the cascade characteristics of the SNN logical operation modules are tested.
4.2.1. Rounding Logic Network of 8421-BCD Code
BCD is the abbreviation of binary-coded decimal; it uses four binary digits to represent one decimal digit. The 8421-BCD code is the most basic and commonly used form. It is similar to a 4-bit binary code: the weights of its bits are 8, 4, 2, and 1, so it is a weighted BCD code. Unlike the full 4-bit binary code, it uses only the first ten code groups, 0000∼1001, to represent the corresponding decimal numbers (for example, decimal 9 is encoded as 1001), and the remaining six code groups are unused.
Table 3 shows the correspondence between 8421-BCD codes and decimal numbers.
Rounding is a precise method for retaining a fixed number of digits: when a large amount of data must be retained, the accumulated error of this retention method is the smallest, so it is commonly used as a basic retention method.
Table 4 is the truth table of a rounding logic network in the 8421-BCD code.
In
Table 4,
A,
B,
C and
D respectively represent the logic variables corresponding to the 4-bit binary numbers of the 8421-BCD code from high to low, and
P represents the output of the rounding logic network. According to the rules of logical algebra, the relationship between the inputs and the output is simplified to the simplest OR-AND (product-of-sums) form as follows:

$$P = (A + B)\cdot(A + C + D) \tag{7}$$
The symbols in Equation (
7) are all defined under logical operations, that is, “+” means logic OR (logical addition), and “·” means logic AND (logical multiplication). Equation (
7) can be transformed into the NOR type as follows:

$$P = \overline{\overline{A + B} + \overline{A + (C + D)}} \tag{8}$$
According to Equation (
8), three NOR modules and one OR module are needed to build the 8421-BCD code rounding logic network. The network structure is shown in
Figure 9.
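The correctness of this decomposition can be checked with a short Python sketch that evaluates the NOR-type expression of Equation (8) over the ten valid 8421-BCD codes; the ordinary Boolean gate functions below merely stand in for the trained SNN modules.

```python
def OR(a, b):  return a | b
def NOR(a, b): return 1 - (a | b)

def rounding_network(A, B, C, D):
    """8421-BCD rounding: three NOR modules and one OR module, per Equation (8)."""
    n1 = NOR(A, B)           # NOT(A + B)
    n2 = NOR(A, OR(C, D))    # NOT(A + C + D), built from one OR and one NOR
    return NOR(n1, n2)       # P

for value in range(10):
    A, B, C, D = [(value >> k) & 1 for k in (3, 2, 1, 0)]
    P = rounding_network(A, B, C, D)
    assert P == (1 if value >= 5 else 0)   # rounds up for 5..9, down for 0..4
    print(f"{value}: {A}{B}{C}{D} -> P = {P}")
```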
The results are shown in
Figure 10. Because multiple logical operation modules are involved and the test results of a single logical operation module are shown in the previous section, only the input and output spikes are shown in
Figure 10. The color table is consistent with
Figure 6. Because each logic variable has two states, the logic variables A, B, C, and D correspond to the input neuron indices 0∼7. The test inputs are the 10 possible input cases in
Table 4. From
Figure 10, it can be seen that when the decimal number corresponding to the input is less than five, output Neuron 0 emits spikes; when it is greater than or equal to five, output Neuron 1 emits spikes. The network outputs are consistent with the expected outputs, so the network realizes the rounding logic function under the 8421-BCD code.
4.2.2. Half Adder and Full Adder
Half adders and full adders are the basic components of combinational circuits in digital systems and are also at the core of how a CPU processes addition. Whether a half adder and a full adder can be built from the basic SNN logical operation modules therefore serves as one of the verification experiments for building an SNN-based computing system.
The half adder is a logic circuit that can add two 1-bit binary numbers to obtain the sum and the carry to the higher bit. Suppose the summand and addend are represented by A and B, and the sum and carry to the higher bit are represented by S and C.
Table 5 is the truth table of the half adder. The output function expressions can be obtained from the truth table as follows:

$$S = A \oplus B, \qquad C = A \cdot B \tag{9}$$
The symbol “⊕” in Equation (
9) is defined under logical operations, which means logic XOR (logic exclusive or). It can be seen from Equation (
9) that the half adder can be composed of one XOR module and one AND module. The network structure of the half adder is shown in
Figure 11.
The full adder is a logic circuit that adds two 1-bit binary numbers and the carry from the lower bit, that is, three 1-bit binary numbers in total, to obtain the sum and the carry to the higher bit. Let the summand and addend of the i-th bit be represented by $A_i$ and $B_i$, let $C_{i-1}$ represent the carry from the lower bit, and let the calculated sum and the carry to the higher bit be represented by $S_i$ and $C_i$.
The experimental test results are shown in
Figure 12; the summand
A and addend
B correspond to the input neuron indices 0∼3, and the color table is consistent with
Figure 6. The test inputs are the four possible input conditions in
Table 5. The network outputs are consistent with the expected outputs, so the network realizes the logic function of a half adder.
Table 6 is the truth table of the full adder. The output function expressions can be obtained from the truth table as follows:

$$S_i = A_i \oplus B_i \oplus C_{i-1}, \qquad C_i = A_i \cdot B_i + (A_i + B_i)\cdot C_{i-1} \tag{10}$$
It can be seen from Equation (
10) that the full adder can be composed of two XOR modules, two AND modules and two OR modules. The network structure of the full adder is shown in
Figure 13.
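As a functional check of Equations (9) and (10), the gate-level compositions of the half adder (one XOR, one AND) and the full adder (two XOR, two AND, two OR) can be sketched in Python, again with ordinary Boolean gate functions standing in for the trained SNN modules:

```python
def XOR(a, b): return a ^ b
def AND(a, b): return a & b
def OR(a, b):  return a | b

def half_adder(A, B):
    """One XOR module and one AND module: S = A xor B, C = A and B (Equation (9))."""
    return XOR(A, B), AND(A, B)

def full_adder(A, B, Cin):
    """Two XOR, two AND, two OR modules: S = A xor B xor Cin, Cout = A*B + (A+B)*Cin (Equation (10))."""
    S = XOR(XOR(A, B), Cin)
    Cout = OR(AND(A, B), AND(OR(A, B), Cin))
    return S, Cout

# Verify against the truth tables (Tables 5 and 6)
for A in (0, 1):
    for B in (0, 1):
        assert half_adder(A, B) == ((A + B) % 2, (A + B) // 2)
        for Cin in (0, 1):
            S, Cout = full_adder(A, B, Cin)
            assert (S, Cout) == ((A + B + Cin) % 2, (A + B + Cin) // 2)
print("half adder and full adder match their truth tables")
```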
The experimental test results are shown in
Figure 14. The summand, addend, and lower-bit carry $A_i$, $B_i$, and $C_{i-1}$ correspond to the input neuron indices 0∼5. The color table is consistent with
Figure 6. The test inputs are the eight possible input conditions in
Table 6. The network outputs are consistent with the expected outputs, so the network realizes the logic function of a full adder.
The full adder network structure shown in
Figure 13 can be further encapsulated, as shown in
Figure 15. The encapsulated full adder can be cascaded, and the carry output CO of the lower full adder is connected with the carry input CI of the higher full adder to transmit carry information, thereby realizing the function of multi-bit binary addition.
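A minimal sketch of this cascading, with the gate-level full adder standing in for the encapsulated SNN module: the carry output of each lower-bit stage feeds the carry input of the next higher-bit stage, yielding a multi-bit ripple-carry adder.

```python
def full_adder(A, B, Cin):
    # Same gate-level full adder as sketched after Figure 13
    S = (A ^ B) ^ Cin
    Cout = (A & B) | ((A | B) & Cin)
    return S, Cout

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (LSB first) by cascading full adders:
    the carry output CO of each stage drives the carry input CI of the next."""
    carry, sum_bits = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        sum_bits.append(s)
    return sum_bits + [carry]              # final carry becomes the extra high bit

# Example: 6 + 7 = 13 using 4-bit operands (LSB first)
a = [0, 1, 1, 0]   # 6
b = [1, 1, 1, 0]   # 7
print(ripple_carry_add(a, b))              # [1, 0, 1, 1, 0] -> 13 (LSB first)
```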
The above three experiments are all built from the SNN logical operation modules and achieve the logic functions required by the design. They verify the cascading characteristics of the SNN logical operation modules, as well as their potential for building large-scale networks and constructing an SNN-based computing system.