Spiking Neural P Systems with Rules Dynamic Generation and Removal

Abstract: Spiking neural P systems (SNP systems), computational models abstracted from the biological nervous system, have been a major research topic in biological computing. In conventional SNP systems, the rules in a neuron remain unchanged during the computation. In the biological nervous system, however, the biochemical reactions in a neuron are also influenced by factors such as the substances it contains. Motivated by this, this paper proposes SNP systems with rules dynamic generation and removal (RDGRSNP systems). In RDGRSNP systems, the application of rules changes the substances in neurons, which in turn changes the rules available in neurons. The Turing universality of RDGRSNP systems is demonstrated for use as a number-generating device and as a number-accepting device, respectively. Finally, a small universal RDGRSNP system for function computation using 68 neurons is given. By comparison with five variants of SNP systems, it is demonstrated that the proposed variant requires fewer neurons.


Introduction
Membrane computing (MC) [1], proposed in 1998, is a distributed parallel computational method inspired by the working process of biological cells. The computational models of MC are also known as membrane systems or P systems. P systems can be roughly classified into three groups according to the biological cells they imitate: cell-like P systems, tissue-like P systems, and neural-like P systems.
In neural-like P systems, spikes are used to represent information. Two classes of neural-like P systems have been proposed: axon P systems (AP systems) [2] and spiking neural P systems (SNP systems) [3]. An AP system has a linear structure, so that each node of this system can only send spikes to its neighboring left and right nodes. An SNP system has a directed graph structure, where neurons are represented as the nodes in the directed graph and the synapses between neurons are represented by directed arcs.
SNP systems, which also belong to the family of spiking neural networks [4], became a major research topic as soon as they were proposed. Research on SNP systems mainly covers four aspects: variant model design, computational power proofs, algorithm design (application), and implementation. Theoretical research mainly comprises variant model design and computational power proofs, while applied research mainly comprises algorithm design and implementation. Through the continuous efforts of a wide range of scholars, several variants of SNP systems have been proposed. Numerical SNP (NSNP) systems [5][6][7] are a variant of SNP systems inspired by numerical P systems, in which information is encoded by the values of variables and processed by continuous functions. Compared to the original SNP systems, NSNP systems are no longer discrete but have a continuous numerical nature, which is useful for solving real-world problems. Homogeneous SNP (HSNP) systems [8][9][10][11] are a restricted variant of SNP systems in which each neuron has the same set of rules, making HSNP systems simpler than the original SNP systems. SNP systems with communication on request (SNQP systems) are a variant in which spikes are moved between neurons on request of the receiving neuron rather than by spiking.

Definition

Formally, an RDGRSNP system of degree m ≥ 1 is a tuple

Π = (O, σ_1, σ_2, . . . , σ_m, R, syn, in, out) (1)

where (1) O = {a} is the singleton alphabet, whose only element a is used to represent a spike.
(2) σ_1, σ_2, . . . , σ_m denote the m neurons of Π, where σ_i is represented by the tuple σ_i = (n_i, R_i), in which: (a) n_i denotes the number of spikes contained in neuron σ_i; (b) R_i denotes the rule set contained in neuron σ_i, where R_i ⊆ R.
(3) R is the rule set of Π, which contains rules of the following two forms:
(a) r_i: E/a^c → a^p; α_1 r_1, α_2 r_2, . . . , α_l r_l. A rule of this form is called a spiking rule with rules dynamic generation and removal, where E is a regular expression over the singleton alphabet O, c and p denote the numbers of spikes consumed and produced, with c ≥ p, r_1, r_2, . . . , r_l denote l rules in R, and α_1, α_2, . . . , α_l ∈ {+, −, 0} are labels. Such a rule can be applied when the number n_i of spikes in the neuron satisfies both a^{n_i} ∈ L(E) and n_i ≥ c. Furthermore, the rule can be abbreviated as r_i: a^c → a^p; α_1 r_1, α_2 r_2, . . . , α_l r_l when E = a^c.
(b) r_i: a^s → λ; α_1 r_1, α_2 r_2, . . . , α_l r_l. A rule of this form is called a forgetting rule with rules dynamic generation and removal, where s ≥ 1 denotes the number of spikes consumed, with the additional restriction that a^s ∉ L(E) for every spiking rule with rules dynamic generation and removal E/a^c → a^p; α_1 r_1, . . . , α_l r_l in R. Similarly, r_1, r_2, . . . , r_l denote l rules in R, and α_1, α_2, . . . , α_l ∈ {+, −, 0} are labels.
(4) syn ⊆ {1, 2, . . . , m} × {1, 2, . . . , m}, with (i, i) ∉ syn for 1 ≤ i ≤ m, denotes the set of synapses between neurons.
(5) in and out denote the input neuron and the output neuron, respectively.

Description
Here, the definition of RDGRSNP systems given above is described in detail. We mainly describe how rules are applied and executed in RDGRSNP systems, and we give the state representation and the graphical conventions of RDGRSNP systems.
When neuron σ_i applies a spiking rule with rules dynamic generation and removal, r_i: E/a^c → a^p; α_1 r_1, α_2 r_2, . . . , α_l r_l, c spikes in σ_i are consumed, and p spikes are sent along each synapse (i, j) to the postsynaptic neuron σ_j. Meanwhile, neuron σ_i adjusts its rule set R_i as follows. When α_k = +, α_k r_k can be abbreviated as r_k, indicating that rule r_k is added to the rule set R_i of neuron σ_i; if rule r_k already exists in R_i, it is not added again. When α_k = −, rule r_k is removed from the rule set R_i of neuron σ_i; if rule r_k does not exist in R_i, no rule is removed. When α_k = 0, α_k r_k can be omitted entirely, and rule r_k is neither added to nor removed from the rule set R_i of neuron σ_i.
When neuron σ_i applies a forgetting rule with rules dynamic generation and removal r_i: a^s → λ; α_1 r_1, α_2 r_2, . . . , α_l r_l, s spikes in σ_i are removed and no spike is sent. Meanwhile, σ_i adjusts its rule set in the same way as when applying a spiking rule with rules dynamic generation and removal.
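The rule-set adjustment described above can be summarized in a few lines of code. The following sketch is only illustrative (the function name and the representation of rules as strings are our own assumptions, not the paper's notation):

```python
def update_rule_set(rule_set, labels):
    """Apply the labels alpha_1 r_1, ..., alpha_l r_l of an applied rule
    to a neuron's rule set R_i (illustrative sketch)."""
    rs = set(rule_set)
    for alpha, rule in labels:
        if alpha == "+":
            rs.add(rule)       # adding a rule already in R_i changes nothing
        elif alpha == "-":
            rs.discard(rule)   # removing a rule not in R_i changes nothing
        # alpha == "0": the rule is neither added nor removed
    return rs

# e.g. a rule with labels -r1, +r2, +r3 applied in a neuron holding {r1}
print(sorted(update_rule_set({"r1"}, [("-", "r1"), ("+", "r2"), ("+", "r3")])))
# -> ['r2', 'r3']
```

Note that duplicate additions and removals of absent rules are silent no-ops, matching the semantics stated above.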
Note that an RDGRSNP system Π works in parallel at the system level and sequentially at the neuron level. A global clock marks the steps throughout the system. When a neuron contains exactly one applicable rule, it must apply that rule. When a neuron contains two or more applicable rules, it non-deterministically chooses exactly one of them to apply. When no rule is applicable in any neuron of Π, the system stops computing.
In general, the state of a system at each step is represented by the number of spikes in each neuron at that step. However, in the proposed RDGRSNP system Π, since rules can be generated and removed dynamically, the rules contained in each neuron should also be included in the state of system Π. That is, the state of Π is represented by the number of spikes and the rule set of each neuron, symbolized as C_t = ⟨(n_1, R_1), (n_2, R_2), . . . , (n_m, R_m)⟩. The initial state of Π is denoted C_0. When neither the numbers of spikes nor the rule sets contained in the neurons of Π change any further, the system stops computing.
Regarding the graphical definition, rounded rectangles are used to represent neurons, and directed line segments are used to represent synapses between neurons. In an RDGRSNP system, the result of a computation is defined by the time interval between the first two spikes sent to the environment env by the output neuron σ_out.

An Illustrative Example
In Figure 1, an illustrative example is given to elaborate the definition of RDGRSNP systems. In this example, both σ_1 and σ_2 initially have one spike, and σ_out initially has two spikes, so the initial state is C_0 = ⟨(1, {r_3}), (1, {r_3, r_4}), (2, {r_1})⟩. At step 1, neuron σ_1 applies r_3: a → a to send one spike to σ_2 and σ_out. The output neuron σ_out applies the spiking rule with rules dynamic generation and removal r_1: a^2 → a; −r_1, r_2, r_3, consuming two spikes, sending the first spike to env, removing r_1, and adding r_2 and r_3. As for neuron σ_2, its number of spikes satisfies both rules r_3 and r_4, so it non-deterministically chooses one of them to apply.
Suppose that at step 1, neuron σ_2 applies r_3: a → a and sends one spike to σ_1 and σ_out. At step 2, σ_1 applies r_3: a → a again and sends one spike each to σ_2 and σ_out. As for the output neuron σ_out, it now contains two spikes and rules r_2 and r_3, so it applies rule r_2: a^2 → λ, forgetting its two spikes. Thereafter, if neuron σ_2 continues to apply rule r_3: a → a, the system repeats this cycle until neuron σ_2 chooses rule r_4: a → λ to apply.
Suppose at step n, where n ≥ 1, neuron σ 2 chooses rule r 4 : a → λ to apply, so it does not send spikes outward. In addition, neuron σ 1 sends one spike to σ out . Therefore, at step n + 1, σ out has only one spike, applies rule r 3 : a → a, and sends the second spike to env.
Thus, σ_out sends its first two spikes with a time interval of (n + 1) − 1 = n, where n ≥ 1, i.e., the computed results cover all positive integers.
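This behavior can be checked mechanically. The sketch below is our own illustration, not part of the formal model: it hard-codes the three neurons of Figure 1, resolves the non-deterministic choice by letting σ_2 apply r_4 exactly at a chosen step n, and tracks σ_out's rule set as r_1 is replaced by r_2 and r_3.

```python
def simulate(n):
    """Replay the illustrative example with sigma_2 applying its
    forgetting rule r4 at step n; returns the interval between the
    first two spikes sent to the environment (expected: n)."""
    spikes = {1: 1, 2: 1, "out": 2}
    out_rules = {"r1"}                      # sigma_out's dynamic rule set
    out_times = []                          # steps at which env gets a spike
    t = 0
    while len(out_times) < 2 and t < n + 5:
        t += 1
        arriving = {1: 0, 2: 0, "out": 0}   # spikes delivered after this step
        if spikes[1] == 1:                  # sigma_1: r3 (a -> a)
            spikes[1] -= 1
            arriving[2] += 1
            arriving["out"] += 1
        if spikes[2] == 1:                  # sigma_2: r3 or r4, choice fixed
            spikes[2] -= 1
            if t != n:                      # r3: a -> a
                arriving[1] += 1
                arriving["out"] += 1        # (at t == n, r4 forgets the spike)
        if spikes["out"] == 2 and "r1" in out_rules:
            spikes["out"] -= 2              # r1: a^2 -> a; -r1, r2, r3
            out_times.append(t)
            out_rules = {"r2", "r3"}
        elif spikes["out"] == 2 and "r2" in out_rules:
            spikes["out"] -= 2              # r2: a^2 -> lambda
        elif spikes["out"] == 1 and "r3" in out_rules:
            spikes["out"] -= 1              # r3: a -> a
            out_times.append(t)
        for neuron, k in arriving.items():
            spikes[neuron] += k
    return out_times[1] - out_times[0]

print([simulate(n) for n in (1, 2, 3, 4)])  # -> [1, 2, 3, 4]
```

Whichever step σ_2 picks for r_4, the returned interval equals that step, matching the analysis above.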

The Turing Universality of SNP Systems with Rules Dynamic Generation and Removal
This section proves the Turing universality of RDGRSNP systems when used as a number-generating device and as a number-accepting device, respectively. It has been shown that register machines can characterize the Turing computable sets of numbers (NRE), so we demonstrate the Turing universality of RDGRSNP systems by simulating register machines. A register machine is a tuple M = (m, H, l_0, l_h, I), where m denotes the number of registers of M, H is a label set in which each label corresponds to an instruction in the instruction set I, l_0 denotes the start instruction, l_h denotes the halt instruction, and I denotes the instruction set. The instructions in I have the following three forms:
(1) l_i: (ADD(r), l_j, l_k): add one to the number stored in register r, then non-deterministically jump to instruction l_j or l_k;
(2) l_i: (SUB(r), l_j, l_k): if the number stored in register r is not zero, subtract one from it and jump to l_j; if it is zero, jump directly to l_k;
(3) l_h: HALT: the halt instruction, indicating that M stops computing.
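For reference, the semantics of these three instruction forms can be captured by a short interpreter. The program encoding below (a dict from labels to tuples) is our own illustrative assumption, not the paper's notation:

```python
import random

def run(prog, regs, label="l0"):
    """Run a register machine program until the halt label 'lh' is reached.
    prog maps labels to ('ADD', r, [lj, lk]) or ('SUB', r, lj, lk)."""
    while label != "lh":
        instr = prog[label]
        if instr[0] == "ADD":
            _, r, targets = instr
            regs[r] += 1
            label = random.choice(targets)   # non-deterministic jump
        else:  # SUB
            _, r, lj, lk = instr
            if regs[r] > 0:
                regs[r] -= 1
                label = lj                   # register was non-zero
            else:
                label = lk                   # register was zero
    return regs

# deterministic toy program: drain register 2 into register 1, plus one
prog = {"l0": ("ADD", 1, ["l1"]),
        "l1": ("SUB", 2, "l0", "lh")}
print(run(prog, {1: 0, 2: 3}))   # -> {1: 4, 2: 0}
```

With single-target jump lists the machine is deterministic, which is the form used later in the number-accepting mode.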
In the number-generating mode, all registers of M are initially empty, and M starts from l_0 and executes until l_h is reached. After the computation finishes, all registers except register 1 are empty, and the number generated by M is stored in register 1. It is assumed that no instruction of the form l_i: (SUB(r), l_j, l_k) acts on register 1, i.e., the number in register 1 is never decremented.
In the number-accepting mode, all registers of M except register 1 are initially empty, and the number to be accepted is stored in register 1. If M can reach the halt instruction l_h: HALT, the number is accepted by M. Note that the instruction l_i: (ADD(r), l_j, l_k) is changed to the deterministic form l_i: (ADD(r), l_j) in the number-accepting mode.
In this paper, we use N_2RDGRSNP(Π) and N_accRDGRSNP(Π) to denote the sets of numbers that can be generated and accepted by an RDGRSNP system Π, respectively.
In the proofs of this paper, we use neuron σ_li to denote instruction l_i, neuron σ_r to denote register r, and auxiliary neurons such as σ_li^(1) and σ_li^(2) to assist the simulation. In addition, we stipulate that the simulation of instruction l_i starts when σ_li has four spikes.
For NRE ⊆ N_2RDGRSNP(Π), we simulate the register machine by following the process shown in Figure 2. In the initial state, all neurons are empty except neuron σ_l0. The four spikes in neuron σ_l0 are used to trigger the simulation of the computation. The simulation then follows the instructions of the register machine until neuron σ_lh receives four spikes. When neuron σ_lh receives four spikes, the computation process of the register machine has been simulated successfully, and the FIN module starts to output the computation result stored in register 1.
ADD module: The ADD module is used to simulate the instruction l_i: (ADD(r), l_j, l_k), as shown in Figure 3. This module consists of six neurons and eight rules. Suppose that at step t, σ_li has four spikes and starts simulating instruction l_i: (ADD(r), l_j, l_k). Neuron σ_li applies the spiking rule with rules dynamic generation and removal r_1: a^4/a^2 → a^2; r_2, consuming two spikes, sending two spikes to each of σ_r, σ_li^(1), and σ_li^(2), and adding r_2: a^2 → a^2; −r_2. At step t + 1, σ_li has two spikes with rules r_1, r_2, and r_7. Therefore, σ_li applies rule r_2: a^2 → a^2; −r_2, sending two spikes to σ_r, σ_li^(1), and σ_li^(2) and removing r_2. At step t + 1, σ_r thus receives two spikes again, for a total of four spikes, which corresponds to adding one to the number in register r. In addition, σ_li^(1) has four spikes with rules r_1 and r_3. Since both rules can be applied, neuron σ_li^(1) non-deterministically chooses one to apply. The two situations are as follows: (1) When neuron σ_li^(1) chooses rule r_1: a^4/a^2 → a^2; r_2 to apply, it consumes two spikes, sends two spikes to neurons σ_lj and σ_li^(2), and adds rule r_2: a^2 → a^2; −r_2. At step t + 3, σ_li^(2) applies the forgetting rule with rules dynamic generation and removal r_5: a^6 → λ; r_6, consumes the six spikes it contains, and adds rule r_6: a^2 → λ; −r_6. In neuron σ_li^(1), since it contains two spikes with rules r_1, r_2, and r_3, rule r_2: a^2 → a^2; −r_2 is applied, sending two spikes to neurons σ_lj and σ_li^(2) and removing rule r_2. At step t + 4, σ_li^(2) applies r_6: a^2 → λ; −r_6, consuming two spikes and removing rule r_6. In addition, neuron σ_lj has received two more spikes, containing four spikes in total, and starts simulating l_j.
(2) When neuron σ_li^(1) chooses its other rule r_3 to apply, it consumes two spikes, sends one spike to neurons σ_lj and σ_li^(2), and adds rule r_4. At the next step, the spiking rule with rules dynamic generation and removal r_4: a^2 → a; −r_4 is applied, sending one more spike to neurons σ_lj and σ_li^(2), respectively, and removing rule r_4.
At step t + 4, σ_lj applies a → λ, forgetting the spike received from neuron σ_li^(1). In addition, σ_li^(2) applies its spiking rule, sending four spikes to σ_lk. At step t + 5, σ_lk contains four spikes and starts simulating l_k.
Table 1 lists five variants of SNP systems and the number of neurons they require to construct the ADD module. From Table 1, it can be seen that DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 7, 8, 11, 9, and 8 neurons, respectively, while RDGRSNP requires 6. Therefore, the RDGRSNP system proposed in this paper requires the smallest number of neurons.
SUB module: The SUB module is used to simulate the instruction l_i: (SUB(r), l_j, l_k), as shown in Figure 4. This module consists of six neurons and twelve rules. Suppose that neuron σ_li contains four spikes at step t and starts simulating instruction l_i: (SUB(r), l_j, l_k). Neuron σ_li applies the spiking rule with rules dynamic generation and removal r_1: a^4/a → a; r_2, sends one spike to σ_li^(1), σ_li^(2), and σ_r, and adds r_2 to its rule set.
At step t + 1, σ_li still has three spikes with rules r_1, r_2, and r_8, so σ_li applies the spiking rule with rules dynamic generation and removal r_2: a^3 → a^2; −r_2, sends two spikes to its postsynaptic neurons, and removes rule r_2. In σ_li^(1) and σ_li^(2), the forgetting rule is applied, forgetting one spike. In addition, different rules are applied in neuron σ_r depending on whether it contains spikes, in the following two cases: (1) If σ_r contains 4n (n ≥ 1) spikes, then at step t + 1, σ_r applies r_3: a(a^4)^+/a → a; r_4, consumes one spike, sends one spike to its postsynaptic neurons, and adds r_4. (2) If σ_r contains no spikes, the rule applied instead adds r_7. At step t + 2, σ_r has rules r_3, r_6, and r_7 with two spikes, so the spiking rule with rules dynamic generation and removal r_7: a^2 → a; −r_7 is applied, removing rule r_7. Thus, at step t + 3, the auxiliary neuron σ_li^(1) applies rule r_10: a^4 → λ, while σ_li^(2) applies rule r_9: a^4 → a^4, sending four spikes to σ_lk. At step t + 4, σ_lk contains four spikes and starts simulating instruction l_k.
Table 2 lists five variants of SNP systems and the number of neurons they require to construct the SUB module. From Table 2, it can be seen that DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 7, 10, 15, 8, and 8 neurons, respectively, while RDGRSNP requires 6. Therefore, the RDGRSNP system proposed in this paper requires the smallest number of neurons.
FIN module: The FIN module is used to simulate the halt instruction l_h: HALT and to output the number generated by system Π, as shown in Figure 5. The generated number n is represented by the time interval between the first two spikes sent to the environment env by the output neuron. The FIN module constructed in this paper consists of three neurons and six rules. Suppose that at step t, σ_lh has four spikes and starts simulating instruction l_h: HALT, outputting the computation result.
Neuron σ_lh applies the spiking rule with rules dynamic generation and removal r_1: a^4/a^2 → a; r_2, sends one spike to σ_1 and σ_out, and adds r_2 to its rule set. Thus, at step t + 1, neuron σ_lh applies r_2: a^2 → a; −r_2, sends one spike to σ_1 and σ_out, and removes r_2. Neuron σ_1 applies the spiking rule with rules dynamic generation and removal r_3: a(a^4)^+/a^5 → a; r_4, r_5, −r_3, consuming five spikes, sending one spike to the output neuron σ_out, adding rules r_4 and r_5, and removing rule r_3. At step t + 2, σ_out applies r_6: a^3 → a; r_5, sending the first spike to env and adding r_5. Starting from step t + 2, neuron σ_1 forgets four spikes at each step until step t + n + 1, where n ≥ 1. At step t + n + 1, σ_1 has only one spike with rules r_4 and r_5. Therefore, it applies the spiking rule with rules dynamic generation and removal r_5: a → a; r_3, −r_4, −r_5, sends one spike to σ_out, adds r_3, and removes rules r_4 and r_5. At step t + n + 2, σ_out applies r_5: a → a; r_3, −r_4, −r_5 to send the second spike to env. Thus, the time interval between the first two spikes sent by σ_out to the environment is (t + n + 2) − (t + 2) = n, i.e., the number stored in register 1 when the computation stops.
Table 3 lists five variants of SNP systems and the number of neurons they require to construct the FIN module. From Table 3, it can be seen that DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 4, 9, 8, 8, and 5 neurons, respectively, while RDGRSNP requires 3. Therefore, the RDGRSNP system proposed in this paper requires the smallest number of neurons.
For NRE ⊆ N_accRDGRSNP(Π), we simulate the register machine by following the process shown in Figure 6. In the initial state, neuron σ_1 contains 4n spikes, corresponding to the number n to be accepted by the register machine, neuron σ_l0 contains four spikes for triggering the simulation of the computation, and all the remaining neurons are empty.
The simulation of the computation follows the instructions of the register machine until neuron σ_lh receives four spikes. When neuron σ_lh receives four spikes, the computation process of the register machine has been simulated successfully, the computation stops, and the number is accepted.
First, in the number-accepting mode, we need the INPUT module to read the number to be accepted. The number n to be accepted is represented by the time interval between the first two spikes entered, i.e., the input spike train is 10^{n−1}1.
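This encoding is easy to state in code. The sketch below (helper names are our own) represents a spike train as a string of 1s and 0s and recovers n as the distance between the first two spikes:

```python
def encode(n):
    """Encode n >= 1 as the spike train 1 0^(n-1) 1."""
    return "1" + "0" * (n - 1) + "1"

def decode(train):
    """Recover n as the interval between the first two spikes."""
    first = train.index("1")
    second = train.index("1", first + 1)
    return second - first

print(encode(4))          # -> 10001
print(decode(encode(4)))  # -> 4
```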
INPUT module: The INPUT module is used to read the number to be accepted, as shown in Figure 7. This module consists of nine neurons and seven rules.
Suppose that at step 0, σ_in receives one spike; then at step 1, σ_in applies r_3: a → a and sends one spike to each of the auxiliary neurons σ_in1, σ_in2, σ_in3, and σ_in4. Starting from step 2, the auxiliary neurons σ_in1, σ_in2, σ_in3, and σ_in4 fire in a cycle, each sending one spike to σ_in5 and σ_in6. Neuron σ_in5 applies r_4: a^4 → a^4 to send four spikes to σ_1, while neuron σ_in6 applies rule r_5: a^4 → λ to forget the four spikes it receives. This process continues until the input neuron σ_in receives another spike. At step n, σ_in receives the second spike, so at step n + 1, σ_in applies r_3: a → a again and sends one spike to σ_in1, σ_in2, σ_in3, and σ_in4. At step n + 2, σ_in1, σ_in2, σ_in3, and σ_in4 all apply the spiking rule with rules dynamic generation and removal r_1: a^2 → a^2; r_2, −r_1, sending two spikes to neurons σ_in5 and σ_in6, removing rule r_1, and adding rule r_2. Thus, at step n + 3, these four auxiliary neurons apply rule r_2: a^2 → λ; r_1, −r_2 to forget two spikes. At step n + 3, both neurons σ_in5 and σ_in6 have received eight spikes. Neuron σ_in5 applies rule r_7: a^8 → λ to forget the spikes it contains, while in σ_in6, rule r_6: a^8 → a^4 is applied to send four spikes to σ_l0. Neuron σ_1 received its last four spikes at step n + 2, so it contains 4n spikes. Neuron σ_l0 then contains four spikes and starts simulating the start instruction l_0. Table 4 lists five variants of SNP systems and the number of neurons they require to construct the INPUT module. From Table 4, it can be seen that DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 6, 6, 10, 6, and 9 neurons, respectively, while RDGRSNP requires 9. However, this is acceptable because the INPUT module is used only once in the number-accepting mode and does not have a great impact on the overall number of neurons.
In the number-accepting mode, the SUB instruction in M does not change, so we can continue to use the SUB module from Theorem 1 above. Since the halting of the system means that the input number is accepted, an additional FIN module for outputting results is no longer needed. However, in the number-accepting mode, the ADD instruction in M is no longer of the non-deterministic form l_i: (ADD(r), l_j, l_k), but of the deterministic form l_i: (ADD(r), l_j). Therefore, this paper provides a deterministic ADD module, as shown in Figure 8. This module consists of only three neurons and three rules. In this deterministic ADD module, σ_li sends spikes to σ_r, simulating the operation of adding one to the number in register r, and at the same time sends spikes to σ_lj, so that instruction l_j starts to be simulated. Table 5 lists five variants of SNP systems and the number of neurons they require to construct the deterministic ADD module. From Table 5, it can be seen that DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 4, 3, 5, 3, and 3 neurons, respectively, while RDGRSNP requires 3. Therefore, PASNP, MPAIRSNP, SNP-IR, and the proposed RDGRSNP system all tie for the smallest number of neurons.

Small Universal SNP System with Rules Dynamic Generation and Removal
This section presents a small universal RDGRSNP system used to simulate function computation and demonstrates that the RDGRSNP system we proposed requires fewer neurons by comparing it with five variants of SNP systems.

Theorem 3.
A small universal RDGRSNP system using 68 neurons can simulate function computation.
Regarding Theorem 3, we again prove it by simulating a register machine. For the universal register machine M_u [41] used to simulate the function computation shown in Figure 9, there always exists a recursive function g such that θ_x(y) = M_u(g(x), y) holds for any θ_x(y), where θ_x is any function of the fixed enumeration (θ_1, θ_2, . . .) of all unary partial recursive functions, M_u denotes the universal register machine, and g(x) and y are two parameters with x, y ∈ N stored in registers 1 and 2, respectively. The register machine M_u stops when execution reaches l_h: HALT, and the result of the computation is stored in register 0. Proof: For Theorem 3, we simulate the register machine by following the process shown in Figure 10. In the initial state, neuron σ_1 contains 4g(x) spikes, corresponding to parameter g(x) in register 1; neuron σ_2 contains 4y spikes, corresponding to parameter y in register 2; neuron σ_l0 contains four spikes for triggering the simulation of the computation; and all the remaining neurons are empty. The simulation follows the instructions of the register machine until neuron σ_lh receives four spikes. When neuron σ_lh receives four spikes, the computation process of the register machine has been simulated successfully and the computation stops. Meanwhile, the FIN module is triggered to output the computation result.
In M_u, the SUB instruction of the form l_i: (SUB(r), l_j, l_k), the halt instruction l_h: HALT, and the deterministic ADD instruction of the form l_i: (ADD(r), l_j) are included. Thus, we can continue to use the SUB, FIN, and deterministic ADD modules proposed in Theorems 1 and 2 above. In addition, since two registers are needed to store the parameters g(x) and y, respectively, this paper makes a small change to the INPUT module of Theorem 2, as shown in Figure 11. We still use the time interval between two spikes to represent an input number, so the input spike train is 10^{g(x)−1}10^{y−1}1. Suppose that at step 0, σ_in receives the first spike; then at step 1, σ_in applies r_3: a → a and sends one spike to the four auxiliary neurons σ_in1, σ_in2, σ_in3, and σ_in4. Thus, from step 2 on, these four auxiliary neurons fire in a cycle by applying rule r_3: a → a, each sending one spike to σ_in5, σ_in6, and σ_in7. Starting at step 3, neuron σ_in5 continuously applies r_5: a^4 → a^4 to send four spikes to σ_1, by which the first input number is stored in σ_1, while neurons σ_in6 and σ_in7 continuously apply rule r_6: a^4 → λ to forget four spikes. The process continues until the input neuron receives the second spike.
At step g(x), σ_in receives the second spike, so at step g(x) + 1, σ_in applies r_3: a → a again and sends one spike to neurons σ_in1, σ_in2, σ_in3, and σ_in4. Thus, starting at step g(x) + 2, these four neurons apply rule r_4: a^2 → a^2 in a cycle, sending two spikes to their postsynaptic neurons σ_in5, σ_in6, and σ_in7. Starting at step g(x) + 3, neuron σ_in6 continuously applies rule r_7: a^8 → a^4 to send four spikes to neuron σ_2, by which the second input number is stored in σ_2, while neurons σ_in5 and σ_in7 apply rule r_8: a^8 → λ to forget the spikes. The process continues until σ_in receives the third spike. In addition, σ_1 receives its last four spikes from σ_in5 at step g(x) + 2. Therefore, σ_1 contains 4g(x) spikes, which corresponds to the number g(x) in register 1.
At step g(x) + y, σ_in receives the third spike, so at step g(x) + y + 1, σ_in applies r_3: a → a again and sends one spike to σ_in1, σ_in2, σ_in3, and σ_in4. At step g(x) + y + 2, neurons σ_in1, σ_in2, σ_in3, and σ_in4 all contain three spikes with rules r_1, r_3, and r_4, so they all apply the spiking rule with rules dynamic generation and removal r_1: a^3 → a^3; r_2, −r_1, sending three spikes to neurons σ_in5, σ_in6, and σ_in7, adding rule r_2, and removing rule r_1. At step g(x) + y + 3, neurons σ_in1, σ_in2, σ_in3, and σ_in4 all apply the forgetting rule with rules dynamic generation and removal r_2: a^3 → λ; r_1, −r_2, forgetting three spikes, removing rule r_2, and adding rule r_1 again. Meanwhile, neurons σ_in5 and σ_in6 both contain 12 spikes and apply rule r_10: a^12 → λ to forget these 12 spikes, while neuron σ_in7 applies rule r_9: a^12 → a^4 to send four spikes to σ_l0. In addition, σ_2 received its last four spikes from neuron σ_in6 at step g(x) + y + 2, so it contains 4y spikes, which corresponds to the number y in register 2. At step g(x) + y + 4, σ_l0 contains four spikes and starts simulating the start instruction l_0. Table 6 lists five variants of SNP systems and the number of neurons they require to build the INPUT module for this register machine. From Table 6, it can be seen that DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 9, 9, 13, 9, and 9 neurons, respectively, while RDGRSNP requires 11. However, this is acceptable because the INPUT module is used only once during the simulation of function computation and does not have a great impact on the overall number of neurons.
According to the INPUT, deterministic ADD, SUB, and FIN modules described above, a small universal RDGRSNP system for simulating the function θ_x(y) = M_u(g(x), y) consists of 71 neurons, as follows:
(1) neurons used to start simulating instructions: 25;
(2) neurons used to simulate registers: 9;
(3) auxiliary neurons in the ADD modules: 0;
(4) auxiliary neurons in the SUB modules: 2 × 14 = 28;
(5) the input neuron and auxiliary neurons in the INPUT module: 8;
(6) the output neuron in the FIN module: 1.
We can further reduce the number of neurons by compound connections between modules; the specific structures are as follows. Figure 12 shows the compound connection of a pair of consecutive ADD instructions l_i: (ADD(r′), l_g) and l_g: (ADD(r′), l_j): we merge σ_r′ of the former module into the latter module, removing σ_lg. Figure 13 shows the compound connection of a consecutive ADD instruction l_i: (ADD(r′), l_g) and SUB instruction l_g: (SUB(r′), l_j, l_k): we merge the neuron σ_r′ of the former ADD module into the latter SUB module, removing σ_lg.
By the compound connections between modules described above, we remove the three neurons σ_l21, σ_l6, and σ_l10, so that the number of neurons required to build a small universal RDGRSNP system for simulating the function θ_x(y) = M_u(g(x), y) is reduced from 71 to 68. Table 7 lists several variants of SNP systems and the number of neurons they require to construct a small universal SNP system for simulating function computation. From Table 7, it can be seen that the five variants DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 81, 121, 151, 95, and 100 neurons, respectively, while the RDGRSNP system proposed in this paper requires only 68.
Table 7. The comparison of the number of neurons required to construct a small universal SNP system for simulating function computation.

Variants of SNP Systems    Number of Neurons
DSNP [38]                  81
PASNP [39]                 121
PSNRSP [40]                151
MPAIRSNP [41]              95
SNP-IR [42]                100
RDGRSNP (this paper)       68

In the construction of SNP systems, the number of neurons is generally used to measure the computational resources required to build a system: the fewer the neurons, the fewer the computational resources required. Therefore, the smaller the number of neurons needed to construct an SNP system, the better. As Table 7 shows, our proposed RDGRSNP system requires the fewest neurons to build a small universal SNP system for simulating the function θ_x(y) = M_u(g(x), y). This means that an RDGRSNP system needs fewer computational resources than the other systems, which is an advantage of the proposed variant.
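The neuron counts reported above can be double-checked with a few lines of arithmetic (the category names are our own shorthand for the six items enumerated earlier):

```python
counts = {
    "start-of-instruction neurons": 25,
    "register neurons": 9,
    "ADD-module auxiliary neurons": 0,   # the deterministic ADD module needs none
    "SUB-module auxiliary neurons": 2 * 14,
    "INPUT-module neurons": 8,           # the input neuron plus its auxiliaries
    "FIN output neuron": 1,
}
total = sum(counts.values())
print(total)      # -> 71
print(total - 3)  # -> 68, after removing three neurons by compound connections
```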

Conclusions
In conventional SNP systems, the rules contained in neurons do not change during the computation process. However, the biochemical reactions in biological neurons tend to differ depending on factors such as the substances in the neuron. Motivated by this, RDGRSNP systems are proposed in this paper. In RDGRSNP systems, applying a rule may also update the rule set of the neuron. In Section 2, we give the definition of RDGRSNP systems and illustrate how they work with an illustrative example.
In Section 3, we demonstrate the computational power of RDGRSNP systems by simulating register machines. Specifically, we demonstrate that RDGRSNP systems are Turing universal when used as a number-generating device and as a number-accepting device. Subsequently, in Section 4, we construct a small universal RDGRSNP system using 68 neurons. By comparison with five variants of SNP systems, namely DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR, it is demonstrated that the RDGRSNP system proposed in this paper requires fewer resources to construct a small universal system for simulating function computation.
Our future research will focus on the following areas. Although the computational power of RDGRSNP systems has been demonstrated, the potential of RDGRSNP systems goes far beyond that. SNP systems have shown excellent capabilities in solving NP problems, and we have demonstrated the advantage of RDGRSNP systems compared with other variants of SNP systems. Therefore, we believe that RDGRSNP systems can perform better in solving NP problems.
The RDGRSNP systems proposed in this paper work in synchronous mode. However, there are other modes of operation, such as asynchronous mode, that have not been discussed. In asynchronous mode, the neurons of the system can choose whether or not to apply an applicable rule at each step; of course, a rule must still satisfy its control condition to be applied. Therefore, neurons have more autonomy in asynchronous mode. Many variants of SNP systems working in asynchronous mode have been investigated, and an exploration of RDGRSNP systems working in asynchronous mode is also a future research direction.