Dynamic Threshold Neural P Systems with Multiple Channels and Inhibitory Rules

Abstract: In biological neural networks, neurons transmit chemical signals through synapses, and multiple ion channels are involved in this transmission. Moreover, synapses are divided into inhibitory synapses and excitatory synapses. The firing mechanism of previous spiking neural P (SNP) systems and their variants is essentially the same as that of excitatory synapses, while the function of inhibitory synapses is rarely reflected in these systems. In order to more fully simulate how neurons communicate through synapses, this paper proposes a dynamic threshold neural P system with multiple channels and inhibitory rules (DTNP-MCIR systems). DTNP-MCIR systems represent a distributed parallel computing model. We prove that DTNP-MCIR systems are Turing universal as number generating/accepting devices. In addition, we design a small universal DTNP-MCIR system with 73 neurons as a function computing device.


Introduction
Membrane computing (MC) is a branch of distributed parallel computing; its models are usually called P systems or membrane systems. MC is obtained by studying the structure and functioning of biological cells as well as the communication and cooperation of cells in tissues, organs, and biological neural networks [1,2]. P systems are mainly divided into three categories, namely cell-like P systems, tissue-like P systems, and neural-like P systems. In the past two decades, many P-system variants have been studied and applied to real-world problems, and most of them have been proven to be universal number generating/accepting devices and function computing devices [3,4].

Related Work
Spiking neural P (SNP) systems are abstracted from the biological fact that neurons transmit spikes to each other through synapses. An SNP system can be regarded as a directed graph, where neurons are the nodes and the synaptic connections between neurons are the arcs [5,6]. SNP systems are the main form of neural-like P systems [7]. An SNP system consists of two components: data and firing rules. Data usually describe the states of neurons and can also indicate the number of spikes contained in every neuron. Firing rules contain spiking rules and/or forgetting rules, and the firing of a rule needs to satisfy necessary conditions. Firing rules have the form E/a^c → a^p, so the firing of a rule is related only to the state of the neuron itself and is independent of the states of other neurons. Moreover, the parallel work of neurons makes SNP systems models of distributed parallel computing; therefore, most SNP systems are equipped with a global clock to mark time. SNP systems also exhibit non-determinism: if more than one rule can be enabled in a neuron at a certain time, only one of them is selected non-deterministically. Neural-like P systems have received extensive attention and research in recent years. SNP systems were proposed by Ionescu et al. [7]. Usually, SNP systems have spiking and forgetting rules, but Song et al. [8] proposed that neurons can also have request rules, which enable neurons to perceive "stimuli" from the environment by receiving a certain number of spikes. Zeng et al. [9] proposed SNP systems with weights, in which a neuron fires when its potential equals a given value (called a threshold). Zhao et al. [10] introduced a new mechanism called neuron dissolution, which can eliminate redundant neurons generated during the computation. In order to enable SNP systems to represent and process fuzzy and uncertain knowledge, weighted fuzzy SNP systems were proposed by Wang et al. [11]. Considering that SNP systems fire only one neuron in each step, sequential SNP systems were investigated in [12,13]. Although most SNP systems are synchronous, asynchronous SNP systems were investigated in [14-16]; Song et al. [15] also studied the computing power of asynchronous SNP systems with partial synchronization. Moreover, most SNP systems have been proven Turing universal as number generating/accepting devices [17,18], language generators [19], and function computing devices [20,21]. Furthermore, SNP systems have applications in real-world problems such as fault diagnosis [22-24], clustering [25], and optimization [26].
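To make the firing mechanism concrete, the following minimal sketch checks and applies an SNP-style rule E/a^c → a^p: the rule is enabled when the neuron's spike content a^n belongs to L(E) and at least c spikes are present. The rule representation and function names here are illustrative assumptions, not notation from any specific system.

```python
import re

def enabled(rule, n):
    """rule = (E, c, p): regex over 'a', spikes consumed, spikes produced.
    The rule fires only if a^n matches E and n >= c."""
    E, c, p = rule
    return n >= c and re.fullmatch(E, "a" * n) is not None

def fire(rule, n):
    """Apply an enabled rule: consume c spikes, emit p spikes to successors."""
    E, c, p = rule
    assert enabled(rule, n)
    return n - c, p  # (spikes left in the neuron, spikes sent out)

rule = ("a(aa)*", 1, 1)      # enabled on any odd number of spikes
print(enabled(rule, 3))      # True: 'aaa' matches a(aa)*
print(fire(rule, 3))         # (2, 1)
```

The same regular-expression test also covers forgetting rules, which simply have p = 0.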

Motivation
Yang et al. [27] investigated SNP systems with multiple channels (SNP-MC systems), based on the fact that multiple ion channels are involved in the transmission of chemical signals; accordingly, spiking rules with channel labels were introduced into neural P systems. After SNP-MC systems were proposed, many neural P systems combined with multiple channels were investigated. These studies show that using multiple channels can improve the computing power of neural P systems.
In addition, the biological nervous system exhibits the following facts: (1) The conduction of nerve impulses between neurons is unidirectional; that is, nerve impulses can only be transmitted from the axon of one neuron to the cell body or dendrites of another neuron, not in the opposite direction. (2) Synapses are divided into excitatory synapses and inhibitory synapses, according to the signal from the presynaptic cell: if the excitability of the postsynaptic cell is increased, the connection is an excitatory synapse; if the excitability of the postsynaptic cell is decreased, or excitation is not easily generated, it is an inhibitory synapse.
Li et al. [28] were inspired by the above two biological facts and proposed SNP systems with inhibitory rules (SNP-IR systems). The firing condition of an inhibitory rule is related not only to the state of the neuron where the rule is located but also to other neurons (presynaptic neurons); this reflects the unidirectionality of nerve impulse transmission. SNP-IR systems have stronger control capabilities than other SNP systems.
Peng et al. [29] first proposed dynamic threshold neural P systems (DTNP systems), which are abstracted from the intersecting cortical model (ICM) of cortical neurons. The firing mechanism of DTNP systems differs from that of SNP systems: DTNP systems adopt a dynamic-threshold-based firing mechanism, and each neuron has two data units (a feeding input unit and a dynamic threshold unit) as well as a maximum spike consumption strategy. In order to improve the computational efficiency of DTNP systems, we introduce multiple channels and inhibitory rules.
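As a rough illustration of this two-unit mechanism, the following sketch models a neuron holding spikes in both a feeding input unit u and a dynamic threshold unit τ. The firing test u ≥ τ and the way both units are decremented are assumptions made for illustration, not the system's exact semantics.

```python
class DTNPNeuron:
    """Hypothetical DTNP-style neuron with two data units."""
    def __init__(self, u, tau):
        self.u = u          # spikes in the feeding input unit
        self.tau = tau      # spikes in the dynamic threshold unit

    def can_fire(self):
        # Assumed dynamic-threshold condition: enough input spikes
        # relative to the current threshold content.
        return self.u >= self.tau

    def fire(self, consume_u, consume_tau, produce):
        """Apply a rule: consume spikes from both units, emit `produce`."""
        assert self.can_fire()
        self.u -= consume_u
        self.tau -= consume_tau
        return produce      # spikes sent to successor neurons

n = DTNPNeuron(u=2, tau=2)
print(n.can_fire())         # True: u >= tau
print(n.fire(2, 2, 1))      # 1 spike emitted; both units now empty
```

In the real model the threshold unit evolves dynamically over time; this sketch only captures the per-step consumption.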
The main motivation of this paper is to introduce inhibitory rules into DTNP systems, which better reflect the working mechanism of inhibitory synapses and simulate the actual communication between neurons through synapses. The main contributions are as follows: (1) The introduction of multiple channels improves the control capability of DTNP systems so that they can better solve real-world problems.
(2) Inspired by SNP-IR systems, we introduce inhibitory rules into DTNP systems, but the form and firing conditions of inhibitory rules have been redefined.
(3) In previous systems, the firing rules of neuron σ_r (corresponding to register r) are fixed in form. In DTNP-MCIR systems, however, E_r can also be a regular expression, for example E_r = a(a²)*, which represents an odd number of spikes; such a rule is enabled only when neuron σ_r contains an odd number of spikes and satisfies the default firing condition.
If a rule in a neuron is enabled in DTNP-MCIR systems, the number of spikes consumed by the neuron is determined by the rule's parameters. When two rules in a neuron can be applied simultaneously at time t, one of them is chosen according to the neuron's maximum spike consumption strategy. Based on the above, we build a variant of DTNP systems called dynamic threshold neural P systems with multiple channels and inhibitory rules (DTNP-MCIR systems), and we prove the Turing universality of DTNP-MCIR systems as number generating/accepting devices and function computing devices. The rest of this paper is arranged as follows. Section 2 defines DTNP-MCIR systems and gives an illustrative example. Section 3 establishes the Turing universality of DTNP-MCIR systems as number generating/accepting devices. Section 4 presents DTNP-MCIR systems as function computing devices. Section 5 concludes the paper.

DTNP-MCIR Systems
In this section, we define a DTNP-MCIR system, describe some of its related details, and give an illustrative example.For convenience, DTNP-MCIR systems use the same notations and terms as SNP systems.

Definition
A DTNP-MCIR system Π of degree m ≥ 1 is defined as follows, where:
(1) O = {a} is a singleton alphabet (the object a is called the spike);
(2) L is the set of channel labels;
(3) the initial configuration records, for each of the m neurons, the numbers of spikes in its feeding input unit and dynamic threshold unit.
The rules in DTNP-MCIR systems are divided into two types: firing rules and inhibitory rules. Firing rules have the general form E/(a^u, a^τ) → a^p(l), where E is a regular expression, u and τ are the numbers of spikes consumed from the feeding input unit and the dynamic threshold unit respectively, p is the number of spikes produced, and l is a channel label. L(E) denotes the language associated with the regular expression E, and a^n indicates that the neuron where the rule is located contains n spikes. Once a rule is enabled, the neuron removes the required spikes and transmits the p generated spikes to its succeeding neurons along channel l. When p = 0, the rule is a forgetting rule; λ denotes the empty string, indicating that forgetting rules do not generate new spikes.
An inhibitory rule has a special firing mechanism. Usually, the firing of a neuron depends only on its current state, and other neurons have no influence on it. However, the firing condition of an inhibitory rule depends not only on the state of the current neuron but also on the state of a preceding neuron (called an inhibitory neuron). Suppose an inhibitory rule is located in neuron σ_i and the inhibitory neuron of σ_i is σ_j.
We assume that when there is an inhibitory arc between neurons σ_i and σ_j, an ordinary arc cannot exist between them.
In addition, the inhibitory neuron σ_j only controls the firing of neuron σ_i, and neuron σ_i has no effect on the inhibitory neuron σ_j. The firing condition of an inhibitory rule involves both the state of σ_i and the number of spikes in σ_j; of the spikes involved, n spikes come from other neurons and p spikes are generated by neuron σ_i itself. An inhibitory rule may additionally carry a regular-expression condition, in which case it is called an extended inhibitory rule, and its firing condition is defined analogously. If neuron σ_i meets a firing condition during a computation, then one of the rules in R_i must be used.
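The dependence of an inhibitory rule on its presynaptic neuron can be sketched as a simple predicate. The exact extra constraint used below (comparing the inhibitory neuron's feeding input content u_j with the threshold τ_i) is an assumption for illustration; only the fact that the condition consults both neurons comes from the text.

```python
def inhibitory_rule_enabled(u_i, tau_i, u_j):
    """Hedged sketch of an inhibitory-rule firing condition:
    u_i, tau_i -- feeding input / threshold units of neuron i;
    u_j        -- feeding input unit of the inhibitory neuron j."""
    own_condition = u_i >= tau_i   # the usual dynamic-threshold test
    inhibitory_ok = u_j <= tau_i   # assumed extra constraint on neuron j
    return own_condition and inhibitory_ok

print(inhibitory_rule_enabled(u_i=2, tau_i=2, u_j=0))  # True: j is quiet
print(inhibitory_rule_enabled(u_i=2, tau_i=2, u_j=3))  # False: j inhibits i
```

Note the asymmetry: neuron j's state gates neuron i, but nothing about neuron i appears in any condition for neuron j, matching the one-way control described above.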

If two rules in neuron σ_i can be applied to configuration C_t at the same time, only one of them is applied. According to the maximum spike consumption strategy of DTNP-MCIR systems, the rule that consumes more spikes is applied; when both rules consume the same number of spikes, one of them is selected non-deterministically. Note that this strategy is also effective for forgetting rules. For example, if a forgetting rule and a spiking rule both satisfy their firing conditions at time t, and the forgetting rule consumes three spikes while the spiking rule consumes only one, then the forgetting rule is chosen and applied.
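The maximum spike consumption strategy just described can be sketched directly: among the simultaneously enabled rules, pick the one consuming the most spikes, breaking ties non-deterministically. The rule representation (name, spikes consumed) is an assumption for illustration.

```python
import random

def select_rule(enabled_rules):
    """enabled_rules: list of (name, spikes_consumed) pairs.
    Returns the rule consuming the most spikes; ties are broken
    non-deterministically (modeled here with a random choice)."""
    most = max(consumed for _, consumed in enabled_rules)
    candidates = [r for r in enabled_rules if r[1] == most]
    return random.choice(candidates)

# The forgetting rule consuming 3 spikes beats the spiking rule consuming 1:
rules = [("forgetting", 3), ("spiking", 1)]
print(select_rule(rules))   # ('forgetting', 3)
```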
We call a transition step a move from one configuration to another. A sequence of transitions starting from the initial configuration is called a computation. For a configuration C_t, if no rule in the system can be applied, the computation halts. Any computation corresponds to a binary sequence: write 1 when the output neuron σ_out emits a spike to the environment, and 0 otherwise.
The computation result is defined as the time interval between the first two spikes emitted by the output neuron σ_out. N₂(Π) denotes the set of numbers generated by system Π, and N₂DTNP-MCIR(m, n) denotes the family of all sets N₂(Π) generated by DTNP-MCIR systems having at most m neurons and at most n rules in each neuron; when m or n is unbounded, the symbol * is used instead. System Π can also be used as an accepting device: the input neuron receives spikes from the environment, and the output neuron is removed from the system. The system reads a spike train from the environment and stores the number n in the form of 2n spikes; when the computation halts, n is the number accepted. N_acc(Π) denotes the set of numbers accepted by the system, and N_accDTNP-MCIR(m, n) denotes the family of all sets N_acc(Π) accepted by DTNP-MCIR systems having at most m neurons and at most n rules in each neuron.
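The generating-mode result (the interval between the first two spikes the output neuron emits) can be read off a binary spike train directly; the following is a literal implementation of that definition.

```python
def generated_number(spike_train):
    """spike_train: list of 0/1 output bits, one per time step (from t=1).
    Returns the interval between the first two 1s, or None if the
    output neuron spiked fewer than twice."""
    times = [t for t, bit in enumerate(spike_train, start=1) if bit == 1]
    if len(times) < 2:
        return None
    return times[1] - times[0]

print(generated_number([0, 1, 0, 0, 1]))  # 3: spikes at t=2 and t=5
```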

Illustrative Example
In order to clearly understand the working mechanism of DTNP-MCIR systems, we give an example that generates a finite spike train, as shown in Figure 2. The DTNP-MCIR system Π consists of four neurons σ₁, σ₂, σ₃, and σ_out. Initially, the feeding input units of neurons σ₁ and σ₂ each contain two spikes, while the other neurons contain none; each dynamic threshold unit holds its initial number of spikes. At time 1, two rules in neuron σ₁ consume the same number of spikes, so one of them is selected non-deterministically, giving the following two cases:
(1) Case 1: if the rule sending spikes via channel (1) is applied in neuron σ₁ at time 1, then σ₁ consumes the two spikes in its feeding input unit and sends two spikes to neuron σ₃. At the same time, the rule of neuron σ₂ reaches its firing condition, so σ₂ consumes the two spikes in its feeding input unit and sends two spikes to neuron σ_out via channel (1). At time 2, two rules in σ_out can be applied, but according to the maximum spike consumption strategy the spiking rule is applied, sending one spike to the environment through channel (1). Next, σ₃ fires, consuming the two spikes in its feeding input unit and sending two spikes to neuron σ_out; thus σ_out fires again and sends one more spike to the environment, and the system Π halts. The spike train generated in this case is "011".
(2) Case 2: if the rule sending spikes via channel (2) is applied in neuron σ₁ at time 1, then σ₁ consumes the two spikes in its feeding input unit and sends one spike to neuron σ_out through channel (2). Since the state of σ₂ is the same as in Case 1 at time 1, σ₂ consumes its two spikes and sends two spikes to σ_out through channel (1). At time 2, σ_out emits one spike to the environment; afterwards a forgetting rule removes the remaining spike in the feeding input unit and the system Π halts. The spike train generated in this case is "010".

Turing Universality of DTNP-MCIR Systems as Number-Generating/Accepting Device
In this section, we explain the working mechanism of DTNP-MCIR systems in the number generating and number accepting modes. DTNP-MCIR systems prove their computational completeness by simulating register machines. More specifically, DTNP-MCIR systems can generate/accept all recursively enumerable sets of numbers (whose family is denoted NRE).
A register machine can be defined as M = (m, H, l₀, l_h, I), where m is the number of registers, H is the set of instruction labels, l₀ is the start label, l_h is the halt label, and I is the set of instructions. Each instruction in I corresponds to a label in H, and the instructions in I have the following three forms:
(1) l_i : (ADD(r), l_j, l_k) (add 1 to register r and then move non-deterministically to one of the instructions with labels l_j, l_k);
(2) l_i : (SUB(r), l_j, l_k) (if register r is non-zero, subtract 1 from it and go to the instruction with label l_j; otherwise go to the instruction with label l_k);
(3) l_h : HALT (the halting instruction).
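The semantics of these instructions can be made executable with a minimal simulator. The instruction encoding and the explicit choice sequence for the non-deterministic ADD branches are conventions of this sketch, not part of the formal definition.

```python
def run_machine(program, start, registers, add_choices):
    """program: dict label -> ('ADD', r, lj, lk) | ('SUB', r, lj, lk) | ('HALT',).
    add_choices supplies the non-deterministic ADD branch picks (0 -> lj, 1 -> lk)."""
    regs = dict(registers)
    label, choices = start, iter(add_choices)
    while True:
        instr = program[label]
        if instr[0] == "HALT":
            return regs
        op, r, lj, lk = instr
        if op == "ADD":
            regs[r] = regs.get(r, 0) + 1
            label = lj if next(choices, 0) == 0 else lk
        else:  # SUB: decrement if non-zero, branch accordingly
            if regs.get(r, 0) > 0:
                regs[r] -= 1
                label = lj
            else:
                label = lk

# Generate the number 2 in register 1: add twice, using register 0 as a counter.
prog = {
    "l0": ("ADD", 1, "l1", "l1"),
    "l1": ("SUB", 0, "l0", "lh"),
    "lh": ("HALT",),
}
print(run_machine(prog, "l0", {0: 1, 1: 0}, [0, 0]))  # {0: 0, 1: 2}
```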

DTNP-MCIR Systems as Number Generating Devices
Initially, every register of the register machine M is empty. In the generating mode, the register machine computes a number n as follows: it starts with instruction l₀ and applies a series of instructions until it reaches the halting instruction l_h; the number finally stored in the first register is the result computed by the register machine. The family NRE is characterized by register machines.
Theorem 1. DTNP-MCIR systems working in the generating mode are Turing universal [30]. In the computation, the simulation is complete when neuron σ_{l_h} fires; the output neuron σ_out sends spikes to the environment twice, at times t₁ and t₂ respectively, and the computation result is defined as the interval t₂ − t₁, which is also the number contained in register 1.
In order to verify that the system Π₁ can indeed simulate the register machine M correctly, we explain how the ADD and SUB modules simulate the ADD and SUB instructions, and how the FIN module outputs the computation results.
(1) ADD module (shown in Figure 3), simulating an ADD instruction l_i : (ADD(r), l_j, l_k). The system Π₁ starts from the ADD instruction labeled l₀. Suppose an ADD instruction l_i : (ADD(r), l_j, l_k) is simulated; it involves seven neurons. First, the rule in neuron σ_{l_i} is applied and two spikes are sent onward via channel (1); since neuron σ_{l_i} held two spikes, two spikes are added to neuron σ_r. Next, two rules both satisfy the firing condition and consume an equal number of spikes, so one of them is selected non-deterministically: in one case two spikes reach neuron σ_{l_j}, in the other two spikes reach σ_{l_k}, meaning that the system Π₁ starts to simulate instruction l_j or l_k respectively. Therefore, ADD instructions are correctly simulated by the ADD module.
(2) SUB module (shown in Figure 4), simulating a SUB instruction l_i : (SUB(r), l_j, l_k), which also involves seven neurons. Depending on whether neuron σ_r contains spikes, either two spikes are removed from σ_r and two spikes are delivered to neuron σ_{l_j}, or the auxiliary spike is removed and two spikes are delivered to neuron σ_{l_k}.
(3) FIN module (shown in Figure 5), outputting the result of computation. Neuron σ_{l_h} receives two spikes when the register machine halts, and the FIN module then emits two spikes to the environment whose time interval equals the number stored in register 1.
According to the above description, the system Π₁ can correctly simulate the register machine M working in the generating mode. Therefore, Theorem 1 holds.

Turing Universality of Systems Working in the Accepting Mode
In the following, we prove the universality of DTNP-MCIR systems as number accepting devices. Suppose that neuron σ_in reads the first spike from the environment at time t.

Theorem 2. DTNP-MCIR systems working in the accepting mode are Turing universal.
Proof (sketch). An INPUT module (Figure 6), involving five neurons, reads a spike train from the environment and encodes the input number as spikes in a register neuron; once the reading is finished, the spikes remaining in the auxiliary neurons are removed by a forgetting rule, and the simulation of the register machine instructions proceeds with the deterministic ADD module and the SUB module. Through this construction, the system Π₂ correctly simulates the register machine M working in the accepting mode, and each neuron of Π₂ contains at most two rules. Therefore, Theorem 2 holds.

DTNP-MCIR Systems as Function Computing Devices
In this part, we discuss the ability of a small universal DTNP-MCIR system to compute functions. Consider a register machine M_u. Initially, all registers are assumed to be empty, and k arguments are introduced into k registers; in general, only the first two registers are used. The register machine then starts from instruction l₀ and continues the computation until it reaches the halt instruction l_h; finally, the function value is stored in a special register r_t. Let (φ₀, φ₁, …) be a fixed admissible enumeration of the unary partial recursive functions. A register machine M_u is universal if there is a recursive function g such that φ_x(y) = M_u(g(x), y) for all natural numbers x, y. Korec [31] proposed such a small universal register machine M_u. We replace its halt instruction with three new instructions, and define the modified register machine as M'_u, as shown in Table 1.
We design a small universal DTNP-MCIR system to simulate the register machine M'_u.
Theorem 3. There is a small universal DTNP-MCIR system having 73 neurons for computing functions.
Proof. We design a DTNP-MCIR system Π₃ to simulate the register machine M'_u. We do not use inhibitory rules in the INPUT module, but make full use of the function of multiple channels.
Multiple channels play a major role in saving neurons and improving the operating efficiency of the system.
Suppose that neuron σ_in receives the first spike from the environment at time t; the spikes that follow are read in the same way until the input is complete. By observing Table 1, we find that all ADD instructions have the deterministic form l_i : (ADD(r), l_j); therefore, we use the deterministic ADD module shown in Figure 7 to simulate the ADD instructions. Its working mechanism was clarified in the proof of Theorem 2.
In addition, the SUB instruction l_i : (SUB(r), l_j, l_k) can be simulated by the SUB module in Figure 4, whose working mechanism was discussed in the proof of Theorem 1.
When neuron σ_{l_h} receives two spikes at time t, the computation of the system Π₃ halts. The OUTPUT module shown in Figure 9, involving five neurons, is used to emit the computation result; we assume that neuron σ₈ contains 2n spikes at that moment. Moreover, by combining certain consecutive ADD/SUB instructions we can save eight neurons in total; the recombined instructions are simulated by the ADD-ADD, SUB-ADD-1, and SUB-ADD-2 modules, respectively. Therefore, the number of neurons in system Π₃ drops from 81 to 73.

 
In order to further illustrate the computing power of DTNP-MCIR systems, we compare them with some computing models in terms of the number of computing units required. From Table 2 we can see that DTNP systems, SNP systems, SNP-IR systems, and recurrent neural networks need 109, 67, 100, and 886 neurons, respectively, to achieve Turing universality for computing functions; DTNP-MCIR systems need fewer neurons than most of them. Although SNP-MC systems require only 38 neurons to compute functions, far fewer than DTNP-MCIR systems, the two have different working modes: those SNP-MC systems work in asynchronous mode, while DTNP-MCIR systems work in synchronous mode. In addition, Table 3 gives the full names of the compared models' abbreviations.
Table 2. Comparison of different computing models in terms of the number of computing units.
Table 3. Full names of the compared models.
DTNP systems [29]: dynamic threshold neural P systems.
SNP-IR systems [28]: spiking neural P systems with inhibitory rules.
SNP-MC systems [27]: spiking neural P systems with multiple channels.
SNP systems [32]: smaller universal spiking neural P systems.
SNP-MC systems [34]: small universal asynchronous spiking neural P systems with multiple channels.

Conclusions and Further Work
Inspired by SNP systems with inhibitory rules (SNP-IR systems) and SNP systems with multiple channels (SNP-MC systems), this paper proposes dynamic threshold neural P systems with multiple channels and inhibitory rules (DTNP-MCIR systems). Dynamic threshold neural P systems (DTNP systems) had already been researched and proven to be Turing universal number generating/accepting devices. Our original intention in constructing DTNP-MCIR systems is to more fully simulate the actual situation of neurons communicating through synapses, and also to demonstrate the use of inhibitory rules and multiple channels in DTNP systems. In addition, we have optimized DTNP systems: the firing condition of a rule may now include a regular expression, for example one under which neuron σ_r can fire only when its number of spikes is odd.
In the future, we want to research whether DTNP-MCIR systems can be combined with certain algorithms; we think that the two data units of DTNP-MCIR systems can be used as two parameter inputs, and the use of inhibitory rules and multiple channels gives them stronger control capabilities. Moreover, because DTNP-MCIR systems are a distributed parallel computing model, they can greatly improve the computational efficiency of algorithms. Future work will focus on using DTNP-MCIR systems to solve real-world problems, such as image processing and data clustering.

Proof of Theorem 1 (sketch). Consider a register machine M = (m, H, l₀, l_h, I) working in the generating mode, assuming that all registers except register 1 are empty and that register 1 is never decremented during the computation. We design a DTNP-MCIR system Π₁ to simulate M working in generating mode. The system includes three kinds of modules: ADD modules and SUB modules, which simulate the ADD and SUB instructions respectively, and a FIN module, which handles the computation result of Π₁. We stipulate that each register r of the register machine M corresponds to a neuron σ_r; there are no rules in σ_r, and numbers are encoded in it: if the number stored in register r is n ≥ 0, then neuron σ_r contains 2n spikes. Each instruction label l corresponds to a neuron σ_l, and some auxiliary neurons are introduced into the modules. Initially there are no spikes in the feeding input units of the auxiliary neurons, but neuron σ_{l₀} receives two spikes at the beginning. Each neuron has an initial value in its dynamic threshold unit: (i) each instruction neuron has a fixed initial threshold; (ii) each register neuron σ_r has an initial threshold of a; (iii) the initial thresholds of the other neurons depend on the module. Because neuron σ_{l_i} gets two spikes, the system Π₁ begins to simulate the instruction l_i : (OP(r), l_j, l_k), where OP is ADD or SUB. Starting from the activated neuron σ_{l_i}, the simulation updates neuron σ_r according to OP, and then two spikes are delivered to one of the neurons σ_{l_j} and σ_{l_k}. The simulation continues until neuron σ_{l_h} is activated.

Figure 3. Module ADD, simulating the ADD instruction l_i : (ADD(r), l_j, l_k).
Suppose the ADD instruction is simulated at time t, when neuron σ_{l_i} contains two spikes. Neuron σ_{l_i} fires and sends spikes to the auxiliary neurons σ_{c₁} and σ_{c₂} via channel (1); because neuron σ_r receives two spikes, register r increases by one. At time t + 1, both the feeding input unit and the dynamic threshold unit of the auxiliary neurons contain two spikes, so they fire, and one of two equally consuming rules is selected non-deterministically. In one case, σ_{c₂} sends two spikes to neuron σ_{l_k} via channel (1); neuron σ_{l_k} then receives two spikes, and the system Π₁ starts to simulate instruction l_k. In the other case, σ_{c₃} is enabled, removes the only spike in its feeding input unit, and sends two spikes to neuron σ_{l_j}, so that Π₁ starts to simulate instruction l_j.
Figure 4. Module SUB, simulating a SUB instruction l_i : (SUB(r), l_j, l_k).
When the SUB module runs, an auxiliary neuron transmits one spike toward neuron σ_r, and the behavior then depends on whether σ_r contains spikes: if register r is non-empty, spikes are removed from σ_r and two spikes reach neuron σ_{l_j}, so the system Π₁ starts to simulate instruction l_j; otherwise the only spike in the feeding input unit is removed and two spikes reach neuron σ_{l_k}. It can be seen from the above description that the SUB instruction is correctly simulated by the SUB module: neuron σ_{l_j} or σ_{l_k} receives two spikes according to whether neuron σ_r contains spikes.

Figure 6. The INPUT module of Π₂.
Figure 6 shows the INPUT module, where neuron σ_in is used to read a spike train of the form 10^{n−1}1 from the environment. While the input is being read, the auxiliary neurons exchange spikes with each other and load the register neuron so that the input number n is stored as 2n spikes. When the relevant neurons each contain two spikes, the forgetting rule removes the auxiliary spikes, and two spikes are transmitted to neuron σ_{l₀} via channel (2). Since neuron σ_{l₀} then contains two spikes, the system Π₂ starts to simulate instruction l₀. In the accepting mode, we use the deterministic ADD module shown in Figure 7 to simulate instructions of the form l_i : (ADD(r), l_j) in the register machine M: when neuron σ_{l_i} fires, two spikes are sent to neurons σ_{l_j} and σ_r, which means that the system starts to simulate instruction l_j and register r increases by 1. The SUB module shown in Figure 4 is used to simulate the instruction l_i : (SUB(r), l_j, l_k).

The register machine M_u = (8, H, l₀, l_h, I) contains 8 registers (labeled from 0 to 7) and 23 instructions. By introducing the two numbers g(x) and y into registers 1 and 2, respectively, φ_x(y) can be computed by M_u; when M_u stops, the function value is stored in register 0. In order to adapt M_u, we introduce a new register 8 and replace the halt instruction with three new instructions, obtaining the modified register machine M'_u shown in Table 1.
The system Π₃ contains an INPUT module, an OUTPUT module, and some ADD and SUB modules that simulate the ADD and SUB instructions of M'_u, respectively. The INPUT module reads the spike train from the environment, and the OUTPUT module handles the computation result, which is placed in register 8. Each register r and each instruction l_i of M'_u correspond to a neuron σ_r and a neuron σ_{l_i}, respectively. If the feeding input unit of neuron σ_r contains 2n spikes, the register r contains the number n. When neuron σ_{l_i} receives two spikes, the simulation of instruction l_i starts. Initially, all neurons in the system are assumed to be empty.
The INPUT module is shown in Figure 8. The spike train is read from the environment by it, and 2g(x) spikes and 2y spikes are stored in neurons σ₁ and σ₂, respectively. While the spike train is being read, the auxiliary neurons σ_{c₁}, σ_{c₂}, and σ_{c₃} exchange spikes with each other and send spikes to neuron σ₁ via channel (1); neuron σ₂ receives a total of 2y spikes from time t₂ + 2 to time t₂ + y + 1 (that is, the number stored in neuron σ₂ is y). When the third spike is read by neuron σ_in, at the next moment the auxiliary neurons each contain three spikes; three spikes are consumed by a forgetting rule, and a rule is applied to transmit two spikes to neuron σ_{l₀} via channel (3), which indicates that the system Π₃ starts to simulate the initial instruction l₀.

Figure 10. The module simulating consecutive ADD-ADD instructions.
Case 3. The following six pairs of ADD and SUB instructions can be combined. Observing these six pairs, it can be seen that the latter ADD instruction sits at the first exit of the preceding SUB instruction; therefore, each pair can be expressed as a combined SUB-ADD instruction. In this way, we can save six neurons. The recombined SUB and ADD instructions are simulated by the SUB-ADD-2 module in Figure 12.
Figure 12. The module simulating consecutive SUB-ADD-2 instructions.
With these combinations, the number of neurons in Π₃ drops from 81 to 73. This completes the proof of Theorem 3.
Finally, we remark on the rule forms. In the original DTNP systems, τ and p are always equal; we consider this to lack generality and improve it. Moreover, for the rule used in the SUB module, if neuron σ_r contained 2n spikes at time t with 2n ≥ u, then σ_r could fire at time t, which would be unreasonable for the entire system; therefore, we refine the form of the firing rules. The firing condition of an inhibitory rule corresponds to the function of inhibitory synapses: it is related not only to the state of neuron σ_i, whose dynamic threshold unit holds τ_i(t) spikes at time t, but also to the state of the inhibitory neuron σ_j, whose feeding input unit holds u_j(t) spikes; the comparison between u_j(t) and τ_i(t) is an additional constraint relative to the firing condition of an ordinary firing rule. Although the firing conditions of inhibitory rules and firing rules differ, the spike consumption strategy is the same when they are applied. In the INPUT module, this reading process repeats until the second spike reaches neuron σ_in.