Spiking Neural Membrane Computing Models

As third-generation neural network models, spiking neural P systems (SNP systems) offer distributed, parallel computing with good performance. In recent years, artificial neural networks, which combine insights from biological neural networks with mathematical models, have received widespread attention for their powerful information-processing capabilities. However, SNP systems fall short in numerical computation. To address the limitations of current SNP systems in handling real-valued data, this paper draws on the structure and data-processing methods of neural networks and combines them with membrane computing to propose spiking neural membrane computing models (SNMC models). In an SNMC model, the state of each neuron is a real number, and each neuron contains an input unit and a threshold unit. A new style of rule with time delay is also introduced. The consumption of spikes is controlled by a nonlinear production function, and whether a spike is produced is determined by comparing the value computed by the production function with a critical value. In addition, the Turing universality of the SNMC model as both a number generator and a number acceptor is proved.


Introduction
Membrane computing, an important branch of natural computing, is a class of computing models inspired by the structure, function, and behavior of biological cells. At present, there are three main types of membrane computing models: cell-like P systems, tissue-like P systems, and neural-like P systems. In the past few years, research on neural P systems has mostly focused on spiking neural P systems, a type of computing model inspired by the way neurons in biological neural networks process information in the form of spikes. In 2006, Ionescu et al. first proposed the concept of spiking neural membrane systems [1], which have received extensive attention in recent years as third-generation neural network models. Artificial neural networks imitate the information-processing function of the human nervous system, using network topology to simulate how the brain processes complex information. They combine an understanding of biological neural networks with mathematical models to achieve powerful information-processing capabilities and are widely applied in pattern recognition, information processing, and image processing. Both membrane computing and artificial neural networks are thus inspired by biological neural networks and are, in a certain sense, connected.
The SNP systems have accumulated rich research results in theory and application, especially in theoretical research. By changing the rules, objects, synapses, and structures, many new SNP systems have been established. The changes in rules are mainly reflected in the form of the rules, such as SNP systems with white hole rules [2], SNP systems with communication rules [3], SNP systems with polarizations [4], asynchronous SNP systems [5], SNP systems with inhibitory rules [6], SNP systems with astrocytes [7], and so on. A key difference between the SNMC model and the artificial neural network is that data flow in the SNMC model is carried out by rules and objects, whereas an artificial neural network computes only through mathematical models. The differences between SNMC models and existing SNP systems are as follows.
(1) The forms of the rules differ: SNMC rules contain production functions, and each neuron holds two data units, an input value and a threshold value, whereas neurons in SNP systems hold only an integer number of spikes. (2) The execution steps of the rules differ: when a rule is executed, SNMC models carry out production and comparison steps. (3) The synapses connecting neurons in SNMC models are divided into inhibitory and excitatory synapses, with negative and positive weights, respectively. This can be understood as follows: if a spike passes through an inhibitory synapse, the spike becomes negatively charged.
The structure of the rest of this paper is as follows. In Section 2, we give the concepts of SNP systems and the MP model. In Section 3, the definition of a new type of neural membrane computing model, called the SNMC model, is given; a detailed explanation of the definition is also given, and the working process of the model is explained through an example. In Section 4, through a simulation of a register machine, the Turing universality of the SNMC model is proven in the generating mode and the accepting mode, respectively. Finally, conclusions and future work are given in Section 5.

Related Works
In this section, SNP systems and the general mathematical model of the neuron network are introduced. Moreover, some basic expressions of membrane computing are given.

Spiking Neural P Systems
Definition 1. An SNP system of degree m ≥ 1 is a tuple Π = (O, σ_1, σ_2, ..., σ_m, syn, in, out), where: (1) O = {a} is the alphabet, and a denotes a spike contained in neurons; (2) σ_1, σ_2, ..., σ_m are m neurons of the form σ_i = (n_i, R_i), 1 ≤ i ≤ m, where (a) n_i ≥ 0 is the number of spikes in neuron σ_i; (b) R_i is the finite set of rules, consisting of spiking rules and forgetting rules. Spiking rules have the form E/a^c → a^p; d, with c ≥ p ≥ 1, where d is the time delay and E is a regular expression over the alphabet O. Forgetting rules have the form a^s → λ, s ≥ 1, where λ indicates that the neuron is empty, containing no spikes.
An SNP system can be regarded as a digraph without self-circulation, denoted as G(V, A). V is a set of vertices for neurons. A is the arc set for synapses. Spikes and rules are included in neurons, and the number of spikes changes according to the rules in the neuron. If the spiking rule activates, it means that the neuron contains at least c spikes. Additionally, these c spikes will be consumed and produce p spikes that are sent to connected neurons after d time units. In particular, the parameter d refers to delay, which means that the neurons involved in the delay turn off and refuse to accept external spikes before d time units. For instance, assume d = 2 and the rule in neuron σ i fires at step t, then σ i is closed in steps t and t + 1. The neuron σ i reopens at step t + 2 and receives spikes at the next step.
If a forgetting rule is activated, s spikes are removed from the neuron. The input neuron reads spikes from the environment, and the output neuron outputs the results computed by the system. The register machine has been shown to characterize NRE, the family of recursively enumerable sets of numbers, and is thus equivalent in computing power to the Turing machine. When proving the computational universality of the membrane systems below, NRE is characterized mainly by simulating a register machine, denoted as a tuple M = (m, H, l_0, l_h, I), where m is the number of registers, H is the set of instruction labels, l_0 is the start instruction, l_h is the halting instruction, and I is the instruction set; each instruction in I is labeled by an element of H. The register machine M contains the following three forms of instructions: (1) ADD instructions, of the form l_i: (ADD(r), l_j, l_k), increase the number stored in register r by 1 and nondeterministically choose l_j or l_k as the next instruction. (2) SUB instructions, of the form l_i: (SUB(r), l_j, l_k), produce one of two results according to the number in register r: if the value stored in register r is greater than 0, it is decreased by 1 and the next instruction is l_j; if the value stored in register r is 0, r is left unchanged and the next instruction is l_k.
(3) The halting instruction l_h: HALT stops the computation.
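As a quick illustration, the three instruction types above can be interpreted in a few lines of Python. This is a minimal sketch; the labels l0, l1, lh, the function name, and the example task of moving register 0 into register 1 are our own, not from the paper.

```python
import random

def run_register_machine(instructions, start, halt, registers):
    """Simulate a register machine M = (m, H, l0, lh, I) as described above.

    instructions: dict mapping a label to ("ADD", r, lj, lk) or ("SUB", r, lj, lk).
    ADD increments register r and jumps nondeterministically to lj or lk;
    SUB decrements r and jumps to lj if r > 0, otherwise leaves r and jumps to lk.
    """
    label = start
    while label != halt:
        op, r, lj, lk = instructions[label]
        if op == "ADD":
            registers[r] += 1
            label = random.choice((lj, lk))  # nondeterministic branch
        else:  # "SUB"
            if registers[r] > 0:
                registers[r] -= 1
                label = lj
            else:
                label = lk
    return registers

# Hypothetical program: move the contents of register 0 into register 1.
prog = {
    "l0": ("SUB", 0, "l1", "lh"),
    "l1": ("ADD", 1, "l0", "l0"),  # both branches equal, so this ADD is deterministic
}
regs = run_register_machine(prog, "l0", "lh", [3, 0])
print(regs)  # -> [0, 3]
```

With both ADD branches pointing at the same label, the run is deterministic, matching the deterministic register machine used later in the accepting mode.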

Neural Network
From a biological point of view, a neuron can be regarded as a small processing unit, and the neural network of the brain is made up of many such neurons connected in a certain way. The simplified mathematical model of a neuron is shown in Figure 1. It can be written as Formula (1), u_i = Σ_j w_ij x_j − b_i, the net input of neuron i, where w_ij is the weight between neuron i and neuron j, x_j is the input coming from neuron j, and b_i is the threshold of neuron i, which can be set positive or negative. The neuron activates when the signal it receives exceeds the threshold. This is the MP neuron model, an abstract and simplified model constructed according to the structure and working principle of biological neurons. In our proposed models, we take its activation function to be a nonlinear binary function, Formula (2), which outputs 1 when the net input is greater than 0 and 0 otherwise.
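The MP neuron just described can be sketched in a few lines of Python; the function name mp_neuron is ours, and the threshold and binary activation follow the Formula (1)/(2) reading above.

```python
def mp_neuron(weights, inputs, threshold):
    """MP neuron of Figure 1: the net input is the weighted sum of the
    inputs minus the threshold b_i (Formula (1)); the binary activation
    of Formula (2) emits 1 when the net input is positive, else 0."""
    net = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return 1 if net > 0 else 0

print(mp_neuron([2, -1], [1, 1], 0))  # excitatory plus inhibitory input -> 1
print(mp_neuron([1], [1], 1))         # net input 0 does not exceed threshold -> 0
```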

Spiking Neural Membrane Computing Models
In this section, inspired by artificial neural networks, a new variant of membrane computing model, called the spiking neural membrane computing model, is proposed. It combines neural networks with spiking neural P systems and contains multiple neurons. Neurons are connected by synapses, and the synapses carry weights that represent the relationships between neurons. To facilitate understanding and expression, we use notation similar to that of SNP systems. (2) N = {σ_1, ..., σ_m} is the set of neurons; in each neuron σ_i, pf_i is the production function, which computes the total real value of neuron σ_i, namely the weighted sum of all inputs minus the threshold. If s = 0, neuron σ_i produces no spike, denoted a^0 = λ. (3) W is the set of weights on the synapses, which can be positive or negative; a positive weight denotes an excitatory synapse, and a negative weight denotes an inhibitory synapse. (4) syn ⊆ {1, 2, ..., m} × {1, 2, ..., m} × W is the set of synapses. (5) in and out are the input neuron and the output neuron, respectively. The input neuron converts input data into spikes carrying real values; the output neuron emits the results as a binary string composed of 0s and 1s.
A spiking neural membrane computing model can be regarded as a digraph without self-circulation, where the nodes are neurons and the arcs are the synaptic connections between neurons, as shown in Figure 2. The definition and description of SNMC models are given below. Each neuron contains two kinds of data: a real input value and a threshold value. The way data are transmitted is determined by the rules and the synaptic connections. The SNMC model works as follows. There are two types of synapses, inhibitory and excitatory, distinguished by the sign of the weight: a positive weight means an excitatory synapse, and a negative weight means an inhibitory synapse. The weight also indicates the relationship between neurons. For example, a weight w_ij = 2 between σ_i and σ_j means the synapse between them is excitatory and neuron σ_j receives twice the value that neuron σ_i outputs.
Each neuron contains two data units: the input data unit and the threshold unit. The threshold can be 0, which means the neuron has no threshold. Notably, the neurons in our proposed model contain spikes carrying real values. The input data u_i of a neuron is the linking input data plus the original data: the linking input data come from the connected neurons, and the original data are those the neuron itself already holds. In this way, neuron σ_i has a spike with real value u_i, the sum of the weighted values sent by the connected neurons plus the original value, as in Formula (3), u_i = Σ_j w_ij s_j + ε_i, where w_ij is the weight between σ_i and σ_j, s_j is the output value of neuron σ_j, and ε_i is the original data of neuron σ_i.
For the convenience of calculation in this article, only integers are involved, which can be interpreted as "integer spikes" in this paper. For instance, the real value 2 is shown as a 2 , which can be explained as two spikes in a neuron. Additionally, a −2 is explained by two spikes with a negative charge in the neuron. A negatively charged spike can annihilate one spike.
The output state of the neuron is related to the rules. At each step, each neuron contains at least one firing rule, which is applied sequentially within the same neuron, but neurons work in parallel mode with each other. At a certain moment, if some neurons contain more than one rule that can be applied, they will nondeterministically choose one of the rules to apply. The way the rules are executed and interpreted is given below.
The rule contains two parts, including the production function and the outputting function. The production function is used to calculate the total real value of the current neuron, and the total effect on neuron σ i is the input data minus the threshold, which will cause the state change of neuron σ i . In addition, the neuron has a critical value, which is set to 0. Therefore, the execution steps of rules are divided into three steps: production, comparison, and outputting.
(1) Production step. When neuron σ_i receives weighted spikes with real values u_{1,i}(t_1), u_{2,i}(t_1), ..., u_{k,i}(t_1) from its k connected neurons at time t_1, and its threshold is b_i, the production function computes the total real value by Formula (4), i.e., pf_i = Σ_{j=1}^{k} u_{j,i}(t_1) − b_i.
(2) Comparison step. In this step, the result pf_i computed by Formula (4) is compared with the critical value 0; this determines whether the real value output in the next step is 1 or 0. (3) Outputting step. According to the result of the comparison step, if pf_i > 0, then s = 1, and the rule E/a^{pf(u_i − b_i)|_0} → a; t_1, t_2 is applied to output a spike with real value 1. If pf_i ≤ 0, then s = 0, the rule outputs λ, and no spike is sent to the connected neurons. Firing a rule requires two conditions: (1) if the number of spikes contained in neuron σ_i is k, then a^k belongs to the language described by the regular expression E, and k is greater than or equal to the number of spikes consumed, u_i, i.e., k ≥ u_i; (2) the neuron can only be activated when it receives a signal sent by the connected neurons.
Additionally, t_1 and t_2 after the rule refer to time delays: t_1 is the time at which the neuron receives spikes, and t_2 is the rule execution time (from the production step to the outputting step). During a delay of t_1 steps, the neuron is in a closed state. If there is no delay, the firing rule is abbreviated to E/a^{pf(u_i − b_i)|_0} → a^s. Moreover, when a neuron containing a spike with real value u fires, the real value of its input unit is reset to 0 after the outputting step, and the threshold unit is unchanged; in other words, once a rule fires in the neuron, the input value is consumed. Note that if the input values of an SNMC model are natural numbers, neurons have no thresholds, and the weights are positive integers, then SNP systems become a special case of our proposed SNMC models.
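The production, comparison, and outputting steps can be sketched together as follows. This is an illustrative simplification that assumes the regular-expression check and the time delays are already satisfied; the function name and signature are ours.

```python
def fire_rule(weighted_inputs, original, threshold, critical=0.0):
    """One rule application in an SNMC neuron, split into the three steps above.

    weighted_inputs: the values w_ij * s_j arriving over synapses; values
    coming through inhibitory synapses are negative (negatively charged spikes).
    original: the data epsilon_i the neuron itself already holds.
    Returns the emitted value s (1 or 0) and the reset input unit.
    """
    u = sum(weighted_inputs) + original   # input unit, as in Formula (3)
    pf = u - threshold                    # production step, Formula (4)
    s = 1 if pf > critical else 0         # comparison step against the critical value 0
    return s, 0.0                         # outputting step; the input unit is consumed

print(fire_rule([2.0, -1.0], 0.0, 0.0))  # pf = 1 > 0 -> (1, 0.0)
print(fire_rule([1.0], 0.0, 1.0))        # pf = 0 -> no spike -> (0, 0.0)
```

The first call also shows the annihilation effect: a spike through an inhibitory synapse cancels one arriving through an excitatory synapse before the threshold is applied.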
At each step, the configuration of the system Π consists of the real values of the input units and threshold units of all neurons, denoted C_t = <u_1, b_1, ..., u_m, b_m>. As firing rules are applied, the configuration of the system Π changes over time; the transition from the configuration at time t to the configuration at time t + 1 is denoted C_t ⇒ C_{t+1}. When the computation reaches a configuration in which no rule can be activated, the computation stops, and this configuration is denoted C_h. The computational process of the system can thus be regarded as an ordered, finite sequence of configuration transitions from the initial configuration to the halting configuration.

When an SNMC model works in the generating mode, in the initial configuration all neurons are empty except neuron σ_1, which means that all registers are empty except Register 1. The computation starts from instruction l_0, stops when the halting instruction l_h is reached, and the number then stored in Register 1 is the generated number. The computational result is associated with the firing times of neuron σ_out: it is the time interval between the two nonzero values, that is, the time it takes neuron σ_out to send the two spikes to the environment. Suppose neuron σ_out sends a spike to the environment for the first time at time t_1 and the environment receives the second spike from σ_out at time t_2; then the computational result is t_2 − t_1. In addition, when an SNMC model works in the accepting mode, an input neuron σ_in is added to read values from the environment, and all neurons are empty at the beginning. The number is fed into the system in the form of an encoded spike train, and the time interval between the two firings of σ_in is the input number.
For instance, the input number is n, n ∈ N + , and it is encoded as a spike train 10 n−1 1, where 1 represents spike and 0 is empty. The interval between two spikes is (n + 1) − 1 = n, which is the input number.
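The encoding and its inverse can be sketched directly (the helper names encode and decode are ours):

```python
def encode(n):
    """Encode n >= 1 as the spike train 1 0^{n-1} 1, where 1 is a spike and 0 is empty."""
    return [1] + [0] * (n - 1) + [1]

def decode(train):
    """Recover the number as the time interval between the two spikes."""
    t1, t2 = (t for t, bit in enumerate(train) if bit == 1)
    return t2 - t1

print(encode(4))          # [1, 0, 0, 0, 1]
print(decode(encode(4)))  # 4
```

The same interval convention is used for the output: the result of a computation is the gap between the first two spikes that σ_out sends to the environment.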
The family of all sets of numbers is denoted N_α(Π), α ∈ {2, acc}. When α = 2, N_2(Π) is the set of results computed by the system Π; the "2" indicates that only the first two firing times of neuron σ_out are considered. When α = acc, N_acc(Π) is the set of all numbers accepted by Π. The sets generated and accepted by SNMC models are denoted N_α SNMC(pf(2))^β_m, where pf(2) means each production function contains two variables, m ≥ 1 is the number of neurons, and β ≥ 1 is the number of rules. When the number of neurons is unbounded, m is replaced by the symbol *. In the following proofs of the computational universality of SNMC models, NRE, the family of recursively enumerable sets of natural numbers, is characterized mainly by simulating the register machine.

Illustrative Example
This example consists of 5 neurons and several synapses and explains the workflow of the system Π, as shown in Figure 3. Neuron σ_1 contains a real input value 2 and a threshold value 1, which exist in the neuron in the form of spikes [a^2, a]. Suppose that at Time 1, neuron σ_1 fires, since pf = 2 − 1 = 1 > 0, and a spike is generated at Time 2, that is, s = 1. Thus, neuron σ_3 receives two spikes from neuron σ_1. At Time 3, σ_3 generates one spike because its pf value is 1. Additionally, neuron σ_2 contains two rules, one of which is selected for execution nondeterministically. Hence, there are two cases, depending on the rule selected in σ_2. (1) Assume that the rule a^{pf(u_2 − b_2)|_0} → a^s; 0, 1 is applied. Neuron σ_2 receives a spike from σ_1 at Time 2, and the production function executes at Time 3, giving pf = 1 − 1 = 0. Hence, at Time 4, neuron σ_2 produces and sends an empty to neuron σ_4. At the same time, neuron σ_2 also receives one spike from neuron σ_3, and the rule a^{pf(u_2 − b_2)|_0} → a^s; 1, 0 is used; since its pf value is 0, neuron σ_2 again sends an empty to neuron σ_4. Neuron σ_4 has not received any spikes, so it produces an empty at Time 5, while neuron σ_5 receives two spikes from neuron σ_3. At Time 6, the rule in neuron σ_5 fires with pf = 2 − 1 = 1 > 0, so it produces one spike and sends it out at the same time.
(2) Assume that the rule a^{pf(u_2 − b_2)|_0} → a^s; 1, 0 is used. Then neuron σ_2 is in the closed state before Time 3 and does not receive any spikes. At Time 3, the production function of neuron σ_3 executes and produces one spike to send to neurons σ_2 and σ_5. Thus, at Time 3, neuron σ_2 receives two spikes: one from neuron σ_1 and the other from neuron σ_3. At Time 4, since pf = 2 − 1 = 1 > 0 in σ_2, it has s = 1, and σ_2 produces a spike and sends it to neuron σ_4. Neuron σ_4 receives three spikes, and at Time 5 its production function gives pf = 3 − 2 = 1 > 0, so s = 1. Meanwhile, neuron σ_5 receives two spikes from neuron σ_3 and one negative spike from neuron σ_4, so neuron σ_5 contains one spike. In this way, no spike is generated or sent out at Time 6, because pf = 1 − 1 = 0.
To conveniently display the changes of the neurons at each time, a configuration graph is given in Figure 4. The configuration in this figure lists neurons σ_1, σ_2, σ_3, σ_4, and σ_5 in order and is composed of the input units and threshold units in the form C = <u_1, b_1, u_2, b_2, u_3, b_3, u_4, b_4, u_5, b_5>. When rules are still being executed at a given moment, the corresponding spikes are considered not yet completely consumed, denoted "u_i".

Turing Universality of SNMC Models
In this section, the computational power of SNMC models is proved by showing that they work as number generators and number acceptors, respectively.

Generating Mode
In the generating mode, the model contains the output neuron σ_out. The Turing universality of SNMC models as generators is investigated by simulating three instructions: the ADD instruction, the SUB instruction, and the halting instruction. Accordingly, three modules, named the ADD, SUB, and FIN modules, are used for the simulation. Assume that each neuron carries a certain initial threshold; it is stipulated that each neuron corresponding to an instruction has an initial threshold a, and each neuron corresponding to a register has an initial threshold a.
The module SUB, shown in Figure 6, simulates the SUB instruction l_i: (SUB(r), l_j, l_k) of the register machine. The configuration of module SUB is C_t = <u_1, b_1, ..., u_7, b_7>, corresponding to the input units and threshold units of neurons σ_{l_i}, σ_{v_1}, σ_r, σ_{v_2}, σ_{v_3}, σ_{l_j}, and σ_{l_k}. Assume that neuron σ_{l_i} receives two spikes at time t, so the configuration is C_t = <2, 1, 0, 1, x, 1, 0, 1, 0, 1, 0, 1, 0, 1>. At time t + 1, the production function computes a pf value equal to 1 (greater than 0), so one spike is generated at time t + 2 and sent to σ_{v_1} and σ_r. At time t + 2, neuron σ_{v_1} receives two spikes and neuron σ_r receives one spike, but whether neuron σ_r is empty is unknown. According to the number of spikes contained in σ_r, the results divide into the following two cases.
It is notable that, although multiple SUB instructions may operate on the same register, no incorrect simulation of M is caused by interference among the SUB modules in Π. Assume that SUB instructions l_i and l_{i′} share the same register r; when instruction l_i works, we must ensure that instruction l_{i′} is not affected. Assume that the neurons connected to register r in instruction l_{i′} are σ_{v_2′} and σ_{v_3′}, corresponding to σ_{v_2} and σ_{v_3} in Figure 6. According to the simulation of module SUB above, when no number is stored in register r, neuron σ_r generates no spike, so instruction l_{i′} is unaffected. When register r is not empty, neuron σ_r produces one spike; after passing through the synapses, neuron σ_{v_2′} receives one spike and neuron σ_{v_3′} receives a spike with a negative charge. According to the rules of neurons σ_{v_2′} and σ_{v_3′}, their pf values do not exceed 0, so no spike is generated. Therefore, instruction l_{i′} is unaffected while instruction l_i is simulated. In this way, the simulation of module SUB is proven correct.
The function of module FIN is to output the computational result (shown in Figure 7). Suppose that the number in Register 1 is n, that is, there are n spikes in neuron σ_1, and that neurons σ_{l_h} and σ_out each contain a spike. Suppose that at time t, neuron σ_{l_h} receives two spikes; as shown in Figure 7, the configuration is C_t = <2, 1, 0, 1, 0, 1, 0, 1, n, 1, 0, 0, 1, 1>. Therefore, at time t + 1, from the pf value of σ_{l_h} (pf = 2 − 1 = 1 > 0), one spike is generated and sent to neurons σ_{h_1} and σ_{h_2}. Neurons σ_{h_1} and σ_{h_2} both receive two spikes, and their rules are activated. Since the pf values of σ_{h_1} and σ_{h_2} are pf = 2 − 1 = 1 > 0, at time t + 2 both neurons generate one spike. Neurons σ_{h_3} and σ_{h_2} both receive two spikes from neuron σ_{h_1}, and neuron σ_{h_1} also receives two spikes from σ_{h_2}. Hence, from time t + 3 until the computation stops, neurons σ_{h_1} and σ_{h_2} keep repeating the operation of time t + 2. At time t + 3, the rule of neuron σ_{h_3} is activated with pf = 2 − 1 = 1 > 0, so one spike is sent to neurons σ_out and σ_1, respectively. Neuron σ_out then contains two spikes, so at time t + 4, according to the rule in σ_out, one spike is generated and sent to the environment. At the same time, neuron σ_out receives a spike from neuron σ_{h_3}; it is worth noting that neuron σ_{h_3} sends one spike to σ_out and σ_1 at every step after time t + 3, until the computation stops. At time t + 3, neuron σ_1 receives a spike with a negative charge, so it contains n − 1 spikes at this time. However, according to the rule in neuron σ_1, the rule can fire if and only if the number of spikes in neuron σ_1 is not more than 2. Hence, only at time t + n, when neuron σ_1 contains exactly two spikes, is the rule activated. At time t + n + 1, since pf = 1 > 0 in neuron σ_1, it generates a spike and sends it to neuron σ_{h_4}.
At the next step, the rule of σ_{h_4} fires. Since its pf value is greater than 0 and the rule execution has a one-step delay, one spike is generated and sent to neuron σ_out at time t + n + 3. At this time, neuron σ_out also gets a spike from neuron σ_{h_3}, so it contains two spikes. Therefore, at time t + n + 4, neuron σ_out generates one spike and sends it to the environment. The computational result of the SNMC model is defined as the time interval between the first two nonzero values the output neuron sends to the environment, that is, (t + n + 4) − (t + 4) = n, which is consistent with the number n stored in Register 1. Therefore, the FIN module outputs the computational result correctly.
In this way, the computation power of SNMC models in generating mode is investigated by simulating the register machine correctly through three modules.

Accepting Mode
The computational power of an SNMC model in the accepting mode is obtained by simulating the deterministic register machine. We construct an SNMC model including module INPUT, module SUB, and module ADD′ to simulate the deterministic register machine. Module SUB is the same as in the generating mode, so its proof is not repeated here; module ADD′ simulates the deterministic ADD instruction l_i: (ADD(r), l_j). Proof of Theorem 2. We only need to prove that module INPUT and module ADD′ can simulate the register machine. The function of module INPUT is to read a number, encoded as a spike train, into the model, as shown in Figure 8. Assume that the number n is to be read into the SNMC model by module INPUT. First, encode n as a spike train 1 0^{n−1} 1, in which the time interval between the two spikes is n. When the input neuron σ_in reads the symbol 1, the input value u is 1; when it reads 0, the input value u is 0. Module INPUT then stores the number in Register 1: if Register 1 stores the number n, neuron σ_r receives n spikes. At the initial moment, there is one spike in neuron σ_in, one spike in σ_{c_1}, and one spike with a negative charge in σ_{c_2}, i.e., the initial configuration is C_0 = <1, 0, 1, −1, −1, 1, 0, 1, 0, 1>. Suppose that at time t, neuron σ_in receives the first spike from the environment, so there are two spikes in σ_in. According to the rule a^{pf(u_in − b_in)|_0} → a^s, the pf value equals 2 > 0, so a spike is generated. At time t + 1, neuron σ_{c_1} receives one spike with a negative charge, which annihilates the spike it contains, leaving no spike in σ_{c_1}. According to the rule a^{pf(u_{c_1} − b_{c_1})|_0} → a^s, the pf value is pf = 0 + 1 = 1 > 0, so σ_{c_1} produces one spike and sends it to neuron σ_1 at the next step.
Neuron σ_{c_2} also receives two spikes from σ_in at time t + 1, one of which is annihilated by the negatively charged spike, so σ_{c_2} then holds one spike. Its pf value is pf = 1 − 1 = 0, so neuron σ_{c_2} produces an empty at time t + 2. Simultaneously, neuron σ_in receives the empty (0) from the environment and sends the empty to neurons σ_{c_1} and σ_{c_2} according to its rule. Therefore, at time t + 2, neuron σ_{c_1} is activated again with pf = 0 + 1 = 1 > 0, so σ_{c_1} produces one spike and sends it to neuron σ_1. Up to time t + n − 1, neuron σ_in keeps receiving the empty, so neuron σ_1 has received n spikes by time t + n. At time t + n, σ_in obtains the second spike; according to the value pf = 1 − 0 = 1 > 0, σ_in produces one spike. In this way, σ_{c_1} receives one spike with a negative charge, and σ_{c_2} receives two spikes. At time t + n + 1, neuron σ_{c_1} fires with pf = −1 + 1 = 0, so no spike is generated. Meanwhile, according to the rule in neuron σ_{c_2}, with pf = 2 − 1 = 1 > 0, a spike is produced, and two spikes are sent to neuron σ_{l_0}. The module INPUT simulation is now complete.
The module ADD′, used to simulate the deterministic ADD instruction l_i: (ADD(r), l_j), is shown in Figure 9. When neuron σ_{l_i} receives two spikes, one spike is produced and transmitted to neuron σ_{l_j}, because pf = 2 − 1 = 1 > 0; hence, neuron σ_{l_j} receives two spikes, and neuron σ_r also gets one spike from neuron σ_{l_i}. Therefore, the operation of increasing register r by 1 is successfully simulated by module ADD′. Based on the above proof, the register machine can be correctly simulated by the SNMC model working in the accepting mode. Therefore, N_acc SNMC(pf(2))^2_* = NRE.

Conclusions
Inspired by SNP systems and artificial neural networks, this paper presents a new membrane computing model called the spiking neural membrane computing model. The model is composed of multiple neurons and the synapses connecting them. The weights on the synapses represent the relationships between neurons, and their sign reflects the type of synapse: an inhibitory synapse has a negative weight, and an excitatory synapse has a positive weight. Each neuron contains two data units, the input value unit and the threshold unit, both of which exist in the form of spikes. In this model, activating a rule in a neuron requires two conditions: one is that the spikes in the neuron satisfy the regular expression, and the other is that the neuron can only be activated when it receives signals from the connected neurons.
The operation of the rule needs to be divided into three stages: production step, comparison step, and outputting step. Note that when the generated positive real value is transmitted to the neuron through the inhibitory synapse, the neuron receives a negative value; this means that there is a spike with a negative charge in the neuron. A spike with a negative charge and a spike with a positive charge can cancel each other out. In addition, we also proved the computing power of the SNMC model through Theorems 1 and 2. When the model is in the generating mode, the Turing universality of the SNMC model as a number generator is proven by simulating the nondeterministic register machine. Additionally, the Turing universality of the SNMC model as a number acceptor is proven by simulating a deterministic register machine.
The following are the advantages of the SNMC model:
1. Weight and threshold values are introduced into the SNMC model, and the rule mechanism is improved compared with SNP systems, so the model combines the advantages of membrane computing and neural networks and can, in particular, extend applications that process real-valued information.
2. The rules of the SNMC model involve production functions, whose calculation method originates from the data-processing methods of neural networks, so an effective combination of the SNMC model and algorithms can be realized in the future.
3. The computing power of the SNMC model has been proven: it is a Turing-universal computational model.
The SNMC model extends current SNP systems and comprehensively considers their relevant elements, such as time delay, threshold, and weight. The way data are processed in SNMC models opens further possibilities for the development of SNP systems. We found that if the potential values of an SNMC model are natural numbers, there is no threshold, and the weights are only positive integers, then most currently existing SNP systems become a special case of our proposed SNMC model. The main difference between SNMC models and current SNP systems lies in the operating mechanism of the rules. For example, comparing the SNMC model with SNP systems with weights and thresholds (WTSNP) [51], the two have different rule forms, roles of thresholds, and rule operation mechanisms. The main difference from numerical SNP systems (NSNP) [9] is that NSNP systems embed the rule form of numerical P systems into SNP systems, and their objects are not spikes; the firing rules also operate differently. In the SNMC model, the potential value is consumed in the form of spikes, and one of two results, 0 or 1, is produced according to the comparison made by the production function. Additionally, the SNMC model works by mapping the production function to an activation function.
The main difference between the SNMC model and artificial neural networks is that the data flow of the SNMC model is carried out by rules and objects, whereas artificial neural networks compute only through mathematical models. The proposed SNMC model not only retains the distributed parallel computing of membrane computing but also adopts the data-processing methods and structure of neural networks. Therefore, the model can be used in the future to deal with practical problems and to expand the applications of membrane computing; for example, further research can be carried out on image processing and algorithm design.