Article

A New Memory-Processing Unit Model Based on Spiking Neural P Systems with Dendritic and Synaptic Behavior for Kronecker Matrix–Matrix Multiplication

Instituto Politécnico Nacional, ESIME Culhuacan, Av. Santa Ana No. 1000, Ciudad de México 04260, Mexico
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(22), 3663; https://doi.org/10.3390/math13223663
Submission received: 18 September 2025 / Revised: 14 October 2025 / Accepted: 12 November 2025 / Published: 15 November 2025
(This article belongs to the Section E: Applied Mathematics)

Abstract

Currently, Kronecker Matrix–Matrix Multiplication plays a crucial role in many advanced applications across science and engineering, such as Quantum Computing (Tensor Representation of Quantum States, Quantum Gate Construction), Machine Learning and Data Science (Kernel Methods, Tensor Decompositions), and Signal and Image Processing (Multi-dimensional Filtering, Compression Algorithms). However, the implementation of the Kronecker Matrix–Matrix Multiplication increasingly relies on systems with enhanced computational capabilities. Specifically, current implementations consume large amounts of external memory and require a large number of processing units to perform this operation. As is commonly acknowledged, cutting-edge high-performance computing schemes still face limitations in terms of energy and performance due to the bottleneck in data transfer between processing units and memory. To mitigate this limitation, memory-processing units (MPUs) enable direct computation on in-memory data, reducing latency and eliminating the need for data transfer. On the other hand, spiking neural P systems, with their inherent parallelism and distributed processing capabilities, are well-suited as foundational components for implementing such memory architectures efficiently. From the mathematical point of view, we present for the first time a neural, synaptic, and dendritic model to support the Kronecker Matrix–Matrix Multiplication. To this end, the proposed spiking neural P system, together with cutting-edge variants, such as anti-spikes, communication on request, synaptic weights, and dendritic–axonal delays, facilitates the creation of neural memory cells and spike-based routers. Hence, these elements potentially allow the design of novel processing-memory architectures that markedly enhance data transfer efficiency between computational units and memory.

1. Introduction

In recent years, spiking neural P (SN P) systems have emerged as a novel class of parallel and distributed computational models [1]. These systems have garnered significant attention due to their proven effectiveness in tackling complex engineering challenges, including information security [2], signal processing [3], and arithmetic circuit design [3]. Despite notable achievements, ongoing efforts continue to enhance the computational power of standard SN P systems. A key limitation lies in their reliance on the soma of the neuron as the primary processing unit, which is governed by complex spiking rules. This dependence on sophisticated rule activation tends to constrain the overall computational efficiency of these systems [4]. To address this issue, researchers have investigated additional neuronal properties from both theoretical and biological perspectives. Drawing inspiration from neurobiological mechanisms, several advanced variants of SN P systems have been proposed [5]. These include features such as propagation delays [6], permeability [7], inhibitory synapses [8], structural plasticity [9], autapses [9], dendritic trunk/dendritic delays [10], myelin and dendritic spines [11], inhibitory rules [12], and microglia [13]. Beyond biologically inspired enhancements, other variants have been introduced based on abstract neural computing principles. These include stochastic rule application [14], extended channel rules [15], target indication mechanisms [8], multiple channels [16], colored spikes [17], scheduled synapses [18], request and antispike rules [19], weighted synapses [20], and communication-on-request protocols [21]. From a mathematical standpoint, incorporating these diverse features has markedly improved the computational capabilities of SN P systems. 
Recent developments have focused on designing compact arithmetic circuits, such as adders, subtractors, dividers, and multipliers, which outperform traditional binary circuits in terms of efficiency [22]. Consequently, the evolution of SN P system variants is paving the way for more powerful high-performance computing solutions, which are increasingly vital in real-world applications [3].
On the other hand, Duchen et al. [23] recently introduced a spike-based memory cell capable of executing basic arithmetic operations, where the results are stored directly within the same cell. This innovation leverages spiking neural P systems and their advanced variants, such as dendritic behavior and astrocyte-like control mechanisms. Although this represents a significant advancement, several challenges remain in the development of a fully functional memory-processing unit (MPU) based on spiking neural P systems. Nevertheless, these emerging architectures hold promise for enabling matrix–matrix multiplication, which represents a core operation in computationally intensive domains, such as quantum computing, image processing, classification systems, the Internet of Things (IoT), and data-intensive computing [24]. Moreover, this approach offers promising advancements in memory-intensive computing, rather than traditional compute-intensive paradigms. In such applications, transferring large datasets between processing cores and memory is both time and energy consuming, creating a significant performance bottleneck. To address this challenge, processing-in-memory (PIM) technology, also referred to as memory-processing unit (MPU) architecture, has recently emerged [25]. This innovative solution enables computation to occur directly within the memory itself, effectively minimizing data movement and alleviating the bottleneck [26]. However, most existing approaches rely on near-memory architectures, where memory modules and processing units are positioned in close proximity but remain physically distinct. As a result, memory bandwidth remains constrained, limiting overall system performance. In this work, we introduce a novel memory-processing unit (MPU) model based on spiking neural P systems along with their cutting-edge variants, in which neurons process and store computational results within the same unit.
This architecture is specifically designed to perform Kronecker matrix–matrix multiplication, which is a fundamental operation widely used in advanced applications, such as quantum computing [27], image analysis [28], pattern classification [29], and information security [30].

2. The Proposed Memory-Processing Unit Model Based on Spiking Neural P Systems, Π MEM

Before presenting the proposed memory processing unit ( Π M E M ), we provide some concepts and necessary notations related to the formal definition of the SN P systems and automata theory. In general terms, the SN P systems are mainly composed of neurons interconnected through synapses. Specifically, the behavior of the soma is regulated by rules (forgetting rule and firing rule) to process the information, which is encoded by means of spikes [1]. Recently, several variants of the SN P systems have been proposed to increase the computational capabilities of the soma and their synapses. In this work, we also describe the formal definition of these new variants of the SN P systems, such as anti-spikes [31,32], communication on request [32], synaptic weights [33] and dendritic–axonal delays [10]. In particular, the use of these variants has allowed us to build a memory-processing unit.
Definition 1.
We provide the formal definition of the SN P systems of degree m ≥ 1, along with their variants, as follows:
Π_SNP = (O, σ_1, σ_2, …, σ_m, syn, in, out)
where
1.
O = {a_1, a_2, …, a_k, ā_1, ā_2, …, ā_l} is the alphabet, where a_i (i ∈ {1, 2, …, k}) represents a type of spike, and ā_j (j ∈ {1, 2, …, l}) represents a type of anti-spike [31,32].
2.
σ_1, σ_2, …, σ_m denote neurons of the form σ_i = (v_i, R_i), where
(a) 
v_i is a multiset over the alphabet O.
(b) 
R_i denotes the finite set of spiking rules, of the following forms:
i.
E/a^c → a^p; d, where E is a regular expression over the alphabet O, c ≥ 1, and 1 ≤ p ≤ c. d is the delay, in time units, for this rule.
ii.
a^s → λ, for s ≥ 1 and a^s ∉ L(E).
iii.
E/Q, where Q is a finite set of queries of the forms (a_k^p, σ_j) and (a_k^∞, σ_j). The query (a_k^p, σ_j) indicates that neuron σ_i requests p copies of a_k from neuron σ_j, whereas (a_k^∞, σ_j) indicates that all spikes a_k are requested from neuron σ_j [32].
3.
syn ⊆ {1, 2, …, m} × {1, 2, …, m} represents the synapses. Each synapse carries a synaptic weight w, indicating that p · w spikes are received by the target neuron σ_i [33].
4.
in, out ∈ {1, 2, …, m} denote the input and output neurons, respectively.
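As an illustration, the tuple of Definition 1 can be encoded as plain data structures. The following Python sketch is purely illustrative: the class and field names (SNPSystem, Neuron, Rule) are our own and do not appear in the formal definition.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """Spiking rule E/a^c -> a^p; d (produced = 0 encodes a forgetting rule)."""
    regex: str        # regular expression E over the alphabet O
    consumed: int     # c, spikes consumed when the rule fires
    produced: int     # p, spikes emitted (0 for a forgetting rule)
    delay: int = 0    # d, time-unit delay of the rule

@dataclass
class Neuron:
    spikes: dict = field(default_factory=dict)  # multiset over O, e.g. {"a": 3}
    rules: list = field(default_factory=list)   # the finite set R_i

@dataclass
class SNPSystem:
    alphabet: set  # O: spike and anti-spike symbols
    neurons: list  # sigma_1, ..., sigma_m
    syn: dict      # (i, j) -> synaptic weight w
    inp: int       # index of the input neuron
    out: int       # index of the output neuron

# Tiny instance: two neurons joined by a weight-2 synapse.
sys_ = SNPSystem(
    alphabet={"a", "a_bar"},
    neurons=[Neuron({"a": 2}, [Rule("a*", 1, 1)]), Neuron()],
    syn={(0, 1): 2},
    inp=0,
    out=1,
)
```

With a weight-2 synapse, a rule firing p spikes in neuron 0 would deliver p · w = 2p spikes to neuron 1, matching item 3 of the definition.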
Once the formal definition of the SN P systems is provided, we present the proposed MPU based on SN P systems, Π M E M , which is configured to compute mathematical operations and store the results in the same unit. To achieve this, we propose a basic spiking neural memory cell, Π m ( u , v ) , to perform arithmetic operations, such as addition, subtraction, and multiplication, and two router neurons ( σ r t and σ r p ) to perform write/read operations between two integer numbers (u and v). The functionality of each element is described below.
  • Basic spiking neural memory cell, Π m ( u , v ) .
    The proposed neural memory cell, Π m ( u , v ) , is composed of a set of dendritic delays ( Δ t 1 , Δ t 2 , , Δ t n ), a set of neurons ( σ s 1 , σ s 2 , , σ s n ), a set of synaptic weights ( w 1 , w 2 , , w n ), and a neuron ( σ q ( u , v ) ), as shown in Figure 1.
    Figure 1. Schematic of basic neural memory cell, Π m ( u , v ) .
    The proposed spiking neural memory cell Π m ( u , v ) performs the dot product between two vectors ( A · B ) in parallel. To perform this operation, the vectors A and B must comply with the data format under the following criteria:
    -
    The components of vector A are represented with the synaptic weights ( w 1 , w 2 , , w n ).
    -
    The components of vector B are decomposed into individual integer digits and encoded as spike trains.
    -
    The signs of the components of each vector (A and B) are represented with an anti-spike associated with each component.
    -
The magnitude of each component of the result is represented by the number of spikes ( a g 1 , a g 2 , , a g n ) accumulated in a specific neuron ( σ s 1 , σ s 2 , , σ s n ). If the result of a component is negative, then an anti-spike ( a ¯ g 1 , a ¯ g 2 , , a ¯ g n ) is associated with that magnitude.
    -
All components of the resulting vector are stored in the same neuron ( σ q ( u , v ) ), which can be read and written with their respective request rules.
    Specifically, the operation of the proposed neural memory cell, Π m ( u , v ) , is described as follows:
    1.
    At the initial simulation step ( t = 0 ), the magnitudes of the components of the vector A are assigned to each synaptic weight ( w 1 , w 2 , , w n ), while the signs of each component are loaded as anti-spikes in the dendritic delays ( Δ t 1 , Δ t 2 , , Δ t n ).
    2.
At simulation step t = 1 , each neuron ( σ s 1 , σ s 2 , , σ s n ) receives anti-spikes, which represent the signs of the vectors (A and B), from the dendritic delays ( Δ t 1 , Δ t 2 , , Δ t n ) and the inputs ( s i g n B ( u , v ) 1 , s i g n B ( u , v ) 2 , …, s i g n B ( u , v ) n ), respectively. At this time, each neuron ( σ s 1 , σ s 2 , , σ s n ) activates some of its rules to determine the sign of each component of the resulting vector. Here, the forgetting rule ( a ¯ 2 → λ ) is activated when the result is positive, and the firing rule ( a ¯ → a ¯ g ) is activated when the sign is negative. In general, when the sign of a component of the resulting vector is negative, it is sent to the neuron ( σ q ( u , v ) ).
    3.
From t = 2 to t = 10 , each component of the vector B is fed to the neurons ( σ s 1 , σ s 2 , , σ s n ) in the form of trains of colored spikes when the firing rules on the synapses ( a → a g x , where 1 ≤ x ≤ n ) are activated. In this time interval, the neuron ( σ q ( u , v ) ) applies its request rules ( a * / Q ( a g 1 , σ s 1 ) , a * / Q ( a g 2 , σ s 2 ) , …, a * / Q ( a g n , σ s n ) ).
    Therefore, the neuron ( σ q ( u , v ) ) contains the results of the dot product ( A · B ), where each component of the result is expressed with a color of spikes with its respective anti-spike sign when the component is negative.
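The sign and magnitude handling described in steps 1–3 can be mimicked in software. The sketch below is a functional analogue rather than the P system itself: magnitudes are plain integers instead of spike trains, and the anti-spike cancellation rule ( a ¯ 2 → λ ) is reduced to a parity test on the two sign flags; all names are illustrative.

```python
def memory_cell_dot(a, b):
    """Functional analogue of one cell Pi_m(u, v): components of A act as
    synaptic weights, components of B arrive as (magnitude, is_negative)
    pairs; returns the per-component products and their accumulated sum."""
    assert len(a) == len(b)
    components = []
    for w, (mag_b, neg_b) in zip(a, b):
        # The weight w amplifies the mag_b incoming spikes on this branch.
        spikes = abs(w) * mag_b
        # Anti-spike analogue: two matching signs cancel (result positive);
        # a single surviving anti-spike marks a negative component.
        negative = (w < 0) != neg_b
        components.append(-spikes if negative else spikes)
    return components, sum(components)

# Example: A = (3, -2) as weights, B = (4, 5), both positive.
comps, total = memory_cell_dot([3, -2], [(4, False), (5, False)])
# comps == [12, -10], total == 2
```

In the actual model these per-component products accumulate in parallel across the branches ( σ s 1 , , σ s n ); the loop here is only a sequential stand-in.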
Therefore, each basic spiking neural memory cell, Π m ( u , v ) , can perform parallel computation of vector arithmetic operations. To compute matrix operations, such as the inner product and the Kronecker product, these components need to be interconnected by router neurons ( σ r t and σ r p ), as shown in Figure 2. Under this configuration, we denominate this component the spiking neural memory core, Π c ( s , t ) .
    Figure 2. Schematic of spiking neural memory core, Π c ( s , t ) .
  • Spiking neural memory core, Π c ( s , t ) .
Given a matrix A of size i × i and a matrix B of size j × j , the spiking neural memory core, Π c ( s , t ) , performs the matrix multiplication ( C = A × B ). Here, matrix multiplication is possible when both matrices A and B have the same dimensions (i.e., i = j ). In general, to calculate this operation, the spiking neural memory core, Π c ( s , t ) , performs the dot product between the rows of matrix A and the corresponding columns of matrix B through the basic spiking neural memory cells ( Π m ( 1 , 1 ) , Π m ( 1 , 2 ) , , Π m ( i , 1 ) , , Π m ( i , j ) ). To obtain the element in row u and column v of the resulting matrix C = A × B , the spiking neural memory core performs the dot product between row u of A and column v of B.
In general, the proposed spiking neural memory core, Π c ( s , t ) , computes the matrix multiplication as follows:
    -
    Writing stage. At this stage, the data is fed to each spiking neural memory cell. Specifically, the components of the matrices, A and B, are sent from the buses ( w r _ b u s and d a t a _ i n _ b u s ) to the basic spiking neural memory cells ( Π m ( 1 , 1 ) , Π m ( 1 , 2 ) , , Π m ( i , 1 ) , , Π m ( i , j ) ). This is achieved by routing the spikes through router neurons ( σ r t and σ r p ) under the following considerations:
    1.
Each component of matrices A and B is represented with a spike of type a w A ( u , v ) or a w B ( u , v ) , respectively. On the other hand, the signs of the components are represented with spikes of the type a d ( u , v ) . Here, u and v denote matrix indices; in this way, spikes can be distinguished between different neurons.
    2.
The writing stage begins when the sign spikes of each component are sent via w r _ b u s to the router neuron σ r t . Here, the firing rule ( a d ( u , v ) → a d ( u , v ) ( { r p } ) ) is activated, and these sign spikes are sent to the router neuron σ r p . Therefore, the rules on the synapses ( a d ( i , j ) → a ) of the systems Π m ( i , j ) are activated according to their indices.
    3.
Subsequently, the spikes of the matrix components ( a w A ( u , v ) and a w B ( u , v ) ) enter the router neuron ( σ r t ) via the input d a t a _ i n _ b u s . As in the previous case, to transfer the matrix components, the spikes are distributed from the router neuron ( σ r p ) to the different basic spiking neural memory cells ( Π m ( 1 , 1 ) , Π m ( 1 , 2 ) , , Π m ( i , 1 ) , , Π m ( i , j ) ).
    -
Processing stage. Once the matrix components are loaded into memory, the calculation of the product ( C = A × B ) begins. Each basic neural memory cell, Π m ( u , v ) , performs the dot product between row u of A and column v of B. The resulting matrix C is an i × i matrix, whose components can be generalized as c u v = a u 1 · b 1 v + a u 2 · b 2 v + a u 3 · b 3 v + … + a u i · b i v . This process is repeated for all elements of the resulting matrix C; thus, the resulting components are stored in the basic spiking neural memory cells ( Π m ( 1 , 1 ) , Π m ( 1 , 2 ) , , Π m ( i , 1 ) , , Π m ( i , j ) ).
    -
    Reading stage. At this stage, it is possible to read the results of matrix multiplication by following two steps:
    1.
A spike of the type a r ( u , v ) is sent to the router neuron ( σ r t ) via r d _ b u s , where 1 ≤ u ≤ i and 1 ≤ v ≤ j . Here, u and v represent the indices of the component of the resulting matrix ( c u v ) to be read. At this moment, a spike a r ( u , v ) is sent to the router neuron ( σ r p ) when the rule ( a r ( u , v ) → a r ( u , v ) ( { r p } ) ) is activated. Therefore, the router neuron ( σ r p ) requests the component from any of the basic spiking neural memory cells ( Π m ( 1 , 1 ) , Π m ( 1 , 2 ) , …, Π m ( i , 1 ) , …, Π m ( i , j ) ) by activating one of its request rules ( a r ( 1 , 1 ) / Q ( a g , σ m ( 1 , 1 ) ) , a r ( 1 , 2 ) / Q ( a g , σ m ( 1 , 2 ) ) , …, a r ( i , 1 ) / Q ( a g , σ m ( i , 1 ) ) , …, a r ( i , j ) / Q ( a g , σ m ( i , j ) ) ).
    2.
In this step, if the requested result component ( c u v ) is in the router neuron ( σ r p ), a spike a d must be sent to the router neuron ( σ r t ) via r d _ b u s . Here, the neuron ( σ r t ) makes a request ( a d / Q ( a g , σ r p ) ) for the component held by the neuron ( σ r p ) and stores it in its soma. As a consequence, the requested result component is sent to the output of the spiking neural memory core ( Π c ( s , t ) ) via d a t a _ o u t _ b u s .
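Functionally, the writing–processing–reading pipeline of the core computes an ordinary matrix product, one dot product per memory cell. A minimal software analogue, which ignores the spike encoding and the routing through σ r t and σ r p , is the following (names are illustrative):

```python
def core_matmul(A, B):
    """Analogue of the processing stage of Pi_c(s, t): the entry (u, v)
    corresponds to the dot product computed and stored by cell Pi_m(u, v).
    In the model all cells work in parallel; here we simply iterate."""
    n = len(A)
    assert all(len(row) == n for row in A) and len(B) == n, \
        "square matrices of equal size, as required by the core"
    C = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            # c_uv = a_u1 * b_1v + a_u2 * b_2v + ... + a_un * b_nv
            C[u][v] = sum(A[u][k] * B[k][v] for k in range(n))
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# core_matmul(A, B) == [[19, 22], [43, 50]]
```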
Once the behavior of the basic neural memory cell ( Π m ( u , v ) ) and the spiking neural memory core ( Π c ( s , t ) ) has been explained, it is possible to understand how the proposed memory-processing unit ( Π M E M ) works. As can be seen from Figure 3, the proposed Π M E M is composed of a set of neural memory cores, Π c ( s , t ) , and performs the computation of the Kronecker product between two matrices ( A ⊗ B ). Here, the neural memory cores, Π c ( s , t ) , are interconnected by four buses: r d _ b u s , w r _ b u s , d a t a _ i n _ b u s , and d a t a _ o u t _ b u s . Through these buses, the memory-processing unit, Π M E M , enables data transfer from external sources to the processing cores while storing the results in the same component.
To calculate the Kronecker product, the memory-processing unit, Π M E M , takes the input matrices (A and B) and divides them into smaller matrix fragments. These fragments are portions of the original matrices; in this manner, the data can be allocated more easily in each neural memory core, Π c ( s , t ) . The Kronecker product involves a series of dot product operations between elements of the matrix fragments, and these operations are performed in parallel. In general, the input matrices (A and B) are segmented into smaller fragments, which are distributed among the neural memory cores ( Π c ( s , t ) ). This approach enables a more efficient distribution of processing tasks, particularly during intensive computations involving large and complex matrices.
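The fragmentation scheme can be illustrated with a plain software analogue: in the Kronecker product, each scaled block a_uv · B is an independent fragment, so it can play the role of the work assigned to one memory core. The block-to-core mapping below is hypothetical; the arithmetic is the standard Kronecker product.

```python
def kronecker(A, B):
    """Kronecker product A (x) B; the inner block written for indices
    (u, v) is the fragment a_uv * B that one core Pi_c(u, v) could
    compute and store independently of the others."""
    p, q = len(A), len(A[0])
    m, n = len(B), len(B[0])
    C = [[0] * (q * n) for _ in range(p * m)]
    for u in range(p):
        for v in range(q):
            # Fragment for core (u, v): the scaled block a_uv * B.
            for r in range(m):
                for c in range(n):
                    C[u * m + r][v * n + c] = A[u][v] * B[r][c]
    return C

A = [[1, 2]]
B = [[0, 1], [1, 0]]
# kronecker(A, B) == [[0, 1, 0, 2], [1, 0, 2, 0]]
```

Since the p · q blocks never overlap, they can be filled in any order, which is what makes the parallel distribution across cores straightforward.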

3. Discussion

Nowadays, modern applications are surpassing the computational capabilities of current computing systems. One critical aspect is linked to memory, since these applications are exceeding its bandwidth. This becomes even more critical when applications must process and store large amounts of data. As a consequence, these computing systems are negatively affected, since transferring data from/to memory demands considerable time and energy. Until now, several authors have proposed strategies to mitigate this drawback. These solutions are based on processing within the data array and on processing in a unit near the memory banks. Despite achieving significant improvements, an important issue remains to be solved: to achieve a true MPU, the memory cell must be equipped with computational capabilities and not just storage. To date, existing MPU schemes still perform operations on distributed cores, and the results are stored in caches. Hence, the strategy of fetching and moving data from the memory to the computing unit is still inefficient. According to various authors, MPUs need to be designed for a specific application field to achieve optimal performance. Therefore, advanced applications, such as artificial intelligence, big data, cloud computing, image processing, and data encryption, among others, can achieve maximum efficiency by performing specific computations using a unique data-array structure. Specifically, these applications widely use matrix–matrix multiplication as a basic mathematical operation. Recent studies in neuroscience have found that a neuron can intrinsically perform specific operations and retain the information in its memory [34,35,36,37]. Specifically, neuronal allocation refers to the process by which certain neurons within a network are selectively activated by specific stimuli to encode a particular memory while excluding others.
Inspired by these biological neuronal behaviors, we have created a memory cell in which neurons are endowed with computing capabilities and the results are stored in the same units, enabling distributed MPU devices. Clearly, the development of a general-purpose MPU opens new horizons for improving advanced in-memory applications.

4. Conclusions

Inspired by diverse dendritic and axonal phenomena, we have developed a new memory-processing unit (MPU). Specifically, this MPU was designed to fully exploit the structure and computational capabilities of dendrites and axons. Based on this, our contributions are the following:
  • We design a basic spiking neural memory cell, Π m ( u , v ) , to perform arithmetic operations, such as addition, subtraction, and multiplication.
  • We propose two router neurons ( σ r t and σ r p ) to efficiently write/read the spikes in each neural memory cell.
This significant achievement was possible thanks to the study of the structure of the dendrites and the axon, since these components are vital to the computation of the neuron and the storage of information in its soma. Future work will focus on designing specialized memory-processing unit (MPU) architectures for implementation in memristor-based systems. Memristors are particularly well-suited for neuromorphic computing due to their capacity to replicate the behavior of biological synapses and neurons, supporting the creation of highly integrated, energy-efficient, brain-inspired computing platforms.

Author Contributions

Conceptualization, G.S. (Giovanny Sanchez) and L.G.; data curation, E.R.A. and E.V.; formal analysis, G.S. (Giovanny Sanchez) and J.G.A.; funding acquisition, G.S. (Gabriel Sanchez) and L.K.T.; investigation, L.G. and J.G.A.; methodology, J.G.A. and G.S. (Giovanny Sanchez); resources, L.K.T. and G.S. (Gabriel Sanchez); software, L.G. and E.R.A.; supervision, J.G.A. and G.S. (Gabriel Sanchez); validation, E.R.A. and L.K.T.; writing—original draft, E.V. and G.S. (Giovanny Sanchez); writing—review and editing G.S. (Gabriel Sanchez) and L.K.T. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the Instituto Politécnico Nacional for its financial support.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Acknowledgments

The authors would like to thank the IPN for financial support in creating this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ionescu, M.; Păun, G.; Yokomori, T. Spiking neural P systems. Fundam. Informaticae 2006, 71, 279–308. [Google Scholar] [CrossRef]
  2. Pleṣa, M.I.; Gheorghe, M.; Ipate, F.; Zhang, G. Applications of spiking neural P systems in cybersecurity. J. Membr. Comput. 2024, 6, 310–317. [Google Scholar] [CrossRef]
  3. Zandron, C. An Overview on Applications of Spiking Neural Networks and Spiking Neural P Systems. Lang. Coop. Commun. Essays Dedic. Erzsébet Csuhaj-Varjú Celebr. Her Sci. Career 2025, 15840, 267–278. [Google Scholar]
  4. Zeng, X.; Xu, L.; Liu, X.; Pan, L. On languages generated by spiking neural P systems with weights. Inf. Sci. 2014, 278, 423–433. [Google Scholar] [CrossRef]
  5. Peng, H.; Wang, J. Spiking Neural P Systems and Variants. In Advanced Spiking Neural P Systems: Models and Applications; Springer: Berlin/Heidelberg, Germany, 2024; pp. 15–49. [Google Scholar]
  6. Shen, Y.; Liu, X.; Yang, Z.; Zang, W.; Zhao, Y. Spiking neural membrane systems with adaptive synaptic time delay. Int. J. Neural Syst. 2024, 34, 2450028. [Google Scholar] [CrossRef]
  7. Wang, L.; Liu, X.; Han, Z.; Zhao, Y. Spiking neural P systems with neuron permeability. Neurocomputing 2024, 576, 127351. [Google Scholar] [CrossRef]
  8. Shen, Y.; Qiu, L.; Yang, Z.; Zhao, Y. Weighted target indications spiking neural P systems with inhibitory rules and time schedule. J. Membr. Comput. 2024, 6, 245–254. [Google Scholar] [CrossRef]
  9. Wu, R.; Zhao, Y. Spiking neural P systems with structural plasticity and mute rules. Theor. Comput. Sci. 2024, 1000, 114554. [Google Scholar] [CrossRef]
  10. Garcia, L.; Sanchez, G.; Vazquez, E.; Avalos, G.; Anides, E.; Nakano, M.; Sanchez, G.; Perez, H. Small universal spiking neural P systems with dendritic/axonal delays and dendritic trunk/feedback. Neural Netw. 2021, 138, 126–139. [Google Scholar] [CrossRef]
  11. Garcia, L.; Sanchez, G.; Avalos, J.G.; Vazquez, E. Spiking neural P systems with myelin and dendritic spines. Neurocomputing 2023, 552, 126522. [Google Scholar] [CrossRef]
  12. Liu, Y.; Zhao, Y. Spiking neural P systems with membrane potentials, inhibitory rules, and anti-spikes. Entropy 2022, 24, 834. [Google Scholar] [CrossRef]
  13. Zhao, Y.; Liu, X. Spiking neural P systems with microglia. IEEE Trans. Parallel Distrib. Syst. 2024, 35, 1239–1250. [Google Scholar] [CrossRef]
  14. Lazo, P.P.L.; De La Cruz, R.T.A.; Macababayao, I.C.H.; Cabarle, F.G.C. Universality of SN P systems with stochastic application of rules. J. Membr. Comput. 2022, 4, 166–176. [Google Scholar] [CrossRef]
  15. Lv, Z.; Bao, T.; Zhou, N.; Peng, H.; Huang, X.; Riscos-Núñez, A.; Pérez-Jiménez, M.J. Spiking neural p systems with extended channel rules. Int. J. Neural Syst. 2021, 31, 2050049. [Google Scholar] [CrossRef] [PubMed]
  16. Ning, X.; Yang, G.; Sun, Z.; Song, X. On the universality of spiking neural P systems with multiple channels and Autapses. IEEE Access 2024, 12, 8773–8779. [Google Scholar] [CrossRef]
  17. Song, T.; Rodríguez-Patón, A.; Zheng, P.; Zeng, X. Spiking neural P systems with colored spikes. IEEE Trans. Cogn. Dev. Syst. 2017, 10, 1106–1115. [Google Scholar] [CrossRef]
  18. Cabarle, F.G.C.; Adorna, H.N.; Jiang, M.; Zeng, X. Spiking neural P systems with scheduled synapses. IEEE Trans. Nanobiosci. 2017, 16, 792–801. [Google Scholar] [CrossRef]
  19. Pan, L.; Păun, G. Spiking neural P systems with anti-spikes. Int. J. Comput. Commun. Control 2009, 4, 273–282. [Google Scholar] [CrossRef]
  20. Pan, L.; Wang, J.; Hoogeboom, H.J. Spiking neural P systems with astrocytes. Neural Comput. 2012, 24, 805–825. [Google Scholar] [CrossRef]
  21. Wu, T.; Pan, L. Spiking neural P systems with communication on request and mute rules. IEEE Trans. Parallel Distrib. Syst. 2022, 34, 734–745. [Google Scholar] [CrossRef]
  22. Rangel, J.; Anides, E.; Vázquez, E.; Sanchez, G.; Avalos, J.G.; Duchen, G.; Toscano, L.K. New High-Speed Arithmetic Circuits Based on Spiking Neural P Systems with Communication on Request Implemented in a Low-Area FPGA. Mathematics 2024, 12, 3472. [Google Scholar] [CrossRef]
  23. Duchen, G.; Diaz, C.; Sanchez, G.; Perez, H. First steps toward memory processor unit architecture based on SN P systems. Electron. Lett. 2017, 53, 384–385. [Google Scholar] [CrossRef]
  24. Gao, J.; Ji, W.; Chang, F.; Han, S.; Wei, B.; Liu, Z.; Wang, Y. A systematic survey of general sparse matrix-matrix multiplication. ACM Comput. Surv. 2023, 55, 1–36. [Google Scholar] [CrossRef]
  25. Asifuzzaman, K.; Miniskar, N.R.; Young, A.R.; Liu, F.; Vetter, J.S. A survey on processing-in-memory techniques: Advances and challenges. Mem.-Mater. Devices Circuits Syst. 2023, 4, 100022. [Google Scholar] [CrossRef]
  26. Zou, X.; Xu, S.; Chen, X.; Yan, L.; Han, Y. Breaking the von Neumann bottleneck: Architecture-level processing-in-memory technology. Sci. China Inf. Sci. 2021, 64, 160404. [Google Scholar] [CrossRef]
  27. Hong, Y.; Jeon, S.; Park, S.; Kim, B.S. Implementation of an quantum circuit simulator using classical bits. In Proceedings of the 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Incheon, Republic of Korea, 13–15 June 2022; pp. 472–474. [Google Scholar]
  28. Feng, L.; Yang, G. Deep kronecker network. Biometrika 2024, 111, 707–714. [Google Scholar] [CrossRef]
  29. Panagos, I.I.; Sfikas, G.; Nikou, C. Visual speech recognition using compact hypercomplex neural networks. Pattern Recognit. Lett. 2024, 186, 1–7. [Google Scholar] [CrossRef]
  30. Wang, H.; Wan, C.; Jin, H. Efficient Modeling Attack on Multiplexer PUFs via Kronecker Matrix Multiplication. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2025, 44, 2883–2896. [Google Scholar] [CrossRef]
  31. Song, T.; Jiang, Y.; Shi, X.; Zeng, X. Small universal spiking neural P systems with anti-spikes. J. Comput. Theor. Nanosci. 2013, 10, 999–1006. [Google Scholar] [CrossRef]
  32. Pan, L.; Păun, G.; Zhang, G.; Neri, F. Spiking neural P systems with communication on request. Int. J. Neural Syst. 2017, 27, 1750042. [Google Scholar] [CrossRef]
  33. Wang, J.; Hoogeboom, H.J.; Pan, L.; Paun, G.; Pérez-Jiménez, M.J. Spiking neural P systems with weights. Neural Comput. 2010, 22, 2615–2646. [Google Scholar] [CrossRef] [PubMed]
  34. Koch, C. Biophysics of Computation: Information Processing in Single Neurons; Oxford University Press: Oxford, UK, 2004. [Google Scholar]
  35. Nenadic, Z.; Ghosh, B.K. Computation with biological neurons. In Proceedings of the 2001 American Control Conference (Cat. No. 01CH37148), Arlington, VA, USA, 25–27 June 2001; Volume 1, pp. 257–262. [Google Scholar]
  36. Bhalla, U.S. Molecular computation in neurons: A modeling perspective. Curr. Opin. Neurobiol. 2014, 25, 31–37. [Google Scholar] [CrossRef] [PubMed]
  37. Deng, W.; Aimone, J.B.; Gage, F.H. New neurons and new memories: How does adult hippocampal neurogenesis affect learning and memory? Nat. Rev. Neurosci. 2010, 11, 339–350. [Google Scholar] [CrossRef] [PubMed]
Figure 3. General schematic of the proposed memory processing unit, Π M E M .

Share and Cite

Garcia, L.; Anides, E.R.; Vazquez, E.; Toscano, L.K.; Sanchez, G.; Avalos, J.G.; Sanchez, G. A New Memory-Processing Unit Model Based on Spiking Neural P Systems with Dendritic and Synaptic Behavior for Kronecker Matrix–Matrix Multiplication. Mathematics 2025, 13, 3663. https://doi.org/10.3390/math13223663

