Auto Deep Spiking Neural Network Design Based on an Evolutionary Membrane Algorithm
Abstract
1. Introduction
- The availability of massive, high-quality labeled datasets;
- Access to specialized hardware accelerators (e.g., GPU clusters and computational servers);
- Substantial energy resources and thermal management systems.
- DNN-to-SNN Conversion Frameworks: The first category of training methodologies involves DNN-to-SNN conversion techniques [24], which transfer pre-trained parameters (weights and biases) from conventional artificial neural networks (ANNs) to structurally analogous SNNs. This approach leverages the representational power of high-performing ANNs by approximating their functional behavior in neuromorphic hardware. Despite their theoretical appeal, these methods exhibit systematic performance degradation during conversion due to inherent architectural mismatches between rate-coded ANNs and temporally coded SNNs. While researchers have proposed mitigation strategies, including weight and activation normalization techniques and noise injection for robustness enhancement, complete elimination of accuracy loss remains an open challenge. More critically, such conversion frameworks typically require rate-based encoding schemes to approximate ANN activation distributions. These trade-offs highlight a fundamental limitation of conversion-based approaches: their inability to fully exploit the temporal computation paradigm that constitutes SNNs’ primary advantage over traditional ANNs.
- Spike-Timing-Dependent Plasticity (STDP)-Based Learning: The second category employs STDP [27], a biologically plausible unsupervised synaptic plasticity mechanism observed across various brain regions. Unlike gradient-based methods, STDP performs layer-wise parameter updates in DSNNs by modulating synaptic weights according to the relative timing of pre- and postsynaptic spikes, without requiring error backpropagation or gradients of the spiking activation function (a minimal pair-based STDP update is sketched after this list). While STDP excels at extracting shared statistical features among input samples through its Hebbian learning principle, it inherently struggles with discriminative feature learning due to its symmetric update rule for potentiation and depression. To address this limitation, researchers have proposed several STDP variants that incorporate additional biological mechanisms. These limitations indicate that although STDP-based approaches effectively capture low-level statistical correlations, they currently fail to fully unlock the biologically inspired intelligence potential of SNNs, especially for high-level cognitive tasks demanding nuanced feature discrimination.
- Direct Training Approaches for Pre-designed DSNNs: The third category focuses on direct training methodologies for pre-structured DSNNs, with the SpikeProp algorithm and its enhanced variants as the most prominent examples [17]. These approaches tackle the non-differentiability of spiking neurons through mathematical approximations, such as the linearization assumption in SpikeProp’s gradient estimation framework. While several mitigation strategies have been proposed, including weight regularization constraints, gradient normalization techniques, non-leaky PSP functions, and multi-bit spike encoding schemes, these solutions significantly increase computational complexity, rendering them infeasible for large-scale network implementations.
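For concreteness, the snippet below sketches the classic pair-based STDP window referenced in the second category above; the amplitudes and time constants are illustrative assumptions rather than values used in this work.

```python
import numpy as np

# Minimal sketch of a pair-based STDP update (illustrative only; the
# parameter names and values below are assumptions, not those of the paper).
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms

def stdp_delta_w(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post fires before pre -> depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

# Example: a causal pairing strengthens the synapse, an anti-causal one weakens it.
print(stdp_delta_w(10.0, 15.0), stdp_delta_w(15.0, 10.0))
```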
- For the first time, an evolutionary membrane algorithm is employed to automatically construct and design promising DSNN models.
- A search space over DSNN architectures is defined, and the proposed algorithm serves as the search mechanism to identify the best-performing architectures within it.
- The simulation results verify the effectiveness of the proposed algorithm on the CIFAR-10 and CIFAR-100 datasets.
2. Related Work
2.1. Convolutional Neural Network
- Input Layer: Receives raw input data (typically structured as three-dimensional tensors for image-based tasks);
- Feature Map Layer: Performs feature transformation through convolutional operations and nonlinear activations;
- Classification Layer: Produces classification scores through probabilistic scoring mechanisms;
- Output Layer: Outputs the final classification results.
2.2. Spiking Neural Network
2.3. Neural Architecture Search
3. Proposed Algorithm
Algorithm 1 The pseudo-code of the proposed algorithm in the skin membrane.
Algorithm 2 The pseudo-code of the proposed algorithm in membranes.
3.1. Constructing the Supernet
3.1.1. Supernet Sampling
- Weight Sharing: All candidate architectures share the weights of the supernet, which avoids the need to train each architecture individually and significantly reduces computational costs.
- Single-Path Uniform Sampling: During the training of the supernet, architectures are sampled uniformly from the search space. Each time a sub-model is trained, the optimized weights are shared with the supernet, allowing other sub-models to directly inherit these weights for evaluation.
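The sketch below illustrates single-path uniform sampling over a hypothetical choice-block search space; the candidate operation names and layer count are illustrative assumptions, not the search space defined in this work.

```python
import random

# Hypothetical search space: each layer selects one candidate spiking block.
CANDIDATE_OPS = ["spike_conv3x3", "spike_conv5x5", "spike_sep_conv3x3", "skip_connect"]
NUM_LAYERS = 8

def sample_single_path(num_layers: int = NUM_LAYERS) -> list[str]:
    """Uniformly sample one operation per layer, defining a single-path sub-network."""
    return [random.choice(CANDIDATE_OPS) for _ in range(num_layers)]

# Example: two independently sampled sub-networks that will share the supernet weights.
print(sample_single_path())
print(sample_single_path())
```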
3.1.2. Training Process of the Supernet
- Supernet Initialization: A supernet that encompasses all candidate architectures is constructed.
- Sampling Training: In each training iteration, an architecture is randomly sampled from the supernet.
- Weight Update: The weights of the sampled architecture are updated, which in turn reflects on the shared weights of the supernet.
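A minimal sketch of this single-path weight-sharing training loop is given below (PyTorch-style); the `supernet(images, path)` interface and the `sample_single_path` helper are assumptions carried over from the sampling sketch above, not the actual implementation.

```python
import torch

def train_supernet(supernet, loader, epochs=10, lr=0.1):
    """Single-path one-shot training: each batch updates only the weights
    of one uniformly sampled sub-network, which live inside the supernet."""
    opt = torch.optim.SGD(supernet.parameters(), lr=lr, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            path = sample_single_path()       # uniform single-path sampling
            logits = supernet(images, path)   # forward through the chosen path only
            loss = criterion(logits, labels)
            opt.zero_grad()
            loss.backward()                   # gradients touch only the sampled path
            opt.step()                        # shared weights are updated in place
    return supernet
```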
3.2. Membrane Computing
3.2.1. Initialization of Object
3.2.2. Crossover Rule
3.2.3. Mutation Rule
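As a generic illustration of how crossover and mutation rules can operate on a list-style architecture encoding such as the one sketched above, the snippet below shows single-point crossover and per-gene random resetting; the operator details and rates are assumptions for illustration, not the exact rules of the proposed algorithm.

```python
import random

# Assumes the CANDIDATE_OPS list from the sampling sketch above.

def crossover(parent_a: list[str], parent_b: list[str]) -> tuple[list[str], list[str]]:
    """Single-point crossover on two architecture encodings of equal length."""
    point = random.randint(1, len(parent_a) - 1)
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def mutate(arch: list[str], rate: float = 0.1) -> list[str]:
    """Replace each gene with a random candidate operation with probability `rate`."""
    return [random.choice(CANDIDATE_OPS) if random.random() < rate else op for op in arch]
```

The resulting offspring encodings can then be evaluated with weights inherited from the trained supernet, avoiding training each candidate from scratch.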
3.2.4. Determination of the Optimal Sub-Network
4. Experimental Validation and Result Analysis
4.1. Benchmark Datasets and Metrics
- Intrinsic balance: Every class receives an identical number of training and test samples, eliminating any majority-class bias by design.
- Community consensus: Pioneering NAS studies, from DARTS and Genetic-CNN to AutoSNN, report classification accuracy on these benchmarks as their primary evaluation metric, making results directly comparable.
4.1.1. CIFAR-10 Dataset
4.1.2. CIFAR-100 Dataset
4.2. Experimental Setup
- Massive Network Training Requirements: Searching for DSNN architectures necessitates training numerous candidate networks, each requiring an independent training and validation process. This large-scale training significantly escalates computational demands, particularly for complex architectures and large datasets.
- Complexity of the Search Space: The search space for DSNNs is vast and intricate, encompassing a multitude of potential network structures, layer configurations, and connection patterns. Conducting a thorough search within this complex space to identify the optimal architecture is computationally intensive.
- High Cost of Independent Evaluation: Each candidate network structure in DSNNs must be evaluated independently, involving separate training from scratch. This independent evaluation process, especially when dealing with large populations of candidate structures, dramatically increases computational overhead.
- Lack of Gradient Information: Architecture search for DSNNs often relies on evolutionary or heuristic methods that do not leverage gradient information. Unlike gradient-based optimization, which can guide the search more efficiently, these methods require extensive exploration of the search space, leading to higher computational costs.
- Hardware Resource Limitations: The computational requirements of DSNNs are substantial, often necessitating the use of high-performance hardware such as GPUs or TPUs. The need for extensive computational resources, especially for large-scale training and evaluation, poses significant challenges and contributes to the overall cost.
4.3. Experimental Results and Discussion
4.4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Lopes, V.; Alexandre, L.A. Toward Less Constrained Macro-Neural Architecture Search. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 2854–2868. [Google Scholar] [CrossRef]
- Huang, J.; Xue, B.; Sun, Y.; Zhang, M.; Yen, G.G. Particle Swarm Optimization for Compact Neural Architecture Search for Image Classification. IEEE Trans. Evol. Comput. 2023, 27, 1298–1312. [Google Scholar] [CrossRef]
- Wei, P.; Hu, C.; Hu, J.; Li, Z.; Qin, W.; Gan, J.; Chen, T.; Shu, H.; Shang, M. A Novel Black Widow Optimization Algorithm Based on Lagrange Interpolation Operator for ResNet18. Biomimetics 2025, 10, 361. [Google Scholar] [CrossRef]
- Biju, G.M.; Pillai, G.N. Sequential node search for faster neural architecture search. Knowl.-Based Syst. 2024, 300, 112145. [Google Scholar]
- Kyriakides, G.; Margaritis, K. The effect of reduced training in neural architecture search. Neural Comput. Appl. 2020, 32, 17321–17332. [Google Scholar] [CrossRef]
- Rao, H.; Jia, H.; Zhang, X.; Abualigah, L. Hybrid Adaptive Crayfish Optimization with Differential Evolution for Color Multi-Threshold Image Segmentation. Biomimetics 2025, 10, 218. [Google Scholar] [CrossRef]
- Zhou, Y.; Jin, Y.; Ding, J. Surrogate-Assisted Evolutionary Search of Spiking Neural Architectures in Liquid State Machines. Neurocomputing 2020, 406, 12–23. [Google Scholar] [CrossRef]
- Feng, G.; Wang, H.; Wang, C. Search for deep graph neural networks. Inf. Sci. 2023, 649, 119617. [Google Scholar] [CrossRef]
- Yan, S.; Meng, Q.; Xiao, M.; Wang, Y.; Lin, Z. Sampling complex topology structures for spiking neural networks. Neural Netw. 2024, 172, 106121. [Google Scholar] [CrossRef]
- Pang, D.; Le, X.; Guan, X. RL-DARTS: Differentiable neural architecture search via reinforcement-learning-based meta-optimizer. Knowl.-Based Syst. 2021, 234, 107585. [Google Scholar] [CrossRef]
- Garcia, J.L.L.; Monroy, R.; Hernandez, V.A.S. Neural architecture search for image super-resolution: A review on the emerging state-of-the-art. Neurocomputing 2024, 610, 128481. [Google Scholar] [CrossRef]
- Zhu, H.; Jin, Y. Real-Time Federated Evolutionary Neural Architecture Search. IEEE Trans. Evol. Comput. 2022, 26, 364–378. [Google Scholar] [CrossRef]
- Tang, X.; Jia, C.; He, Z. UAV Path Planning: A Dual-Population Cooperative Honey Badger Algorithm for Staged Fusion of Multiple Differential Evolutionary Strategies. Biomimetics 2025, 10, 168. [Google Scholar] [CrossRef]
- Szwarcman, D.; Civitarese, D.; Vellasco, M. Quantum-inspired evolutionary algorithm applied to neural architecture search. Appl. Soft Comput. 2022, 120, 108674. [Google Scholar] [CrossRef]
- Firmin, T.; Boulet, P.; Talbi, E.G. Parallel hyperparameter optimization of spiking neural networks. Neurocomputing 2024, 609, 128483. [Google Scholar] [CrossRef]
- Liu, C.; Wang, H.; Liu, N.; Yuan, Z. Optimizing the Neural Structure and Hyperparameters of Liquid State Machines Based on Evolutionary Membrane Algorithm. Mathematics 2022, 10, 1844. [Google Scholar] [CrossRef]
- Ding, J.; Zhang, J.; Huang, T.; Liu, J.K.; Yu, Z. Assisting Training of Deep Spiking Neural Networks with Parameter Initialization. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 15015–15028. [Google Scholar] [CrossRef] [PubMed]
- Wang, Z.; Zhang, Y.; Lian, S.; Cui, X.; Yan, R.; Tang, H. Toward High-Accuracy and Low-Latency Spiking Neural Networks with Two-Stage Optimization. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 3189–3203. [Google Scholar] [CrossRef]
- Zheng, Y.; Xue, J.; Liu, J.; Zhang, Y. Biologically Inspired Spatial-Temporal Perceiving Strategies for Spiking Neural Network. Biomimetics 2025, 10, 48. [Google Scholar] [CrossRef]
- Kumar, K.A.; Vanmathi, C. A hybrid parallel convolutional spiking neural network for enhanced skin cancer detection. Sci. Rep. 2025, 15, 11137. [Google Scholar] [CrossRef]
- Han, B.; Zhao, F.; Zeng, Y.; Shen, G. Developmental Plasticity-Inspired Adaptive Pruning for Deep Spiking and Artificial Neural Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 240–251. [Google Scholar] [CrossRef]
- Man, Y.; Xie, L.; Qiao, S.; Zhou, Y.; Shang, D. Differentiable architecture search with multi-dimensional attention for spiking neural networks. Neurocomputing 2024, 601, 128181. [Google Scholar] [CrossRef]
- Lin, Z.; Wang, Y.; Zhang, J.; Chu, X.; Ling, H. NAS-BNN: Neural Architecture Search for Binary Neural Networks. Pattern Recognit. 2025, 159, 111086. [Google Scholar] [CrossRef]
- Zhang, R.; Jiao, L.; Wang, D.; Liu, F.; Liu, X.; Yang, S. A Fast Evolutionary Knowledge Transfer Search for Multiscale Deep Neural Architecture. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 17450–17464. [Google Scholar] [CrossRef]
- Liu, Y.; Sun, Y.; Xue, B.; Zhang, M.; Yen, G.G.; Tan, K.C. A Survey on Evolutionary Neural Architecture Search. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 550–570. [Google Scholar] [CrossRef]
- Demin, V.A.; Nekhaev, V.D.; Surazhevsky, I.A.; Nikiruy, K.E.; Emelyanov, V.A.; Nikolaev, S.N.; Rylkov, V.V.; Kovalchuk, V.M. Necessary conditions for STDP-based pattern recognition learning in a memristive spiking neural network. Neural Netw. 2021, 134, 64–75. [Google Scholar] [CrossRef] [PubMed]
- Zhu, H.; Zhang, H.; Jin, Y. From federated learning to federated neural architecture search: A survey. Complex Intell. Syst. 2021, 7, 639–657. [Google Scholar] [CrossRef]
- Shahawy, M.; Benkhelifa, E.; White, D. Exploring the Intersection Between Neural Architecture Search and Continual Learning. IEEE Trans. Neural Netw. Learn. Syst. 2024, 36, 11776–11792. [Google Scholar] [CrossRef]
- Kyriakides, G.; Margaritis, K. Evolving graph convolutional networks for neural architecture search. Neural Comput. Appl. 2022, 34, 899–909. [Google Scholar] [CrossRef]
- Chen, X.; Zhang, S.; Li, Q.; Zhu, F.; Feng, A.; Nkenyereye, L.; Rani, S. A NAS-Based TinyML for Secure Authentication Detection on SAGVN-Enabled Consumer Edge Devices. IEEE Trans. Consum. Electron. 2025, 71, 303–313. [Google Scholar] [CrossRef]
- Tian, S.; Qu, L.; Wang, L.; Hu, K.; Li, N.; Xu, W. A neural architecture search based framework for liquid state machine design. Neurocomputing 2021, 443, 174–182. [Google Scholar] [CrossRef]
- Wang, Y.; Zuo, Y.; Chen, D.; Tu, W.; Zomaya, A.Y.; Li, X. From hippocampal neurons to broad spiking neural networks. Neurocomputing 2025, 647, 130547. [Google Scholar] [CrossRef]
- Putra, R.V.W.; Shafique, M. SpiKernel: A Kernel Size Exploration Methodology for Improving Accuracy of the Embedded Spiking Neural Network Systems. IEEE Embed. Syst. Lett. 2025, 17, 151–155. [Google Scholar] [CrossRef]
- Tang, F.; Zhang, J.; Zhang, C.; Liu, L. Brain-Inspired Architecture for Spiking Neural Networks. Biomimetics 2024, 9, 646. [Google Scholar] [CrossRef] [PubMed]
- Ajay, B.S.; Phani Pavan, K.; Rao, M. MC-QDSNN: Quantized Deep Evolutionary SNN With Multidendritic Compartment Neurons for Stress Detection Using Physiological Signals. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2025, 44, 1313–1325. [Google Scholar] [CrossRef]
- Li, W.; Zhu, Z.; Shao, S.; Lu, Y.; Song, A. Spiking Spatiotemporal Neural Architecture Search for EEG-Based Emotion Recognition. IEEE Trans. Instrum. Meas. 2025, 74, 4001014. [Google Scholar] [CrossRef]
- Dissem, M.; Amayri, M.; Bouguila, N. Neural Architecture Search for Anomaly Detection in Time-Series Data of Smart Buildings: A Reinforcement Learning Approach for Optimal Autoencoder Design. IEEE Internet Things J. 2024, 11, 18059–18073. [Google Scholar] [CrossRef]
- Liu, J.; Chen, Y.; Li, S. Binary Particle Swarm Optimization with Manta Ray Foraging Learning Strategies for High-Dimensional Feature Selection. Biomimetics 2025, 10, 315. [Google Scholar] [CrossRef]
- Lyu, B.; Yang, Y.; Cao, Y.; Wang, P.; Zhu, J.; Chang, J.; Wen, S. Efficient multi-objective neural architecture search framework via policy gradient algorithm. Inf. Sci. 2024, 661, 120186. [Google Scholar] [CrossRef]
- Yan, J.; Liu, Q.; Zhang, M.; Feng, L.; Ma, D.; Li, H.; Pan, G. Efficient spiking neural network design via neural architecture search. Neural Netw. 2024, 173, 106172. [Google Scholar] [CrossRef]
- Sakamoto, K.; Ishibashi, H.; Sato, R.; Shirakawa, S.; Akimoto, Y.; Hino, H. ATNAS: Automatic Termination for Neural Architecture Search. Neural Netw. 2023, 166, 446–458. [Google Scholar] [CrossRef]
- Guo, Z.; Zhang, X.; Mu, H.; Heng, W.; Liu, Z.; Wei, Y.; Sun, J. Single Path One-Shot Neural Architecture Search with Uniform Sampling. In Proceedings of the European Conference on Computer Vision; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer: Cham, Switzerland, 2020; Lecture Notes in Computer Science; pp. 544–560. [Google Scholar]
- Na, B.; Mun, J.; Park, S.; Lee, D.; Choe, H.; Yoon, S. AutoSNN: Towards Energy-Efficient Spiking Neural Networks. In Proceedings of the 39th International Conference on Machine Learning (ICML), Baltimore, MD, USA, 17–23 July 2022. [Google Scholar]
- Sosik, P.; Paul, P.; Ciencialova, L. A survey on learning models of spiking neural membrane systems. Nat. Comput. 2025, 1–13. [Google Scholar] [CrossRef]
- Liu, C.; Du, Y. A membrane algorithm based on chemical reaction optimization for many-objective optimization problems. Knowl.-Based Syst. 2019, 165, 306–320. [Google Scholar] [CrossRef]
- Fang, W.; Chen, Y.; Ding, J.; Yu, Z.; Masquelier, T.; Chen, D.; Huang, L.; Zhou, H.; Li, G.; Tian, Y. SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence. Sci. Adv. 2023, 9, eadi1480. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Fang, W.; Yu, Z.; Chen, Y.; Huang, T.; Masquelier, T.; Tian, Y. Deep Residual Learning in Spiking Neural Networks. Adv. Neural Inf. Process. Syst. 2021, 34, 21056–21069. [Google Scholar] [CrossRef]
- Xie, L.; Yuille, A. Genetic CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 1379–1388. [Google Scholar] [CrossRef]
Feature | DNNs | SNNs |
---|---|---|
Neuron Model | Artificial neurons with weighted sums and activation functions (e.g., ReLU, sigmoid) | Biological neurons with spiking events and threshold-based firing |
Temporal Dynamics | Static processing without explicit temporal dynamics | Inherently temporal with spike timing and event-driven processing |
Energy Efficiency | High computational and energy requirements, especially on GPUs/CPUs | More energy-efficient due to sparse and event-driven nature |
Training Complexity | Backpropagation and gradient descent, well-established but computationally intensive | More complex due to non-differentiable spike events, often requiring surrogate gradients |
Architecture Design | Dense layers, convolutional layers, recurrent layers (e.g., CNNs, RNNs) | Specialized architectures leveraging spiking dynamics (e.g., spiking CNNs and spiking RNNs) |
Application Domains | Image recognition, natural language processing, reinforcement learning | Temporal data, gesture recognition, auditory processing, real-time sensor data |
Scalability | Highly scalable to very large models (e.g., Transformers, GPT series) | Scalability is an ongoing research area, with recent advances in scaling up SNNs |
Biological Plausibility | Less biologically plausible compared to SNNs | More biologically plausible, closely mimicking brain function |
Advantages | - Well-established training methods - Broad range of applications - Extensive tools and frameworks available | - Energy-efficient - Suitable for temporal data - Biologically plausible |
Challenges | - High computational and energy costs - Limited in temporal data processing | - Complex training - Design complexity - Limited tools and frameworks |
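To make the threshold-based firing and temporal dynamics contrasted in the table concrete, the snippet below simulates a single leaky integrate-and-fire (LIF) neuron, the neuron model most commonly used in DSNNs; the time constant, threshold, and input current are illustrative assumptions.

```python
import numpy as np

def lif_neuron(input_current, tau=20.0, v_threshold=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron; returns a binary spike train."""
    v, spikes = v_reset, []
    for i_t in input_current:
        v = v + dt / tau * (-(v - v_reset) + i_t)  # leaky integration of the input
        if v >= v_threshold:                       # threshold-based firing
            spikes.append(1)
            v = v_reset                            # hard reset after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

# Example: a constant supra-threshold input drives periodic spiking.
print(lif_neuron(np.full(50, 1.5)))
```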
Superclass | Classes |
---|---|
aquatic mammals | beaver, dolphin, otter, seal, whale |
fish | aquarium fish, flatfish, ray, shark, trout |
flowers | orchids, poppies, roses, sunflowers, tulips |
food containers | bottles, bowls, cans, cups, plates |
fruit and vegetables | apples, mushrooms, oranges, pears, sweet peppers |
household electrical devices | clock, computer keyboard, lamp, telephone, television |
household furniture | bed, chair, couch, table, wardrobe |
insects | bee, beetle, butterfly, caterpillar, cockroach |
large carnivores | bear, leopard, lion, tiger, wolf |
large man-made outdoor things | bridge, castle, house, road, skyscraper |
large natural outdoor scenes | cloud, forest, mountain, plain, sea |
large omnivores and herbivores | camel, cattle, chimpanzee, elephant, kangaroo |
medium-sized mammals | fox, porcupine, possum, raccoon, skunk |
non-insect invertebrates | crab, lobster, snail, spider, worm |
people | baby, boy, girl, man, woman |
reptiles | crocodile, dinosaur, lizard, snake, turtle |
small mammals | hamster, mouse, rabbit, shrew, squirrel |
trees | maple, oak, palm, pine, willow |
vehicles 1 | bicycle, bus, motorcycle, pickup truck, train |
vehicles 2 | lawn-mower, rocket, streetcar, tank, tractor |
Component | Model |
---|---|
CPU | Dual Intel Xeon Platinum 8160 processors (33 MB cache, 2.10 GHz); the two CPUs provide 48 cores and 96 threads in total |
Memory | 160 GB |
GPU | Nvidia GeForce GTX 1070Ti |
Storage | 1 TB |
Parameter | Value | Justification |
---|---|---|
Population Size (N) | 20 | Ensures sufficient diversity and avoids premature convergence while keeping computational cost manageable. |
Number of Training Iterations (Max_Gen) | 20 | Provides enough generations for convergence without excessive computational time. |
Number of Membranes (G) | 4 | Allows structured exploration of the search space without becoming overly complex. |
Batch Size | 256 | Ensures efficient training while keeping memory usage within practical limits. |
Mutation Rate | 0.1 | Introduces moderate genetic diversity to escape local optima and explore new regions. |
Crossover Rate | 0.7 | Promotes combination of good traits from different parents, leading to better offspring solutions. |
Early Stopping Threshold | 0.05 | Terminates evaluation of underperforming architectures early to save computational resources. |
Learning Rate | 0.001 | Ensures stable and effective training of neural networks, chosen based on standard practices. |