Biologically Plausible Deep Learning

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E: Applied Mathematics".

Deadline for manuscript submissions: 31 July 2025

Special Issue Information

Dear Colleagues,

Deep learning has achieved remarkable success in a wide range of problems, such as pattern recognition, image classification, segmentation, object detection, and natural language processing. Notably, current large language models show spectacular performance. However, networks built from oversimplified McCulloch–Pitts neurons typically require highly complex architectures to learn data representations at multiple levels of abstraction, resulting in high computational costs. Furthermore, as black-box methods, these networks make it difficult to explain the reasons behind their high performance.

In contrast, neurons in the human brain demonstrate potent computational capabilities with plausible structures while consuming minimal energy. Neuroscience provides ample evidence of how biological neurons process signals, inspiring computational mechanisms such as residual connections, dendritic structures, and spiking neurons. Consequently, neurons inspired by the biological nervous system have the potential to provide powerful computational capability with simple structures, thereby reducing computational costs. Hence, as next-generation deep learning methods, biologically plausible deep learning models promise to be cogent tools for complex problems.

This Special Issue provides a platform for exchanging research results, technical trends, and practical experience related to biologically plausible deep learning methods, and it aims to attract high-quality research articles in this field.

Prof. Dr. Shangce Gao
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biologically plausible neuron models
  • biologically plausible deep learning networks
  • biologically plausible deep learning algorithms
  • applications of biologically plausible deep learning networks

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

25 pages, 3597 KiB  
Article
Toward Next-Generation Biologically Plausible Single Neuron Modeling: An Evolutionary Dendritic Neuron Model
by Chongyuan Wang and Huiyi Liu
Mathematics 2025, 13(9), 1465; https://doi.org/10.3390/math13091465 - 29 Apr 2025
Abstract
Conventional deep learning models rely heavily on the McCulloch–Pitts (MCP) neuron, limiting their interpretability and biological plausibility. The Dendritic Neuron Model (DNM) offers a more realistic alternative by simulating nonlinear and compartmentalized processing within dendritic branches, enabling efficient and transparent learning. While DNMs have shown strong performance in various tasks, their learning capacity at the single-neuron level remains underexplored. This paper proposes a Reinforced Dynamic-grouping Differential Evolution (RDE) algorithm to enhance synaptic plasticity within the DNM framework. RDE introduces a biologically inspired mutation-selection strategy and an adaptive grouping mechanism that promotes effective exploration and convergence. Experimental evaluations on benchmark classification tasks demonstrate that the proposed method outperforms conventional differential evolution and other evolutionary learning approaches in terms of accuracy, generalization, and convergence speed. Specifically, the RDE-DNM achieves up to 92.9% accuracy on the BreastEW dataset and 98.08% on the Moons dataset, with consistently low standard deviations across 30 trials, indicating strong robustness and generalization. Beyond technical performance, the proposed model supports societal applications requiring trustworthy AI, such as interpretable medical diagnostics, financial screening, and low-energy embedded systems. The results highlight the potential of RDE-driven DNMs as a compact and interpretable alternative to traditional deep models, offering new insights into biologically plausible single-neuron computation for next-generation AI.
(This article belongs to the Special Issue Biologically Plausible Deep Learning)
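To make the single-neuron architecture described in this abstract concrete, the sketch below shows a classic DNM forward pass (synaptic sigmoids, multiplicative dendritic branches, a summing membrane, and a soma) trained with a plain differential-evolution loop. It is a minimal illustration only: the RDE algorithm's reinforced mutation selection and dynamic grouping are not implemented, and the function names, hyperparameters, and the DE/rand/1/bin scheme are illustrative assumptions rather than the paper's implementation.

```python
# Minimal DNM + differential evolution sketch (illustrative, not the paper's RDE).
import numpy as np

def dnm_forward(params, X, n_branches, k=5.0):
    """Classic DNM forward pass on X of shape (n_samples, n_features)."""
    n_features = X.shape[1]
    w, theta = np.split(params, 2)
    w = w.reshape(n_branches, n_features)
    theta = theta.reshape(n_branches, n_features)
    # Synaptic layer: a sigmoid per (branch, feature) connection.
    syn = 1.0 / (1.0 + np.exp(-k * (X[:, None, :] * w - theta)))
    # Dendritic layer: multiplicative interaction within each branch.
    dend = syn.prod(axis=2)
    # Membrane layer: sum over branches; soma: final sigmoid (threshold 0.5 is arbitrary).
    v = dend.sum(axis=1)
    return 1.0 / (1.0 + np.exp(-k * (v - 0.5)))

def fitness(params, X, y, n_branches):
    """Mean squared error against binary labels y."""
    pred = dnm_forward(params, X, n_branches)
    return np.mean((pred - y) ** 2)

def differential_evolution(X, y, n_branches=5, pop_size=30, gens=200,
                           F=0.5, CR=0.9, seed=0):
    """Plain DE/rand/1/bin over all synaptic weights and thresholds.
    The paper's RDE adds reinforced mutation selection and dynamic
    grouping on top of a scheme like this."""
    rng = np.random.default_rng(seed)
    dim = 2 * n_branches * X.shape[1]
    pop = rng.uniform(-1, 1, size=(pop_size, dim))
    fit = np.array([fitness(p, X, y, n_branches) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, pop[i])
            f_trial = fitness(trial, X, y, n_branches)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = pop[fit.argmin()]
    return best, fit.min()
```

In this sketch, each candidate vector encodes all synaptic weights and thresholds of one dendritic neuron (for example, `best, err = differential_evolution(X_train, y_train)`), so the evolutionary search acts directly on synaptic plasticity, which is the role the abstract assigns to RDE.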
23 pages, 9189 KiB  
Article
A Dendritic Neural Network-Based Model for Residential Electricity Consumption Prediction
by Ting Jin, Rui Xu, Kunqi Su and Jinrui Gao
Mathematics 2025, 13(4), 575; https://doi.org/10.3390/math13040575 - 9 Feb 2025
Abstract
Residential electricity consumption represents a large percentage of overall energy use. Therefore, accurately predicting residential electricity consumption and understanding the factors that influence it can provide effective strategies for reducing energy demand. In this study, a dendritic neural network-based model (DNM), combined with the AdaMax optimization algorithm, is used to predict residential electricity consumption. The case study uses the U.S. residential electricity consumption dataset. This paper constructs a feature selection framework for the dataset, reducing the high-dimensional data to 12 features. The DNM model is then used for fitting and compared with five commonly used prediction models. The R² of DNM is 0.7405, the highest among the six models, followed by the XGBoost model with an R² of 0.7286. Subsequently, the paper leverages the interpretability of DNM to further filter the data, obtaining a dataset with six features, on which the R² further improves to 0.7423, an increase of 0.0018.
(This article belongs to the Special Issue Biologically Plausible Deep Learning)
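As a rough illustration of the workflow in this abstract (a dendritic-style regressor fitted with AdaMax and evaluated by R²), the sketch below uses PyTorch's built-in Adamax optimizer on synthetic data with 12 features. The model structure, branch count, learning rate, affine readout, and the synthetic data are assumptions for demonstration; they are not the paper's configuration, dataset, or feature selection framework.

```python
# Dendritic-style regressor trained with AdaMax and scored by R^2 (illustrative sketch).
import torch
import torch.nn as nn

class DendriticRegressor(nn.Module):
    def __init__(self, n_features, n_branches=10, k=1.0):
        super().__init__()
        self.w = nn.Parameter(0.1 * torch.randn(n_branches, n_features))
        self.theta = nn.Parameter(torch.zeros(n_branches, n_features))
        # Affine readout after the membrane sum so the model can fit
        # arbitrary real-valued targets (an illustrative assumption).
        self.scale = nn.Parameter(torch.ones(1))
        self.bias = nn.Parameter(torch.zeros(1))
        self.k = k

    def forward(self, x):
        # Synaptic sigmoids per (branch, feature), multiplicative dendritic
        # branches, then a summing membrane.
        syn = torch.sigmoid(self.k * (x[:, None, :] * self.w - self.theta))
        dend = syn.prod(dim=2)
        membrane = dend.sum(dim=1)
        return self.scale * membrane + self.bias

def r2_score(y_true, y_pred):
    ss_res = torch.sum((y_true - y_pred) ** 2)
    ss_tot = torch.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic stand-in for the 12 selected consumption features.
torch.manual_seed(0)
X = torch.randn(500, 12)
y = X[:, :3].sum(dim=1) + 0.1 * torch.randn(500)

model = DendriticRegressor(n_features=12)
opt = torch.optim.Adamax(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print("R^2 on training data:", r2_score(y, model(X)).item())
```

Swapping the synthetic `X` and `y` for the 12 selected consumption features (or the reduced 6-feature subset) would reproduce the general fitting-and-scoring loop the abstract describes.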