Bio-Inspired Neural Networks

A special issue of Biomimetics (ISSN 2313-7673). This special issue belongs to the section "Bioinspired Sensorics, Information Processing and Control".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 8368

Special Issue Editors


E-Mail Website
Guest Editor
Istituto Nazionale di Fisica Nucleare—INFN, Rome, Italy
Interests: computational biology; computational physics; high performance computing; neural networks

Guest Editor
Istituto Nazionale di Fisica Nucleare—INFN, Rome, Italy
Interests: disordered and complex systems; statistical modeling; theoretical and computational biophysics; theoretical neuroscience; neural networks; machine learning

Guest Editor
1. Istituto Nazionale di Fisica Nucleare—INFN, Rome, Italy
2. Department of Neuroscience, La Sapienza University of Rome, Rome, Italy
Interests: spiking neural network; cognitive effects of sleep; neuromorphic computing

Special Issue Information

Dear Colleagues,

We are all fascinated by the brain's behavior, and we try to reproduce some of its features both to learn how it works and to build tools that act like living creatures. Computational neuroscience was born together with computers, and the two have progressed together ever since. The development of brain-science models follows a typical trajectory: years full of new discoveries followed by periods in which innovation stalls. During the lulls, it is usually an insight from the biology of the brain that makes a new breakthrough possible, letting the entire field of computational and theoretical neuroscience shine again. For example, neural networks appeared when models of single neurons could reproduce only simple tasks, and complexity science was there to suggest that “more is different”. The famous Hopfield model showed us that many interacting simple neurons can create and store memories in numbers and modalities that a single neuron cannot, and can express a variety of spontaneous behaviors and computations. The most recent famous breakthrough was the addition of several layers to network architectures, making them “deep” and allowing artificial intelligence to enter everyday life.
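To make the Hopfield idea concrete, the following is a minimal sketch (not taken from any paper in this issue): binary (+1/-1) patterns are stored with the Hebbian outer-product rule, and a corrupted cue is cleaned up by iterating the network's sign-update dynamics. The pattern choice and network size are purely illustrative.

```python
import numpy as np

# Toy Hopfield network: store patterns with the Hebbian outer-product
# rule, then recover a stored memory from a corrupted cue.

def train(patterns):
    n = patterns.shape[1]
    W = patterns.T @ patterns / n  # Hebbian weights
    np.fill_diagonal(W, 0)         # no self-connections
    return W

def recall(W, state, steps=5):
    for _ in range(steps):         # synchronous sign updates
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Two orthogonal 8-neuron memories (illustrative choice).
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
W = train(patterns)
noisy = patterns[0].copy()
noisy[0] *= -1                     # corrupt one bit of the first memory
print(recall(W, noisy))            # recovers the stored pattern
```

Because the two memories are orthogonal, a single update step already pulls the corrupted cue back to the stored pattern, which is then a fixed point of the dynamics.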

Despite the incredible successes of deep neural networks in addressing a huge variety of tasks and problems, such as image recognition and language processing, some key features of real brains remain out of reach. For example, real brains are extremely energy-efficient and compact, and they can learn online, from a single exposure, in a noisy environment. Thus, scientists are once again seeking inspiration from biology, building “bio-inspired” models to investigate and reproduce these features. The aim is not to compete with “deep” artificial neural networks but to complement them, addressing specific requirements and scenarios.

Improvements in experimental methods and theory have led to discoveries that have paved the way for new, biologically inspired neural-network models. Besides the elucidation of the structural and functional properties of brains, we are now aware that collective neuron behavior and distinct brain states exist and must be accounted for if we are to understand phenomena such as consciousness and sleep. In addition, we now have a clearer picture of how single neurons work and how spikes are generated (for example, the role of neuronal compartmentalization in segregating and integrating different inputs), so that we can once again focus on single neurons, which are capable of performing nonlinear computations on their own. Furthermore, we are gaining a better understanding of the subtleties of the plasticity of synaptic connections between neurons, for example, how they evolve during sleep cycles. In this Special Issue, we would like to see how all these pieces of knowledge can be put together in a comprehensive framework or model. Finally, given the peculiarities of brain connections and spike dynamics, we are also witnessing the development of novel “neuromorphic” architectures, both digital and analog, that take inspiration from the communication and memory storage of actual brains, beyond the constraints of the von Neumann architecture. Neuromorphic computers are compact, low-power devices characterized by fast response, but we still have much to learn about what they can do and what their optimal design is.

Inspired by these observations and with the help of the community, we would like to build a place in which recent advances and innovations in bio-inspired network modeling and computer architectures can be collected and new ideas can be put forward. We would like to host papers that recapitulate the innovations of the field, as well as papers that open up new research directions, or even point out missing biological details that still lack a proper theoretical treatment or the hardware necessary to investigate them. We are looking forward to your contributions.

Dr. Fabrizio Capuani
Dr. Cosimo Lupo
Dr. Chiara De Luca
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then proceeding to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomimetics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • bio-inspired neural networks
  • spiking neural networks
  • brain states
  • sleep cycles
  • neuromorphic computers
  • non-linear spiking neurons
  • synaptic plasticity

Published Papers (6 papers)


Research

14 pages, 2424 KiB  
Article
Competitive Perceptrons: The Relevance of Modeling New Bioinspired Properties Such as Intrinsic Plasticity, Metaplasticity, and Lateral Inhibition of Rate-Coding Artificial Neurons
by Diego Andina
Biomimetics 2023, 8(8), 564; https://doi.org/10.3390/biomimetics8080564 - 23 Nov 2023
Viewed by 992
Abstract
This article supports the relevance of modeling new bioinspired properties in rate-coding artificial neurons, focusing on fundamental neural properties rarely implemented thus far in artificial neurons, such as intrinsic plasticity, the metaplasticity of synaptic strength, and the lateral inhibition of neighboring neurons. All these properties are bioinspired through empirical models developed by neurologists, which in turn helps take perceptrons to a higher potential level. Metaplasticity and intrinsic plasticity are different levels of plasticity that neurologists believe play fundamental roles in memory and learning, and therefore in the performance of neurons. Assuming that information about stimuli is contained in the firing rate of the connections among biological neurons, several artificial implementations have been tested. By analyzing their results and comparing them with the learning and performance of state-of-the-art models, relevant advances are made in the context of the developing Industrial Revolution 4.0 based on advances in Machine Learning, and they may even initiate a new generation of artificial neural networks. As an example, a single-layer perceptron that includes the proposed advances, called the Competitive Perceptron, is successfully trained to perform the XOR function; it is a new bioinspired artificial neuronal model with the potential for non-linear separability, continuous learning, and scalability, making it suitable for building efficient Deep Networks and overcoming the basic limitations of traditional perceptrons that have challenged scientists for half a century. Full article
(This article belongs to the Special Issue Bio-Inspired Neural Networks)

18 pages, 14336 KiB  
Article
Dynamic Analysis and FPGA Implementation of a New Fractional-Order Hopfield Neural Network System under Electromagnetic Radiation
by Fei Yu, Yue Lin, Si Xu, Wei Yao, Yumba Musoya Gracia and Shuo Cai
Biomimetics 2023, 8(8), 559; https://doi.org/10.3390/biomimetics8080559 - 21 Nov 2023
Cited by 2 | Viewed by 1202
Abstract
Fractional calculus research indicates that, within the field of neural networks, fractional-order systems more accurately simulate the temporal memory effects present in the human brain. Therefore, it is worthwhile to conduct an in-depth investigation into the complex dynamics of fractional-order neural networks compared to integer-order models. In this paper, we propose a magnetically controlled, memristor-based, fractional-order chaotic system under electromagnetic radiation, utilizing the Hopfield neural network (HNN) model with four neurons as the foundation. The proposed system is solved using the Adomian decomposition method (ADM). Then, through dynamic simulations of the internal parameters of the system, rich dynamic behaviors are found, such as chaos, quasiperiodicity, direction-controllable multi-scroll, and the emergence of analogous symmetric dynamic behaviors in the system as the radiation parameters are altered, with the order remaining constant. Finally, we implement the proposed new fractional-order HNN system on a field-programmable gate array (FPGA). The experimental results show the feasibility of the theoretical analysis. Full article
(This article belongs to the Special Issue Bio-Inspired Neural Networks)

13 pages, 4355 KiB  
Article
Advancements in Complementary Metal-Oxide Semiconductor-Compatible Tunnel Barrier Engineered Charge-Trapping Synaptic Transistors for Bio-Inspired Neural Networks in Harsh Environments
by Dong-Hee Lee, Hamin Park and Won-Ju Cho
Biomimetics 2023, 8(6), 506; https://doi.org/10.3390/biomimetics8060506 - 23 Oct 2023
Cited by 1 | Viewed by 1723
Abstract
This study aimed to propose a silicon-on-insulator (SOI)-based charge-trapping synaptic transistor with engineered tunnel barriers using high-k dielectrics for artificial synapse electronics capable of operating at high temperatures. The transistor employed sequential electron trapping and de-trapping in the charge storage medium, facilitating gradual modulation of the silicon channel conductance. The engineered tunnel barrier structure (SiO2/Si3N4/SiO2), coupled with the high-k charge-trapping layer of HfO2 and high-k blocking layer of Al2O3, enabled reliable long-term potentiation/depression behaviors within a short gate stimulus time (100 μs), even under elevated temperatures (75 and 125 °C). Conductance variability was determined by the number of gate stimuli reflected in the maximum excitatory postsynaptic current (EPSC) and the residual EPSC ratio. Moreover, we analyzed the Arrhenius relationship between the EPSC as a function of the gate pulse number (N = 1–100) and the measured temperatures (25, 75, and 125 °C), allowing us to deduce the charge trap activation energy. A learning simulation was performed to assess the pattern recognition capabilities of the neuromorphic computing system using the Modified National Institute of Standards and Technology (MNIST) dataset. This study demonstrates high-reliability silicon channel conductance modulation and proposes in-memory computing capabilities for artificial neural networks using SOI-based charge-trapping synaptic transistors. Full article
(This article belongs to the Special Issue Bio-Inspired Neural Networks)

17 pages, 2096 KiB  
Article
Active Vision in Binocular Depth Estimation: A Top-Down Perspective
by Matteo Priorelli, Giovanni Pezzulo and Ivilin Peev Stoianov
Biomimetics 2023, 8(5), 445; https://doi.org/10.3390/biomimetics8050445 - 21 Sep 2023
Cited by 2 | Viewed by 1198
Abstract
Depth estimation is an ill-posed problem; objects of different shapes or dimensions, even if at different distances, may project to the same image on the retina. Our brain uses several cues for depth estimation, including monocular cues such as motion parallax and binocular cues such as diplopia. However, it remains unclear how the computations required for depth estimation are implemented in biologically plausible ways. State-of-the-art approaches to depth estimation based on deep neural networks implicitly describe the brain as a hierarchical feature detector. Instead, in this paper we propose an alternative approach that casts depth estimation as a problem of active inference. We show that depth can be inferred by inverting a hierarchical generative model that simultaneously predicts the eyes’ projections from a 2D belief over an object. Model inversion consists of a series of biologically plausible homogeneous transformations based on Predictive Coding principles. Under the plausible assumption of a nonuniform fovea resolution, depth estimation favors an active vision strategy that fixates the object with the eyes, rendering the depth belief more accurate. This strategy is not realized by first fixating on a target and then estimating the depth; instead, it combines the two processes through action–perception cycles, with a mechanism similar to that of saccades during object recognition. The proposed approach requires only local (top-down and bottom-up) message passing, which can be implemented in biologically plausible neural circuits. Full article
(This article belongs to the Special Issue Bio-Inspired Neural Networks)

13 pages, 3163 KiB  
Article
STDP-Driven Rewiring in Spiking Neural Networks under Stimulus-Induced and Spontaneous Activity
by Sergey A. Lobov, Ekaterina S. Berdnikova, Alexey I. Zharinov, Dmitry P. Kurganov and Victor B. Kazantsev
Biomimetics 2023, 8(3), 320; https://doi.org/10.3390/biomimetics8030320 - 20 Jul 2023
Viewed by 1107
Abstract
Mathematical and computer simulations of learning in living neural networks have typically focused on changes in the efficiency of synaptic connections, represented by synaptic weights in the models. Synaptic plasticity is believed to be the cellular basis for learning and memory. In spiking neural networks composed of dynamical spiking units, a biologically relevant learning rule is based on so-called spike-timing-dependent plasticity, or STDP. However, experimental data suggest that synaptic plasticity is only a part of brain circuit plasticity, which also includes homeostatic and structural plasticity. The model of structural plasticity proposed in this study is based on the activity-dependent appearance and disappearance of synaptic connections. The results of the research indicate that such adaptive rewiring enables the consolidation of the effects of STDP in response to a local external stimulation of a neural network. Subsequently, a vector field approach is used to demonstrate the successive “recording” of spike paths in the functional connectome, the synaptic connectome, and finally the anatomical connectome of the network. Moreover, the findings suggest that the adaptive rewiring could stabilize network dynamics over time in the context of the reproducibility of activity patterns. A universal measure of such reproducibility introduced in this article is based on the similarity between successive patterns of the special vector fields characterizing both functional and anatomical connectomes. Full article
(This article belongs to the Special Issue Bio-Inspired Neural Networks)
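The pair-based STDP rule underlying models like this one can be sketched in a few lines. The time constants, amplitudes, and spike trains below are illustrative assumptions, not values taken from the paper: a synapse is potentiated when the presynaptic spike precedes the postsynaptic one, and depressed otherwise, with an exponentially decaying window.

```python
import numpy as np

# Minimal pair-based STDP window (illustrative parameters).
TAU = 20.0       # ms, decay constant of the STDP window
A_PLUS = 0.01    # potentiation amplitude
A_MINUS = 0.012  # depression amplitude (slightly larger, as often assumed)

def stdp_dw(dt):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_PLUS * np.exp(-dt / TAU)   # causal pair: potentiate
    return -A_MINUS * np.exp(dt / TAU)      # anti-causal pair: depress

# Accumulate the rule over all pre/post pairs of two toy spike trains.
pre_spikes = [10.0, 50.0]
post_spikes = [15.0, 45.0]
w = 0.5
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w += stdp_dw(t_post - t_pre)
print(w)
```

Structural-plasticity models of the kind studied in the paper add a further step on top of such a rule, creating or pruning connections depending on accumulated activity, but the weight update itself is the part shown here.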

16 pages, 3140 KiB  
Article
Biased Random Walk Model of Neuronal Dynamics on Substrates with Periodic Geometrical Patterns
by Cristian Staii
Biomimetics 2023, 8(2), 267; https://doi.org/10.3390/biomimetics8020267 - 20 Jun 2023
Cited by 1 | Viewed by 1075
Abstract
Neuronal networks are complex systems of interconnected neurons responsible for transmitting and processing information throughout the nervous system. The building blocks of neuronal networks consist of individual neurons, specialized cells that receive, process, and transmit electrical and chemical signals throughout the body. The formation of neuronal networks in the developing nervous system is a process of fundamental importance for understanding brain activity, including perception, memory, and cognition. To form networks, neuronal cells extend long processes called axons, which navigate toward other target neurons guided by both intrinsic and extrinsic factors, including genetic programming, chemical signaling, intercellular interactions, and mechanical and geometrical cues. Despite important recent advances, the basic mechanisms underlying collective neuron behavior and the formation of functional neuronal networks are not entirely understood. In this paper, we present a combined experimental and theoretical analysis of neuronal growth on surfaces with micropatterned periodic geometrical features. We demonstrate that the extension of axons on these surfaces is described by a biased random walk model, in which the surface geometry imparts a constant drift term to the axon, and the stochastic cues produce a random walk around the average growth direction. We show that the model predicts key parameters that describe axonal dynamics: the diffusion (cell motility) coefficient, the average growth velocity, and the axonal mean squared length, and we compare these parameters with the results of experimental measurements. Our findings indicate that neuronal growth is governed by a contact-guidance mechanism, in which the axons respond to external geometrical cues by aligning their motion along the surface micropatterns. These results have significant implications for developing novel neural network models, as well as biomimetic substrates, to stimulate nerve regeneration and repair after injury. Full article
(This article belongs to the Special Issue Bio-Inspired Neural Networks)
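The biased-random-walk picture can be illustrated with a short simulation. This is a generic sketch, not the paper's model or parameters: a constant drift v (set by the surface pattern) plus Gaussian noise with diffusion coefficient D, so that the variance of positions about the drift grows as 2Dt.

```python
import numpy as np

# Biased random walk in 1D: dx = v*dt + sqrt(2*D*dt) * xi, with xi ~ N(0, 1).
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(1)
v, D, dt = 1.0, 0.5, 0.01       # drift, diffusion coefficient, time step
steps, n_axons = 1000, 2000     # trajectory length and ensemble size

# Euler-Maruyama integration of the ensemble of trajectories.
noise = rng.standard_normal((n_axons, steps)) * np.sqrt(2 * D * dt)
x = np.cumsum(v * dt + noise, axis=1)

t = steps * dt
var_about_drift = np.mean((x[:, -1] - v * t) ** 2)
print(var_about_drift, 2 * D * t)   # sample variance vs. the 2*D*t prediction
```

Fitting the empirical mean displacement and this variance against time is how the drift velocity and motility coefficient can be extracted from trajectory data in models of this type.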
