Neuromorphic Artificial Intelligence and Its Applications: Retrospective and Prospective

A special issue of Brain Sciences (ISSN 2076-3425). This special issue belongs to the section "Computational Neuroscience, Neuroinformatics, and Neurocomputing".

Deadline for manuscript submissions: closed (20 March 2026) | Viewed by 7825

Special Issue Editor


Dr. Vassilis Cutsuridis
Guest Editor
School of Engineering, Computing and Mathematics, University of Plymouth, Plymouth, Devon, UK
Interests: neuromorphic AI and its applications in health, medicine, robotics, and environment

Special Issue Information

Dear Colleagues,

Neuromorphic artificial intelligence (AI), inspired by the structure of the brain, constitutes a new shift in the development of AI technology: in contrast to deep learning (DL), it can process vast amounts of information extremely quickly and accurately while expending far less energy than any conventional AI/DL system. It is unparalleled in its ability to adapt and learn rapidly, and on its own, from changing and unexpected environmental contingencies with very limited resources. Because it uses event-based processing, in which neurons spike only in response to specific stimuli, only a small fraction of neurons is active at any given time, drastically reducing energy consumption. Neuromorphic AI is therefore ideal for low-power devices such as mobile phones and cameras. Because it uses temporal coding as a form of efficient information processing, it is extremely precise and fast, allowing visual recognition in the human brain to be achieved in less than 100 ms. To learn stably about the world, it relies on temporal learning, a type of continual learning that depends on the timing of spikes. Neuromorphic AI is expected to open new roads to computing technologies (software and hardware) and pave the way to true artificial general intelligence.
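As a concrete illustration of the event-based principle described above, a minimal leaky integrate-and-fire (LIF) neuron emits output only when its membrane potential crosses a threshold; most time steps carry no event, which is where the energy savings come from. All parameter values in this sketch are illustrative, not taken from any particular chip or model:

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Integrate an input current trace; emit a spike (1) whenever the
    membrane potential crosses threshold, then reset."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        # Leaky integration of the membrane potential toward the input.
        v += dt / tau * (-v + i_t)
        if v >= v_thresh:
            spikes.append(1)   # event: the neuron fires
            v = v_reset        # reset after the spike
        else:
            spikes.append(0)   # no event: the neuron stays silent
    return np.array(spikes)

# A constant supra-threshold input still produces only sparse spikes:
# the neuron charges slowly toward threshold, fires, and resets.
spikes = simulate_lif(np.full(200, 1.5))
print("spikes emitted:", spikes.sum(), "out of", spikes.size, "steps")
```

Downstream neurons in such a system do work only when a spike event arrives, in contrast to a DL accelerator that recomputes every unit at every step.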

This Special Issue aims to solicit articles that report state-of-the-art approaches to and recent advances in:

  1. Novel neuromorphic architectures and models constrained by neurobiological data at multiple levels of detail, spanning cognitive faculties including sensory recognition, learning and memory, decision making, cognitive control, reasoning, language processing, and consciousness;
  2. Learning algorithms constrained by the limits of biology and neuromorphic hardware;
  3. Neuromorphic hardware (sensors, chips, cameras, etc.) for cognitive systems;
  4. Applications of neuromorphic architectures or hardware to cognitive robotics;
  5. Applications of neuromorphic architectures or hardware to all other areas of science and technology, including healthcare, medical imaging, and the environment.

This Special Issue will bring together scientists with diverse backgrounds to discuss current concepts and exciting new results in this broad field, cutting across disciplines and focusing on topics that have a high potential to synergize. It is expected that this Special Issue will generate valuable new insights and highlight promising directions of future progress.

Manuscript types considered: research articles, review articles, opinion articles, theory and hypothesis articles, and short communications.

Dr. Vassilis Cutsuridis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Brain Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • neuromorphic artificial intelligence
  • deep learning
  • sensory recognition
  • cognitive function
  • cognitive robotics
  • neuroimaging

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (4 papers)


Research


37 pages, 1544 KB  
Article
From Spontaneous Ignitions to Sensorimotor Cell Assemblies via Dopamine: A Spiking Neurocomputational Model of Infants’ Hand Action Acquisition
by Nick Griffin, Andrea Mattera, Gianluca Baldassarre and Max Garagnani
Brain Sci. 2026, 16(2), 158; https://doi.org/10.3390/brainsci16020158 - 29 Jan 2026
Viewed by 558
Abstract
Background/Objectives: From birth, infants learn how to interact with the world through exploration. It has been proposed that this early learning phase is driven by motor babbling: the spontaneous generation of exploratory movements that are progressively consolidated through associative mechanisms. This process leads to the acquisition of a repertoire of hand movements such as single- or multi-finger flexion, extension, touching, and pushing. Later, in a second phase, some of these movements (e.g., those that happen to enable access to biologically salient stimuli, such as grasping food) are further reinforced and consolidated through rewards obtained from the environment. However, the neural mechanisms underlying these processes remain unclear. Here, we used a fully neuroanatomically and neurophysiologically constrained neural network model to investigate the brain correlates of these processes. Methods: The model consists of six neural maps simulating six human brain areas, including three pre-central (motor-related) and three post-central (sensory-related) regions. Each map is composed of excitatory and inhibitory spiking neurons, with biologically constrained within- and between-area connectivity forming recurrent circuits. Hand action execution and corresponding haptic perception are simulated simply as activity in primary motor and somatosensory model areas, respectively. During an initial “exploratory” phase, the network learned, via Hebbian mechanisms, associations—as emerging distributed cell assembly (CA) circuits—linking “motor” to corresponding “haptic feedback” patterns. As a result of this initial training, the model began to exhibit spontaneous ignitions of these CA circuits, an emergent phenomenon taken to represent internally generated, non-stimulus-driven attempts at hand action exploitation. In a second phase, a global reward signal, simulating dopamine-mediated reward encoding, was applied to only a subset of “successful” actions upon their noise-driven ignition. Results: During the first exploratory phase, the neural architecture autonomously developed “action-perception” circuits corresponding to multiple possible hand actions. During the subsequent exploitation phase, positively reinforced circuits increased in size and, consequently, in frequency of spontaneous ignition, when compared to non-rewarded “actions”. Conclusions: These results provide a mechanistic account, at the cortical-circuit level, of the early acquisition of hand actions, of their subsequent consolidation, and of the spontaneous transition of an agent’s behavior from exploration to reward-seeking, as typically observed in humans and animals during development.
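The two learning phases this abstract describes (unsupervised Hebbian association, then consolidation gated by a global dopamine-like reward signal) can be caricatured as a three-factor learning rule. The sketch below is an illustrative toy, not the authors' model; the network sizes, learning rates, one-to-one motor-to-haptic mapping, and reward criterion are all assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

n_motor, n_haptic = 8, 8
W = np.zeros((n_haptic, n_motor))  # motor -> haptic association weights

def hebbian_step(W, pre, post, lr=0.1):
    # Phase 1: plain Hebbian co-activity builds the association.
    return W + lr * np.outer(post, pre)

def rewarded_step(W, pre, post, reward, lr=0.1):
    # Phase 2: the same co-activity term, now multiplied by a global
    # reward signal (the dopamine-like third factor).
    return W + lr * reward * np.outer(post, pre)

# Phase 1: random "motor babbling" patterns paired with haptic feedback.
for _ in range(50):
    motor = (rng.random(n_motor) < 0.3).astype(float)
    haptic = motor.copy()          # toy one-to-one sensory consequence
    W = hebbian_step(W, motor, haptic)

# Phase 2: only "successful" actions (here: unit 0 active) earn reward,
# so only their circuits keep strengthening.
for _ in range(50):
    motor = (rng.random(n_motor) < 0.3).astype(float)
    reward = 1.0 if motor[0] > 0 else 0.0
    W = rewarded_step(W, motor, motor, reward)

print("rewarded association:", W[0, 0], " typical other:", W[1, 1])
```

In expectation, the rewarded association keeps growing in phase 2 while the others stall, mirroring the reported increase in size and ignition frequency of reinforced cell assemblies.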

18 pages, 3441 KB  
Article
Dendritic Inhibition Effects in Memory Retrieval of a Neuromorphic Microcircuit Model of the Rat Hippocampus
by Nikolaos Andreakos and Vassilis Cutsuridis
Brain Sci. 2025, 15(11), 1219; https://doi.org/10.3390/brainsci15111219 - 13 Nov 2025
Viewed by 873
Abstract
Background: Studies have shown that input comparison in the hippocampus between the Schaffer collateral (SC) input in apical dendrites and the perforant path (PP) input in the apical tufts dramatically changes the activity of pyramidal cells (PCs). Equally, dendritic inhibition was shown to control PC activity by minimizing the depolarizing signals in their dendritic trees, controlling the synaptic integration time window, and ensuring temporal firing precision. Objectives: We computationally investigated the diverse roles of inhibitory synapses on the PC dendritic arbors of a CA1 microcircuit model in mnemonic retrieval during the co-occurrence of SC and PP inputs. Results: Our study showed that inhibition in the apical PC dendrites mediated thresholding of firing during memory retrieval by restricting the depolarizing signals in the dendrites of non-engram cells, thus preventing them from firing, and ensuring perfect memory retrieval (only engram cells fire). On the other hand, inhibition in the apical dendritic tuft removed interference from spurious EC input during recall. When EC drove only the engram cells of the SC input cue, recall was perfect under all conditions. Removal of apical tuft inhibition had no effect on recall quality. When EC drove 40% of engram cells and 60% of non-engram cells of the SC input cue, recall was disrupted, and this disruption was worse when the apical tuft inhibition was removed. When EC drove only the non-engram cells of the cue, then recall was perfect again but only when the population of engram cells was small. Removal of the apical tuft inhibition disrupted recall performance when the population of engram cells was large. Conclusions: Our study deciphers the diverse roles of dendritic inhibition in mnemonic processing in the CA1 microcircuit of the rat hippocampus.

Review


38 pages, 4759 KB  
Review
Event-Based Vision at the Edge: A Review
by Michael Middleton, Teymoor Ali, Epifanios Baikas, Hakan Kayan, Basabdatta Sen Bhattacharya, Elena Gheorghiu, Mark Vousden, Charith Perera, Oliver Rhodes and Martin A. Trefzer
Brain Sci. 2026, 16(4), 422; https://doi.org/10.3390/brainsci16040422 - 17 Apr 2026
Viewed by 523
Abstract
Spiking Neural Networks (SNNs) executed on neuromorphic hardware promise energy-efficient, low-latency inference well-suited to edge deployment in size-, weight-, and power-constrained environments such as autonomous vehicles, wearable devices, and unmanned aerial platforms. However, a coherent research pathway to deployment of neuromorphic devices remains elusive. This paper presents a structured review and position on the state of SNN-based vision across four interconnected dimensions: network architectures, training methodologies, event-based datasets and simulation techniques, and neuromorphic computing hardware. We survey the evolution from shallow convolutional SNNs to spiking Transformers and hybrid designs which leverage the advantages of SNNs and conventional artificial neural networks. We also examine surrogate gradient training and ANN-to-SNN conversion approaches, catalogue real-world and simulated event-based datasets, and assess the landscape of neuromorphic platforms ranging from rigid mixed-signal architectures to fully configurable digital systems. Our analysis reveals that while each area has matured considerably in isolation, critical integration challenges persist. In particular, event-based datasets remain scarce and lack standardisation, training methodologies introduce systematic gaps relative to deployment hardware, and access to neuromorphic platforms is restricted by proprietary toolchains and limited development kit availability. We conclude that bridging these integration gaps, rather than advancing individual components alone, represents the most important and least addressed work required to realise the potential of SNN-based vision at the edge.
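Surrogate gradient training, one of the methodologies this review examines, can be illustrated in a few lines: the forward pass keeps the non-differentiable spike threshold, while the backward pass substitutes a smooth pseudo-derivative in its place. The snippet below is a generic sketch; the fast-sigmoid surrogate and its slope parameter are common choices in the literature, not specifics of this review:

```python
import numpy as np

def spike_forward(v, v_thresh=1.0):
    # Forward pass: hard threshold (spike / no spike). Its true
    # derivative is zero almost everywhere, which blocks backprop.
    return (v >= v_thresh).astype(float)

def spike_surrogate_grad(v, v_thresh=1.0, slope=10.0):
    # Backward pass: derivative of a fast sigmoid centred on the
    # threshold, used *in place of* the step function's derivative.
    return 1.0 / (1.0 + slope * np.abs(v - v_thresh)) ** 2

v = np.linspace(0.0, 2.0, 5)
print("membrane potentials:", v)
print("spikes (forward):   ", spike_forward(v))
print("surrogate gradient: ", spike_surrogate_grad(v))
```

The systematic gaps the authors note arise in part because this backward-pass substitution is an approximation: the trained weights reflect the smooth surrogate, while deployment hardware executes only the hard threshold.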

25 pages, 4694 KB  
Review
Spiking Neural Models of Neurons and Networks for Perception, Learning, Cognition, and Navigation: A Review
by Stephen Grossberg
Brain Sci. 2025, 15(8), 870; https://doi.org/10.3390/brainsci15080870 - 15 Aug 2025
Cited by 2 | Viewed by 4561
Abstract
This article reviews and synthesizes highlights of the history of neural models of rate-based and spiking neural networks. It explains theoretical and experimental results about how all rate-based neural network models, whose cells obey the membrane equations of neurophysiology, also called shunting laws, can be converted into spiking neural network models without any loss of explanatory power, and often with gains in explanatory power. These results are relevant to all the main brain processes, including individual neurons and networks for perception, learning, cognition, and navigation. The results build upon the hypothesis that the functional units of brain processes are spatial patterns of cell activities, or short-term-memory (STM) traces, and spatial patterns of learned adaptive weights, or long-term-memory (LTM) patterns. It is also shown how spatial patterns that are learned by spiking neurons during childhood can be preserved even as the child’s brain grows and deforms while it develops towards adulthood. Indeed, this property of spatiotemporal self-similarity may be one of the most powerful properties that individual spiking neurons contribute to the development of large-scale neural networks and architectures throughout life.
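The shunting membrane equation referred to in this abstract has, in one common statement (the notation below is a standard convention, not taken from this article), the form:

```latex
% Shunting (membrane) equation for the activity x_i of cell i:
%   -A x_i         : passive decay toward rest
%   (B - x_i) E_i  : excitatory input, saturating at the upper bound B
%   (D + x_i) I_i  : inhibitory input, saturating at the lower bound -D
\frac{dx_i}{dt} = -A\,x_i + (B - x_i)\,E_i(t) - (D + x_i)\,I_i(t)
```

Because the excitatory and inhibitory inputs multiply the distance of $x_i$ from its bounds, activity remains within $[-D, B]$ regardless of input size; this automatic gain control is one of the properties that carries over when such rate-based cells are replaced by spiking ones.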
