Neuromorphic Artificial Intelligence and Its Applications: Retrospective and Prospective

A special issue of Brain Sciences (ISSN 2076-3425). This special issue belongs to the section "Computational Neuroscience, Neuroinformatics, and Neurocomputing".

Deadline for manuscript submissions: 20 March 2026

Special Issue Editor


Dr. Vassilis Cutsuridis
Guest Editor
School of Engineering, Computing and Mathematics, University of Plymouth, Plymouth, Devon, UK
Interests: neuromorphic AI and its applications in health, medicine, robotics, and environment

Special Issue Information

Dear Colleagues,

Neuromorphic artificial intelligence (AI), inspired by the structure of the brain, constitutes a paradigm shift in the development of AI technology: in contrast to deep learning (DL), it can process vast amounts of information extremely quickly and accurately while expending far less energy than any conventional AI/DL system. It is unparalleled in its ability to adapt to and learn from changing and unexpected environmental contingencies rapidly, on its own, and with very limited resources. Because it uses event-based processing, in which neurons spike only in response to specific stimuli, only a small fraction of neurons is active at any given time, drastically reducing energy consumption. Neuromorphic AI is therefore ideal for low-power devices such as mobile phones and cameras. Because it also uses temporal coding as a form of efficient information processing, it is extremely precise and fast; temporal coding is what allows the human brain to achieve visual recognition in less than 100 ms. To learn stably about the world, it relies on temporal learning, a form of continual learning that depends on the timing of spikes. Neuromorphic AI is expected to open new roads to computing technologies (software and hardware) and pave the way to true artificial general intelligence.
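As a concrete illustration of the event-based sparsity described above, the short sketch below simulates a population of leaky integrate-and-fire neurons in Python with NumPy and reports what fraction of the population is active per time step. All parameter values are arbitrary choices made for illustration and do not correspond to any particular neuromorphic platform or to the systems solicited in this Special Issue.

    import numpy as np

    # Minimal leaky integrate-and-fire (LIF) population, for illustration only.
    # Neurons emit discrete spike events, and at any time step only a small
    # fraction of them is active, which is where the energy savings of
    # event-based processing come from. All parameters are arbitrary.
    rng = np.random.default_rng(0)
    n_neurons, n_steps, dt = 100, 200, 1e-3      # population size, steps, 1 ms step
    tau, v_thresh, v_reset = 20e-3, 1.0, 0.0     # membrane time constant, threshold, reset
    in_prob, in_weight = 0.1, 0.6                # sparse, event-like input drive

    v = np.zeros(n_neurons)                      # membrane potentials
    spike_count = 0

    for _ in range(n_steps):
        events = rng.random(n_neurons) < in_prob     # incoming input events this step
        v += -(dt / tau) * v + in_weight * events    # leaky integration of input events
        spiked = v >= v_thresh                       # spike only on threshold crossing
        v[spiked] = v_reset                          # reset spiking neurons
        spike_count += spiked.sum()

    print("mean fraction of neurons active per step:",
          spike_count / (n_neurons * n_steps))

Running this prints a small fraction (a few percent with these numbers), which is the sparse, event-driven activity pattern that makes neuromorphic hardware so power-efficient.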

This Special Issue aims to solicit articles that report state-of-the-art approaches to and recent advances in:

  1. Novel neuromorphic architectures and models, constrained by neurobiological data at multiple levels of detail, that demonstrate competence across cognitive faculties, including sensory recognition, learning and memory, decision making, cognitive control, reasoning, language processing, and consciousness;
  2. Learning algorithms constrained by the limits of biology and neuromorphic hardware;
  3. Neuromorphic hardware (sensors, chips, cameras, etc.) for cognitive systems;
  4. Applications of neuromorphic architectures or hardware to cognitive robotics;
  5. Applications of neuromorphic architectures or hardware to other areas of science and technology, including healthcare, medical imaging, and environmental monitoring.

This Special Issue will bring together scientists with diverse backgrounds to discuss current concepts and exciting new results in this broad field, cutting across disciplines and focusing on topics that have a high potential to synergize. It is expected that this Special Issue will generate valuable new insights and highlight promising directions of future progress.

The following article types are welcome: research articles, review articles, opinion articles, theory and hypothesis articles, and short communications.

Dr. Vassilis Cutsuridis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Brain Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • neuromorphic artificial intelligence
  • deep learning
  • sensory recognition
  • cognitive function
  • cognitive robotics
  • neuroimaging

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (2 papers)


Research


18 pages, 3441 KB  
Article
Dendritic Inhibition Effects in Memory Retrieval of a Neuromorphic Microcircuit Model of the Rat Hippocampus
by Nikolaos Andreakos and Vassilis Cutsuridis
Brain Sci. 2025, 15(11), 1219; https://doi.org/10.3390/brainsci15111219 - 13 Nov 2025
Abstract
Background: Studies have shown that input comparison in the hippocampus between the Schaffer collateral (SC) input in apical dendrites and the perforant path (PP) input in the apical tufts dramatically changes the activity of pyramidal cells (PCs). Equally, dendritic inhibition was shown to control PC activity by minimizing the depolarizing signals in their dendritic trees, controlling the synaptic integration time window, and ensuring temporal firing precision. Objectives: We computationally investigated the diverse roles of inhibitory synapses on the PC dendritic arbors of a CA1 microcircuit model in mnemonic retrieval during the co-occurrence of SC and PP inputs. Results: Our study showed that inhibition in the apical PC dendrites mediated thresholding of firing during memory retrieval by restricting the depolarizing signals in the dendrites of non-engram cells, thus preventing them from firing and ensuring perfect memory retrieval (only engram cells fire). On the other hand, inhibition in the apical dendritic tuft removed interference from spurious entorhinal cortex (EC) input during recall. When EC drove only the engram cells of the SC input cue, recall was perfect under all conditions, and removal of apical tuft inhibition had no effect on recall quality. When EC drove 40% of engram cells and 60% of non-engram cells of the SC input cue, recall was disrupted, and the disruption was worse when the apical tuft inhibition was removed. When EC drove only the non-engram cells of the cue, recall was perfect again, but only when the population of engram cells was small; removal of the apical tuft inhibition disrupted recall performance when the population of engram cells was large. Conclusions: Our study deciphers the diverse roles of dendritic inhibition in mnemonic processing in the CA1 microcircuit of the rat hippocampus.
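The thresholding role of apical dendritic inhibition reported above can be caricatured in a few lines of Python. The sketch below is a deliberately simplified toy (rate-based, one compartment per cell, arbitrary numbers) intended only to convey the idea; it is not a reimplementation of the authors' CA1 microcircuit model.

    import numpy as np

    # Toy picture of dendritic inhibition acting as a retrieval threshold.
    # Engram cells receive a strong Schaffer-collateral (SC) cue; non-engram
    # cells receive only weak, spurious drive. All numbers are arbitrary.
    n_cells = 10
    engram = np.zeros(n_cells, dtype=bool)
    engram[:3] = True                            # 3 engram cells, 7 non-engram cells

    sc_drive = np.where(engram, 1.0, 0.4)        # strong cue only to engram cells
    dendritic_inhibition = 0.5                   # uniform inhibition on apical dendrites
    firing_threshold = 0.3

    def recalled(inhibition):
        """Return which cells fire for a given level of dendritic inhibition."""
        dendritic_depolarization = sc_drive - inhibition
        return dendritic_depolarization > firing_threshold

    print("with inhibition:   ", recalled(dendritic_inhibition))   # only engram cells fire
    print("without inhibition:", recalled(0.0))                    # non-engram cells fire too

With the inhibition in place only the strongly cued engram cells cross threshold; removing it lets the weakly driven non-engram cells fire as well, which corresponds to the degraded recall described in the abstract.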

Review


25 pages, 4694 KB  
Review
Spiking Neural Models of Neurons and Networks for Perception, Learning, Cognition, and Navigation: A Review
by Stephen Grossberg
Brain Sci. 2025, 15(8), 870; https://doi.org/10.3390/brainsci15080870 - 15 Aug 2025
Cited by 1
Abstract
This article reviews and synthesizes highlights of the history of rate-based and spiking neural network models. It explains theoretical and experimental results showing how all rate-based neural network models whose cells obey the membrane equations of neurophysiology, also called shunting laws, can be converted into spiking neural network models without any loss of explanatory power, and often with gains in explanatory power. These results are relevant to all the main brain processes, including individual neurons and networks for perception, learning, cognition, and navigation. The results build upon the hypothesis that the functional units of brain processes are spatial patterns of cell activities, or short-term-memory (STM) traces, and spatial patterns of learned adaptive weights, or long-term-memory (LTM) patterns. It is also shown how spatial patterns that are learned by spiking neurons during childhood can be preserved even as the child’s brain grows and deforms while it develops towards adulthood. Indeed, this property of spatiotemporal self-similarity may be one of the most powerful properties that individual spiking neurons contribute to the development of large-scale neural networks and architectures throughout life.
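For orientation, the "membrane equations of neurophysiology", or shunting laws, mentioned in this abstract are commonly written in the following standard form; the notation below follows common textbook usage and is given only as background, not as a restatement of the review's own derivations.

    \frac{dx_i}{dt} = -A\,x_i
        + (B - x_i)\Big[ I_i + \sum_k f(x_k)\, w^{+}_{ki} \Big]
        - (C + x_i)\Big[ J_i + \sum_k g(x_k)\, w^{-}_{ki} \Big]

Here x_i is the activity (STM trace) of cell i, A is a passive decay rate, B and -C bound the activity from above and below, I_i and J_i are external excitatory and inhibitory inputs, and the bracketed sums carry recurrent excitation and inhibition through signal functions f and g and adaptive weights (LTM traces) w^{+}_{ki} and w^{-}_{ki}. The multiplicative (B - x_i) and (C + x_i) factors are what make the interactions shunting rather than additive.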
