Artificial Intelligence Hardware and Software Co-Design and Neuromorphic Computing

A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "AI Systems: Theory and Applications".

Deadline for manuscript submissions: 9 October 2026 | Viewed by 1384

Special Issue Editors


Guest Editor
Bradley Department of Electrical and Computer Engineering, College of Engineering, Virginia Tech, Blacksburg, VA 24061, USA
Interests: artificial intelligence; neuromorphic computing; high-performance computing
Special Issues, Collections and Topics in MDPI journals

Guest Editor
Department of Computer Systems Technology, School of Technology and Design, New York City College of Technology (CUNY), Brooklyn, NY 11201, USA
Interests: AI; wireless networks; storage systems

Special Issue Information

Dear Colleagues,

This Special Issue aims to explore the rapidly evolving landscape of artificial intelligence (AI) through the lens of co-design methodologies that integrate hardware and software systems. As AI workloads become increasingly complex and data-intensive, the need for efficient, scalable, and domain-adapted computing paradigms has never been more urgent. The traditional separation between hardware and software design is proving inadequate for meeting the demands of next-generation AI applications, which range from edge intelligence and autonomous systems to large-scale generative models.

This Special Issue will focus on (a) the collaborative design of hardware and software systems that are optimized for AI performance, energy efficiency, and adaptability; (b) the role of emerging paradigms such as neuromorphic computing, which draws inspiration from the structure and functionality of the human brain; and (c) interdisciplinary research that bridges architecture, algorithms, circuits, and systems.

By gathering high-quality contributions from researchers across academia and industry, this Special Issue aims to highlight both foundational advances and practical implementations in areas such as AI accelerators, compiler and runtime support for custom AI hardware, spiking neural networks, and memory-centric designs. It also seeks to encourage discussion on benchmarking, design automation, and co-optimization strategies that holistically address the full AI system stack.

This Special Issue will complement and extend the existing literature in several ways. While prior works have often focused separately on hardware innovation or algorithmic development, this collection will emphasize the co-design principle, recognizing that optimal AI performance arises from the joint consideration of computational models, algorithm design, and underlying hardware. Moreover, by including neuromorphic computing, this Special Issue embraces biologically inspired approaches that offer promising paths for ultra-low-power and real-time processing.

This Special Issue will serve as a timely and valuable resource for researchers and practitioners interested in the co-evolution of AI hardware and software and in developing novel computing architectures that address emerging challenges in intelligent systems.

Dr. Yang (Cindy) Yi
Dr. Fangyang Shen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI hardware-software co-design
  • neuromorphic computing
  • spiking neural networks
  • AI accelerators
  • edge intelligence
  • brain-inspired computing
  • machine learning architecture
  • hardware-aware AI models

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

18 pages, 3314 KB  
Article
Reservoir Computing: Foundations, Advances, and Challenges Toward Neuromorphic Intelligence
by Andrew Liu, Muhammad Farhan Azmine, Chunxiao Lin and Yang Yi
AI 2026, 7(2), 70; https://doi.org/10.3390/ai7020070 - 13 Feb 2026
Viewed by 976
Abstract
Reservoir computing (RC) has emerged as an energy-efficient paradigm for temporal information processing, offering reduced training complexity by fixing recurrent dynamics and training only a simple readout layer. Among RC models, Echo State Networks (ESNs) and Liquid State Machines (LSMs) represent two distinct approaches based on continuous-valued and spiking neural dynamics, respectively. In this work, we present a comparative evaluation of ESNs and LSMs on the Mackey–Glass chaotic time-series prediction task, with emphasis on scalability, overfitting behavior, and robustness to reduced numerical error precision. Experimental results show that ESNs achieve lower prediction error with relatively small reservoirs but exhibit early performance saturation and signs of overfitting as reservoir size increases. In contrast, LSMs demonstrate more consistent generalization with increasing reservoir size and maintain stable performance under aggressive reservoir quantization. These findings highlight fundamental trade-offs between accuracy and hardware efficiency, and suggest that spiking RC models are well suited for energy-constrained and neuromorphic computing applications. Full article
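The training scheme the abstract describes — keeping the reservoir's recurrent weights fixed and fitting only a linear readout — can be sketched with a minimal Echo State Network in Python. This is an illustrative toy, not the paper's experimental setup: the Euler-discretized Mackey–Glass generator, reservoir size, spectral radius, and ridge parameter below are all assumptions chosen for a small, self-contained demo.

```python
import numpy as np

rng = np.random.default_rng(42)

def mackey_glass(n, tau=17, dt=1.0):
    """Euler discretization of the Mackey-Glass delay equation (toy version)."""
    x = np.zeros(n + tau)
    x[:tau] = 1.2
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + dt * (0.2 * x[t - tau] / (1 + x[t - tau] ** 10) - 0.1 * x[t])
    return x[tau:]

series = mackey_glass(2000)

# Fixed random reservoir: only the readout below is ever trained.
n_res = 200
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9

def run_reservoir(u):
    """Drive the reservoir with input sequence u, collect tanh states."""
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t, ut in enumerate(u):
        x = np.tanh(W_in * ut + W @ x)
        states[t] = x
    return states

u, y = series[:-1], series[1:]        # one-step-ahead prediction targets
states = run_reservoir(u)

washout, split = 100, 1500            # discard transient, then train/test split
X_tr, y_tr = states[washout:split], y[washout:split]
X_te, y_te = states[split:], y[split:]

# Train only the linear readout via ridge regression; reservoir stays fixed.
ridge = 1e-6
W_out = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(n_res), X_tr.T @ y_tr)

pred = X_te @ W_out
nmse = np.mean((pred - y_te) ** 2) / np.var(y_te)
print(f"test NMSE: {nmse:.2e}")
```

Because only `W_out` is learned (a single linear solve), training cost is tiny compared with backpropagation through time, which is the efficiency argument the abstract makes; an LSM variant would replace the tanh reservoir with spiking dynamics.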
