Special Issue "Hardware for Machine Learning"

Special Issue Editors

Dr. Aatmesh Shrivastava
Guest Editor
Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA
Interests: ultra-low power circuits and systems; analog computing; precision circuits; hardware security
Dr. Vishal Saxena
Guest Editor
Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA
Interests: mixed-signal IC design; CMOS photonic ICs; RF/mmWave photonics; neuromorphic circuits
Dr. Xinfei Guo
Guest Editor
UM-SJTU Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
Interests: EDA; computer architecture; low power VLSI; hardware acceleration

Special Issue Information

Dear Colleagues,

This Special Issue focuses on hardware and circuit design methods for machine learning applications. It will include invited papers covering a range of topics: the large-scale integration of CMOS mixed-signal integrated circuits and nanoscale emerging devices to enable a new generation of integrated circuits and systems applicable to a wide range of machine learning problems; on-device learning; in-memory computing; neuromorphic deep learning; and system-level aspects of Edge AI.

The rationale of this Special Issue is to assemble a compelling volume of research in the emerging field of neuromorphic and machine learning (ML) circuits and systems, and to present advances in this area of growing importance. We believe this topic is timely and compelling, as there is a growing need to train ML and artificial intelligence (AI) algorithms on low-power platforms that can potentially provide orders-of-magnitude improvements in energy efficiency compared with the present focus on graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and digital application-specific integrated circuits (ASICs). Low-power mixed-signal circuits that leverage conventional and emerging non-volatile devices, such as resistive RAM (RRAM) and phase-change RAM (PCRAM), are potential candidates for achieving this energy efficiency with very high synaptic density. Furthermore, such non-von Neumann architectures will require entirely new ways of programming and managing resources, so these systems need to be rethought from the ground up. Several open challenges remain in this area at the device, circuit, algorithm, and system levels, and the papers in this Special Issue will address some of them in a timely manner.

Dr. Aatmesh Shrivastava
Dr. Vishal Saxena
Dr. Xinfei Guo

Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Low Power Electronics and Applications is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (1 paper)


Research

Article
A Dynamic Reconfigurable Architecture for Hybrid Spiking and Convolutional FPGA-Based Neural Network Designs
J. Low Power Electron. Appl. 2021, 11(3), 32; https://doi.org/10.3390/jlpea11030032 - 17 Aug 2021
Abstract
This work presents a dynamically reconfigurable architecture for Neural Network (NN) accelerators implemented in Field-Programmable Gate Array (FPGA) that can be applied in a variety of application scenarios. Although the concept of Dynamic Partial Reconfiguration (DPR) is increasingly used in NN accelerators, the throughput is usually lower than pure static designs. This work presents a dynamically reconfigurable energy-efficient accelerator architecture that does not sacrifice throughput performance. The proposed accelerator comprises reconfigurable processing engines and dynamically utilizes the device resources according to model parameters. Using the proposed architecture with DPR, different NN types and architectures can be realized on the same FPGA. Moreover, the proposed architecture maximizes throughput performance with design optimizations while considering the available resources on the hardware platform. We evaluate our design with different NN architectures for two different tasks. The first task is the image classification of two distinct datasets, and this requires switching between Convolutional Neural Network (CNN) architectures having different layer structures. The second task requires switching between NN architectures, namely a CNN architecture with high accuracy and throughput and a hybrid architecture that combines convolutional layers and an optimized Spiking Neural Network (SNN) architecture. We demonstrate throughput results from quickly reprogramming only a tiny part of the FPGA hardware using DPR. Experimental results show that the implemented designs achieve a 7× faster frame rate than current FPGA accelerators while being extremely flexible and using comparable resources.
(This article belongs to the Special Issue Hardware for Machine Learning)
