Neuromorphic Engineering and Machine Learning

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (30 June 2024)

Special Issue Editors


Guest Editor
Computer Architecture and Technology Department (ATC), University of Seville, 41012 Seville, Spain
Interests: neuromorphic engineering; machine learning; deep learning; embedded systems

Guest Editor
Computer Architecture and Technology Department, Faculty of Engineering, University of Cadiz, 11003 Cadiz, Spain
Interests: neuromorphic engineering; motor control; FPGA; neuromorphic robotics; spiking neural networks

Special Issue Information

Dear Colleagues,

Neuromorphic engineering applies biologically plausible models to engineering applications. Shaped by evolution, the brain solves complex problems with remarkable efficiency, so replicating the way in which it performs specific tasks has become an interesting and promising field of research.

This Special Issue focuses on recent advances in neuromorphic engineering, including neuromorphic sensors, neuromorphic systems, bio-inspired models, spiking neural networks, and machine learning, covering both hardware (digital or analog circuits) and software implementations.

Submissions to this Special Issue on ‘Neuromorphic Engineering and Machine Learning’ are solicited to represent a snapshot of the field’s development by covering a range of topics such as (but not limited to) the following:

Event-based sensors: vision, audio, tactile, olfactory;

Spiking neural network models;

Spike-based central pattern generators;

Machine learning applied to spike-based systems;

Deep learning algorithms for neuromorphic applications.

Dr. Juan P. Domínguez-Morales
Dr. Fernando Perez-Peña
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can access the submission form there. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • neuromorphic engineering
  • spiking neural networks
  • neuromorphic sensors
  • machine learning
  • deep learning
  • address-event representation

Published Papers (1 paper)


Research

Article
Optimization of Memristor Crossbar’s Mapping Using Lagrange Multiplier Method and Genetic Algorithm for Reducing Crossbar’s Area and Delay Time
by Seung-Myeong Cho, Rina Yoon, Ilpyeong Yoon, Jihwan Moon, Seokjin Oh and Kyeong-Sik Min
Information 2024, 15(7), 409; https://doi.org/10.3390/info15070409 - 15 Jul 2024
Abstract
Memristor crossbars offer promising low-power and parallel processing capabilities, making them efficient for implementing convolutional neural networks (CNNs) in terms of delay time, area, etc. However, mapping large CNN models such as ResNet-18, ResNet-34, and VGG-Net onto memristor crossbars is challenging because the line resistance problem limits crossbar size. This necessitates partitioning full-image convolution into sub-image convolutions. To do so, an optimized mapping of memristor crossbars should be considered to divide the full-image convolution across multiple crossbars. With limited crossbar resources, especially in edge devices, it is crucial to optimize the crossbar allocation per layer to minimize the hardware resources in terms of crossbar area, delay time, and area-delay product. This paper explores three optimization scenarios: (1) optimizing total delay time under a crossbar area constraint, (2) optimizing total crossbar area under a delay-time constraint, and (3) optimizing the crossbar area-delay-time product without constraints. The Lagrange multiplier method is employed for the constrained cases 1 and 2. For the unconstrained case 3, a genetic algorithm (GA) is used to optimize the area-delay-time product. Simulation results demonstrate that the optimization yields significant improvements over the unoptimized results. When VGG-Net is simulated, the optimization shows about a 20% reduction in delay time for case 1 and a 22% area reduction for case 2. Case 3 highlights the benefits of optimizing the crossbar utilization ratio to minimize the area-delay-time product. The proposed optimization strategies can substantially enhance the performance of memristor crossbar-based processing-in-memory architectures, especially for resource-constrained edge computing platforms.
(This article belongs to the Special Issue Neuromorphic Engineering and Machine Learning)
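
The abstract above describes a GA-based search (its case 3) over per-layer crossbar allocations. The following is a minimal sketch in Python, purely for illustration, of how such a genetic algorithm could be set up; the per-layer workloads, the crossbar limit, and the cost model (a layer's delay taken as its workload divided by its crossbar count, area taken as the total number of crossbars used) are assumptions made for this example and are not taken from the paper.

# Illustrative sketch only (not the paper's implementation): a minimal genetic
# algorithm searching for a per-layer crossbar allocation that minimizes an
# assumed area-delay product, in the spirit of the unconstrained "case 3"
# described in the abstract. Workloads and the cost model are invented here.
import random

LAYER_WORK = [512, 1024, 2048, 2048, 1024]  # hypothetical per-layer workloads
MAX_XBARS_PER_LAYER = 16                    # assumed hardware limit


def cost(alloc):
    """Area-delay product: area ~ crossbars used, delay ~ slowest layer."""
    area = sum(alloc)
    delay = max(work / n for work, n in zip(LAYER_WORK, alloc))
    return area * delay


def random_alloc():
    return [random.randint(1, MAX_XBARS_PER_LAYER) for _ in LAYER_WORK]


def mutate(alloc, rate=0.2):
    # Nudge each layer's allocation up or down with probability `rate`.
    return [
        min(MAX_XBARS_PER_LAYER, max(1, n + random.choice((-1, 1))))
        if random.random() < rate else n
        for n in alloc
    ]


def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]


def genetic_search(pop_size=40, generations=200):
    population = [random_alloc() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[: pop_size // 2]  # keep the fitter half (elitism)
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return min(population, key=cost)


if __name__ == "__main__":
    best = genetic_search()
    print("best allocation:", best, "area-delay cost:", round(cost(best), 1))

The constrained scenarios (cases 1 and 2 in the abstract) would instead keep an area or delay budget fixed, which is where the paper's Lagrange multiplier formulation applies rather than a plain evolutionary search.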
