Research on Key Technologies for Hardware Acceleration

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Circuit and Signal Processing".

Deadline for manuscript submissions: 15 July 2025 | Viewed by 3536

Special Issue Editor

Dr. Yufei Ma
Institute for Artificial Intelligence, School of Integrated Circuits, Peking University, Beijing 100871, China
Interests: FPGA hardware system; deep learning acceleration; energy-efficient VLSI design

Special Issue Information

Dear Colleagues,

With the ever-increasing demand of artificial intelligence (AI) for computing power, data storage, and memory bandwidth, general-purpose processors suffer from high latency and low energy efficiency on AI workloads due to unsuitable architectures and limited memory access bandwidth (e.g., the memory wall bottleneck). To meet the requirements of modern intelligent systems, customized hardware accelerators are emerging for different tasks, integrating cross-level optimizations and innovations in algorithm, compilation, architecture, and circuit design. This Special Issue focuses on key technologies for hardware acceleration, including reconfigurable FPGAs, 2.5D/3D chiplets, and in-/near-memory computing.

Submissions for this Special Issue on “Research on Key Technologies for Hardware Acceleration” are welcome on any topic related, but not limited, to the following areas:

  • Co-optimization of hardware-friendly emerging AI algorithms, including pruning, quantization, etc.
  • Reconfigurable hardware accelerators on FPGAs for intelligent vision tasks.
  • Ultra-low-power ASIC accelerators for voice applications.
  • Compilation and mapping methodologies for AI hardware accelerators.
  • Hardware accelerators based on in-/near-memory computing architectures.
  • Interconnection and parallel structures of 2.5D chiplets or 3D stacked chips.
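Several of the topics above hinge on hardware-friendly quantization, which replaces floating-point weights with small integers that map naturally onto accelerator datapaths. The sketch below is a minimal, illustrative NumPy example of symmetric uniform quantization; the function name and bit-width are our own choices for illustration, not part of any listed work.

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    """Symmetric uniform quantization of a weight tensor to signed integers."""
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for 8-bit
    max_abs = np.max(np.abs(w))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale                                # integers for hardware, scale for dequant

w = np.array([0.5, -1.27, 0.03, 1.0])
q, s = quantize_uniform(w)
w_hat = q.astype(np.float32) * s                   # dequantized approximation of w
```

On an accelerator, only the `int8` tensor and the per-tensor scale travel to the compute array; the dequantization can be folded into a later stage.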

Dr. Yufei Ma
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • hardware accelerators
  • reconfigurable FPGA acceleration
  • in-/near-memory computing
  • 2.5D/3D integrated chiplets

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

20 pages, 740 KiB  
Article
A Variation-Aware Binary Neural Network Framework for Process Resilient In-Memory Computations
by Minh-Son Le, Thi-Nhan Pham, Thanh-Dat Nguyen and Ik-Joon Chang
Electronics 2024, 13(19), 3847; https://doi.org/10.3390/electronics13193847 - 28 Sep 2024
Viewed by 1155
Abstract
Binary neural networks (BNNs) that use 1-bit weights and activations have garnered interest as extreme quantization provides low power dissipation. By implementing BNNs as computation-in-memory (CIM), which computes multiplication and accumulations on memory arrays in an analog fashion, namely, analog CIM, we can further improve the energy efficiency to process neural networks. However, analog CIMs are susceptible to process variation, which refers to the variability in manufacturing that causes fluctuations in the electrical properties of transistors, resulting in significant degradation in BNN accuracy. Our Monte Carlo simulations demonstrate that in an SRAM-based analog CIM implementing the VGG-9 BNN model, the classification accuracy on the CIFAR-10 image dataset is degraded to below 50% under process variations in a 28 nm FD-SOI technology. To overcome this problem, we present a variation-aware BNN framework. The proposed framework is developed for SRAM-based BNN CIMs since SRAM is most widely used as on-chip memory; however, it is easily extensible to BNN CIMs based on other memories. Our extensive experimental results demonstrate that under process variation of 28 nm FD-SOI, with an SRAM array size of 128×128, our framework significantly enhances classification accuracies on both the MNIST hand-written digit dataset and the CIFAR-10 image dataset. Specifically, for the CONVNET BNN model on MNIST, accuracy improves from 60.24% to 92.33%, while for the VGG-9 BNN model on CIFAR-10, accuracy increases from 45.23% to 78.22%. Full article
(This article belongs to the Special Issue Research on Key Technologies for Hardware Acceleration)
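The BNN arithmetic described in this abstract reduces each multiply-accumulate to bitwise operations, which is what makes in-memory mappings so efficient. As a rough digital sketch (not the authors' analog CIM implementation), a {-1, +1} dot product over {0, 1}-encoded bits can be computed with XNOR and popcount:

```python
import numpy as np

def binary_dot(w_bits, a_bits):
    """Dot product of {-1,+1} vectors encoded as {0,1} bits, via XNOR + popcount.

    With the mapping 0 -> -1 and 1 -> +1, each product w*a is +1 when the bits
    match (XNOR = 1) and -1 otherwise, so the sum is 2*popcount(XNOR) - N.
    """
    n = len(w_bits)
    xnor = ~(w_bits ^ a_bits) & 1          # 1 where the bits agree
    return 2 * int(xnor.sum()) - n

w = np.array([1, 0, 1, 1], dtype=np.uint8)  # encodes [+1, -1, +1, +1]
a = np.array([1, 1, 0, 1], dtype=np.uint8)  # encodes [+1, +1, -1, +1]
# Reference in the +/-1 domain: (+1)(+1) + (-1)(+1) + (+1)(-1) + (+1)(+1) = 0
```

In an analog CIM array, the XNOR and the popcount are realized by the bitcells and the shared bitline respectively, which is exactly where process variation perturbs the accumulated result.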

15 pages, 2961 KiB  
Article
Hardware Acceleration and Approximation of CNN Computations: Case Study on an Integer Version of LeNet
by Régis Leveugle, Arthur Cogney, Ahmed Baba Gah El Hilal, Tristan Lailler and Maxime Pieau
Electronics 2024, 13(14), 2709; https://doi.org/10.3390/electronics13142709 - 11 Jul 2024
Cited by 1 | Viewed by 1666
Abstract
AI systems have an increasingly sprawling impact across many application areas. Embedded systems built on AI face strongly conflicting implementation constraints, including high computation speed, low power consumption, high energy efficiency, strong robustness, and low cost. Neural Networks (NNs) used by these systems are intrinsically partially tolerant to computation disturbances. As a consequence, they are an interesting target for approximate computing, which seeks reduced resources, lower power consumption, and faster computation. Also, the large number of computations required by a single inference makes hardware acceleration almost unavoidable to globally meet the design constraints. The reported study, based on an integer version of LeNet, shows the possible gains when coupling approximation and hardware acceleration. The main conclusions can be leveraged when considering other types of NNs. The first is that several approximation types that look very similar can exhibit very different trade-offs between accuracy loss and hardware optimizations, so the approximation has to be chosen carefully. In addition, a strong approximation leading to the best hardware can also lead to the best accuracy; this is the case here when selecting the ApxFA5 adder approximation defined in the literature. Finally, combining hardware acceleration and approximate operators in a coherent manner further increases the global gains. Full article
(This article belongs to the Special Issue Research on Key Technologies for Hardware Acceleration)
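To make the adder-approximation idea concrete, the sketch below contrasts an exact full adder with a deliberately simplified cell inside a ripple-carry adder. The approximate cell here is a generic illustration invented for this example; it is not the ApxFA5 cell evaluated in the paper.

```python
def full_adder_exact(a, b, cin):
    """Exact 1-bit full adder: sum and carry-out."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))       # majority(a, b, cin)
    return s, cout

def full_adder_approx(a, b, cin):
    """Illustrative approximation (NOT the paper's ApxFA5 cell):
    reuse input a as carry-out, removing the majority gate."""
    s = a ^ b ^ cin
    cout = a
    return s, cout

def ripple_add(bits_a, bits_b, cell):
    """n-bit ripple-carry add, LSB first, using the given full-adder cell."""
    carry, out = 0, []
    for a, b in zip(bits_a, bits_b):
        s, carry = cell(a, b, carry)
        out.append(s)
    return out, carry

# Count how many of the 8 input combinations the approximate cell gets wrong.
errors = sum(full_adder_exact(a, b, c) != full_adder_approx(a, b, c)
             for a in (0, 1) for b in (0, 1) for c in (0, 1))
```

Swapping the cell passed to `ripple_add` is the software analogue of the paper's hardware substitution: a cheaper cell shrinks the critical path at the cost of a bounded arithmetic error, which a partially fault-tolerant NN may absorb.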
