Advancements in Hardware-Efficient Machine Learning

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 June 2025

Special Issue Editor


Dr. Ameer M.S. Abdelhadi
Guest Editor
Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON L8S 4L8, Canada
Interests: application-specific custom-tailored computer architecture and hardware acceleration; hardware-efficient deep learning; neurotechnology; reconfigurable computing; asynchronous circuits

Special Issue Information

Dear Colleagues,

Machine learning has rapidly evolved in recent years, driving innovation across industries and research fields. As machine learning models grow more complex, there is a pressing need for hardware systems that can support these models efficiently, without compromising speed or power consumption. This growing demand has given rise to new advancements in hardware-efficient machine learning, which focuses on optimizing the interaction between machine learning algorithms and the hardware they run on, from edge devices to high-performance computing systems.

The challenge of hardware-efficient machine learning is to deliver high-performance models while minimizing power, area, and energy consumption. Meeting this challenge involves novel hardware architectures, algorithmic optimizations, and the use of hardware accelerators such as GPUs, TPUs, and FPGAs to enhance the performance and efficiency of machine learning systems.

The aim of this Special Issue is to present state-of-the-art research that addresses these challenges, highlighting innovative approaches to making machine learning more efficient from a hardware perspective. We invite researchers to submit original articles that explore new techniques, as well as comprehensive review articles that summarize recent advancements in the field.

Topics of interest include, but are not limited to:

  • Hardware accelerators for machine learning;
  • Domain-specific custom computer architectures tailored for machine learning applications;
  • Hardware amenability: optimizing machine learning models for efficient execution on hardware accelerators;
  • Optimizing commodity hardware (e.g., CPUs, GPUs, and TPUs) for machine learning workloads;
  • Configurable machine learning accelerators;
  • Low-power hardware architectures for machine learning;
  • Novel computing paradigms for machine learning acceleration;
  • Utilizing emerging technologies (e.g., configurable architectures, processing-in-memory (PIM), hyperdimensional computing (HDC), approximate computing, neuromorphic computing, quantum computing, photonic computing, resistive RAM (ReRAM), and memristors) for hardware-efficient machine learning;
  • Algorithm–hardware co-design for machine learning;
  • Model compression techniques for hardware efficiency, including data movement optimizations, quantization schemes, binary or quantized networks, non-conventional data formats, sparse pruning, dynamic pruning, structured weights, and group convolutions (a minimal quantization sketch follows this list);
  • Neuromorphic computing for machine learning applications;
  • Edge computing for machine learning;
  • Energy-efficient inference and training;
  • Accelerating compute-heavy deep neural networks (e.g., transformers, attention mechanisms, and recommender systems);
  • Machine learning optimizations for resource-constrained environments;
  • Cross-layer optimization for hardware and machine learning algorithms.
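
To make the quantization item above concrete, the snippet below is a minimal sketch of uniform symmetric post-training quantization in NumPy. The function name, the per-tensor scale, and the 8-bit setting are illustrative choices for this sketch, not a prescribed or recommended method.

    import numpy as np

    def quantize_symmetric(w, num_bits=8):
        """Uniform symmetric quantization: w_float ~= w_int * scale."""
        qmax = 2 ** (num_bits - 1) - 1             # e.g., 127 for 8 bits
        scale = float(np.max(np.abs(w))) / qmax    # one scale per tensor (a modeling choice)
        w_int = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
        return w_int, scale

    # Toy usage: quantize a dense-layer weight matrix and measure the
    # reconstruction error introduced by 8-bit storage.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 64)).astype(np.float32)
    w_int, scale = quantize_symmetric(w)
    err = np.max(np.abs(w - w_int * scale))
    print(f"max abs reconstruction error: {err:.4f}")

Per-tensor scaling is the simplest variant; per-channel scales and asymmetric (zero-point) schemes trade extra bookkeeping for lower reconstruction error on hardware that supports them.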

We look forward to receiving your contributions, which will push the boundaries of hardware-efficient machine learning and contribute to shaping the future of the field.

Dr. Ameer M.S. Abdelhadi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • hardware-efficient machine learning
  • machine learning hardware optimization
  • custom computer architectures for machine learning
  • configurable machine learning accelerators
  • hardware-optimized machine learning models
  • low-latency machine learning inference
  • energy-efficient machine learning inference and training
  • hardware-accelerated training for large-scale machine learning models
  • deep learning model compression
  • low-power hardware architectures
  • novel compute paradigms for ML
  • emerging technologies for hardware-efficient ML
  • machine learning algorithm–hardware co-design
  • cross-layer optimization for hardware and ML algorithms
  • machine learning model compression techniques for hardware efficiency
  • commodity hardware optimization for machine learning
  • efficient machine learning model deployment on edge devices
  • resource-constrained machine learning optimization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

Article
Fast Resource Estimation of FPGA-Based MLP Accelerators for TinyML Applications
by Argyris Kokkinis and Kostas Siozios
Electronics 2025, 14(2), 247; https://doi.org/10.3390/electronics14020247 - 9 Jan 2025
Abstract
Tiny machine learning (TinyML) demands the development of edge solutions that are both low-latency and power-efficient. To achieve these on System-on-Chip (SoC) FPGAs, co-design methodologies, such as hls4ml, have emerged aiming to speed up the design process. In this context, fast estimation of FPGA’s utilized resources is needed to rapidly assess the feasibility of a design. In this paper, we propose a resource estimator for fully customized (bespoke) multilayer perceptrons (MLPs) designed through the hls4ml workflow. Through the analysis of bespoke MLPs synthesized using Xilinx High-Level Synthesis (HLS) tools, we developed resource estimation models for the dense layers’ arithmetic modules and registers. These models consider the unique characteristics inherent to the bespoke nature of the MLPs. Our estimator was evaluated on six different architectures for synthetic and real benchmarks, which were designed using Xilinx Vitis HLS 2022.1 targeting the ZYNQ-7000 FPGAs. Our experimental analysis demonstrates that our estimator can accurately predict the required resources in terms of the utilized Look-Up Tables (LUTs), Flip-Flops (FFs), and Digital Signal Processing (DSP) units in less than 147 ms of single-threaded execution.
(This article belongs to the Special Issue Advancements in Hardware-Efficient Machine Learning)
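
As a rough illustration of the style of analytical model the abstract describes, the sketch below combines per-layer multiplier, adder, and register terms into LUT/FF/DSP estimates for fully parallel (bespoke) dense layers. The linear form and every coefficient here are placeholders invented for illustration; they are not the fitted models from the paper.

    def estimate_dense_layer(n_in, n_out, w_bits=8, use_dsp=False):
        """Rough (LUT, FF, DSP) estimate for one bespoke n_in x n_out dense layer."""
        macs = n_in * n_out                     # one multiply-accumulate per weight
        if use_dsp:
            dsp = macs                          # one DSP slice per multiplier (placeholder)
            lut = 4 * macs                      # adder-tree glue logic (placeholder)
        else:
            dsp = 0
            lut = macs * w_bits * w_bits // 2   # LUT-based multipliers (placeholder)
        ff = 2 * n_out * w_bits                 # accumulator/pipeline registers (placeholder)
        return lut, ff, dsp

    # Sum per-layer estimates over a small TinyML-scale MLP.
    layers = [(16, 32), (32, 32), (32, 4)]
    totals = [sum(t) for t in zip(*(estimate_dense_layer(i, o) for i, o in layers))]
    print(dict(zip(("LUT", "FF", "DSP"), totals)))

The value of such a closed-form model is speed: it answers feasibility questions in milliseconds, before any synthesis run, which is the use case the paper targets.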
