Special Issue "Energy-Efficient Computing Systems for Deep Learning"

A special issue of Sustainability (ISSN 2071-1050). This special issue belongs to the section "Sustainable Engineering and Science".

Deadline for manuscript submissions: 31 December 2021.

Special Issue Editors

Dr. José Cano
Guest Editor
School of Computing Science, University of Glasgow, Glasgow G12 8RZ, United Kingdom
Interests: computer architecture; computer systems; compilers; interconnection networks; machine learning
Dr. José L. Abellán
Guest Editor
Computer Science and Engineering Department, Universidad Católica de Murcia (UCAM), 30107 Murcia, Spain
Interests: computer architecture; interconnection networks; memory hierarchy; domain-specific architecture; machine learning
Prof. David Kaeli
Guest Editor
Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA
Interests: performance and design of high-performance computer systems and software

Special Issue Information

Dear Colleagues,

We invite you to submit your latest research to this Special Issue on “Energy-Efficient Computing Systems for Deep Learning”.

Deep learning (DL) is receiving much attention these days due to the impressive performance achieved in a variety of application areas, such as computer vision, natural language processing, machine translation, and many more. To process these DL workloads ever faster in an energy-efficient way, a myriad of specialized hardware architectures (e.g., the sparse tensor cores in the NVIDIA A100 GPU) and accelerators (e.g., the Google TPU) are emerging. The goal is to provide much higher performance-per-watt than general-purpose CPUs. Production deployments tend to have very high model complexity and diversity, demanding solutions that can deliver higher productivity, more powerful programming abstractions, more efficient software and system architectures, faster runtime systems, and numerical libraries, accompanied by a rich set of analysis tools.

DL models are generally memory- and computationally intensive, for both training and inference. Accelerating these operations in an energy-efficient way has obvious advantages: first, it reduces energy consumption (e.g., data centers can consume megawatts, producing an electricity bill similar to that of a small town); second, it makes these models usable on smaller battery-operated devices at the edge of the Internet. Edge devices operate under strict power budgets with highly constrained computing resources. In addition, while deep neural networks have motivated much of this effort, numerous applications and models involve a wider variety of operations, network architectures, and data processing. These applications and models are a challenge for today’s computer architectures, system stacks, and programming abstractions. As a result, non-von Neumann computing systems, such as those based on in-memory and/or in-network computing, which perform specific computational tasks where the data are generated, are being investigated in order to avoid the latency of shuttling huge amounts of data back and forth between processing and memory units. Additionally, machine learning (ML) techniques are being explored to reduce overall energy consumption in computing systems; these applications of ML range from energy-aware scheduling algorithms in data centers to battery life prediction techniques in edge devices. The high level of interest in these areas calls for a dedicated journal issue to discuss novel acceleration techniques and computation paradigms for energy-efficient DL algorithms. Since this issue targets the interaction of machine learning and computing systems, it will complement other publications focused on either of these two parts in isolation.

The main objective of this Special Issue is to discuss and disseminate current work in this area, showcasing new and novel DL algorithms, programming paradigms, software tools/libraries, and hardware architectures oriented toward energy efficiency, including (but not limited to) the following topics:

  • Novel energy-efficient DL systems: heterogeneous multi/many-core systems, GPUs, and FPGAs;
  • Novel energy-efficient DL hardware accelerators and associated software;
  • Emerging semiconductor technologies with applications to energy-efficient DL hardware acceleration;
  • Cloud and edge energy-efficient DL computing: hardware and software to accelerate training and inference;
  • In-memory computation and in-network computation for energy-efficient DL processing;
  • Machine-learning-based techniques for managing energy efficiency of computing platforms.

Dr. José Cano
Dr. José L. Abellán
Prof. David Kaeli
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sustainability is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1900 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • computing systems
  • energy efficiency
  • software tools
  • hardware architecture
  • in-memory computing
  • in-network computing

Published Papers (1 paper)


Research

Article
Early-Stage Neural Network Hardware Performance Analysis
Sustainability 2021, 13(2), 717; https://doi.org/10.3390/su13020717 - 13 Jan 2021
Cited by 4 | Viewed by 1294
Abstract
The demand for running neural networks (NNs) in embedded environments has increased significantly in recent years, driven by the success of convolutional neural network (CNN) approaches in various tasks, including image recognition and generation. Achieving high accuracy on resource-restricted devices, however, remains challenging, mainly due to the vast number of design parameters that must be balanced. While the quantization of CNN parameters reduces power and area, it can also generate unexpected changes in the balance between communication and computation. This change is hard to evaluate, and a lack of balance may leave either memory bandwidth or computational resources underutilized, thereby reducing performance. This paper introduces a hardware performance analysis framework for identifying bottlenecks in the early stages of CNN hardware design. We demonstrate how the proposed method can help evaluate different architecture alternatives for resource-restricted CNN accelerators (e.g., as part of real-time embedded systems) early in the design stages and, thus, prevent design mistakes.
(This article belongs to the Special Issue Energy-Efficient Computing Systems for Deep Learning)
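The bottleneck the abstract describes can be illustrated with a simple roofline-style estimate. The sketch below is not the paper's actual framework; it is a minimal, hypothetical example (all hardware numbers and layer sizes are made up) showing how quantization can flip a CNN layer from compute-bound to memory-bound, which is exactly the kind of shift that is hard to anticipate without early-stage analysis.

```python
# Minimal roofline-style sketch (hypothetical numbers, not the paper's method):
# a layer's lower-bound runtime is set by whichever resource -- compute
# throughput or memory bandwidth -- takes longer to do its share of the work.

def roofline_time(ops, bytes_moved, peak_ops, peak_bw):
    """Lower-bound execution time under a simple roofline model."""
    return max(ops / peak_ops, bytes_moved / peak_bw)

PEAK_BW = 10e9                             # 10 GB/s DRAM bandwidth (assumed)
PEAK_OPS = {"fp32": 1e12, "int8": 8e12}    # int8 units assumed denser/faster

# Example 3x3 conv layer, 64 -> 64 channels, 56x56 output (made-up sizes).
ops = 2 * 3 * 3 * 64 * 64 * 56 * 56          # 2 ops per multiply-accumulate
elems = 3 * 3 * 64 * 64 + 2 * 64 * 56 * 56   # weights + input/output maps

for dtype, bytes_per_elem in [("fp32", 4), ("int8", 1)]:
    t_compute = ops / PEAK_OPS[dtype]
    t_memory = elems * bytes_per_elem / PEAK_BW
    bound = "compute" if t_compute >= t_memory else "memory"
    print(f"{dtype}: {max(t_compute, t_memory) * 1e6:.1f} us ({bound}-bound)")
```

With these assumed numbers the fp32 layer is compute-bound, while the int8 version, despite moving 4x fewer bytes, becomes memory-bound because the faster integer units finish first; the quantized design would then underutilize its compute resources unless bandwidth is also rebalanced.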