Special Issue "Energy-Efficient Computing Systems for Deep Learning"
Deadline for manuscript submissions: 31 December 2021.
We would like to invite you to submit your latest research to this Special Issue on “Energy-Efficient Computing Systems for Deep Learning”.
Deep learning (DL) is receiving much attention these days due to the impressive performance achieved in a variety of application areas, such as computer vision, natural language processing, and machine translation. To process these DL workloads ever faster and in an energy-efficient way, a myriad of specialized hardware architectures (e.g., the sparse tensor cores in the NVIDIA A100 GPU) and accelerators (e.g., the Google TPU) are emerging, with the goal of providing much higher performance per watt than general-purpose CPUs. Production deployments tend to involve very high model complexity and diversity, demanding solutions that deliver higher productivity, more powerful programming abstractions, more efficient software and system architectures, faster runtime systems and numerical libraries, and a rich set of analysis tools.
DL models are generally memory- and compute-intensive, for both training and inference. Accelerating these operations in an energy-efficient way has obvious advantages: first, it reduces energy consumption (data centers can consume megawatts, producing an electricity bill comparable to that of a small town); second, it makes these models usable on smaller battery-operated devices at the edge of the Internet, which operate under strict power budgets with highly constrained compute capabilities. In addition, while deep neural networks have motivated much of this effort, numerous applications and models involve a wider variety of operations, network architectures, and data processing, and these pose a challenge for today’s computer architectures, system stacks, and programming abstractions. As a result, non-von Neumann computing systems, such as those based on in-memory and/or in-network computing, which perform specific computational tasks right where the data are generated, are being investigated to avoid the latency of shuttling huge amounts of data back and forth between processing and memory units. Additionally, machine learning (ML) techniques themselves are being explored to reduce the overall energy consumption of computing systems, ranging from energy-aware scheduling algorithms in data centers to battery-life prediction techniques in edge devices. The high level of interest in these areas calls for a dedicated journal issue discussing novel acceleration techniques and computation paradigms for energy-efficient DL algorithms. Since this issue targets the interaction of machine learning and computing systems, it will complement publications focused on either of these two areas in isolation.
The main objective of this Special Issue is to discuss and disseminate current work in this area, showcasing novel DL algorithms, programming paradigms, software tools/libraries, and hardware architectures aimed at providing energy efficiency, including (but not limited to):
- Novel energy-efficient DL systems: heterogeneous multi/many-core systems, GPUs, and FPGAs;
- Novel energy-efficient DL hardware accelerators and associated software;
- Emerging semiconductor technologies with applications to energy-efficient DL hardware acceleration;
- Cloud and edge energy-efficient DL computing: hardware and software to accelerate training and inference;
- In-memory computation and in-network computation for energy-efficient DL processing;
- Machine-learning-based techniques for managing energy efficiency of computing platforms.
Guest Editors
Dr. José Cano
Dr. José L. Abellán
Prof. David Kaeli
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sustainability is an international peer-reviewed open access semimonthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1900 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- deep learning
- computing systems
- energy efficiency
- software tools
- hardware architecture
- in-memory computing
- in-network computing