Special Issue "Artificial Intelligence Compression and Acceleration for Smart Sensing Applications"
A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".
Deadline for manuscript submissions: 25 November 2022 | Viewed by 1509
Special Issue Editors

Interests: digital VLSI design; smart vision systems; FPGA design; image and video coding; reconfigurable architectures

Interests: computer vision

Interests: machine learning; deep learning; computer vision
Special Issue Information
Dear Colleagues,
In recent years, deep neural networks (DNNs) have achieved overwhelming success in a wide range of artificial intelligence applications. This success is largely due to the availability of GPU and TPU clusters, which make it possible to train very deep models with thousands of layers and millions or billions of parameters on large-scale datasets. However, such cumbersome DNNs require heavy computational resources, which makes their deployment on devices with limited compute capacity and memory (embedded devices, mobile phones, etc.) very difficult. To overcome this limitation, algorithmic, architectural, and technological efforts can be made. From the algorithmic point of view, DNN compression techniques are an attractive solution. Moreover, innovative architectures and design flows have been explored to reach a compromise between accuracy and energy efficiency. In summary, the main challenge is to propose lightweight architectures or optimized algorithms that achieve approximately the same performance as the original versions.
This Special Issue aims to cover new developments and recent advances in the compression of deep neural networks for real-time applications. Topics include, but are not limited to, the following:
- Cloud/Fog/Edge DNN challenges;
- Knowledge distillation;
- Parameter pruning and quantization;
- Design flows and low-power systems;
- Low-rank factorization;
- Transferred compact convolutional filters;
- Hardware accelerators.
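To make two of the listed techniques concrete, the sketch below illustrates magnitude-based parameter pruning followed by uniform 8-bit quantization on a small weight vector. It is an illustrative example only, not taken from any submission to this Special Issue; the function names and the simple tie-inclusive thresholding are our own simplifications.

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out roughly the fraction `sparsity` of weights with the
    smallest magnitude (ties at the threshold are also zeroed)."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k > 0 else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

def quantize_uniform(weights, bits=8):
    """Snap each weight to the nearest level of a symmetric uniform grid
    with 2^(bits-1) - 1 positive levels."""
    max_abs = max(abs(w) for w in weights) or 1.0
    levels = 2 ** (bits - 1) - 1
    scale = max_abs / levels
    return [round(w / scale) * scale for w in weights]

w = [0.81, -0.02, 0.45, 0.003, -0.67, 0.09]
pruned = prune_by_magnitude(w, sparsity=0.5)   # half the weights zeroed
compressed = quantize_uniform(pruned)          # values representable in 8 bits
```

In practice, pruning and quantization are applied layer by layer and followed by fine-tuning to recover accuracy; this snippet only shows the core numerical operations.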
Dr. Jridi Maher
Dr. Thibault Napoléon
Dr. Ayoub Karine
Guest Editors
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- deep neural networks
- knowledge distillation
- model compression via pruning/quantization
- computer vision
- internet of things
- CNN
- FPGA, SoPC, GPU
- IoMT: internet of multimedia things
- algorithmic optimization
- computational complexity reduction
- AI implementation challenges
- image processing
Planned Papers
The list below represents only planned manuscripts. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.
Title: Embedded Deep Learning Accelerators on RISC-V Processors: A Survey on Recent Advances
Authors: Ghattas Akkad
Affiliation: [email protected], VISION-AD, ISEN Yncrea Ouest, 44470 Carquefou, France
Abstract: The exponential increase in generated data, together with advances in high-performance computing, has paved the way for the use of complex machine learning methods. Indeed, the availability of Graphical Processing Units (GPUs) and Tensor Processing Units (TPUs) has made it possible to train and prototype Deep Neural Networks (DNNs) on large-scale datasets for a variety of applications, e.g., vision, robotics, and medicine. The popularity of these DNNs stems from their efficacy and state-of-the-art inference accuracy, which, however, comes at the cost of considerably high computational complexity, rendering their implementation on resource-limited edge devices, without a major loss in inference accuracy, a demanding and challenging task. To this end, it has become extremely important to design innovative architectures and dedicated accelerators that port these DNNs to embedded and reconfigurable processors with high-performance, low-complexity structures. In this study, we present an extensive survey of recent advances in deep learning accelerators (DLAs) for RISC-V processors, given their open-source nature, accessibility, customizability, and universality. After reading this article, the reader should have a comprehensive overview of recent progress in this domain, as well as up-to-date knowledge of cutting-edge embedded machine learning trends.