Article

Layer-Wise Compressive Training for Convolutional Neural Networks

Department of Control and Computer Engineering, Politecnico di Torino, 10129 Turin, Italy
* Authors to whom correspondence should be addressed.
Future Internet 2019, 11(1), 7; https://doi.org/10.3390/fi11010007
Received: 30 November 2018 / Revised: 17 December 2018 / Accepted: 22 December 2018 / Published: 28 December 2018
(This article belongs to the Special Issue Selected Papers from INTESA Workshop 2018)
Convolutional Neural Networks (CNNs) are brain-inspired computational models designed to recognize patterns. Recent advances demonstrate that CNNs can match, and often exceed, human capabilities in many application domains. Composed of several million parameters, even the simplest CNN has a large model size. This characteristic is a serious concern for deployment on resource-constrained embedded systems, where compression stages are needed to meet stringent hardware constraints. In this paper, we introduce a novel accuracy-driven compressive training algorithm. It consists of a two-stage flow: first, layers are sorted by means of heuristic rules according to their significance; second, a modified stochastic gradient descent optimization is applied to the less significant layers so that their representation collapses into a constrained subspace. Experimental results demonstrate that our approach achieves remarkable compression rates with low accuracy loss (<1%).
Keywords: deep learning; machine learning; neural networks on-chip; optimization
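
To make the two-stage flow concrete, below is a minimal PyTorch sketch of the idea described in the abstract. The significance heuristic (mean absolute weight), the fixed pruning sparsity used as the constrained subspace, and all function names are illustrative assumptions; the abstract does not specify the paper's exact heuristic rules or subspace construction.

```python
import torch
import torch.nn as nn

def rank_layers_by_significance(model):
    # Stage 1: score each convolutional layer with a heuristic.
    # The mean absolute weight used here is an assumed stand-in
    # for the paper's heuristic rules.
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            scores[name] = module.weight.abs().mean().item()
    return sorted(scores, key=scores.get)  # least significant first

def project_to_subspace(weight, sparsity=0.9):
    # Collapse a weight tensor onto a constrained (here: sparse) subspace
    # by zeroing its smallest-magnitude entries.
    k = max(1, int(weight.numel() * sparsity))
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold).float()

def compressive_step(model, loss, optimizer, compressible):
    # Stage 2: one modified SGD step. After the usual update, the less
    # significant layers are re-projected so they remain in the subspace.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        for name, module in model.named_modules():
            if name in compressible:
                module.weight.copy_(project_to_subspace(module.weight))
```

For example, `compressible = set(rank_layers_by_significance(model)[:2])` would target the two least significant convolutional layers for compression, while the remaining layers train normally.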
MDPI and ACS Style

Grimaldi, M.; Tenace, V.; Calimera, A. Layer-Wise Compressive Training for Convolutional Neural Networks. Future Internet 2019, 11, 7. https://doi.org/10.3390/fi11010007

AMA Style

Grimaldi M, Tenace V, Calimera A. Layer-Wise Compressive Training for Convolutional Neural Networks. Future Internet. 2019; 11(1):7. https://doi.org/10.3390/fi11010007

Chicago/Turabian Style

Grimaldi, Matteo, Valerio Tenace, and Andrea Calimera. 2019. "Layer-Wise Compressive Training for Convolutional Neural Networks." Future Internet 11, no. 1: 7. https://doi.org/10.3390/fi11010007

