Open Access Article
Future Internet 2019, 11(1), 7; https://doi.org/10.3390/fi11010007

Layer-Wise Compressive Training for Convolutional Neural Networks

Department of Control and Computer Engineering, Politecnico di Torino, Turin 10129, Italy
* Authors to whom correspondence should be addressed.
Received: 30 November 2018 / Revised: 17 December 2018 / Accepted: 22 December 2018 / Published: 28 December 2018
(This article belongs to the Special Issue Selected Papers from INTESA Workshop 2018)

Abstract

Convolutional Neural Networks (CNNs) are brain-inspired computational models designed to recognize patterns. Recent advances demonstrate that CNNs are able to achieve, and often exceed, human capabilities in many application domains. Composed of several million parameters, even the simplest CNN has a large model size. This characteristic is a serious concern for deployment on resource-constrained embedded systems, where compression stages are needed to meet stringent hardware constraints. In this paper, we introduce a novel accuracy-driven compressive training algorithm. It consists of a two-stage flow: first, layers are sorted by means of heuristic rules according to their significance; second, a modified stochastic gradient descent optimization is applied to the less significant layers so that their representation collapses into a constrained subspace. Experimental results demonstrate that our approach achieves remarkable compression rates with low accuracy loss (<1%).
Keywords: deep learning; machine learning; neural networks on-chip; optimization
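
The two-stage flow described in the abstract can be illustrated with a short, hypothetical PyTorch-style sketch. The significance heuristic (mean absolute weight magnitude) and the subspace projection (a uniform quantization grid) used below are illustrative placeholders, not the authors' exact formulation; only the sort-then-constrain structure mirrors the abstract.

import torch
import torch.nn as nn

def layer_significance(layer):
    # Heuristic significance proxy (assumed for illustration):
    # mean absolute weight magnitude of the layer.
    return layer.weight.abs().mean().item()

def project_to_subspace(weight, levels=8):
    # Collapse weights onto a small shared set of values (a uniform
    # quantization grid), i.e., a constrained subspace.
    w_min, w_max = weight.min(), weight.max()
    if w_max == w_min:
        return weight
    step = (w_max - w_min) / (levels - 1)
    return torch.round((weight - w_min) / step) * step + w_min

def compressive_step(model, loss_fn, x, y, optimizer, n_significant):
    # Stage 1: rank convolutional layers by the significance heuristic.
    convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    ranked = sorted(convs, key=layer_significance, reverse=True)
    less_significant = ranked[n_significant:]

    # Stage 2: a standard SGD step, followed by re-projecting the
    # less significant layers into the constrained subspace.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        for layer in less_significant:
            layer.weight.copy_(project_to_subspace(layer.weight))
    return loss.item()

In this sketch, keeping the n_significant top-ranked layers at full precision while repeatedly projecting the rest is what drives the compression; the paper's actual heuristic rules and modified optimizer differ in detail.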
This is an open access article distributed under the Creative Commons Attribution (CC BY 4.0) License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

MDPI and ACS Style

Grimaldi, M.; Tenace, V.; Calimera, A. Layer-Wise Compressive Training for Convolutional Neural Networks. Future Internet 2019, 11, 7.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
