Open Access Article

Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems

1 Computer Science, Hanyang University ERICA, Ansan 15588, Korea
2 Computer Science and Engineering, Hanyang University ERICA, Ansan 15588, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(8), 1669; https://doi.org/10.3390/app9081669
Received: 21 March 2019 / Revised: 14 April 2019 / Accepted: 16 April 2019 / Published: 23 April 2019
(This article belongs to the Section Computing and Artificial Intelligence)
PDF [695 KB, uploaded 23 April 2019]

Abstract

Deep neural networks (DNNs) have been quite successful in solving many complex learning problems. However, DNNs tend to have a large number of learning parameters, leading to large memory and computation requirements. In this paper, we propose a model compression framework for efficient training and inference of deep neural networks on embedded systems. Our framework provides data structures and kernels for OpenCL-based parallel forward and backward computation in a compressed form. In particular, our method learns sparse representations of parameters using ℓ1-based sparse coding while training, storing them in compressed sparse matrices. Unlike previous works, our method does not require a pre-trained model as an input and can therefore be more versatile for different application environments. Even though the use of ℓ1-based sparse coding for model compression is not new, we show that it can be far more effective than previously reported when we use proximal point algorithms and the technique of debiasing. Our experiments show that our method can produce minimal learning models suitable for small embedded devices.
Keywords: compressed learning; ℓ1 regularization; proximal point algorithm; debiasing; embedded systems; OpenCL
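To make the ideas in the abstract concrete, the following minimal NumPy/SciPy sketch (our illustration, not code from the paper) shows how an ℓ1 proximal step, i.e., soft-thresholding, drives weights to exact zeros during training, how the surviving nonzeros fit a compressed sparse row (CSR) matrix, and how debiasing can re-fit the remaining weights on the learned support. The layer shape, learning rate, and regularization strength are placeholder assumptions.

import numpy as np
from scipy.sparse import csr_matrix

def soft_threshold(w, tau):
    # Proximal operator of tau * ||w||_1: shrinks every weight toward zero
    # and sets entries with magnitude below tau exactly to zero.
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 512))     # assumed layer shape
grad = rng.normal(scale=0.01, size=W.shape)    # stand-in gradient from backprop

lr, lam = 0.1, 0.01   # assumed learning rate and l1 strength

# One proximal (ISTA-style) update: plain gradient step followed by the
# l1 proximal operator, which produces exact zeros in the weight matrix.
W = soft_threshold(W - lr * grad, lr * lam)

# Keep the sparse result in compressed sparse row (CSR) form, the kind of
# compressed matrix a sparse forward/backward kernel would consume.
W_csr = csr_matrix(W)
print(f"nonzeros: {W_csr.nnz}/{W.size} ({100.0 * W_csr.nnz / W.size:.1f}%)")

# Debiasing (sketch): freeze the learned support and take penalty-free
# gradient steps on the surviving weights only, removing shrinkage bias.
support = W != 0.0
W[support] -= lr * grad[support]

The key point is that the proximal operator, unlike plain weight decay, yields exact zeros, which is what makes compressed sparse storage and sparse OpenCL kernels worthwhile on memory-limited embedded devices.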
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

MDPI and ACS Style

Lee, S.; Lee, J. Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems. Appl. Sci. 2019, 9, 1669.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
