Open Access Article

Layer-Level Knowledge Distillation for Deep Neural Network Learning

Advanced Institute of Manufacturing with High-tech Innovations, Center for Innovative Research on Aging Society (CIRAS) and Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 62102, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(10), 1966; https://doi.org/10.3390/app9101966
Received: 1 April 2019 / Revised: 4 May 2019 / Accepted: 9 May 2019 / Published: 14 May 2019
(This article belongs to the Special Issue Advances in Deep Learning)
Abstract

Motivated by recently developed distillation approaches that aim to obtain small, fast-to-execute models, this paper proposes a novel Layer Selectivity Learning (LSL) framework for learning deep models. We first use an asymmetric dual-model learning framework, called Auxiliary Structure Learning (ASL), to train a small model with the help of a larger, well-trained model. Then, an intermediate-layer selection scheme, called the Layer Selectivity Procedure (LSP), is exploited to determine the corresponding intermediate layers of the source and target models. The LSP is achieved by two novel matrices, the layered inter-class Gram matrix and the inter-layered Gram matrix, which evaluate the diversity and discriminability of feature maps. Experimental results on three publicly available datasets demonstrate the superior performance of models trained with the LSL framework.
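To make the layer-level idea concrete, the sketch below shows one plausible realization in PyTorch: a batch Gram-matrix similarity score for pairing an intermediate student layer with an intermediate teacher layer (a hypothetical stand-in for the LSP; the paper's layered inter-class and inter-layered Gram matrices are defined precisely only in the full text), plus a FitNets-style hint loss that trains the selected student layer to mimic the teacher's. All names, shapes, and the selection criterion are illustrative assumptions, not the authors' code.

```python
# Minimal, hypothetical sketch of layer-level distillation (assumptions noted).
import torch
import torch.nn as nn
import torch.nn.functional as F

def gram_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Pairwise-similarity (Gram) matrix over a batch of feature maps."""
    flat = F.normalize(feats.flatten(start_dim=1), dim=1)  # (N, C*H*W), unit rows
    return flat @ flat.t()                                 # (N, N) similarities

def layer_similarity(f_s: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
    """Score how alike two layers are via their batch Gram matrices; higher is
    more similar. A candidate criterion for pairing student and teacher layers
    (an assumption, not the exact Layer Selectivity Procedure)."""
    return -F.mse_loss(gram_matrix(f_s), gram_matrix(f_t))

class HintLoss(nn.Module):
    """Match a selected student layer to a selected teacher layer.
    A 1x1 conv adapter aligns channel counts before the MSE comparison
    (the general FitNets-style hint recipe)."""
    def __init__(self, c_student: int, c_teacher: int):
        super().__init__()
        self.adapter = nn.Conv2d(c_student, c_teacher, kernel_size=1)

    def forward(self, f_student: torch.Tensor, f_teacher: torch.Tensor) -> torch.Tensor:
        # Teacher activations are detached: only the student is updated.
        return F.mse_loss(self.adapter(f_student), f_teacher.detach())

# Usage sketch with toy shapes: in practice, intermediate activations would be
# captured with forward hooks and the hint term added to the task loss.
hint = HintLoss(c_student=64, c_teacher=256)
f_s = torch.randn(8, 64, 16, 16)    # student feature map (illustrative)
f_t = torch.randn(8, 256, 16, 16)   # teacher feature map (illustrative)
aux_loss = hint(f_s, f_t)
```

In this reading, the selection score would be evaluated over candidate layer pairs before training, and the hint loss then ties the chosen pair together during the student's training; how the paper's two Gram matrices refine that choice is detailed in the full text.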
Keywords: deep learning; knowledge distillation
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Li, H.-T.; Lin, S.-C.; Chen, C.-Y.; Chiang, C.-K. Layer-Level Knowledge Distillation for Deep Neural Network Learning. Appl. Sci. 2019, 9, 1966.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
