Article

Zero-Keep Filter Pruning for Energy/Power Efficient Deep Neural Networks †

Yunhee Woo, Dongyoung Kim, Jaemin Jeong, Young-Woong Ko and Jeong-Gun Lee

Department of Computer Engineering, Hallym University, Chuncheon 24252, Korea

* Authors to whom correspondence should be addressed.
† This paper is an extended version of our paper published in the Proceedings of the 11th International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Korea, 21–23 October 2020.
Electronics 2021, 10(11), 1238; https://doi.org/10.3390/electronics10111238
Submission received: 30 April 2021 / Revised: 18 May 2021 / Accepted: 19 May 2021 / Published: 22 May 2021
(This article belongs to the Special Issue Advanced Communication Techniques for 5G and Internet of Things)

Abstract

Recent deep learning models achieve high accuracy and fast inference, but they require high-performance computing resources because they contain a large number of parameters. Not all systems, however, have such hardware: a model often needs to run on edge devices such as IoT devices or smartphones, where computing resources are limited and the amount of computation must be reduced before the model can be deployed. Pruning is a well-known approach for deriving lightweight models by eliminating weights, channels, or filters. In this work, we propose “zero-keep filter pruning” for energy-efficient deep neural networks. The proposed method maximizes the number of zero elements in filters by replacing small values with zero and pruning the filters that have the fewest zeros; in the conventional approach, the filters with the most zeros are pruned instead. As a result, zero-keep filter pruning leaves a model whose remaining filters contain many zeros. We compared the proposed method with random filter pruning and show that it performs better, with far fewer non-zero elements and only a marginal drop in accuracy. Finally, we discuss a possible multiplier architecture, a zero-skip multiplier circuit, which skips multiplications by zero to accelerate inference and reduce energy consumption.
Keywords: deep learning; convolutional neural networks; filter pruning; image classification; energy efficiency
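The selection rule described in the abstract can be summarized in a few lines of code. Below is a minimal PyTorch sketch of the idea for a single Conv2d layer; the function name, default threshold, and pruning ratio are illustrative choices of ours, not values from the paper.

```python
import torch
import torch.nn as nn

def zero_keep_prune(conv: nn.Conv2d, threshold: float = 1e-2,
                    prune_ratio: float = 0.5) -> nn.Conv2d:
    """Zero-keep filter pruning (illustrative sketch).

    Step 1: replace small-magnitude weights with exact zeros.
    Step 2: rank filters by zero count and drop the filters with the
            FEWEST zeros -- the opposite of the conventional rule, so
            the surviving filters are the zero-rich ones.
    """
    w = conv.weight.data.clone()        # shape: (out_ch, in_ch, kH, kW)

    # Step 1: zero out near-zero weights.
    w[w.abs() < threshold] = 0.0

    # Step 2: count zeros per output filter.
    zeros_per_filter = (w == 0).flatten(start_dim=1).sum(dim=1)

    # Keep the filters with the MOST zeros.
    n_keep = max(1, int(w.size(0) * (1.0 - prune_ratio)))
    keep = torch.argsort(zeros_per_filter, descending=True)[:n_keep]
    keep, _ = torch.sort(keep)          # preserve original filter order

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = w[keep]
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned
```

Calling, say, `zero_keep_prune(model.features[0])` would return a thinner layer whose remaining weights are zero-rich; in a full model, downstream layers would also need their input channels adjusted, which this sketch omits. The surviving exact zeros are what a zero-skip multiplier can later exploit by bypassing multiplications whose weight operand is zero.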
