Article

Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks

School of Electronics and Information, Northwestern Polytechnical University, 127 West Youyi Road, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(3), 880; https://doi.org/10.3390/s21030880
Received: 24 December 2020 / Revised: 25 January 2021 / Accepted: 25 January 2021 / Published: 28 January 2021
(This article belongs to the Section Intelligent Sensors)
Deep neural networks have evolved significantly over the past decades and can now process sensor data with high accuracy. Nonetheless, most deep models follow the ruling maxim of deep learning, that bigger is better, and therefore have very complex structures. As models grow more complex, their computational cost and resource consumption increase significantly, making them difficult to deploy on resource-limited platforms such as sensor platforms. In this paper, we observe that different layers often have different pruning requirements, and we propose a differential-evolution-based layer-wise weight pruning method. First, the pruning sensitivity of each layer is analyzed; the network is then compressed by iterating the weight pruning process. Unlike methods that set pruning ratios greedily or by statistical analysis, we formulate an optimization problem to find the optimal pruning sensitivity for each layer. Differential evolution, an effective population-based optimization method, is used to solve this problem. Furthermore, during the fine-tuning phase we adopt a strategy that recovers some of the removed connections to increase the capacity of the pruned model. The effectiveness of our method is demonstrated in experimental studies: it compresses the number of weight parameters in LeNet-300-100, LeNet-5, AlexNet, and VGG16 by 24×, 14×, 29×, and 12×, respectively.
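To make the idea concrete, the following is a minimal NumPy sketch of differential-evolution-driven layer-wise pruning. It is not the authors' implementation: the toy two-layer "network", the surrogate fitness (a sparsity reward minus a weight-distortion penalty standing in for accuracy loss), and all DE hyperparameters (population size, F, CR, generation count) are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a trained network: per-layer weight matrices.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(300, 100)), rng.normal(size=(100, 10))]

def prune(weights, ratio):
    """Zero out the smallest-magnitude fraction `ratio` of a layer's weights."""
    flat = np.abs(weights).ravel()
    k = int(ratio * flat.size)
    if k == 0:
        return weights.copy()
    thresh = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) > thresh, weights, 0.0)

def fitness(ratios):
    """Higher is better: reward sparsity, penalize weight distortion.
    NOTE: the distortion term is an assumed surrogate for accuracy loss;
    the paper evaluates pruned networks directly."""
    pruned = [prune(w, r) for w, r in zip(layers, ratios)]
    sparsity = sum((p == 0).sum() for p in pruned) / sum(w.size for w in layers)
    distortion = sum(np.linalg.norm(w - p) / np.linalg.norm(w)
                     for w, p in zip(layers, pruned))
    return sparsity - 2.0 * distortion  # trade-off weight is a free hyperparameter

# Standard DE/rand/1/bin over per-layer pruning ratios in [0, 0.95].
NP, F, CR, GENS = 20, 0.5, 0.9, 100
dim = len(layers)
pop = rng.uniform(0.0, 0.95, size=(NP, dim))
scores = np.array([fitness(x) for x in pop])
for _ in range(GENS):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), 0.0, 0.95)   # rand/1 mutation
        cross = rng.random(dim) < CR                    # binomial crossover
        cross[rng.integers(dim)] = True                 # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        s = fitness(trial)
        if s > scores[i]:                               # greedy selection
            pop[i], scores[i] = trial, s

best = pop[np.argmax(scores)]
print("per-layer pruning ratios:", np.round(best, 3))
```

In the paper's pipeline the fitness would instead reflect validation accuracy of the pruned network, and pruning would alternate with fine-tuning (including the connection-recovery step); the DE loop itself (rand/1 mutation, binomial crossover, greedy selection) is the standard variant.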
Keywords: neural network compression; weight pruning; differential evolution; sparse network
MDPI and ACS Style

Wu, T.; Li, X.; Zhou, D.; Li, N.; Shi, J. Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks. Sensors 2021, 21, 880. https://doi.org/10.3390/s21030880

