Title: Efficient deep convolutional model compression with an active stepwise pruning approach

Authors: Shengsheng Wang; Chunshang Xing; Dong Liu

Addresses: College of Computer Science and Technology, Jilin University, Changchun, China; Software Institute, Jilin University, Changchun, China; School of Software and Communication Engineering, Xiangnan University, Chenzhou, China

Abstract: Deep models are structurally large and complex, which makes them hard to deploy on embedded hardware with restricted memory and computing power. Although existing compression methods prune deep models effectively, they suffer from several issues, such as the many iterations needed in the fine-tuning phase, the difficulty of controlling pruning granularity, and the numerous hyperparameters that must be set. In this paper, we propose an active stepwise pruning method based on a logarithmic function that requires setting only three hyperparameters and a few epochs. We also propose a recovery strategy that repairs incorrect pruning, thus preserving the prediction accuracy of the model. Pruning and repairing alternate in a cyclic process while the weights in each layer are updated. Our method prunes the parameters of MobileNet, AlexNet, VGG-16 and ZFNet by factors of 5.6×, 11.7×, 16.6× and 15× respectively without any accuracy loss, surpassing existing methods.
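The abstract describes a cycle of logarithmic-schedule pruning followed by a repair step. The sketch below is a minimal illustration of that general idea, not the authors' exact algorithm: the schedule shape, the magnitude-based criterion, and all names (log_prune_ratio, prune_and_repair, the three hyperparameters chosen here) are assumptions made for demonstration.

```python
import numpy as np

def log_prune_ratio(step, total_steps, final_ratio):
    """Hypothetical logarithmic schedule: the pruning ratio rises quickly
    in early cycles and levels off near final_ratio."""
    return final_ratio * np.log1p(step) / np.log1p(total_steps)

def prune_and_repair(weights, mask, ratio):
    """One prune/repair cycle on a single layer's weight matrix.

    Prune: remove the smallest-magnitude weights so that `ratio` of all
    weights are zeroed. Repair: restore previously pruned weights whose
    retrained magnitude has grown back above the current threshold."""
    flat = np.abs(weights).ravel()
    k = int(ratio * flat.size)
    threshold = np.partition(flat, k)[k] if k > 0 else 0.0
    pruned = mask & (np.abs(weights) < threshold)          # newly pruned
    repaired = (~mask) & (np.abs(weights) >= threshold)    # undo wrong prunes
    return (mask & ~pruned) | repaired

# Example with three illustrative hyperparameters: final pruning ratio,
# number of prune/repair cycles, and layer shape.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))
mask = np.ones_like(W, dtype=bool)
for t in range(1, 11):
    ratio = log_prune_ratio(t, total_steps=10, final_ratio=0.9)
    mask = prune_and_repair(W, mask, ratio)
    W *= mask
    # Stand-in for retraining between cycles; real weight updates would
    # come from gradient descent on the task loss.
    W += 0.01 * rng.normal(size=W.shape)
    print(f"cycle {t}: kept {mask.mean():.2%} of weights")
```

In this toy loop the added noise plays the role of the fine-tuning updates that, in the paper's setting, let wrongly pruned weights regain magnitude and be recovered by the repair step.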

Keywords: deep convolutional model; model compression; active stepwise pruning; parameter repairing; pruning intensity; logarithmic function.

DOI: 10.1504/IJCSE.2020.109401

International Journal of Computational Science and Engineering, 2020 Vol.22 No.4, pp.420 - 430

Received: 23 Jan 2019
Accepted: 11 Sep 2019

Published online: 08 Sep 2020
