Learning the number of filters in convolutional neural networks
Online publication date: Thu, 08-Apr-2021
by Jue Li; Feng Cao; Honghong Cheng; Yuhua Qian
International Journal of Bio-Inspired Computation (IJBIC), Vol. 17, No. 2, 2021
Abstract: Convolutional networks have brought the performance of many computer vision tasks to unprecedented heights, but at the cost of an enormous computational load. To reduce this cost, many model compression methods have been proposed that eliminate insignificant model structures. For example, convolution filters with small absolute weights are pruned, and the network is then fine-tuned to restore reasonable accuracy. However, most of these works rely on pre-trained models and do not specifically analyse how filters change during training, which incurs sizeable model retraining costs. Unlike previous works, we interpret the change in filter behaviour during training from the angle of filter association, and propose a novel filter pruning method utilising this change rule, which removes filters with similar functions late in training. With this strategy, we not only achieve model compression without fine-tuning, but also gain a novel perspective for interpreting the changing behaviour of filters during training. Moreover, our approach proves effective for many advanced CNN architectures.
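To make the idea of removing functionally similar filters concrete, here is a minimal sketch of similarity-based filter pruning. This is a generic illustration, not the authors' algorithm: the function name, the cosine-similarity criterion, and the `threshold` parameter are all assumptions for the example.

```python
import numpy as np

def prune_similar_filters(weights, threshold=0.95):
    """Drop filters whose flattened weights are nearly parallel to an
    earlier-kept filter (|cosine similarity| above `threshold`).

    weights: array of shape (out_channels, in_channels, kH, kW).
    Returns the pruned weight tensor and the indices of kept filters.
    """
    flat = weights.reshape(weights.shape[0], -1)
    unit = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    kept = []
    for i in range(weights.shape[0]):
        # Keep filter i only if it is not redundant with any already-kept filter.
        if all(abs(unit[i] @ unit[j]) < threshold for j in kept):
            kept.append(i)
    return weights[kept], kept

# Toy layer: 5 filters, 3 input channels, 3x3 kernels.
rng = np.random.default_rng(1)
w = rng.normal(size=(5, 3, 3, 3))
w[3] = 1.001 * w[0]  # make filter 3 a near-duplicate of filter 0
pruned, kept = prune_similar_filters(w)
print(kept)  # filter 3 is dropped as redundant with filter 0
```

Because the pruning criterion depends only on the weights themselves, a scheme like this can in principle be applied during training, without a separate fine-tuning pass, which is the spirit of the approach described in the abstract.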