Title: A generative model approach for visualising convolutional neural networks

Authors: Masayuki Kobayashi; Masanori Suganuma; Tomoharu Nagao

Addresses: Graduate School of Environment and Information Sciences, Yokohama National University, Kanagawa, Japan; RIKEN AIP Centre, Tokyo, Japan; Faculty of Environment and Information Sciences, Yokohama National University, Kanagawa, Japan

Abstract: Convolutional neural networks (CNNs) continue to achieve outstanding performance on a variety of computer vision tasks, and they have grown significantly deeper while delivering substantial improvements across tasks. Despite these successes, CNN models are often regarded as black-box predictors, and their lack of interpretability is a major problem. In this paper, we introduce a new visualisation framework based on generative adversarial networks (GANs) to provide insight into how CNNs work. Following standard GAN training, we train a generator and a discriminator to produce natural images that activate a particular unit in a pre-trained CNN. We apply our method to AlexNet and CaffeNet and visualise their neuron activations. Our method is very simple, yet produces comparatively recognisable visualisations. We also explore using our visualisations as indications of model trust and verify their potential.
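To make the idea concrete, below is a minimal sketch of one way to combine standard GAN training with activation maximisation of a unit in a frozen, pre-trained CNN, as the abstract describes. It assumes PyTorch and torchvision; the generator and discriminator architectures, the target unit index, the stand-in "real" batch, and the loss weighting LAMBDA are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained CNN to be visualised; its weights stay frozen throughout.
cnn = models.alexnet(weights="IMAGENET1K_V1").to(device).eval()
for p in cnn.parameters():
    p.requires_grad_(False)
TARGET_UNIT = 0  # hypothetical index of the output unit to maximise

# Small DCGAN-style generator: 100-d noise -> 64x64 RGB image in [-1, 1].
G = nn.Sequential(
    nn.ConvTranspose2d(100, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),
    nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
).to(device)

# Small discriminator: 64x64 RGB image -> single real/fake logit.
D = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
    nn.Conv2d(256, 1, 8, 1, 0),  # -> (N, 1, 1, 1)
).to(device)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()
LAMBDA = 1.0  # assumed weight of the activation-maximisation term

mean = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

def target_activation(img):
    """Mean activation of the chosen unit for generated images in [-1, 1]."""
    x = F.interpolate((img + 1) / 2, size=224, mode="bilinear", align_corners=False)
    x = (x - mean) / std  # ImageNet normalisation expected by AlexNet
    return cnn(x)[:, TARGET_UNIT].mean()

for step in range(1000):
    # Stand-in "real" batch; in practice this comes from a natural-image dataset.
    real = torch.rand(16, 3, 64, 64, device=device) * 2 - 1
    z = torch.randn(16, 100, 1, 1, device=device)

    # Discriminator update: distinguish real images from generated ones.
    fake = G(z).detach()
    d_loss = bce(D(real).view(-1), torch.ones(16, device=device)) + \
             bce(D(fake).view(-1), torch.zeros(16, device=device))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator AND activate the target unit.
    fake = G(z)
    g_loss = bce(D(fake).view(-1), torch.ones(16, device=device)) \
             - LAMBDA * target_activation(fake)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The key design point, under these assumptions, is that the generator receives two gradient signals: an adversarial term that keeps its outputs natural-looking, and an activation term from the frozen CNN that steers the images toward whatever excites the chosen unit.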

Keywords: convolutional neural network; CNN; generative adversarial networks; GAN; visualisation; activation maximisation; interpretability.

DOI: 10.1504/IJCISTUDIES.2018.096186

International Journal of Computational Intelligence Studies, 2018 Vol.7 No.3/4, pp.214 - 230

Received: 07 Feb 2018
Accepted: 19 Apr 2018

Published online: 15 Nov 2018
