Title: Training auto-encoders effectively via eliminating task-irrelevant input variables

Authors: Hui Shen; Dehua Li; Hong Wu; Zhaoxiang Zang

Addresses: School of Automation, Huazhong University of Science and Technology, 1037 Luoyu Road, Wuhan, Hubei, 430074, China; Hubei Key Laboratory of Intelligent Vision Based Monitoring for Hydroelectric Engineering, China Three Gorges University, Yichang, Hubei, 443002, China

Abstract: Auto-encoders are often used as building blocks of deep network classifiers to learn feature extractors, but task-irrelevant information in the input data may lead to poor extractors and degrade the generalisation performance of the network. In this paper, we show that dropping task-irrelevant input variables can markedly improve the performance of auto-encoders. Specifically, an importance-based variable selection method is proposed to find the task-irrelevant input variables and drop them. It first estimates the importance of each variable, then drops the variables whose importance falls below a threshold. To obtain better performance, the method can be applied to each layer of stacked auto-encoders. Experimental results show that, when combined with our method, stacked denoising auto-encoders achieve significantly improved performance on three challenging datasets.
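The abstract describes the selection step but not the exact importance measure. The sketch below illustrates the general idea under a stated assumption: each input variable is scored by its absolute Pearson correlation with the class label (a stand-in measure, not necessarily the one used in the paper), and variables scoring below a threshold are dropped before auto-encoder training. The function name `select_variables` and the threshold value are illustrative.

```python
import numpy as np

def select_variables(X, y, threshold=0.05):
    """Drop low-importance input variables before auto-encoder training.

    Assumption: importance is measured by |Pearson correlation| between
    each variable and the label; the paper's actual measure may differ.
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = X.std(axis=0) * y.std() * len(y)
    denom[denom == 0] = np.inf          # constant variables get importance 0
    importance = np.abs(Xc.T @ yc) / denom
    keep = importance >= threshold      # boolean mask of retained variables
    return X[:, keep], keep

# Toy data: the first two variables carry label information, the third is noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)
X = np.column_stack([y + 0.1 * rng.standard_normal(200),
                     -y + 0.1 * rng.standard_normal(200),
                     rng.standard_normal(200)])
X_sel, keep = select_variables(X, y, threshold=0.3)
```

In a stacked setting, the same filtering could be repeated on each layer's hidden representation before training the next auto-encoder, matching the per-layer application the abstract mentions.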

Keywords: feature learning; deep learning; neural network; auto-encoder; variable selection; stacked auto-encoders; feature selection; unsupervised training.

DOI: 10.1504/IJCSE.2019.099071

International Journal of Computational Science and Engineering, 2019 Vol.18 No.4, pp.332 - 339

Received: 17 May 2016
Accepted: 12 Sep 2016

Published online: 08 Apr 2019
