Title: Node estimate for sparse random vector functional-link networks

Authors: Simone Scardapane; Aurelio Uncini

Addresses: Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy (both authors)

Abstract: A random vector functional-link (RVFL) network is a neural network composed of a randomised hidden layer and an adaptable output layer. Training such a network reduces to a linear least-squares problem, which can be solved efficiently. Still, selecting a proper number of nodes in the hidden layer is a critical issue, since an improper choice can lead to either overfitting or underfitting for the problem at hand. Additionally, small-sized RVFL networks are favoured in situations where computational considerations are important. In the case of RVFL networks with a single output, unnecessary neurons can be removed adaptively with the use of sparse training algorithms such as the Lasso, which are suboptimal in the case of multiple outputs. In this paper, we extend some prior ideas in order to devise a group sparse training algorithm which avoids the shortcomings of previous approaches. We validate our proposal on a large set of experimental benchmarks, and we analyse several state-of-the-art optimisation techniques for solving the overall training problem. We show that the proposed approach can obtain an accuracy comparable to standard algorithms, while at the same time resulting in extremely sparse hidden layers.
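To illustrate the claim in the abstract that RVFL training reduces to a linear least-squares problem, the following is a minimal NumPy sketch, not the paper's exact algorithm: hidden weights are drawn at random and kept fixed, and the output layer, acting on both the raw inputs (direct functional links) and the random hidden features, is fit by ridge-regularised least squares. All function names, the choice of tanh activation, the uniform weight range, and the regularisation constant are illustrative assumptions.

```python
import numpy as np

def train_rvfl(X, y, n_hidden=100, reg=1e-6, seed=0):
    """Sketch of RVFL training: random fixed hidden layer,
    output weights solved by regularised linear least squares."""
    rng = np.random.default_rng(seed)
    # Hidden weights and biases are randomised once and never trained
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)          # random hidden-layer features
    D = np.hstack([X, H])           # direct input links + hidden features
    # Closed-form ridge solution for the adaptable output layer
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

# Toy regression problem
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]
W, b, beta = train_rvfl(X, y)
mse = np.mean((predict_rvfl(X, W, b, beta) - y) ** 2)
```

The paper's contribution replaces the plain ridge penalty above with a group-sparse penalty that zeroes entire hidden nodes across all outputs at once; the sketch only shows the baseline least-squares step that such a penalty would modify.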

Keywords: pruning; sparse learning; random weights; neural network.

DOI: 10.1504/IJMISSP.2016.085271

International Journal of Machine Intelligence and Sensory Signal Processing, 2016 Vol.1 No.4, pp.341 - 352

Received: 26 Jan 2017
Accepted: 27 Feb 2017

Published online: 19 Jul 2017
