Factors important for good visualisation of time series
by Tzai-Der Wang; Xiaochuan Wu; Colin Fyfe
International Journal of Computational Science and Engineering (IJCSE), Vol. 12, No. 1, 2016

Abstract: We create an artificial neural network that is a version of echo state networks (ESNs). ESNs are recurrent neural networks but, unlike most recurrent networks, they come with an efficient training method. We have previously (Wang et al., 2011) adapted this method using ideas from Neuroscale (Tipping, 1996) so that the network can project multivariate time series data onto a low-dimensional manifold, allowing the structure in the time series to be identified by eye. In this paper, we review work on a minimal-architecture echo state machine (Wang et al., 2011) in the context of visualisation and show that it does not perform as well as the original. We then discuss three factors which may affect the capability of the network - its structure, size and sparsity - and show that, of these three, by far the most important is the size of the reservoir of neurons.
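To make the three factors named in the abstract concrete, here is a minimal sketch of a standard ESN in Python. This is not the authors' network: the reservoir size, sparsity and spectral radius values are illustrative assumptions, and the Neuroscale-style visualisation readout of the paper is replaced by an ordinary least-squares prediction readout, which is the standard efficient ESN training the abstract alludes to.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_units=200, sparsity=0.1, spectral_radius=0.9):
    # Sparse random recurrent weights, rescaled to a target spectral radius.
    # n_units and sparsity are the "size" and "sparsity" factors of the paper.
    W = rng.standard_normal((n_units, n_units))
    W *= rng.random((n_units, n_units)) < sparsity   # keep ~10% of connections
    radius = max(abs(np.linalg.eigvals(W)))
    return W * (spectral_radius / radius)

def run_reservoir(W, W_in, inputs):
    # Drive the fixed, untrained reservoir with the time series;
    # only the linear readout is ever trained.
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

n_units = 200
series = np.sin(np.linspace(0, 20, 500))   # toy univariate time series
W = make_reservoir(n_units)
W_in = rng.standard_normal((n_units, 1))
states = run_reservoir(W, W_in, series)

# Linear readout fitted by least squares; one-step-ahead prediction
# stands in here for the paper's visualisation mapping.
targets = series[1:]
readout, *_ = np.linalg.lstsq(states[:-1], targets, rcond=None)
pred = states[:-1] @ readout
```

Varying `n_units` while holding `sparsity` fixed (and vice versa) in a sketch like this is one way to probe which factor dominates, which is the kind of comparison the paper reports.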

Online publication date: Sat, 06-Feb-2016
