Title: Mathematical formulation for the second derivative of backpropagation error with non-linear output function in feedforward neural networks

Authors: Manu Pratap Singh, Sanjeev Kumar, Naveen Kumar Sharma

Addresses: Institute of Computer and Information Science, Dr. B.R. Ambedkar University, Khandari, Agra, Uttar Pradesh, India; Department of Mathematics, Dr. B.R. Ambedkar University, Khandari, Agra, Uttar Pradesh, India; CET-IILM-AHL, Knowledge Park II, Greater Noida, Gautam Buddha Nagar, Uttar Pradesh, India

Abstract: The feedforward neural network architecture uses backpropagation learning to determine the optimal weights between its interconnected layers so that the network achieves good approximation and generalisation. The optimal weight vector can be determined only when the total minimum error, or global error (the mean of the minimum local errors), over all patterns in the training set is minimised. In this paper, we present a generalised mathematical formulation for the second derivative of the error function of a feedforward neural network, used to obtain the optimal weight vector for a given training set. The new global minimum error point can be evaluated from the current global minimum error and the current minimised local error. The proposed method indicates that weights determined by minimising the mean error are closer to the optimal solution than those obtained from conventional gradient descent approaches.
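The abstract's central idea, using the second derivative of the error function rather than the first derivative alone, can be illustrated on a toy case. The sketch below computes the first and second derivatives of the squared error of a single sigmoid unit with respect to its one weight, and takes a Newton-style step from them. This is a minimal illustration under assumed definitions (one weight, squared error, sigmoid activation), not the authors' actual formulation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def error(w, x, t):
    # Squared error for a single sigmoid unit: E = 0.5 * (f(w*x) - t)^2
    return 0.5 * (sigmoid(w * x) - t) ** 2

def first_derivative(w, x, t):
    # dE/dw = (f - t) * f' * x, with f' = f * (1 - f) for the sigmoid
    f = sigmoid(w * x)
    return (f - t) * f * (1 - f) * x

def second_derivative(w, x, t):
    # d2E/dw2 = [ (f')^2 + (f - t) * f'' ] * x^2,
    # with f'' = f * (1 - f) * (1 - 2f)
    f = sigmoid(w * x)
    fp = f * (1 - f)
    fpp = fp * (1 - 2 * f)
    return (fp ** 2 + (f - t) * fpp) * x ** 2

def newton_step(w, x, t):
    # Newton-style update w <- w - E'(w) / E''(w); an illustrative stand-in
    # for a second-derivative method (the paper's update rule may differ).
    # In practice the second derivative can be zero or negative, so real
    # implementations guard or damp this step.
    return w - first_derivative(w, x, t) / second_derivative(w, x, t)
```

A quick check against central finite differences confirms the analytic derivatives; the curvature term `E''` is what a second-derivative method exploits beyond plain gradient descent.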

Keywords: feedforward neural networks; delta learning rule; descent gradient; conjugate descent; optimisation; backpropagation learning; second derivative; error function; optimal weight vector.

DOI: 10.1504/IJIDS.2010.037231

International Journal of Information and Decision Sciences, 2010 Vol.2 No.4, pp.352 - 374

Published online: 30 Nov 2010
