Author(s):

  • Ranganathan, Varun
  • Natarajan, S.

Abstract:

The backpropagation algorithm, originally introduced in the 1970s, is the workhorse of learning in neural networks. Backpropagation relies on gradient descent, a first-order iterative optimization algorithm for finding the minimum of a function: to find a local minimum, one takes steps proportional to the negative of the gradient (or of an approximate gradient) of the function at the current point. In this paper, we develop an alternative to backpropagation that does not use the gradient descent algorithm; instead, we devise a new algorithm that finds the error in the weights and biases of an artificial neuron using the Moore-Penrose pseudoinverse. Numerical studies and experiments performed on various datasets verify that this alternative algorithm works.
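The contrast between the two approaches can be illustrated on the simplest possible case, a single linear layer. The sketch below is not the authors' algorithm (the paper applies the pseudoinverse per neuron inside a multi-layer network); it only shows, under that simplifying assumption, how an iterative gradient-descent loop and a one-shot Moore-Penrose solve reach the same least-squares weights. All data and weight values here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))    # 100 samples, 3 features
w_true = np.array([2.0, -1.0, 0.5])  # hypothetical target weights
y = X @ w_true                       # noiseless linear targets

# Gradient descent: repeatedly step opposite the gradient of the
# mean squared error  L(w) = ||X w - y||^2 / (2n).
w_gd = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = X.T @ (X @ w_gd - y) / len(y)
    w_gd -= lr * grad

# Moore-Penrose pseudoinverse: the least-squares solution in a single
# linear solve, w = pinv(X) @ y, with no learning rate and no iteration.
w_pinv = np.linalg.pinv(X) @ y
```

Both `w_gd` and `w_pinv` recover `w_true` here; the pseudoinverse does so in closed form, which is the property the paper exploits in place of iterative descent.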

Document:

https://arxiv.org/abs/1802.00027

References:

[1] Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (8 October 1986). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. doi:10.1038/323533a0

[2] arXiv:1710.05941 – Prajit Ramachandran, Barret Zoph, Quoc V. Le – Searching for Activation Functions

[3] Rosenblatt, Frank (1958), The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain, Cornell Aeronautical Laboratory, Psychological Review, v65, No. 6, pp. 386–408. doi:10.1037/h0042519

[4] Snyman, Jan (3 March 2005). Practical Mathematical Optimization: An Introduction to Basic Optimization Theory and Classical and New Gradient-Based Algorithms. Springer Science & Business Media. ISBN 978-0-387-24348-1

[5] arXiv:1606.04474 – Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas – Learning to learn by gradient descent by gradient descent

[6] arXiv:1602.05980v2 – Bing Xu, Ruitong Huang, Mu Li – Revise Saturated Activation Functions

[7] Guang-Bin Huang, Qin-Yu Zhu, Chee-Kheong Siew – Extreme learning machine: a new learning scheme of feedforward neural networks. ISBN 0-7803-8359-1

[8] Weisstein, Eric W. "Moore-Penrose Matrix Inverse." From MathWorld – A Wolfram Web Resource. http://mathworld.wolfram.com/Moore-PenroseMatrixInverse.html

[9] https://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+(original)