Generalization of Back propagation to Recurrent and Higher Order Neural Networks

Part of Neural Information Processing Systems 0 (NIPS 1987)

Authors

Fernando Pineda

Abstract

A general method for deriving backpropagation algorithms for networks with recurrent and higher order connections is introduced. The propagation of activation in these networks is determined by dissipative differential equations. The error signal is backpropagated by integrating an associated differential equation. The method is introduced by applying it to the recurrent generalization of the feedforward backpropagation network. The method is extended to the case of higher order networks and to a constrained dynamical system for training a content addressable memory. The essential feature of the adaptive algorithms is that the adaptive equation has a simple outer-product form.
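For concreteness, here is a minimal sketch of this style of algorithm in NumPy, assuming sigmoidal units, forward dynamics of the form dx/dt = -x + g(Wx + I), and simple Euler integration of both the forward and the associated error equations. The function names, step sizes, and iteration counts are illustrative choices, not taken from the paper:

```python
import numpy as np

def g(u):
    """Sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-u))

def rbp_step(W, I, target, out_idx, eta=0.1, dt=0.05, steps=4000):
    """One recurrent-backpropagation weight update via Euler relaxation."""
    n = W.shape[0]

    # Forward pass: relax the dissipative dynamics
    # dx/dt = -x + g(W x + I) to a fixed point x*.
    x = np.zeros(n)
    for _ in range(steps):
        x += dt * (-x + g(W @ x + I))
    u = W @ x + I                     # pre-activations at the fixed point
    gp = g(u) * (1.0 - g(u))          # g'(u)

    # Error is injected only at the output units.
    e = np.zeros(n)
    e[out_idx] = target - x[out_idx]

    # Backward pass: relax the associated (adjoint) differential equation
    # dz/dt = -z + W^T (g'(u) * z) + e to its fixed point z*.
    z = np.zeros(n)
    for _ in range(steps):
        z += dt * (-z + W.T @ (gp * z) + e)

    # Adaptive equation: a simple outer product of the relaxed error
    # variables and the fixed-point activities.
    W += eta * np.outer(gp * z, x)
    return W, x
```

Because both passes are fixed-point relaxations of dissipative dynamics rather than an unrolled computation, the weight change depends only on the converged activities and error variables, which is the property that makes a continuous formulation of this kind attractive for analog hardware.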

Preliminary experiments suggest that learning can occur very rapidly in networks with recurrent connections. The continuous formalism makes the new approach more suitable for implementation in VLSI.