A Parallel Gradient Descent Method for Learning in Analog VLSI Neural Networks

Part of Advances in Neural Information Processing Systems 5 (NIPS 1992)


Authors

J. Alspector, R. Meir, B. Yuhas, A. Jayakumar, D. Lippe

Abstract

Typical methods for gradient descent in neural network learning involve calculating derivatives based on a detailed knowledge of the network model. This requires extensive, time-consuming calculations for each pattern presentation, and the high precision it demands makes it difficult to implement in VLSI. We present here a perturbation technique that measures, rather than calculates, the gradient. Since the technique uses the actual network as a measuring device, errors in modeling neuron activation and synaptic weights do not cause errors in gradient descent. The method is parallel in nature and easy to implement in VLSI. We describe the theory of such an algorithm, an analysis of its domain of applicability, some simulations using it, and an outline of a hardware implementation.
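To illustrate the idea of measuring rather than calculating the gradient, here is a minimal sketch of a parallel weight-perturbation update in the style the abstract describes. This is not the authors' circuit-level algorithm: the loss function, perturbation magnitude `sigma`, learning rate `eta`, and the choice of random ±1 perturbations are all illustrative assumptions. The key point it demonstrates is that each update needs only two global error measurements of the (possibly imperfectly modeled) network, with every weight perturbed simultaneously.

```python
import numpy as np

def loss(w):
    # Stand-in for the *measured* network error; in hardware this would
    # be read from the actual circuit, so model errors don't matter.
    return np.sum((w - 1.0) ** 2)

rng = np.random.default_rng(0)
w = np.zeros(4)      # weights (illustrative size)
sigma = 1e-3         # perturbation magnitude (assumed)
eta = 0.05           # learning rate (assumed)

for step in range(200):
    # Perturb all weights in parallel with random signs.
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    e_plus = loss(w + sigma * delta)    # measured error, perturbed up
    e_minus = loss(w - sigma * delta)   # measured error, perturbed down
    # Every weight's gradient estimate reuses the same two measurements;
    # for +/-1 perturbations, dividing by delta equals multiplying by it.
    g_est = (e_plus - e_minus) / (2.0 * sigma) * delta
    w -= eta * g_est

print(w)  # converges toward the minimizer at w = 1
```

Averaged over the random perturbations, this estimate is unbiased for the true gradient, which is what makes such a measurement-based scheme a drop-in substitute for analytically computed derivatives when the network model is inexact.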