Part of Advances in Neural Information Processing Systems 5 (NIPS 1992)
David B. Kirk, Douglas Kerns, Kurt Fleischer, Alan Barr
We describe an analog VLSI implementation of a multi-dimensional gradient estimation and descent technique for minimizing an on-chip scalar function f(). The implementation uses noise injection and multiplicative correlation to estimate derivatives, as in [Anderson, Kerns 92]. One intended application of this technique is setting circuit parameters on-chip automatically, rather than manually [Kirk 91]. Gradient descent optimization may be used to adjust synapse weights for a backpropagation or other on-chip learning implementation. The approach combines the features of continuous multi-dimensional gradient descent and the potential for an annealing style of optimization. We present data measured from our analog VLSI implementation.
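As a rough software analogue of the technique the abstract describes, the sketch below estimates a gradient by injecting small zero-mean noise into the parameters and correlating the resulting change in f with that noise, then uses the estimate for descent. This is a hypothetical illustration, not the authors' circuit: the function `estimate_gradient`, the quadratic test function, and all constants (noise amplitude `sigma`, sample count, step size) are assumptions chosen for clarity, relying on the identity E[(f(x+n) - f(x)) n] ≈ σ² ∇f(x) for small Gaussian noise n ~ N(0, σ²I).

```python
import numpy as np

def estimate_gradient(f, x, sigma=1e-3, samples=200, rng=None):
    """Estimate grad f(x) by noise injection and multiplicative correlation.

    For small zero-mean Gaussian noise n with variance sigma^2 per component,
    E[(f(x + n) - f(x)) * n] ~= sigma^2 * grad f(x), so averaging the product
    (delta f) * n over many noise draws and dividing by sigma^2 recovers the
    gradient without any analytic derivative of f.
    """
    rng = np.random.default_rng(rng)
    fx = f(x)
    g = np.zeros_like(x, dtype=float)
    for _ in range(samples):
        n = rng.normal(0.0, sigma, size=x.shape)  # injected noise
        g += (f(x + n) - fx) * n                  # multiplicative correlation
    return g / (samples * sigma**2)

# Descend a simple quadratic bowl using only the stochastic estimate.
f = lambda x: np.sum((x - 1.0) ** 2)   # minimum at x = [1, 1]
x = np.zeros(2)
for _ in range(100):
    x -= 0.1 * estimate_gradient(f, x, rng=0)
```

In the hardware version the noise sources, multipliers, and averaging are physical circuit elements operating in continuous time; the loop above is merely a discrete-time caricature of that process.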