Part of Advances in Neural Information Processing Systems 3 (NIPS 1990)

*A. Kramer, C. Sin, R. Chu, P. Ko*

We are developing a highly compact neural-net weight function based on EEPROM devices. These devices have already proven useful for analog weight storage, but existing designs rely on conventional voltage multiplication as the weight function, requiring additional transistors per synapse. A parasitic capacitance between the floating gate and the drain of the EEPROM structure leads to an unusual J-V characteristic which can be used to advantage in designing a compact synapse. This novel behavior is well characterized by a model we have developed. A single-device circuit yields a one-quadrant synapse function which is nonlinear, though monotonic. A simple extension employing two EEPROMs yields a two-quadrant function which is much more linear. This approach offers the potential for more than a ten-fold increase in the density of neural-net implementations.
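The one- and two-quadrant behavior described above can be illustrated with a toy model. The exponential characteristic below is purely a stand-in assumption (the paper's measured EEPROM J-V characteristic is more complex and is given by the authors' own model); the point of the sketch is only to show how differencing two single-device (one-quadrant) currents yields a sign-capable, more linear weight function:

```python
import math

def device_current(w, v, i0=1e-9, vt=0.5):
    """Hypothetical monotonic, nonlinear synapse characteristic.

    i0 and vt are made-up illustration parameters, not values from the paper.
    """
    return i0 * math.exp((w + v) / vt)

def one_quadrant(w, v):
    # A single device: output current is always positive (one quadrant),
    # monotonic in the weight, but nonlinear.
    return device_current(w, v)

def two_quadrant(w, v):
    # Two devices programmed to +w and -w, with their currents subtracted:
    # the difference is odd in w, so the sign of the weight is represented
    # and the even-order nonlinear terms cancel, giving a more linear function.
    return device_current(w, v) - device_current(-w, v)
```

For small weights the difference behaves like 2*i0*exp(v/vt)*sinh(w/vt), which is approximately proportional to w, whereas the single-device current is not.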

1 INTRODUCTION - ANALOG WEIGHTING

The recent surge of interest in neural networks and parallel analog computation has motivated the need for compact analog computing blocks. Analog weighting is an important computational function of this class: the combining of two analog values, one of which is typically varying (the input) and one of which is typically fixed (the weight), or at least varying more slowly. The varying value is "weighted" by the fixed value through the "weighting function", typically multiplication. Analog weighting is most interesting when the overall computational task involves computing the "weighted sum of the inputs". That is, to compute

$\sum_{j=1}^{n} W(w_j, v_j)$, where $W$ is the weighting function and $\vec{w} = \{w_1, w_2, \ldots, w_n\}$ and $\vec{v} = \{v_1, v_2, \ldots, v_n\}$ are the weight and input vectors, respectively.
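The weighted sum above can be sketched directly; the `weight_fn` parameter is a hypothetical hook (not from the paper) standing in for the general weighting function W, with ordinary multiplication as the default:

```python
def weighted_sum(weights, inputs, weight_fn=lambda w, v: w * v):
    """Compute sum over j of W(w_j, v_j).

    weights and inputs play the roles of the vectors w and v; the default
    weighting function is conventional multiplication, as in the text.
    """
    assert len(weights) == len(inputs)
    return sum(weight_fn(w, v) for w, v in zip(weights, inputs))

# Example: 1.0*2.0 + (-2.0)*1.0 + 0.5*4.0 = 2.0
result = weighted_sum([1.0, -2.0, 0.5], [2.0, 1.0, 4.0])
```

A nonlinear synapse characteristic, such as the EEPROM-based function discussed in this paper, would simply replace the default `weight_fn`.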
