History-Dependent Attractor Neural Networks

Part of Advances in Neural Information Processing Systems 5 (NIPS 1992)


Authors

Isaac Meilijson, Eytan Ruppin

Abstract

We present a methodological framework enabling a detailed description of the performance of Hopfield-like attractor neural networks (ANN) in the first two iterations. Using the Bayesian approach, we find that performance is improved when a history-based term is included in the neuron's dynamics. A further enhancement of the network's performance is achieved by judiciously choosing the censored neurons (those which become active in a given iteration) on the basis of the magnitude of their post-synaptic potentials. The contribution of biologically plausible, censored, history-dependent dynamics is especially marked in conditions of low firing activity and sparse connectivity, two important characteristics of the mammalian cortex. In such networks, the performance attained is higher than the performance of two 'independent' iterations, which represents an upper bound on the performance of history-independent networks.
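To make the setting concrete, here is a minimal NumPy sketch of the general idea: a Hopfield-like network whose second iteration adds a history term (the neuron's previous state) to its post-synaptic potential, and which "censors" updates so that only the neurons with the largest-magnitude potentials change state. This is an illustration only, not the authors' Bayesian-derived update rule; `history_weight`, `censor_frac`, and all numerical values are assumptions introduced for the example.

```python
# Minimal sketch of history-dependent, censored Hopfield-like dynamics.
# NOT the paper's exact Bayesian rule: history_weight and censor_frac
# are illustrative parameters, not values derived in the paper.
import numpy as np

rng = np.random.default_rng(0)
N, P = 500, 25                       # neurons, stored patterns

patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N      # Hebbian weight matrix
np.fill_diagonal(W, 0.0)             # no self-connections

def iterate(state, prev_state=None, history_weight=0.5, censor_frac=0.5):
    """One synchronous step; optionally history-dependent and censored."""
    field = W @ state
    if prev_state is not None:       # history-based term in the dynamics
        field = field + history_weight * prev_state
    new_state = np.where(field >= 0, 1, -1)
    # Censoring: only the censor_frac of neurons with the largest
    # |post-synaptic potential| update; the rest keep their old value.
    k = int(censor_frac * len(state))
    quiet = np.argsort(np.abs(field))[:-k]   # smallest-|field| neurons
    new_state[quiet] = state[quiet]
    return new_state

# Start from a corrupted version of pattern 0 and run two iterations.
noise = rng.random(N) < 0.2
s0 = np.where(noise, -patterns[0], patterns[0])
s1 = iterate(s0)                     # first, history-free iteration
s2 = iterate(s1, prev_state=s0)      # history-dependent second iteration
print("overlap after 2 iterations:", (s2 @ patterns[0]) / N)
```

The overlap printed at the end is the usual similarity measure between the network state and the retrieved pattern; in the paper's analysis, it is this two-iteration retrieval quality that the history term and magnitude-based censoring improve.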