Nets with Unreliable Hidden Nodes Learn Error-Correcting Codes

Part of Advances in Neural Information Processing Systems 5 (NIPS 1992)


Authors

Stephen Judd, Paul Munro

Abstract

In a multi-layered neural network, any one of the hidden layers can be viewed as computing a distributed representation of the input. Several "encoder" experiments have shown that when the representation space is small it can be fully used. But computing with such a representation requires completely dependable nodes. In the case where the hidden nodes are noisy and unreliable, we find that error-correcting schemes emerge simply by using noisy units during training; random errors injected during backpropagation result in spreading representations apart. Average and minimum distances increase with misfire probability, as predicted by coding-theoretic considerations. Furthermore, the effect of this noise is to protect the machine against permanent node failure, thereby potentially extending the useful lifetime of the machine.
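
The following is a minimal sketch, not the authors' code, of the kind of experiment the abstract describes: a small one-hot "encoder" autoencoder trained by backpropagation while each hidden unit misfires (its activation replaced by a random bit) with some probability. Names such as misfire_prob and the network sizes are illustrative assumptions; the measurement at the end reports the minimum and average Hamming distances between the learned hidden codes.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_in, n_hid, misfire_prob, lr = 8, 6, 0.2, 0.5
    X = np.eye(n_in)                      # one-hot "encoder" patterns
    W1 = rng.normal(0, 0.5, (n_in, n_hid))
    W2 = rng.normal(0, 0.5, (n_hid, n_in))

    for epoch in range(20000):
        h = sigmoid(X @ W1)               # hidden representation
        # Inject unreliability: each hidden unit misfires independently,
        # its activation replaced by a random value in {0, 1}.
        mask = rng.random(h.shape) < misfire_prob
        h_noisy = np.where(mask, rng.integers(0, 2, h.shape).astype(float), h)
        y = sigmoid(h_noisy @ W2)
        err = X - y                       # squared-error backprop
        d_out = err * y * (1 - y)
        d_hid = (d_out @ W2.T) * h_noisy * (1 - h_noisy)
        W2 += lr * h_noisy.T @ d_out
        W1 += lr * X.T @ d_hid

    # Binarize the noise-free hidden codes and inspect pairwise Hamming distances;
    # larger misfire_prob should push the minimum and average distances up.
    codes = (sigmoid(X @ W1) > 0.5).astype(int)
    dists = [np.sum(codes[i] != codes[j])
             for i in range(n_in) for j in range(i + 1, n_in)]
    print("min/mean Hamming distance:", min(dists), np.mean(dists))

Under these assumptions, sweeping misfire_prob from 0 upward and re-running the training should show the coding-theoretic effect the abstract reports: noisier hidden units yield hidden codes that are spread farther apart, which in turn tolerate permanent failure of individual nodes.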