Learning by Choice of Internal Representations

Part of Advances in Neural Information Processing Systems 1 (NIPS 1988)


Authors

Tal Grossman, Ronny Meir, Eytan Domany

Abstract

We introduce a learning algorithm for multilayer neural networks composed of binary linear threshold elements. Whereas existing algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. Once a correct set of internal representations is arrived at, the weights are found by the local and biologically plausible Perceptron Learning Rule (PLR). We tested our learning algorithm on four problems: adjacency, symmetry, parity and combined symmetry-parity.
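The abstract refers to the Perceptron Learning Rule (PLR) as the local rule used to find the weights once the internal representations are fixed. As a rough illustration only (not the authors' full representation-learning algorithm), a minimal sketch of the classical PLR for a single binary linear threshold unit, assuming +/-1 inputs and targets and a hypothetical helper name, might look like this:

```python
import numpy as np

def perceptron_learning_rule(X, y, eta=1.0, max_epochs=100):
    """Train one binary linear threshold unit with the Perceptron Learning Rule.
    X: (n_samples, n_inputs) array of +/-1 inputs; y: (n_samples,) +/-1 targets."""
    n_samples, n_inputs = X.shape
    w = np.zeros(n_inputs)      # weights
    b = 0.0                     # bias (negative threshold)
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(X, y):
            out = 1 if np.dot(w, x) + b > 0 else -1
            if out != t:
                # update only on mistakes: move the weights toward the target
                w += eta * t * x
                b += eta * t
                errors += 1
        if errors == 0:         # all patterns classified correctly
            break
    return w, b

# Usage: learn a linearly separable function (logical AND on +/-1 inputs)
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
y = np.array([-1, -1, -1, 1])
w, b = perceptron_learning_rule(X, y)
```

In the paper's setting, such a rule would be applied layer by layer once target internal representations for the hidden units have been chosen, so each unit faces an ordinary single-layer perceptron problem.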