Graded Grammaticality in Prediction Fractal Machines

Part of Advances in Neural Information Processing Systems 12 (NIPS 1999)


Authors

Shan Parfitt, Peter Tiño, Georg Dorffner

Abstract

We introduce a novel method of constructing language models, which avoids some of the problems associated with recurrent neural networks. The method of creating a Prediction Fractal Machine (PFM) [1] is briefly described and some experiments are presented which demonstrate the suitability of PFMs for language modeling. PFMs distinguish reliably between minimal pairs, and their behavior is consistent with the hypothesis [4] that well-formedness is 'graded' rather than absolute. A discussion of their potential to offer fresh insights into language acquisition and processing follows.
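To make the abstract's method concrete: PFM construction as described in related work by Tiño and Dorffner proceeds, roughly, by mapping each symbol history to a point in a unit hypercube via a contractive iterated function system (a chaos game representation), vector-quantizing those points, and estimating next-symbol probabilities from the counts within each cluster. The sketch below is a minimal illustration of that idea under those assumptions, not the authors' implementation; the function names (`cgr_points`, `fit_pfm`, `next_symbol_probs`), the use of k-means for quantization, and the contraction ratio `k = 0.5` are all illustrative choices of ours.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def cgr_points(seq, alphabet, k=0.5):
    """Map each history prefix of `seq` to a point in the unit hypercube
    via the iterated function system x_t = k*x_{t-1} + (1-k)*c(s_t),
    where c(s) is the hypercube corner assigned to symbol s."""
    dim = max(1, int(np.ceil(np.log2(len(alphabet)))))
    corners = {s: np.array([(i >> b) & 1 for b in range(dim)], dtype=float)
               for i, s in enumerate(alphabet)}
    x = np.full(dim, 0.5)                 # start at the hypercube centre
    pts = []
    for s in seq:
        x = k * x + (1 - k) * corners[s]  # contract toward s's corner
        pts.append(x.copy())
    return np.array(pts)

def fit_pfm(seq, alphabet, n_clusters=8, k=0.5):
    """Quantize history points and tally next-symbol counts per cluster."""
    pts = cgr_points(seq, alphabet, k)
    # The last point has no successor symbol, so exclude it from training.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(pts[:-1])
    counts = [Counter() for _ in range(n_clusters)]
    for label, nxt in zip(km.labels_, seq[1:]):
        counts[label][nxt] += 1
    return km, counts

def next_symbol_probs(km, counts, seq, alphabet, k=0.5):
    """Predict the next-symbol distribution for the cluster coding `seq`."""
    x = cgr_points(seq, alphabet, k)[-1]
    c = km.predict(x.reshape(1, -1))[0]
    total = sum(counts[c].values()) or 1
    return {s: counts[c][s] / total for s in alphabet}

# Toy usage on a two-symbol sequence.
seq = list("abaabbabab" * 20)
alphabet = "ab"
km, counts = fit_pfm(seq, alphabet)
print(next_symbol_probs(km, counts, seq, alphabet))
```

Because nearby points in the hypercube correspond to histories sharing a long suffix, the cluster-based predictor naturally assigns intermediate probabilities to strings that partially resemble training material, which is one way a model of this kind can exhibit the graded, rather than absolute, well-formedness the abstract describes.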