Grammar Learning by a Self-Organizing Network

Part of Advances in Neural Information Processing Systems 7 (NIPS 1994)


Authors

Michiro Negishi

Abstract

This paper presents the design and simulation results of a self-organizing neural network which induces a grammar from example sentences. Input sentences are generated from a simple phrase structure grammar including number agreement, verb transitivity, and recursive noun phrase construction rules. The network induces a grammar explicitly in the form of symbol categorization rules and phrase structure rules.

1 Purpose and related works

The purpose of this research is to show that a self-organizing network with a certain structure can acquire syntactic knowledge from only positive (i.e. grammatical) data, without requiring any initial knowledge or external teachers that correct errors. There has been research on supervised neural network models of language acquisition tasks [Elman, 1991, Miikkulainen and Dyer, 1988, John and McClelland, 1988]. Unlike these supervised models, the current model self-organizes word and phrasal categories and phrase construction rules through mere exposure to input sentences, without any artificially defined task goals. There have also been self-organizing models of language acquisition tasks [Ritter and Kohonen, 1990, Scholtes, 1991]. Compared to these models, the current model acquires phrase structure rules in more explicit forms, and it learns wider and more structured contexts, as will be explained below.

2 Network Structure and Algorithm

The design of the current network is motivated by the observation that humans have the ability to handle a frequently occurring sequence of symbols (a chunk) as a unit of information [Grossberg, 1978, Mannes, 1993]. The network consists of two parts: classification networks and production networks (Figure 1). The classification networks categorize words and phrases, and the production networks
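To make the idea of a self-organizing classification network concrete, the sketch below clusters words into categories by unsupervised competitive learning over word-context vectors, using only positive example sentences. This is an illustrative toy, not the paper's actual architecture: the mini-corpus, the context-vector encoding, and the unit count are all assumptions introduced here.

```python
import random

# Illustrative mini-corpus of positive (grammatical) sentences.
CORPUS = [
    "the dog runs", "the dogs run", "a cat sees the dog",
    "the cats see a dog", "a dog chases the cat",
]

# Vocabulary and an index for building context vectors.
words = sorted({w for s in CORPUS for w in s.split()})
idx = {w: i for i, w in enumerate(words)}

def context_vector(target):
    """Encode a word by the counts of its immediate neighbors, L2-normalized."""
    v = [0.0] * len(words)
    for s in CORPUS:
        toks = s.split()
        for i, w in enumerate(toks):
            if w != target:
                continue
            for j in (i - 1, i + 1):          # left and right neighbor
                if 0 <= j < len(toks):
                    v[idx[toks[j]]] += 1.0
    norm = sum(x * x for x in v) ** 0.5 or 1.0
    return [x / norm for x in v]

def nearest_unit(units, v):
    """Index of the unit whose weight vector is closest to input v."""
    return min(range(len(units)),
               key=lambda u: sum((a - b) ** 2 for a, b in zip(units[u], v)))

def categorize(n_units=3, epochs=50, lr=0.2, seed=0):
    """Winner-take-all competitive learning: each unit becomes a word category."""
    rng = random.Random(seed)
    units = [[rng.random() for _ in words] for _ in range(n_units)]
    for _ in range(epochs):
        for w in words:
            v = context_vector(w)
            win = nearest_unit(units, v)
            # Move only the winning unit toward the input (self-organization).
            units[win] = [a + lr * (b - a) for a, b in zip(units[win], v)]
    categories = {}
    for w in words:
        categories.setdefault(nearest_unit(units, context_vector(w)), []).append(w)
    return list(categories.values())
```

With enough structure in the corpus, words that occur in similar contexts (e.g. singular nouns vs. plural nouns) tend to fall into the same unit's category, without any labels or error correction; the paper's classification networks pursue this kind of distributional categorization for both words and phrases.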
