Part of Advances in Neural Information Processing Systems 6 (NIPS 1993)
Several recurrent networks have been proposed as representations for the task of formal language learning. After training a recurrent network to recognize a formal language or to predict the next symbol of a sequence, the next logical step is to understand the information processing carried out by the network. Some researchers have begun extracting finite state machines from the internal state trajectories of their recurrent networks. This paper describes how sensitivity to initial conditions and discrete measurements can trick these extraction methods into returning illusory finite state descriptions.
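The failure mode the abstract describes can be illustrated with a toy sketch (an assumption for illustration; the paper's own networks and extraction procedure are not reproduced here). The chaotic logistic map stands in for recurrent network state dynamics: quantizing its trajectory into bins and recording bin-to-bin transitions yields a "finite state machine" whose size depends on the measurement resolution rather than on any underlying finite structure.

```python
# Sketch: discretizing a continuous, sensitive dynamical system
# produces partition-dependent "finite state machines".
# The logistic map and the bin/step counts are illustrative choices,
# not the systems studied in the paper.

def logistic(x, r=4.0):
    # Chaotic for r = 4: nearby initial conditions diverge rapidly.
    return r * x * (1.0 - x)

def extract_fsm(x0, steps, n_bins):
    # Quantize the trajectory into n_bins equal intervals and record
    # every observed bin-to-bin transition.
    transitions = set()
    x = x0
    for _ in range(steps):
        x_next = logistic(x)
        transitions.add((min(int(x * n_bins), n_bins - 1),
                         min(int(x_next * n_bins), n_bins - 1)))
        x = x_next
    return transitions

# The extracted "machine" keeps growing as measurement resolution
# increases -- a sign the finite state description is illusory.
for n_bins in (2, 4, 8, 16):
    fsm = extract_fsm(x0=0.123, steps=10_000, n_bins=n_bins)
    print(n_bins, len(fsm))
```

Finer partitions reveal ever more states and transitions, so no single extracted machine is a faithful description of the continuous dynamics.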