Part of Advances in Neural Information Processing Systems 2 (NIPS 1989)
Dataflow architectures are general computation engines optimized for the execution of fine-grain parallel algorithms. Neural networks can be simulated on these systems with certain advantages. In this paper, we review dataflow architectures, examine neural network simulation performance on a new generation dataflow machine, compare that performance to other simulation alternatives, and discuss the benefits and drawbacks of the dataflow approach.
1 DATAFLOW ARCHITECTURES

Dataflow research has been conducted at MIT (Arvind & Culler, 1986) and elsewhere (Hiraki, et al., 1987) for a number of years. Dataflow architectures are general computation engines that treat each instruction of a program as a separate task, scheduled in an asynchronous, data-driven fashion. Dataflow programs are compiled into graphs that explicitly describe the data dependencies of the computation, and these graphs are executed directly by the machine. Computations that are not linked by a path in the graph can be executed in parallel. Each machine has a large number of processing elements, with hardware optimized to reduce task-switching overhead to a minimum. As each computation executes and produces a result, all subsequent computations that require that result are scheduled. In this manner, fine-grain parallel computation is achieved, with the limit on the amount of possible parallelism determined by the problem and by the number of processing elements in the machine.
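The data-driven scheduling described above can be illustrated with a minimal sketch. This is not the execution model of any particular dataflow machine; the node structure and scheduler below are hypothetical names invented for illustration. Each node fires only once all of its operand tokens are available, so nodes not connected by a dependency path could, on real hardware, fire in parallel.

```python
from collections import deque

class Node:
    """One instruction in a dataflow graph (illustrative sketch only)."""
    def __init__(self, name, op, deps):
        self.name = name   # label for this instruction's result token
        self.op = op       # function applied to the operand values
        self.deps = deps   # names of the tokens this instruction consumes

def run_dataflow(nodes, inputs):
    """Fire each node as soon as all of its operands are available."""
    results = dict(inputs)          # tokens available at the start
    pending = deque(nodes)
    while pending:
        node = pending.popleft()
        if all(d in results for d in node.deps):
            # All operand tokens present: the node fires and emits its result.
            results[node.name] = node.op(*(results[d] for d in node.deps))
        else:
            pending.append(node)    # operands not yet ready; try again later
    return results

# Graph for y = (a + b) * (a - b).  The add and sub nodes share no
# dependency path, so a dataflow machine could execute them in parallel;
# the mul node fires only after both of their result tokens arrive.
graph = [
    Node("mul", lambda x, y: x * y, ["add", "sub"]),
    Node("add", lambda x, y: x + y, ["a", "b"]),
    Node("sub", lambda x, y: x - y, ["a", "b"]),
]
out = run_dataflow(graph, {"a": 5, "b": 3})
print(out["mul"])   # (5 + 3) * (5 - 3) = 16
```

Note that the scheduler never consults program order: the `mul` node is listed first but fires last, purely because its operands arrive last. This is the sense in which execution is data-driven rather than control-driven.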
Dataflow Architectures: Flexible Platforms for Neural Network Simulation