Part of Advances in Neural Information Processing Systems 6 (NIPS 1993)
Arlindo Oliveira, Alberto Sangiovanni-Vincentelli
The most commonly used neural network models are not well suited to direct digital implementations because each node needs to perform a large number of operations between floating point values. Fortunately, the ability to learn from examples and to generalize is not restricted to networks of this type. Indeed, networks where each node implements a simple Boolean function (Boolean networks) can be designed in such a way as to exhibit similar properties. Two algorithms that generate Boolean networks from examples are presented. The results show that these algorithms generalize very well in a class of problems that accept compact Boolean network descriptions. The techniques described are general and can be applied to tasks that are not known to have that characteristic. Two examples of applications are presented: image reconstruction and hand-written character recognition.
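To make the notion of a Boolean network concrete, the following is a minimal sketch of one: a feed-forward network whose nodes each compute a simple Boolean function of earlier signals, so evaluation needs only bit operations rather than floating-point arithmetic. The topology, gate functions, and helper name here are illustrative assumptions, not the networks produced by the paper's learning algorithms.

```python
# Illustrative sketch of a Boolean network (hypothetical example, not the
# paper's construction): a feed-forward DAG whose nodes each compute a
# simple Boolean function of inputs or earlier node outputs.

def eval_boolean_network(inputs, gates):
    """Evaluate a Boolean network on a vector of input bits.

    inputs: list of 0/1 input values.
    gates:  list of (function, operand_indices) pairs; each gate may read
            any input or any previously computed gate output by index.
    Returns the output of the last gate.
    """
    signals = list(inputs)
    for fn, args in gates:
        signals.append(fn(*(signals[i] for i in args)))
    return signals[-1]

# Example network computing f(x0, x1, x2) = (x0 AND x1) OR (NOT x2):
gates = [
    (lambda a, b: a & b, (0, 1)),   # signal 3 = x0 AND x1
    (lambda a: 1 - a,    (2,)),     # signal 4 = NOT x2
    (lambda a, b: a | b, (3, 4)),   # signal 5 = signal3 OR signal4
]
```

Because every node is a small Boolean function over a few bits, such a network maps directly onto digital logic, which is the implementation advantage the abstract contrasts with floating-point neural nodes.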