Bias-Optimal Incremental Problem Solving

Part of Advances in Neural Information Processing Systems 15 (NIPS 2002)


Authors

Jürgen Schmidhuber

Abstract

Given a problem sequence and a probability distribution (the bias) on programs computing solution candidates, we present an optimally fast way of incrementally solving each task in the sequence. Bias shifts are computed by program prefixes that modify the distribution on their suffixes by reusing successful code for previous tasks (stored in non-modifiable memory). No tested program gets more runtime than its probability times the total search time. In illustrative experiments, ours becomes the first general system to learn a universal solver for arbitrary n-disk Towers of Hanoi tasks (minimal solution size 2^n - 1). It demonstrates the advantages of incremental learning by profiting from previously solved, simpler tasks involving samples of a simple context-free language.
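To make the runtime guarantee in the abstract concrete, the following is a minimal sketch, in the spirit of Levin-style universal search, of how a search can allocate each candidate program at most (roughly) its probability times the total search time. The names `candidates`, `prob`, `run_for_steps`, and `solves_task` are illustrative assumptions, not the paper's actual implementation or instruction set.

```python
def bias_optimal_search(candidates, prob, run_for_steps, solves_task):
    """Sketch of bias-optimal time allocation (assumed interfaces).

    candidates        -- iterable of candidate programs
    prob(p)           -- prior probability (the bias) assigned to program p
    run_for_steps(p,n)-- runs p for at most n interpreter steps,
                         returns its output or None if unfinished
    solves_task(out)  -- quickly verifies whether `out` solves the task
    """
    phase = 0
    while True:
        phase += 1
        budget = 2 ** phase                  # total step budget for this phase
        for p in candidates:
            steps = int(prob(p) * budget)    # p's share: prob(p) * budget
            if steps == 0:
                continue
            out = run_for_steps(p, steps)
            if out is not None and solves_task(out):
                return p, out                # first verified solution wins
```

Because each candidate p receives prob(p) * 2^i steps in phase i, its cumulative runtime after any phase is about prob(p) times the total time spent so far, matching (up to a constant factor) the bound stated above.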

1 Brief Introduction to Optimal Universal Search

Consider an asymptotically optimal method for tasks with quickly verifiable solutions: