Part of Advances in Neural Information Processing Systems 20 (NIPS 2007)
Tao Wang, Michael Bowling, Dale Schuurmans, Daniel Lizotte
Recently, we have introduced a novel approach to dynamic programming and reinforcement learning that is based on maintaining explicit representations of stationary distributions instead of value functions. In this paper, we investigate the convergence properties of these dual algorithms both theoretically and empirically, and show how they can be scaled up by incorporating function approximation.
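To illustrate the distribution-based view the abstract refers to, the following is a minimal sketch of policy evaluation in both the familiar value-function (primal) form and a dual form that iterates on a normalized discounted state-visit matrix. The MDP, the variable names, and the specific update `M = (1-gamma)*I + gamma*P@M` are illustrative assumptions for a toy two-state example, not the paper's algorithms.

```python
import numpy as np

# Toy MDP under a fixed policy (all quantities are illustrative assumptions):
# P is the policy's state-transition matrix (rows sum to 1), r the reward
# vector, gamma the discount factor.
gamma = 0.9
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
r = np.array([1.0, 0.0])

# Primal view: iterate the Bellman update on the value function,
# v <- r + gamma * P v.
v = np.zeros(2)
for _ in range(2000):
    v = r + gamma * P @ v

# Dual view: iterate on a normalized discounted state-visit matrix M,
# M <- (1 - gamma) * I + gamma * P @ M.
# At the fixed point M = (1 - gamma) * (I - gamma * P)^-1, so each row of M
# is a probability distribution over states (rows sum to 1).
M = np.eye(2)
for _ in range(2000):
    M = (1 - gamma) * np.eye(2) + gamma * P @ M

# The value function is recoverable from the dual representation:
# v = M r / (1 - gamma).
v_dual = M @ r / (1 - gamma)
print(np.allclose(v, v_dual, atol=1e-6))  # the two views agree
```

The point of the dual iterate is that `M` remains a stochastic matrix throughout (its rows are distributions), which is the kind of explicit distribution representation whose convergence behavior, with and without function approximation, the paper studies.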