TD_gamma: Re-evaluating Complex Backups in Temporal Difference Learning

Part of Advances in Neural Information Processing Systems 24 (NIPS 2011)


Authors

George Konidaris, Scott Niekum, Philip S. Thomas

Abstract

We show that the lambda-return target used in the TD(lambda) family of algorithms is the maximum likelihood estimator for a specific model of how the variance of an n-step return estimate increases with n. We introduce the gamma-return estimator, an alternative target based on a more accurate model of variance, which defines the TD_gamma family of complex-backup temporal difference learning algorithms. We derive TD_gamma, the gamma-return equivalent of the original TD(lambda) algorithm, which eliminates the lambda parameter but can only perform updates at the end of an episode and requires time and space proportional to the episode length. We then derive a second algorithm, TD_gamma(C), with a capacity parameter C. TD_gamma(C) requires C times more time and memory than TD(lambda) and is incremental and online. We show that TD_gamma outperforms TD(lambda) for any setting of lambda on 4 out of 5 benchmark domains, and that TD_gamma(C) performs as well as or better than TD_gamma for intermediate settings of C.
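For context, the TD(lambda) baseline the paper compares against can be sketched as standard tabular TD(lambda) with accumulating eligibility traces. This is a minimal illustrative sketch of the well-known algorithm, not the paper's TD_gamma or TD_gamma(C); the function name and episode format are assumptions for illustration.

```python
import numpy as np

def td_lambda_episode(V, episode, alpha=0.1, gamma=0.99, lam=0.9):
    """One episode of tabular TD(lambda) with accumulating traces.

    V       : NumPy array of state-value estimates, one entry per state.
    episode : list of (state, reward, next_state, done) transitions.
    Illustrative sketch of the classic algorithm, not TD_gamma.
    """
    e = np.zeros_like(V)                 # eligibility trace per state
    for s, r, s_next, done in episode:
        target = r + (0.0 if done else gamma * V[s_next])
        delta = target - V[s]            # one-step TD error
        e[s] += 1.0                      # accumulate trace for current state
        V += alpha * delta * e           # credit all recently visited states
        e *= gamma * lam                 # decay all traces
    return V
```

The lambda parameter controls how the update mixes n-step returns of different lengths; it is exactly this implicit weighting that the gamma-return replaces with one derived from a more accurate variance model.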