Fast Online Policy Gradient Learning with SMD Gain Vector Adaptation

Jin Yu, Douglas Aberdeen, Nicol N. Schraudolph

Advances in Neural Information Processing Systems 18 (NIPS 2005)

Reinforcement learning by direct policy gradient estimation is attractive in theory but in practice leads to notoriously ill-behaved optimization problems. We improve its robustness and speed of convergence with stochastic meta-descent, a gain vector adaptation method that employs fast Hessian-vector products. In our experiments, the resulting algorithms outperform previously employed online stochastic, offline conjugate, and natural policy gradient methods.
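For context, stochastic meta-descent (SMD) maintains one multiplicative gain per parameter and adapts it using the product of the current gradient with a trace vector v that approximates the sensitivity of the parameters to their log-gains; updating this trace requires only a Hessian-vector product, never the full Hessian. Below is a minimal JAX sketch of that scheme under gradient ascent, with the Hessian-vector product computed by Pearlmutter's forward-over-reverse trick. The function names, hyperparameter defaults, and toy objective are illustrative assumptions rather than the paper's code; the actual algorithm pairs these updates with an online policy-gradient estimator of the expected reward.

```python
import jax
import jax.numpy as jnp

def hvp(f, theta, v):
    """Fast Hessian-vector product H(theta) @ v via forward-over-reverse
    differentiation (Pearlmutter's trick): costs about one extra gradient
    evaluation and never forms the Hessian explicitly."""
    return jax.jvp(jax.grad(f), (theta,), (v,))[1]

def smd_step(f, theta, eta, v, mu=0.05, lam=0.99):
    """One SMD-adapted gradient *ascent* step on a scalar objective f
    (a stand-in for a stochastic expected-reward estimate).

    theta -- parameter vector
    eta   -- per-parameter gain vector, adapted multiplicatively
    v     -- trace approximating d(theta)/d(log eta)
    mu    -- meta-learning rate;  lam -- trace decay factor
    """
    g = jax.grad(f)(theta)                # (stochastic) gradient estimate
    Hv = hvp(f, theta, v)                 # curvature information along v
    # Multiplicative gain adaptation, clipped below to keep gains positive
    # (the max(0.5, .) safeguard follows Schraudolph's earlier SMD work).
    eta = eta * jnp.maximum(0.5, 1.0 + mu * g * v)
    theta = theta + eta * g               # gradient ascent with local gains
    v = lam * v + eta * (g + lam * Hv)    # update the gain-adaptation trace
    return theta, eta, v

# Toy usage: maximize a concave quadratic (in place of a policy-gradient
# objective); theta converges toward the maximizer at 1.
f = lambda th: -jnp.sum((th - 1.0) ** 2)
theta, eta, v = jnp.zeros(3), 0.1 * jnp.ones(3), jnp.zeros(3)
for _ in range(200):
    theta, eta, v = smd_step(f, theta, eta, v)
```

Because the trace update reuses the same reverse-mode machinery as the gradient itself, each SMD step costs only a small constant factor more than plain stochastic gradient ascent, which is what makes the method attractive in the online policy-gradient setting.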