Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track
Haoxing Tian, Alex Olshevsky, Yannis Paschalidis
The early theory of actor-critic methods considered convergence using linear function approximators for the policy and value functions. Recent work has established convergence using neural network approximators with a single hidden layer. In this work, we take the natural next step and establish convergence using deep neural networks with an arbitrary number of hidden layers, thus closing a gap between theory and practice. We show that actor-critic updates projected on a ball around the initial condition converge to a neighborhood where the average of the squared gradients is Õ(1/√m) + O(ϵ), where m is the width of the neural network and ϵ is the approximation quality of the best critic neural network over the projected set.
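The key mechanism the abstract names is projecting the iterates back onto a ball around the initial parameters after each gradient step. Below is a minimal sketch of that projection step, not the authors' implementation; the names `project_to_ball`, `theta_0`, `R`, and `lr` are illustrative assumptions.

```python
# Sketch (assumed, not the paper's code): after each actor/critic gradient
# update, project the parameter vector back onto the l2-ball of radius R
# centered at the initialization theta_0.
import numpy as np

def project_to_ball(theta, theta_0, R):
    """Project theta onto the l2-ball of radius R around theta_0."""
    diff = theta - theta_0
    norm = np.linalg.norm(diff)
    if norm <= R:
        return theta  # already inside the ball; leave unchanged
    return theta_0 + (R / norm) * diff  # rescale onto the ball's surface

# Illustrative projected update for one parameter vector (actor or critic):
# theta = project_to_ball(theta - lr * grad, theta_0, R)
```

Keeping the iterates in a fixed ball around initialization is what lets the analysis treat the wide network as approximately linear in its parameters, which is why the bound improves with the width m.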