Part of Advances in Neural Information Processing Systems 37 (NeurIPS 2024) Main Conference Track
Victor Boone, Zihan Zhang
In recent years, significant attention has been directed towards learning average-reward Markov Decision Processes (MDPs). However, existing algorithms either suffer from sub-optimal regret guarantees or computational inefficiencies. In this paper, we present the first *tractable* algorithm with minimax optimal regret of O(√(sp(h∗) S A T log(SAT))), where sp(h∗) is the span of the optimal bias function h∗, S×A is the size of the state-action space, and T is the number of learning steps. Remarkably, our algorithm does not require prior information on sp(h∗). Our algorithm relies on a novel subroutine, **P**rojected **M**itigated **E**xtended **V**alue **I**teration (PMEVI), to compute bias-constrained optimal policies efficiently. This subroutine can be applied to various previous algorithms to obtain improved regret bounds.
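As a rough illustration of the kind of bias-constrained optimistic planning the abstract alludes to, here is a minimal sketch of extended value iteration with a span-projection step. This is not the PMEVI subroutine from the paper: the confidence sets, the mitigation step, the stopping rule, and all function and parameter names below are assumptions made for the sake of the example.

```python
# Illustrative sketch only: simplified extended value iteration with a span
# projection, in the spirit of bias-constrained optimistic planning.
# The actual PMEVI subroutine differs; names and parameters are hypothetical.
import numpy as np

def projected_evi(P_hat, r_hat, conf_P, span_bound, n_iters=200):
    """One optimistic value-iteration pass with a span projection.

    P_hat:      (S, A, S) empirical transition probabilities
    r_hat:      (S, A) empirical (or optimistic) rewards in [0, 1]
    conf_P:     (S, A) L1 confidence radii for the transition estimates
    span_bound: upper bound enforced on the span of the bias estimate
    """
    S, A, _ = P_hat.shape
    h = np.zeros(S)                      # bias / relative value estimate
    for _ in range(n_iters):
        q = np.empty((S, A))
        order = np.argsort(-h)           # states sorted by decreasing value
        for s in range(S):
            for a in range(A):
                # Optimistic transition (standard EVI inner step): push extra
                # mass onto the best state, remove it from the worst states.
                p = P_hat[s, a].copy()
                p[order[0]] = min(1.0, p[order[0]] + conf_P[s, a] / 2.0)
                excess = p.sum() - 1.0
                for s2 in reversed(order):       # worst-value states first
                    if excess <= 1e-12:
                        break
                    if s2 == order[0]:
                        continue
                    take = min(excess, p[s2])
                    p[s2] -= take
                    excess -= take
                q[s, a] = r_hat[s, a] + p @ h
        h_new = q.max(axis=1)
        # Span projection: clip values so that sp(h) <= span_bound.
        h_new = np.minimum(h_new, h_new.min() + span_bound)
        h = h_new - h_new.min()          # renormalize for numerical stability
    greedy_policy = q.argmax(axis=1)     # greedy policy w.r.t. final values
    return h, greedy_policy
```

In this sketch, the projection step is what enforces the bias constraint: without it, ordinary extended value iteration may produce value vectors whose span far exceeds sp(h∗), which is the kind of looseness the paper's constrained planning is designed to avoid.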