
Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation

Part of Advances in Neural Information Processing Systems 37 (NeurIPS 2024) Main Conference Track


Authors

Long-Fei Li, Yu-Jie Zhang, Peng Zhao, Zhi-Hua Zhou

Abstract

We study a new class of MDPs that employs multinomial logit (MNL) function approximation to ensure valid probability distributions over the state space. Despite its significant benefits, incorporating this non-linear function raises substantial challenges in both *statistical* and *computational* efficiency. The best-known result of Hwang and Oh [2023] achieves an $\widetilde{\mathcal{O}}(\kappa^{-1} d H^2 \sqrt{K})$ regret upper bound, where $\kappa$ is a problem-dependent quantity, $d$ is the feature dimension, $H$ is the episode length, and $K$ is the number of episodes. However, we observe that $\kappa^{-1}$ exhibits polynomial dependence on the number of reachable states, which can be as large as the state space size in the worst case and thus undermines the motivation for function approximation. Additionally, their method requires storing all historical data, and its time complexity scales linearly with the number of episodes, which is computationally expensive. In this work, we propose a statistically efficient algorithm that achieves a regret of $\widetilde{\mathcal{O}}(d H^2 \sqrt{K} + \kappa^{-1} d^2 H^2)$, eliminating the dependence on $\kappa^{-1}$ in the dominant term for the first time. We then address the computational challenges by introducing an enhanced algorithm that achieves the same regret guarantee with only constant computational cost. Finally, we establish the first lower bound for this problem, justifying the optimality of our results in $d$ and $K$.
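For context, the MNL function approximation referred to above parameterizes the transition probabilities in a softmax form. The display below is a standard sketch of this parameterization; the feature map $\varphi$, parameter $\theta^\star$, and reachable-state set $\mathcal{S}_{s,a}$ are illustrative notation and may differ from the paper's exact definitions:

$$\mathbb{P}(s' \mid s, a) \;=\; \frac{\exp\bigl(\varphi(s,a,s')^{\top}\theta^\star\bigr)}{\sum_{\tilde{s} \in \mathcal{S}_{s,a}} \exp\bigl(\varphi(s,a,\tilde{s})^{\top}\theta^\star\bigr)}, \qquad \varphi(s,a,s') \in \mathbb{R}^d,$$

where $\mathcal{S}_{s,a}$ denotes the set of states reachable from $(s,a)$. The softmax normalization is what guarantees a valid probability distribution over next states, and the size of $\mathcal{S}_{s,a}$ is the reachable-state count through which $\kappa^{-1}$ can grow in the worst case, as noted in the abstract.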