Corruption-Robust Offline Reinforcement Learning with General Function Approximation

Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track

Authors

Chenlu Ye, Rui Yang, Quanquan Gu, Tong Zhang

Abstract

We investigate the problem of corruption robustness in offline reinforcement learning (RL) with general function approximation, where an adversary can corrupt each sample in the offline dataset, and the corruption level $\zeta \ge 0$ quantifies the cumulative corruption amount over $n$ episodes and $H$ steps. Our goal is to find a policy that is robust to such corruption and minimizes the suboptimality gap with respect to the optimal policy for the uncorrupted Markov decision processes (MDPs). Drawing inspiration from the uncertainty-weighting technique from the robust online RL setting \citep{he2022nearly,ye2022corruptionrobust}, we design a new uncertainty weight iteration procedure that can be computed efficiently on batched samples, and propose a corruption-robust algorithm for offline RL. Notably, under the assumption of single-policy coverage and knowledge of $\zeta$, our proposed algorithm achieves a suboptimality bound that is worsened by an additive factor of $\mathcal{O}\big(\zeta \cdot (\mathrm{CC}(\lambda, \hat{\mathcal{F}}, \mathcal{Z}_n^H))^{1/2} \cdot (C(\hat{\mathcal{F}}, \mu))^{-1/2} \cdot n^{-1}\big)$ due to the corruption. Here $\mathrm{CC}(\lambda, \hat{\mathcal{F}}, \mathcal{Z}_n^H)$ is the coverage coefficient that depends on the regularization parameter $\lambda$, the confidence set $\hat{\mathcal{F}}$, and the dataset $\mathcal{Z}_n^H$, and $C(\hat{\mathcal{F}}, \mu)$ is a coefficient that depends on $\hat{\mathcal{F}}$ and the underlying data distribution $\mu$. When specialized to linear MDPs, the corruption-dependent error term reduces to $\mathcal{O}(\zeta d n^{-1})$ with $d$ being the dimension of the feature map, which matches the existing lower bound for corrupted linear MDPs. This suggests that our analysis is tight in terms of the corruption-dependent term.
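To make the uncertainty-weighting idea concrete, the sketch below illustrates one way a fixed-point iteration for per-sample weights could look in the linear special case: each sample's weight is tied to its uncertainty under a weighted covariance matrix, which in turn depends on the weights, so the two are computed jointly by iterating on the batch. This is only a minimal sketch under assumed conventions, not the paper's algorithm verbatim; the function name uncertainty_weight_iteration, the threshold parameter alpha, and the stopping rule are illustrative assumptions.

import numpy as np

def uncertainty_weight_iteration(Phi, lam=1.0, alpha=1.0, n_iters=20):
    """Illustrative fixed-point iteration for uncertainty weights (linear case).

    Phi   : (n, d) array of feature vectors phi_i for the batched samples.
    lam   : ridge-regularization parameter lambda.
    alpha : threshold controlling how aggressively uncertain samples are down-weighted.
    Returns per-sample weights sigma (n,) and the weighted covariance Lambda (d, d).
    """
    n, d = Phi.shape
    sigma = np.ones(n)  # start from uniform weights
    for _ in range(n_iters):
        # Weighted, regularized covariance: Lambda = lam*I + sum_i phi_i phi_i^T / sigma_i^2
        Lambda = lam * np.eye(d) + (Phi / sigma[:, None] ** 2).T @ Phi
        # Per-sample uncertainty: ||phi_i||_{Lambda^{-1}}
        Lambda_inv = np.linalg.inv(Lambda)
        uncertainty = np.sqrt(np.einsum("ij,jk,ik->i", Phi, Lambda_inv, Phi))
        # Down-weight samples whose uncertainty exceeds the threshold alpha
        new_sigma = np.maximum(1.0, uncertainty / alpha)
        if np.allclose(new_sigma, sigma):
            break
        sigma = new_sigma
    return sigma, Lambda

# Toy usage: 100 random 5-dimensional feature vectors
rng = np.random.default_rng(0)
Phi = rng.normal(size=(100, 5))
sigma, Lambda = uncertainty_weight_iteration(Phi)
# A downstream weighted ridge regression would then use per-sample weights 1 / sigma_i^2,
# so that highly uncertain (and hence more easily corrupted) samples contribute less.

The design intuition, following the uncertainty-weighting technique cited above, is that an adversary can do the most damage through samples in poorly covered directions; shrinking their influence in proportion to their uncertainty limits the corruption's effect on the learned value function.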