A POMDP Extension with Belief-dependent Rewards

Part of Advances in Neural Information Processing Systems 23 (NIPS 2010)


Authors

Mauricio Araya, Olivier Buffet, Vincent Thomas, François Charpillet

Abstract

Partially Observable Markov Decision Processes (POMDPs) model sequential decision-making problems under uncertainty and partial observability. Unfortunately, some problems cannot be modeled with state-dependent reward functions, e.g., problems whose objective explicitly involves reducing the uncertainty about the state. To that end, we introduce ρ-POMDPs, an extension of POMDPs where the reward function ρ depends on the belief state. We show that, under the common assumption that ρ is convex, the value function is also convex, which makes it possible to (1) approximate ρ arbitrarily well with a piecewise linear and convex (PWLC) function, and (2) use state-of-the-art exact or approximate solving algorithms with limited changes.
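As a rough illustration of the PWLC idea described in the abstract, the sketch below builds a piecewise linear and convex lower approximation of a convex belief-dependent reward from tangent hyperplanes taken at a grid of belief points. The choice of negative entropy as ρ and the two-state belief grid are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def neg_entropy(b):
    # Illustrative convex belief-dependent reward (assumed here): sum_s b(s) log b(s).
    b = np.clip(b, 1e-12, 1.0)
    return float(np.sum(b * np.log(b)))

def neg_entropy_grad(b):
    # Gradient of the negative entropy with respect to the belief.
    b = np.clip(b, 1e-12, 1.0)
    return np.log(b) + 1.0

def tangent_alpha(b, rho, grad):
    # Alpha-vector of the tangent hyperplane to rho at belief b:
    # for any belief b', alpha . b' = rho(b) + grad(b) . (b' - b) <= rho(b') by convexity.
    g = grad(b)
    return rho(b) + g - g.dot(b)

def pwlc_approx(alphas, b):
    # PWLC lower approximation: maximum over the collected tangent alpha-vectors.
    return max(a.dot(b) for a in alphas)

# Build the approximation from a grid of belief points (two-state example).
grid = [np.array([p, 1.0 - p]) for p in np.linspace(0.05, 0.95, 9)]
alphas = [tangent_alpha(b, neg_entropy, neg_entropy_grad) for b in grid]

b_test = np.array([0.3, 0.7])
print(pwlc_approx(alphas, b_test), neg_entropy(b_test))  # approximation <= exact value
```

Adding more tangent points tightens the approximation, which is what allows ρ to be approximated arbitrarily well while keeping the alpha-vector machinery of standard POMDP solvers.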