Eric Allender, Sanjeev Arora, Michael Kearns, Cristopher Moore, Alexander Russell
We establish a new hardness result that shows that the difficulty of planning in factored Markov decision processes is representational rather than just computational. More precisely, we give a fixed family of factored MDPs with linear rewards whose optimal policies and value functions simply cannot be represented succinctly in any standard parametric form. Previous hardness results indicated that computing good policies from the MDP parameters was difficult, but left open the possibility of succinct function approximation for any fixed factored MDP. Our result applies even to policies which yield a polynomially poor approximation to the optimal value, and highlights interesting connections with the complexity class of Arthur-Merlin games.