Causal Bandits: Learning Good Interventions via Causal Inference

Part of Advances in Neural Information Processing Systems 29 (NIPS 2016)


Authors

Finnian Lattimore, Tor Lattimore, Mark D. Reid

Abstract

We study the problem of using causal models to improve the rate at which good interventions can be learned online in a stochastic environment. Our formalism combines multi-armed bandits and causal inference to model a novel type of bandit feedback that is not exploited by existing approaches. We propose a new algorithm that exploits the causal feedback and prove a bound on its simple regret that is strictly better (in all quantities) than algorithms that do not use the additional causal information.
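To make the kind of feedback the abstract refers to concrete, the following is a minimal Python sketch of a toy causal bandit with a parallel graph: independent binary causes X_1..X_N of a reward Y, where purely observational samples with X_j = 1 are also unbiased samples of the reward under do(X_j = 1), so a single round can inform many arms at once. The graph, the parameters N, q, and w, and the sample() helper are illustrative assumptions for this sketch; it is not the paper's algorithm or its regret-optimal strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parallel causal bandit (illustrative assumptions, not the paper's setup):
# N independent binary causes X_1..X_N of a binary reward Y.
N = 6
q = rng.uniform(0.1, 0.9, size=N)   # P(X_j = 1) when X_j is not intervened on
w = np.zeros(N)
w[0] = 0.4                          # only X_1 actually moves the reward

def sample(intervene=None):
    """Draw all variables; optionally force X_intervene = 1, i.e. do(X_i = 1)."""
    x = (rng.random(N) < q).astype(int)
    if intervene is not None:
        x[intervene] = 1
    y = int(rng.random() < 0.5 + w @ (x - 0.5))   # reward depends on X only
    return x, y

# Causal feedback: because the causes are independent and Y depends only on X,
# an observational sample with X_j = 1 has the same reward distribution as a
# sample from do(X_j = 1). Observational rounds therefore inform every arm at
# once, and direct interventions are reserved for rarely occurring causes.
T = 400
counts = np.zeros(N)
totals = np.zeros(N)

for _ in range(T // 2):             # observational rounds: intervene on nothing
    x, y = sample()
    counts[x == 1] += 1
    totals[x == 1] += y

rare = np.argsort(counts)[: N // 2] # causes whose X_j = 1 we have seen least often
for t in range(T // 2):
    j = rare[t % len(rare)]
    x, y = sample(intervene=j)      # direct pull of do(X_j = 1)
    counts[j] += 1
    totals[j] += y

estimates = np.divide(totals, counts, out=np.zeros(N), where=counts > 0)
print("estimated E[Y | do(X_j = 1)]:", np.round(estimates, 2))
print("chosen arm:", int(np.argmax(estimates)))
```

In this toy model the shared feedback is what lets the estimates for all N arms concentrate from a budget that a standard bandit algorithm would have to split across arms, which is the intuition behind the improved simple-regret bound claimed in the abstract.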