Part of Advances in Neural Information Processing Systems 23 (NIPS 2010)
Alex Strehl, John Langford, Lihong Li, Sham M. Kakade
We provide a sound and consistent foundation for the use of \emph{nonrandom} exploration data in ``contextual bandit'' or ``partially labeled'' settings where only the value of a chosen action is learned. The primary challenge in a variety of settings is that the exploration policy, with which ``offline'' data is logged, is not explicitly known. Prior solutions here require either control of the actions during the learning process, recorded random exploration, or actions chosen obliviously in a repeated manner. The techniques reported here lift these restrictions, allowing the learning of a policy for choosing actions given features from historical data where no randomization occurred or was logged. We empirically verify our solution on two reasonably sized sets of real-world data obtained from an Internet advertising company.
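The abstract stops short of stating the estimator itself. As a rough illustration of the underlying idea, the sketch below evaluates a target policy from logged data by first estimating the unknown logging policy's action propensities and then applying a clipped inverse-propensity correction. The function names, the simple frequency-based propensity model, and the threshold parameter tau are illustrative assumptions, not the paper's actual method or API.

```python
import numpy as np

def estimate_propensities(actions, n_actions):
    """Estimate the unknown logging policy's action probabilities.

    A marginal empirical-frequency model is used here purely as a
    stand-in; any regression or density estimator conditioned on the
    context features could take its place.
    """
    counts = np.bincount(actions, minlength=n_actions).astype(float)
    probs = counts / counts.sum()
    return probs[actions]  # estimated propensity of each logged action

def offline_value_estimate(rewards, logged_actions, target_actions,
                           propensities, tau=0.05):
    """Clipped inverse-propensity estimate of a target policy's value.

    tau lower-bounds the estimated propensity, trading a small bias
    for bounded variance on actions the logging policy rarely chose.
    """
    match = (target_actions == logged_actions).astype(float)
    weights = match / np.maximum(propensities, tau)
    return np.mean(rewards * weights)

# Hypothetical usage on logged (context, action, reward) records:
rng = np.random.default_rng(0)
logged_actions = rng.integers(0, 4, size=1000)
rewards = rng.random(1000)
target_actions = rng.integers(0, 4, size=1000)  # target policy's choices
props = estimate_propensities(logged_actions, n_actions=4)
print(offline_value_estimate(rewards, logged_actions, target_actions, props))
```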