Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track
Yihan Zhou, Eric Price
For some hypothesis classes and input distributions, \emph{active} agnostic learning needs exponentially fewer samples than passive learning; for other classes and distributions, it offers little to no improvement. The most popular algorithms for agnostic active learning express their performance in terms of a parameter called the disagreement coefficient, but it is known that these algorithms are inefficient on some inputs. We take a different approach to agnostic active learning, getting an algorithm that is \emph{competitive} with the optimal algorithm for any binary hypothesis class $H$ and distribution $D_X$ over $X$. In particular, if any algorithm can use $m^*$ queries to get $O(\eta)$ error, then our algorithm uses $O(m^* \log |H|)$ queries to get $O(\eta)$ error. Our algorithm lies in the vein of the splitting-based approach of Dasgupta [2004], which gets a similar result for the realizable ($\eta = 0$) setting. We also show that it is NP-hard to do better than our algorithm's $O(\log |H|)$ overhead in general.
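As a rough restatement of the competitive guarantee above (notation assumed from the abstract, not quoted from the paper body): writing $m^*$ for the number of label queries used by the best possible active learner for the given class $H$ and marginal $D_X$,
\[
  \Bigl(\exists \text{ an algorithm achieving error } O(\eta) \text{ with } m^* \text{ queries}\Bigr)
  \;\Longrightarrow\;
  \Bigl(\text{the proposed algorithm achieves error } O(\eta) \text{ with } O\!\bigl(m^* \log |H|\bigr) \text{ queries}\Bigr),
\]
and, per the abstract, improving on the $O(\log |H|)$ overhead is NP-hard in general.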