ℓ₀-norm Minimization for Basis Selection

Part of Advances in Neural Information Processing Systems 17 (NIPS 2004)


Authors

David Wipf, Bhaskar Rao

Abstract

Finding the sparsest, or minimum ℓ₀-norm, representation of a signal given an overcomplete dictionary of basis vectors is an important problem in many application domains. Unfortunately, the required optimization problem is often intractable because there is a combinatorial increase in the number of local minima as the number of candidate basis vectors increases. This deficiency has prompted most researchers to instead minimize surrogate measures, such as the ℓ₁-norm, that lead to more tractable computational methods. The downside of this procedure is that we have now introduced a mismatch between our ultimate goal and our objective function. In this paper, we demonstrate a sparse Bayesian learning-based method of minimizing the ℓ₀-norm while reducing the number of troublesome local minima. Moreover, we derive necessary conditions for local minima to occur via this approach and empirically demonstrate that there are typically far fewer such minima for general problems of interest.
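To make the sparse recovery setting concrete, the sketch below applies a generic EM-style sparse Bayesian learning iteration to estimate a sparse coefficient vector w from y = Φw given an overcomplete dictionary Φ, pruning basis vectors whose prior variances collapse toward zero. This is a standard SBL recipe, not the exact algorithm derived in the paper; the function name sbl_sparse_recovery, the fixed noise variance sigma2, and the pruning tolerance are illustrative assumptions.

```python
import numpy as np

def sbl_sparse_recovery(Phi, y, sigma2=1e-3, n_iters=200, prune_tol=1e-8):
    """Generic sparse Bayesian learning sketch for y ~ Phi @ w (hypothetical helper).

    Each coefficient w_i carries a prior variance (hyperparameter) gamma_i.
    Re-estimating the gammas by EM drives most of them toward zero, and the
    corresponding basis vectors are pruned, yielding a sparse representation.
    """
    n, m = Phi.shape
    gamma = np.ones(m)          # prior variance per coefficient
    keep = np.arange(m)         # indices of still-active basis vectors
    mu = np.zeros(0)

    for _ in range(n_iters):
        Phi_k = Phi[:, keep]
        # Posterior covariance and mean of the active weights given current gammas
        Sigma = np.linalg.inv(Phi_k.T @ Phi_k / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ Phi_k.T @ y / sigma2
        # EM re-estimate of the hyperparameters
        gamma = mu**2 + np.diag(Sigma)
        # Prune coefficients whose prior variance has effectively collapsed
        active = gamma > prune_tol
        gamma, keep, mu = gamma[active], keep[active], mu[active]

    w = np.zeros(m)
    w[keep] = mu
    return w

if __name__ == "__main__":
    # Toy example: n = 20 measurements, m = 50 candidate basis vectors,
    # with a 3-sparse ground-truth coefficient vector.
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((20, 50))
    w_true = np.zeros(50)
    w_true[[3, 17, 41]] = [1.0, -2.0, 0.5]
    y = Phi @ w_true
    w_hat = sbl_sparse_recovery(Phi, y)
    print("nonzero indices recovered:", np.nonzero(np.abs(w_hat) > 1e-3)[0])
```

The pruning step is what distinguishes this from a simple ridge-style estimator: rather than shrinking all coefficients uniformly, the per-coefficient hyperparameters allow most to be driven exactly out of the model, which is the sense in which SBL targets sparse (small ℓ₀-norm) solutions.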