Part of Advances in Neural Information Processing Systems 28 (NIPS 2015)
Qinqing Zheng, John Lafferty
We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With O(r³κ²n log n) random measurements of a positive semidefinite n×n matrix of rank r and condition number κ, our method is guaranteed to converge linearly to the global optimum.
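The setting described above can be illustrated with a small factored-gradient-descent sketch for matrix sensing: recover a rank-r positive semidefinite matrix X* from random linear measurements b_k = ⟨A_k, X*⟩ by parameterizing X = UU^T and running gradient descent on U from a spectral initialization. This is a minimal illustration of the general approach, not the paper's exact algorithm; the step-size rule, iteration count, and measurement count below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): factored gradient
# descent for matrix sensing. We recover a rank-r PSD matrix
# X* = U* U*^T (n x n) from m random linear measurements
# b_k = <A_k, X*>, optimizing over the n x r factor U so that the PSD
# and rank constraints are built in (at the cost of nonconvexity).

rng = np.random.default_rng(0)
n, r, m = 30, 2, 600          # dimension, rank, number of measurements

# Ground-truth rank-r PSD matrix
U_star = rng.standard_normal((n, r))
X_star = U_star @ U_star.T

# Random Gaussian sensing matrices (symmetrized) and measurements
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2
b = np.einsum('kij,ij->k', A, X_star)

def loss_grad(U):
    """Value and gradient of f(U) = (1/2m) * sum_k (<A_k, UU^T> - b_k)^2."""
    residual = np.einsum('kij,ij->k', A, U @ U.T) - b
    f = 0.5 * np.mean(residual ** 2)
    # For symmetric A_k, grad_U = (2/m) * sum_k residual_k * A_k @ U
    G = np.einsum('k,kij->ij', residual, A) / m
    return f, 2 * G @ U

# Spectral initialization: top-r eigenpairs of M = (1/m) sum_k b_k A_k,
# an unbiased estimate of X* for this Gaussian measurement ensemble.
M = np.einsum('k,kij->ij', b, A) / m
vals, vecs = np.linalg.eigh(M)
U = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0))

# Constant step size scaled by an estimate of ||X*||_2 (a common
# heuristic; the paper's own rule may differ)
eta = 0.1 / np.linalg.norm(U @ U.T, 2)
for _ in range(1000):
    _, G = loss_grad(U)
    U -= eta * G

rel_err = np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star)
print(f"relative recovery error: {rel_err:.2e}")
```

With m well above the nr degrees of freedom, the iterates contract linearly toward X*, mirroring the linear-convergence guarantee stated in the abstract.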