NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 1473
Title: Positive-Unlabeled Compression on the Cloud

This paper proposes a novel application of positive-unlabeled (PU) learning: compressing a large trained teacher network into a small student network when the architecture of the teacher network is unknown and the data available for distilling supervision from the teacher and for training the student is limited. The proposed method applies PU learning by treating the huge pool of data on the cloud as unlabeled (U) data and the data uploaded by the user as positive (P) data; the PU classifier then selects more and more related data from the pool for distilling the supervision (a minimal sketch of this selection step is given after the review). This only requires the user to upload the teacher network to be compressed and a limited amount of data to the cloud. As a consequence, the method can save substantial computational resources and may also protect sensitive data on the user side.

The clarity, the novelty, and the significance are all above the corresponding thresholds of NeurIPS, and thus the paper should clearly be accepted. The problem under consideration is of practical interest and may have a huge impact on our daily lives (I assume that many of us use low-end computing devices every day).

To address the broader NeurIPS audience, much of the discussion of the motivation should be moved from the rebuttal into the paper itself, including why the compression is not performed on the user side and why the teacher network is not used directly on the cloud side. The final version should be as friendly as possible to everybody attending the conference.

[This meta-review was reviewed and revised by the Program Chairs]
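For concreteness, the following is a minimal sketch (not the authors' code) of the PU-based data-selection idea summarized above: user-uploaded examples serve as P data, the cloud pool as U data, a binary classifier is trained with a non-negative PU risk in the style of Kiryo et al. (2017), and the highest-scoring pool samples are kept for the distillation step. The names (nnpu_risk, select_related), the network, and the class-prior value are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    def nnpu_risk(scores_p, scores_u, prior):
        """Non-negative PU risk with the sigmoid surrogate loss."""
        loss = lambda s, y: torch.sigmoid(-y * s)          # l(s, y) = sigmoid(-y s)
        risk_p_pos = prior * loss(scores_p, 1.0).mean()    # pi * R_P^+
        risk_p_neg = prior * loss(scores_p, -1.0).mean()   # pi * R_P^-
        risk_u_neg = loss(scores_u, -1.0).mean()           # R_U^-
        # Clamp the estimated negative risk at zero (the non-negative correction).
        return risk_p_pos + torch.clamp(risk_u_neg - risk_p_neg, min=0.0)

    def select_related(model, pool, k):
        """Return indices of the k pool samples scored most positive."""
        with torch.no_grad():
            scores = model(pool).squeeze(-1)
        return scores.topk(k).indices

    if __name__ == "__main__":
        d, prior = 32, 0.3                    # feature dim and assumed class prior
        model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        user_p = torch.randn(100, d) + 1.0    # stand-in for user-uploaded P data
        cloud_u = torch.randn(5000, d)        # stand-in for the cloud pool (U data)

        for _ in range(200):
            opt.zero_grad()
            risk = nnpu_risk(model(user_p).squeeze(-1),
                             model(cloud_u).squeeze(-1), prior)
            risk.backward()
            opt.step()

        related = cloud_u[select_related(model, cloud_u, k=500)]
        # `related` would then be labeled by the teacher network and used,
        # together with the user's data, to train the small student network.

In the paper's setting, this selection would run iteratively on the cloud, so the classifier picks increasingly related data without requiring the user to upload more than the teacher model and a small seed dataset.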