Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Privacy in machine learning is widely studied in the community due to its importance in many practical applications. Most studies use Homomorphic Encryption or Secure Multi-Party Computation to achieve privacy. This work instead uses Functional Encryption (FE), a different set of tools with different capabilities. I find this a great contribution, since it may influence future research by demonstrating another plausible direction. Moreover, the authors present a new FE scheme tailored to work well with machine learning workloads. This raises the question of whether this paper should be evaluated by ML experts at NeurIPS or by privacy experts at, for example, Crypto. Publishing it at NeurIPS creates a problem, even during the review process, since the NeurIPS community is not used to evaluating new cryptographic algorithms. Nevertheless, since the algorithm's design is adjusted to match ML workloads, it is within the scope of NeurIPS to publish such a paper.