NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 2545
Title: Partially Encrypted Deep Learning using Functional Encryption

Reviewer 1

The paper is very well written, but I'm concerned whether it is a fit for NeurIPS. The main contribution to ML in Section 4 feels rather compressed, and by far most of the references went to venues in cryptography and information security (only one each for NeurIPS and ICML). Furthermore, given that the paper introduces a new functional encryption scheme, it would make sense to submit it to a venue where this receives due appreciation and scrutiny. Post rebuttal: I appreciate the authors' efforts to introduce a new concept to the ML community, but I stand by my preference for scrutiny in a more appropriate venue.

Reviewer 2

The paper presents a novel FE scheme and discusses it in detail. I enjoyed reading it. However, the definition of this encryption scheme remains unclear in the end. The authors refer to Figure 10, which is not available. Without this crucial information, the remainder of the paper lacks clarity. That said, the experiments are chosen carefully and highlight the findings. I wonder what important information the circles and lines in Figures 2 and 3 reveal to the reader? Ultimately, it would have been more informative to replace these two figures with Figure 10 from the appendix. As it stands, the paper is not self-contained.

Reviewer 3

Summary of the work: This paper proposes a methodology to perform inference on encrypted data using functional encryption. The authors develop a specific model consisting of a private and a public execution; the private (ciphertext) part is a 2-layer perceptron with square activation functions in the hidden layer. The output of this 2-layer perceptron is revealed to the server, which runs another ML model to classify the input. The authors provide functional encryption tools to efficiently run the private part of the protocol. They also propose an adversarial re-training step (discussed under the weak points below). A minimal sketch of the described split appears after this review.

Strong points:
- The authors clearly distinguish their work from other private inference scenarios: their target is applications where the client might not be "online" and cannot communicate in an SFE protocol.
- The importance of off-line secure inference is well motivated using examples.
- The paper is well written, and the different aspects of the work are explained effectively.
- The scenario of having some labels revealed publicly (e.g., digit value) while keeping some private (e.g., font) is useful in many practical applications.

Weak points:
- The threat model is not clearly explained. I would like to know the following:
  - It is not clear which party is the adversary; my assumption is that the server is untrusted. Given that, I cannot convince myself how the adversarial re-training step is meaningful: the server is the party who learns the private 2-layer perceptron using training data. From that point of view, the adversary itself has control over the underlying model and can avoid adversarial re-training.
  - If the above scenario is not correct, it would be nice if the authors provided explanations to avoid confusion. Please specify which party trains each part of the ML model.
- From the runtime results, the encryption time (the client's processing time) is around 4x the server's evaluation time. Therefore, outsourcing the evaluation to the server does not save much time (compared to the scenario in which the client herself evaluates the model in plaintext). If the server is not willing to share the model parameters with the client, then we should assume that the server (the adversary) can train the model himself, which makes the adversarial training step not so sensible.
- Another major issue is the scalability of this approach, and whether it can be generalized to other datasets. The provided (private) model architecture is overly simple and is evaluated only on MNIST-like data. I would like to see how the network architecture would perform (in terms of accuracy and runtime) on more complex datasets like CIFAR-10.
- Minor: please describe what \theta in Section 4.2 represents.
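
For concreteness, here is a minimal plaintext sketch of the split described in the summary above. All names, dimensions, and initializations are illustrative assumptions rather than values from the paper, and the private part is shown without encryption; in the actual protocol it would be evaluated over ciphertexts via functional encryption.

```python
# Illustrative sketch only: layer sizes, weights, and the public classifier
# are assumptions, not the paper's actual parameters.
import numpy as np

rng = np.random.default_rng(0)

# --- Private part: 2-layer perceptron with square activations -----------
# Takes a flattened 28x28 (MNIST-like) input; its output is what the
# protocol reveals to the server. The square activation keeps the private
# computation a degree-2 polynomial in the input, which is what makes it
# amenable to functional encryption.
W1 = rng.normal(scale=0.1, size=(784, 32))   # hidden-layer weights
W2 = rng.normal(scale=0.1, size=(32, 16))    # projection to revealed output

def private_part(x):
    h = (x @ W1) ** 2          # hidden layer with square activation
    return h @ W2              # revealed to the server in the clear

# --- Public part: an ordinary classifier run by the server --------------
W3 = rng.normal(scale=0.1, size=(16, 10))    # e.g. 10 digit classes

def public_part(z):
    logits = z @ W3
    e = np.exp(logits - logits.max())
    return e / e.sum()                       # class probabilities (softmax)

x = rng.random(784)                           # dummy "image"
probs = public_part(private_part(x))
print(probs.argmax(), probs.max())
```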