Efficient Privacy-Preserving Inference Outsourcing for Convolutional Neural Networks

Abstract:

Deploying convolutional neural network (CNN) inference on resource-constrained devices remains a significant challenge for the Industrial Internet of Things (IIoT). Although cloud computing shows great promise for machine learning training and prediction, outsourcing data to a remote cloud incurs privacy risks and high latency. Therefore, we design a new framework for efficient and privacy-preserving CNN inference based on cloud-edge-client collaboration (named PCNNCEC). In PCNNCEC, the cloud's model and the client's data in IIoT are split into two secret shares and sent to two non-colluding edge servers. We propose a new, efficient private comparison protocol based on additive secret sharing, which enables secure computation of the ReLU function without approximation in the semi-honest adversary model. By applying secure two-party computation protocols, the two edge servers can jointly compute the prediction results without learning anything about the model or the data. Moreover, to speed up the offline pre-computation phase without sacrificing security, we delegate triplet generation to the cloud, so that the edge servers neither need frequent interactions to generate triplets themselves nor require an additional trusted party. Experimental results show that the proposed private comparison protocol achieves a better tradeoff between low latency and high throughput than garbled-circuit-based protocols and other secret-sharing-based protocols. Additionally, benchmarks on the MNIST and CIFAR-10 datasets demonstrate that PCNNCEC requires less communication and runtime than two recent related schemes at the same security level.
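
The abstract references two standard building blocks: additive secret sharing of the model and data between two edge servers, and secure multiplication from pre-computed Beaver triplets (generated offline by the cloud in PCNNCEC). The following Python sketch illustrates these generic primitives only; it is not the paper's implementation, and the function names (`share`, `reconstruct`, `gen_triplet`, `beaver_mul`) and the 64-bit ring are illustrative assumptions.

```python
# Minimal sketch (not the paper's protocol): additive secret sharing over the ring
# Z_{2^64} and Beaver-triplet-based multiplication of the kind two non-colluding
# edge servers could use on secret-shared values.
import secrets

MOD = 2 ** 64  # ring modulus (assumed for illustration)

def share(x):
    """Split x into two additive shares: x = (x0 + x1) mod MOD."""
    x0 = secrets.randbelow(MOD)
    x1 = (x - x0) % MOD
    return x0, x1

def reconstruct(x0, x1):
    """Recombine the two shares held by the edge servers."""
    return (x0 + x1) % MOD

def gen_triplet():
    """Beaver triplet (a, b, c) with c = a*b; in PCNNCEC this generation
    is delegated to the cloud during the offline phase."""
    a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
    c = (a * b) % MOD
    return share(a), share(b), share(c)

def beaver_mul(x_shares, y_shares, triplet):
    """Each server i holds x_i, y_i and triplet shares (a_i, b_i, c_i).
    The servers open e = x - a and f = y - b (the only values exchanged),
    then locally compute additive shares of x*y."""
    (a0, a1), (b0, b1), (c0, c1) = triplet
    x0, x1 = x_shares
    y0, y1 = y_shares
    e = (x0 - a0 + x1 - a1) % MOD
    f = (y0 - b0 + y1 - b1) % MOD
    # z_i = c_i + e*b_i + f*a_i, plus e*f added by one designated server.
    z0 = (c0 + e * b0 + f * a0 + e * f) % MOD
    z1 = (c1 + e * b1 + f * a1) % MOD
    return z0, z1

if __name__ == "__main__":
    x, y = 12345, 678
    z0, z1 = beaver_mul(share(x), share(y), gen_triplet())
    assert reconstruct(z0, z1) == (x * y) % MOD
```

Secure multiplications of this form, combined with the paper's private comparison protocol for ReLU, are what allow the two edge servers to evaluate convolutional and activation layers jointly without reconstructing the model weights or the client's inputs.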