Privacy-preserving deep learning

Introduction

Fueled by the massive influx of data and advanced algorithms, modern deep neural networks (DNNs) have greatly benefited IoT applications in a spectrum of domains, including visual detection, smart security, audio analytics, health monitoring, and infrastructure inspection. In recent years, enabling efficient integration of DNNs and IoT has received increasing attention from both academia and industry. DNN-driven applications typically follow a two-phase paradigm: 1) a training phase, wherein a model is trained using a training dataset; 2) an inference phase, wherein the trained model is used to output results (e.g., prediction, decision, recognition) for a piece of input data. With regard to deployment on IoT devices, the inference phase is mainly adopted to process data collected on the fly. Given that complex DNN inference tasks can contain a large number of computational operations, their execution on resource-constrained IoT devices becomes challenging, especially when time-sensitive tasks are taken into consideration. For example, a single inference task using popular DNN architectures (e.g., AlexNet, FaceNet, and ResNet) for visual detection can require billions of operations. Moreover, many IoT devices are battery-powered, and executing these complex DNN inference tasks quickly drains the battery. To relieve IoT devices of heavy computation and energy consumption, outsourcing complex DNN inference tasks to public cloud computing platforms has become a popular choice in the literature. However, this type of “cloud-backed” system can raise privacy concerns when the data sent to remote cloud servers contain sensitive information.

Background and problem formulation

The computational flow of a DNN inference consists of multiple linear and non-linear computational layers. The input of each layer is a matrix or a vector, and the output of each layer is used as the input of the next layer until the last layer is reached. In this project, we investigate the convolutional neural network (CNN) as an example, which is an important representative of DNNs. In a CNN, the linear operations in an inference are mainly performed in fully-connected (FC) and convolution (CONV) layers. Non-linear layers (e.g., activation and pooling layers) are typically placed after a CONV or FC layer to perform data transformation. In CONV and FC layers, dot product operations (DoT(·)) are repeatedly executed. To be specific, an FC layer takes a vector v ∈ R^n as input and outputs y ∈ R^m using the linear transformation y = W · v + b, where W ∈ R^(m×n) is the weight matrix and b is the bias vector. During the calculation of W · v, m dot products are computed as y[i] = DoT(W[i, :], v) for 1 ≤ i ≤ m. In a CONV layer, an input matrix X ∈ R^(n×n) is processed by H kernels. A (k × k) kernel K scans the matrix starting from the top-left corner and moves from left to right. Each scan is a linear transformation that takes a (k × k) window of the input matrix, computes its dot product with the kernel, and adds a bias term to the result.

Fig. 1. Examples of a Convolutional Layer and a Fully-connected Layer
Fig. 2. System Model
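
As a concrete illustration of these two layer types, below is a minimal NumPy sketch of the computations described above; the function names, the stride parameter, and the single-kernel simplification are our own additions, not code from the article:

    import numpy as np

    def fc_layer(W, v, b):
        # FC layer: y = W . v + b, computed as m dot products y[i] = DoT(W[i, :], v)
        m = W.shape[0]
        y = np.empty(m)
        for i in range(m):
            y[i] = np.dot(W[i, :], v)
        return y + b

    def conv_layer(X, K, b, stride=1):
        # CONV layer with a single (k x k) kernel K: each scan takes a
        # (k x k) window of X, computes its dot product with K, and adds the bias.
        n, k = X.shape[0], K.shape[0]
        size = (n - k) // stride + 1
        Y = np.empty((size, size))
        for r in range(size):
            for c in range(size):
                window = X[r * stride : r * stride + k, c * stride : c * stride + k]
                Y[r, c] = np.sum(window * K) + b
        return Y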

As depicted in Fig. 2, our framework involves two non-colluding edge computing servers, resource-constrained IoT devices, and the device owner.

• Edge servers: we consider two non-colluding servers, denoted EdgeA and EdgeB, that are deployed close to the IoT devices. Each edge server (e.g., a regular laptop) is capable of efficiently processing DNN inference tasks over plaintext. Each edge server obtains the linear layers of a trained DNN model from the device owner. EdgeA and EdgeB process encrypted DNN inference requests from IoT devices in a privacy-preserving manner. The multi-server architecture, wherein at least one server does not collude with the others, has been widely adopted to balance security and efficiency in privacy-preserving outsourcing.

• IoT devices: we consider resource-constrained IoT devices with limited computing capability and battery life. These devices collect data and need to process these data on the fly using DNN inference.

• Device owner: the device owner has pre-trained DNN models and can deploy IoT devices for service.

In this project, we focus on designing a framework in which an IoT device can outsource the majority of the computation in a DNN inference task to two non-colluding edge servers in a privacy-preserving manner. At the end of the inference, the IoT device obtains the result over its input data, whereas the two edge servers do not learn the sensitive information in the input data, the intermediate outputs, or the final inference result. As all IoT devices are deployed by the owner, he/she has access to all data collected and processed by his/her IoT devices when necessary.

Privacy-preserving outsourcing of DNN inference

In our framework, the IoT device outsources the execution of the linear (CONV and FC) layers and keeps the compute-efficient non-linear layers local. Without loss of generality, we consider a DNN that contains q CONV and FC layers, each of which is followed by non-linear activation layers if necessary. We use µ to denote the length (in bits) of an element in the input matrix of a CONV layer or the input vector of an FC layer, and λ to denote the security parameter. Random numbers utilized in our design are λ-bit values generated using a pseudorandom function F(·). There are three major phases in our framework: Setup, Data Encryption, and Privacy-Preserving Execution. In the Setup phase, the owner prepares a pre-trained DNN model and generates the encryption and decryption keys for the IoT device. When the IoT device needs to perform DNN inference over its collected data, it executes the Data Encryption phase to encrypt the data and send them to the two edge servers. The DNN inference is then executed in the Privacy-Preserving Execution phase. All outsourced DNN operations performed by the edge servers are over encrypted data.
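
The article leaves F(·) unspecified; one standard way to realize a λ-bit pseudorandom function, shown purely as an assumed instantiation, is a keyed HMAC truncated to λ bits (the counter interface and λ = 128 are our choices):

    import hmac, hashlib

    LAMBDA = 128  # security parameter λ in bits (assumed value, not fixed by the article)

    def F(key: bytes, counter: int, lam: int = LAMBDA) -> int:
        # Keyed HMAC-SHA256, truncated to the top lam bits, used as the PRF F(.)
        digest = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        return int.from_bytes(digest, "big") >> (256 - lam)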

Detailed Construction

Setup: To set up the framework, the device owner prepares a trained DNN model and sends its q linear layers (CONV and FC) to EdgeA and EdgeB. For the ith linear layer, the owner generates a pair of encryption and decryption keys {Si,in, Si,out}, 1 ≤ i ≤ q. As presented in Algorithm 1, Si,in for the ith linear layer is randomly generated according to the input dimension of the layer, and each element of Si,in is a λ-bit random number. Si,out is the corresponding output of the ith linear layer when taking Si,in as the input. The {Si,in, Si,out} key pairs are deployed on the IoT device for later privacy-preserving DNN inference tasks.

[Figure: Algorithm 1]
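
A minimal sketch of this key generation as we read it (the helper names and the NumPy representation are our own; the authoritative version is Algorithm 1):

    import secrets
    import numpy as np

    def setup_keys(linear_layers, input_shapes, lam=128):
        # For the i-th linear layer: S_in is a fresh array of lam-bit random
        # numbers matching the layer's input dimension, and S_out is the output
        # of that layer when run on S_in. The pairs are deployed on the IoT device.
        keys = []
        for layer, shape in zip(linear_layers, input_shapes):
            n = int(np.prod(shape))
            S_in = np.array([secrets.randbits(lam) for _ in range(n)],
                            dtype=object).reshape(shape)
            S_out = layer(S_in)
            keys.append((S_in, S_out))
        return keys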





Data Encryption: When a DNN inference is needed, the IoT device outsources the execution of the linear layers to EdgeA and EdgeB in a privacy-preserving manner. To be specific, for the ith linear layer, its input is encrypted and sent to EdgeA and EdgeB for processing. The intermediate results returned by the edge servers are decrypted by the IoT device and then fed into the following non-linear layers. The output of the non-linear layers is used as the input of the (i + 1)th linear layer. This process is conducted interactively until all layers of the DNN are executed, as shown in Algorithm 2.
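
The interaction just described can be summarized with the following skeleton; the helper names are placeholders for the article's sub-procedures, not its actual API:

    def private_inference(x, q, encrypt, send_to_edge_a, send_to_edge_b, decrypt, nonlinear):
        # One round per linear layer: encrypt the input, let the two edge
        # servers process the ciphertexts, decrypt their intermediate results,
        # and run the local non-linear layers to obtain the next layer's input.
        for i in range(q):
            C_A, C_B = encrypt(i, x)
            O_A = send_to_edge_a(i, C_A)   # EdgeA processes its ciphertext
            O_B = send_to_edge_b(i, C_B)   # EdgeB processes its ciphertext
            y = decrypt(i, O_A, O_B)       # IoT device recovers the layer output
            x = nonlinear(i, y)            # local non-linear layers
        return x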


[Figure: Algorithm 2]

Privacy-preserving Execution: EdgeA and EdgeB take Ci,A and Ci,B as the input of the ith linear layer, respectively, and output Oi,A and Oi,B. On receiving Oi,A and Oi,B from the edge servers, the IoT device decrypts them as shown in the following figures.

[Figures: decryption equations for Oi,A and Oi,B]
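
The concrete decryption equations live in the figures above and are not reproduced in the text. Purely to make the data flow testable, here is one textbook instantiation based on additive secret sharing of a bias-free linear layer; this is our assumption, not necessarily the article's exact scheme:

    import numpy as np

    def encrypt_share(x, mask):
        # Split the layer input into two additive shares with C_A + C_B = x.
        return x - mask, mask

    def edge_linear(W, C):
        # Each non-colluding edge server applies the linear layer to its share.
        return W @ C

    def decrypt_shares(O_A, O_B):
        # Linearity gives O_A + O_B = W @ (C_A + C_B) = W @ x.
        return O_A + O_B

    # Usage sketch with toy values
    W = np.array([[1, 2], [3, 4]])
    x = np.array([5, 6])
    C_A, C_B = encrypt_share(x, mask=np.array([7, 9]))
    assert np.array_equal(decrypt_shares(edge_linear(W, C_A), edge_linear(W, C_B)), W @ x)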





To this end, the IoT device is able to efficiently handle each layer in a DNN. The compute-intensive linear (CONV and FC) layers are securely outsourced to the edge servers after encryption, while the compute-efficient non-linear layers are directly handled by the IoT device. Since our privacy-preserving solution is a general design, it can be customized and recursively plugged into other DNN architectures.

Security Analysis