Homomorphic Matrix Computation & Application to Neural Networks

Xiaoqian Jiang, Miran Kim (University of Texas Health Science Center at Houston), Kristin Lauter (Microsoft Research), Yongsoo Song (University of California, San Diego)


SLIDE 1

Homomorphic Matrix Computation & Application to Neural Networks

Xiaoqian Jiang, Miran Kim (University of Texas Health Science Center at Houston), Kristin Lauter (Microsoft Research), Yongsoo Song (University of California, San Diego)

SLIDE 2

Background

SLIDE 3

Primitives for Secure Computation

• Differential Privacy
  - Limited applications (e.g., count, average); consumes a privacy budget.
• (Secure) Multi-Party Computation
  - Lower computational complexity, but higher communication cost: e.g., 40 GB for a GWAS analysis of 100K individuals [Nature Biotechnology'17].
  - Protocol has many rounds.
• (Fully) Homomorphic Encryption
  - Higher computational complexity, but lower communication cost.
  - One-round protocol.

SLIDE 4

HE vs. MPC

• Re-usability
  - HE: high (non-interactive). One-time encryption; no further interaction from the data owners.
  - MPC: single-use encryption; not good for long-term storage; interaction between the parties each time.
• Sources
  - HE: unlimited.
  - MPC: limited participants (due to complexity constraints).
• Speed
  - HE: slow in computation (but can be sped up using SIMD).
  - MPC: slow in communication (due to the large circuits to be exchanged).

HE is ideal for long-term storage and non-interactive computation.

SLIDE 5

Summary of Progress

• 2009-10: Plausibility
  - [GH'11] A single bit operation takes 30 minutes.
• 2011-12: Real Circuits
  - [GHS'12] A 30,000-gate circuit in 36 hours.
• 2013-16: Usability
  - HElib [HS'14], IBM's open-source implementation of the BGV scheme: the same 30,000-gate circuit in 4-15 minutes.
• 2017-Today: Practical uses for real-world applications
  - HE standardization workshops.
  - iDASH Privacy & Security competition (2013~).

SLIDE 6

Secure Health Data Analysis

• Predicting heart attack
  - ~0.2 seconds.
• Sequence matching
  - ~27 seconds for edit distance of length 8.
  - ~180 seconds for approximate edit distance of length 10K (iDASH'15).
• Searching for biomarkers
  - ~0.2 seconds on a 100K database (iDASH'16).
• Training a logistic regression model
  - ~7 minutes for 18 features * 1600 samples (iDASH'17).

SLIDE 7

Homomorphic Matrix Operation

• HElib (Crypto'14)
  - (Matrix) * (Vector)
• CryptoNets (ICML'16)
  - (Plain matrix) * (Element-wise encrypted vector)
• GAZELLE (Usenix Security'18)
  - (Column-wise encrypted matrix) * (Plain vector)
• Homomorphic evaluation of (deep) neural networks
  - [BMMP17] Evaluation of discretized DNNs; [CWM+17] Classification on DNNs.
  - Evaluation of a plain model on encrypted data.


O(d) complexity for (matrix * vector) → O(d^2) for (matrix * matrix): not optimal.

SLIDE 9

Motivation

• Scenarios (data/model owner; cloud server; individuals)
  I. Data owner trains a model and makes it available on the cloud.
  II. Model provider encrypts a trained model & uploads it to the cloud to make predictions on encrypted inputs from individuals.
  III. Cloud trains a model on encrypted data and uses it to make predictions on new encrypted inputs.
• Our work: homomorphic operations between encrypted matrices.

SLIDE 10

Main Idea


SLIDE 13

Functionality of HE Schemes

• Packing method
  - Vector encryption & parallel operations.
  - Enc(x1,..., xn) * Enc(y1,..., yn) = Enc(x1*y1,..., xn*yn)
• Scalar multiplication
  - (a1,..., an) * Enc(x1,..., xn) = Enc(a1x1,..., anxn)
• Rotation
  - Enc(x1,..., xn) → Enc(x2,..., xn, x1)
• Composition of basic operations
  - Permutation, linear transformation (expensive).

How to represent matrix arithmetic using HE-friendly operations?
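The three primitives above can be mimicked on unencrypted packed vectors; a minimal numpy sketch of their plaintext semantics (no encryption involved):

```python
import numpy as np

# Packed plaintext vectors standing in for ciphertext slots.
x = np.array([1, 2, 3, 4])
y = np.array([5, 6, 7, 8])

# SIMD multiplication: Enc(x1,...,xn) * Enc(y1,...,yn) = Enc(x1*y1,...,xn*yn)
simd = x * y                 # array([ 5, 12, 21, 32])

# Scalar multiplication by a plaintext vector: (a1,...,an) * Enc(x1,...,xn)
a = np.array([2, 0, 2, 0])
scaled = a * x               # array([2, 0, 6, 0])

# Rotation: Enc(x1,...,xn) -> Enc(x2,...,xn,x1)
rotated = np.roll(x, -1)     # array([2, 3, 4, 1])
```

In a real scheme each of these is a ciphertext-level operation, and rotation additionally consumes a rotation (Galois) key.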

SLIDE 14

Matrix Encoding

• Identify an n = d^2 dimensional vector with a d*d matrix.
  - Addition is easy.

  (1, 2, 3, 4, 5, 6, 7, 8, 9)  ↔  [1 2 3]
                                   [4 5 6]
                                   [7 8 9]

SLIDE 15

Matrix Encoding

• Identify an n = d^2 dimensional vector with a d*d matrix.
  - Addition is easy.
  - Row or column shifting permutations are cheap (depth 1, complexity O(1)).

  Row shifting:  [1 2 3]      [4 5 6]
                 [4 5 6]  →   [7 8 9]
                 [7 8 9]      [1 2 3]

SLIDE 16

Matrix Encoding

• Identify an n = d^2 dimensional vector with a d*d matrix.
  - Addition is easy.
  - Row or column shifting permutations are cheap (depth 1, complexity O(1)).

  Column shifting:  [1 2 3]      [2 3 1]
                    [4 5 6]  →   [5 6 4]
                    [7 8 9]      [8 9 7]
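On the packed n = d^2 vector, row shifting is a single slot rotation by d, and column shifting can be assembled from two rotations plus two complementary 0/1 masks (scalar multiplications), which is where the depth-1 cost comes from. A plaintext numpy sketch of that decomposition (the variable layout is mine):

```python
import numpy as np

d = 3
A = np.arange(1, d * d + 1).reshape(d, d)   # [[1,2,3],[4,5,6],[7,8,9]]
v = A.reshape(-1)                           # row-major packing, n = d^2 slots

# Row shifting by one: a single rotation by d slots.
row_shifted = np.roll(v, -d).reshape(d, d)  # [[4,5,6],[7,8,9],[1,2,3]]

# Column shifting by one: a rotation by 1 (for columns 0..d-2) and a
# rotation by -(d-1) (wrapping column d-1 back to column 0), selected
# by two complementary 0/1 masks -- i.e., two scalar multiplications.
mask = ((np.arange(d * d) % d) != d - 1).astype(int)
col_shifted = (np.roll(v, -1) * mask + np.roll(v, d - 1) * (1 - mask)).reshape(d, d)
# [[2,3,1],[5,6,4],[8,9,7]]
```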

SLIDE 17

Matrix Multiplication

      [1 2 3]        [a b c]
  A = [4 5 6],   B = [d e f]
      [7 8 9]        [g h i]

  AB = A0 ◉ B0 + A1 ◉ B1 + A2 ◉ B2      (◉ : element-wise multiplication)

     [1 2 3]   [a e i]     [2 3 1]   [d h c]     [3 1 2]   [g b f]
   = [5 6 4] ◉ [d h c]  +  [6 4 5] ◉ [g b f]  +  [4 5 6] ◉ [a e i]
     [9 7 8]   [g b f]     [7 8 9]   [a e i]     [8 9 7]   [d h c]

Matrix mult = generation of the Ai, Bi & d homomorphic add/mult operations.
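The identity on this slide can be checked numerically for d = 3; a sketch with B's symbols a..i replaced by the numbers 10..18, using exactly the pictured permutations (rotate row i of A left by i; rotate column j of B up by j):

```python
import numpy as np

d = 3
A = np.arange(1, 10).reshape(d, d)    # [[1,2,3],[4,5,6],[7,8,9]]
B = np.arange(10, 19).reshape(d, d)   # numeric stand-in for [[a,b,c],[d,e,f],[g,h,i]]

# A0: rotate row i of A left by i.  B0: rotate column j of B up by j.
A0 = np.array([np.roll(A[i], -i) for i in range(d)])
B0 = np.array([[B[(i + j) % d, j] for j in range(d)] for i in range(d)])

# Ak: column-shift A0 by k.  Bk: row-shift B0 by k.  AB = sum_k Ak ◉ Bk.
AB = sum(np.array([np.roll(A0[i], -k) for i in range(d)]) * np.roll(B0, -k, axis=0)
         for k in range(d))

print((AB == A @ B).all())            # True
```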


SLIDE 20

Generation of Ai

A0 generation: O(d) homomorphic operations. Ai = ColumnShifting(A0, i): O(1) each.

      [1 2 3]        [1 2 3]        [2 3 1]        [3 1 2]
  A = [4 5 6]   A0 = [5 6 4]   A1 = [6 4 5]   A2 = [4 5 6]
      [7 8 9]        [9 7 8]        [7 8 9]        [8 9 7]

SLIDE 21

Generation of Bi

B0 generation: O(d) homomorphic operations. Bi = RowShifting(B0, i): O(1) each.

      [a b c]        [a e i]        [d h c]        [g b f]
  B = [d e f]   B0 = [d h c]   B1 = [g b f]   B2 = [a e i]
      [g h i]        [g b f]        [a e i]        [d h c]

SLIDE 22

Summary

A, B: d * d matrices.  AB = A0 ◉ B0 + A1 ◉ B1 + ... + A(d-1) ◉ B(d-1)

• Generation of A0, B0: general permutation, O(d).
• Generation of the Ai, Bi's: column/row shifting from A0, B0, O(d).
• Element-wise products and summation: O(d).
• Total complexity: O(d) homomorphic operations (optimal?).
• Depth: 2 (scalar mult) + 1 (homomorphic mult).
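The pipeline above can be simulated end-to-end for a general d, keeping both operands as packed length-d^2 vectors. The two initial permutations are applied by index arithmetic here, standing in for the scheme's rotation-and-mask realizations; this is a plaintext sketch of the algorithm's structure, not the ciphertext implementation:

```python
import numpy as np

def sigma(a, d):            # A0: row i rotated left by i (general permutation, O(d))
    i, j = np.divmod(np.arange(d * d), d)
    return a[i * d + (i + j) % d]

def tau(b, d):              # B0: column j rotated up by j (general permutation, O(d))
    i, j = np.divmod(np.arange(d * d), d)
    return b[((i + j) % d) * d + j]

def col_shift(v, d, k):     # Ak from A0: column shifting, O(1) each in the scheme
    i, j = np.divmod(np.arange(d * d), d)
    return v[i * d + (j + k) % d]

def he_matmul(a, b, d):
    """a, b: row-major length-d^2 packings of d x d matrices. Returns packing of A @ B."""
    a0, b0 = sigma(a, d), tau(b, d)
    # Row-shifting B0 by k is a single rotation by k*d slots.
    return sum(col_shift(a0, d, k) * np.roll(b0, -k * d) for k in range(d))

# Check against ordinary matrix multiplication.
rng = np.random.default_rng(0)
d = 4
A, B = rng.integers(0, 10, (d, d)), rng.integers(0, 10, (d, d))
C = he_matmul(A.reshape(-1), B.reshape(-1), d).reshape(d, d)
print((C == A @ B).all())   # True
```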

SLIDE 23

Other Operations

• Matrix transposition
  - Complexity O(d^0.5) + depth 1.
• Parallelization
  - When the number of plaintext slots > d^2, encrypt several matrices in a single ciphertext.
• Multiplication between non-square matrices
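Transposition, for instance, is the slot permutation i*d + j → j*d + i on the packed vector; a minimal plaintext sketch of that permutation (the slide's O(d^0.5)-complexity method batches it into few rotations, which is not reproduced here):

```python
import numpy as np

d = 3
A = np.arange(1, d * d + 1).reshape(d, d)
v = A.reshape(-1)                       # row-major packing

# Transpose by permuting slots: slot i*d + j reads from slot j*d + i.
i, j = np.divmod(np.arange(d * d), d)
vT = v[j * d + i]

print((vT.reshape(d, d) == A.T).all())  # True
```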

SLIDE 24

Implementation

SLIDE 25

Experimental Results

Based on the HEAAN library for fixed-point arithmetic (n = 2^13). All numbers have 24-bit precision.

SLIDE 26

Evaluation of Neural Networks

1 convolution layer + 2 fully connected layers. Parallel evaluation on 64 images.

SLIDE 27

Comparison?

Plain model (previous work) vs. Encrypted model (ours)

SLIDE 28

Questions?

Thanks for listening