

  1. Homomorphic Matrix Computation & Application to Neural Networks
     Xiaoqian Jiang, Miran Kim (University of Texas Health Science Center at Houston),
     Kristin Lauter (Microsoft Research), Yongsoo Song (University of California, San Diego)

  2. Background

  3. Primitives for Secure Computation
     - Differential Privacy
       - Limited applications (e.g., count, average); requires a privacy budget.
     - (Secure) Multi-Party Computation
       - Lower computational complexity, but higher communication cost, e.g., 40 GB for a GWAS analysis over 100K individuals [Nature Biotechnology'17].
       - The protocol has many rounds.
     - (Fully) Homomorphic Encryption
       - Higher computational complexity, but lower communication cost.
       - One-round protocol.

  4. HE vs. MPC
     - Re-usability: HE uses one-time encryption with high re-usability (non-interactive) and no further interaction from the data owners; MPC uses single-use encryption, is not good for long-term storage, and requires interaction between the parties each time.
     - Sources: HE supports unlimited sources; MPC supports limited participants (due to complexity constraints).
     - Speed: HE is slow in computation (but can be sped up using SIMD); MPC is slow in communication (due to the large circuits to be exchanged).
     HE is ideal for long-term storage and non-interactive computation.

  5. Summary of Progress
     - 2009-10: Plausibility
       - [GH'11] A single bit operation takes 30 minutes.
     - 2011-12: Real circuits
       - [GHS'12] A 30,000-gate circuit in 36 hours.
     - 2013-16: Usability
       - HElib [HS'14]: IBM's open-source implementation of the BGV scheme; the same 30,000-gate circuit in 4-15 minutes.
     - 2017-Today: Practical uses for real-world applications
       - HE standardization workshops.
       - iDASH Privacy & Security competition (2013~).

  6. Secure Health Data Analysis
     - Predicting heart attack: ~0.2 seconds.
     - Sequence matching:
       - ~27 seconds, edit distance of length 8.
       - ~180 seconds, approximate edit distance of length 10K (iDASH'15).
     - Searching of biomarkers: ~0.2 seconds, 100K database (iDASH'16).
     - Training a logistic regression model: ~7 minutes, 18 features * 1600 samples (iDASH'17).

  7. Homomorphic Matrix Operation
     - HElib (Crypto'14)
       - (Matrix) * (Vector)
     - CryptoNets (ICML'16)
       - (Plain matrix) * (Element-wise encrypted vector)
     - GAZELLE (USENIX Security'18)
       - (Column-wise encrypted matrix) * (Plain vector)
     - Homomorphic evaluation of (deep) neural networks
       - [BMMP17] Evaluation of a discretized DNN; [CWM+17] Classification on a DNN.
       - Evaluation of a plain model on encrypted data.

  8. Homomorphic Matrix Operation
     - HElib (Crypto'14)
       - (Matrix) * (Vector): O(d) complexity for (matrix * vector), hence O(d^2) for (matrix * matrix): not optimal (see the sketch after this slide).
     - CryptoNets (ICML'16)
       - (Plain matrix) * (Element-wise encrypted vector)
     - GAZELLE (USENIX Security'18)
       - (Column-wise encrypted matrix) * (Plain vector)
     - Homomorphic evaluation of (deep) neural networks
       - [BMMP17] Evaluation of a discretized DNN; [CWM+17] Classification on a DNN.
       - Evaluation of a plain model on encrypted data.
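
A minimal plaintext sketch (NumPy only, no encryption) of where the O(d^2) count comes from: a matrix-matrix product assembled from d matrix-vector products. The dimension d and the random test data are illustrative, not taken from the slides.

```python
import numpy as np

# Plaintext sketch only: a d x d matrix product built from d matrix-vector
# products. If each homomorphic matrix-vector product costs O(d) operations
# (as in the HElib approach above), the total is O(d^2) homomorphic operations,
# which is the cost the encrypted matrix-matrix algorithm below reduces to O(d).
d = 4
rng = np.random.default_rng(0)
A = rng.integers(0, 10, (d, d))
B = rng.integers(0, 10, (d, d))

C = np.column_stack([A @ B[:, j] for j in range(d)])  # d matrix-vector products
assert np.array_equal(C, A @ B)
```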

  9. Motivation
     - Scenarios (data/model owner; cloud server; individuals):
       I. Data owner trains a model and makes it available on the cloud.
       II. Model provider encrypts a trained model & uploads it to the cloud to make predictions on encrypted inputs from individuals.
       III. Cloud trains a model on encrypted data and uses it to make predictions on new encrypted inputs.
     - Our work: homomorphic operations between encrypted matrices.

  10. Main Idea

  11. Functionality of HE Schemes
      - Packing method
        - Vector encryption & parallel operations.
        - Enc(x_1, ..., x_n) * Enc(y_1, ..., y_n) = Enc(x_1 * y_1, ..., x_n * y_n)

  12. Functionality of HE Schemes
      - Packing method
        - Vector encryption & parallel operations.
        - Enc(x_1, ..., x_n) * Enc(y_1, ..., y_n) = Enc(x_1 * y_1, ..., x_n * y_n)
      - Scalar multiplication
        - (a_1, ..., a_n) * Enc(x_1, ..., x_n) = Enc(a_1 x_1, ..., a_n x_n)
      - Rotation
        - Enc(x_1, ..., x_n) → Enc(x_2, ..., x_n, x_1)

  13. Functionality of HE Schemes
      - Packing method
        - Vector encryption & parallel operations.
        - Enc(x_1, ..., x_n) * Enc(y_1, ..., y_n) = Enc(x_1 * y_1, ..., x_n * y_n)
      - Scalar multiplication
        - (a_1, ..., a_n) * Enc(x_1, ..., x_n) = Enc(a_1 x_1, ..., a_n x_n)
      - Rotation
        - Enc(x_1, ..., x_n) → Enc(x_2, ..., x_n, x_1)
      - Composition of basic operations
        - Permutation, linear transformation (expensive).
      Question: How to represent matrix arithmetic using HE-friendly operations?
      (A plaintext simulation of the cheap primitives follows this slide.)
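
The sketch below simulates the three cheap primitives on plain NumPy vectors; no encryption is involved, a packed "ciphertext" is simply modelled as a vector of slot values, and the function names are illustrative, not an HE library API.

```python
import numpy as np

# Plaintext simulation of the packed-ciphertext primitives (no encryption here):
# a "ciphertext" is modelled as a plain vector of slot values.

def simd_mult(ct_x, ct_y):
    # Enc(x_1,...,x_n) * Enc(y_1,...,y_n) = Enc(x_1*y_1,...,x_n*y_n)
    return ct_x * ct_y

def scalar_mult(plain_a, ct_x):
    # (a_1,...,a_n) * Enc(x_1,...,x_n) = Enc(a_1*x_1,...,a_n*x_n)
    return plain_a * ct_x

def rotate(ct_x, k=1):
    # Enc(x_1,...,x_n) -> Enc(x_{k+1},...,x_n,x_1,...,x_k)
    return np.roll(ct_x, -k)

x = np.array([1, 2, 3, 4])
y = np.array([5, 6, 7, 8])
print(simd_mult(x, y))                         # [ 5 12 21 32]
print(scalar_mult(np.array([1, 0, 1, 0]), x))  # [1 0 3 0]
print(rotate(x))                               # [2 3 4 1]
```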

  14. Matrix Encoding
      - Identify an n = d^2 dimensional vector with a d*d matrix.
      - Addition is easy.
      Example (d = 3): the vector (1, 2, 3, 4, 5, 6, 7, 8, 9) encodes the matrix
        1 2 3
        4 5 6
        7 8 9

  15. Matrix Encoding
      - Identify an n = d^2 dimensional vector with a d*d matrix.
      - Addition is easy.
      - Row or column shifting permutations are cheap (depth 1, complexity O(1)).
      Row shifting example:
        1 2 3      4 5 6
        4 5 6  →   7 8 9
        7 8 9      1 2 3

  16. Matrix Encoding
      - Identify an n = d^2 dimensional vector with a d*d matrix.
      - Addition is easy.
      - Row or column shifting permutations are cheap (depth 1, complexity O(1)).
      Column shifting example:
        1 2 3      2 3 1
        4 5 6  →   5 6 4
        7 8 9      8 9 7
      (A plaintext sketch of the encoding and both shifts follows this slide.)
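
The sketch below (plain NumPy, no encryption) shows the row-major encoding of a d*d matrix as a length-d^2 vector and how the two shifting permutations act on the packed slots: a row shift is a single rotation by k*d slots, and a column shift can be written as two rotations plus two plaintext masks, matching the stated depth 1 and O(1) cost. The two-mask realization is a standard choice assumed here, not spelled out on the slide.

```python
import numpy as np

d = 3
A = np.arange(1, d * d + 1).reshape(d, d)    # the 3x3 example above
vec = A.reshape(-1)                          # encode: d x d matrix -> length-d^2 vector (row-major)

def row_shift(v, k):
    # rows move up cyclically by k: one rotation by k*d slots on the packed vector
    return np.roll(v, -k * d)

def col_shift(v, k):
    # columns move left cyclically by k: two rotations + two plaintext masks
    # (one level of scalar multiplication -> depth 1, O(1) rotations)
    keep = np.array([1 if (i % d) < d - k else 0 for i in range(d * d)])
    return keep * np.roll(v, -k) + (1 - keep) * np.roll(v, -(k - d))

print(row_shift(vec, 1).reshape(d, d))   # [[4 5 6] [7 8 9] [1 2 3]]
print(col_shift(vec, 1).reshape(d, d))   # [[2 3 1] [5 6 4] [8 9 7]]
```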

  17. Matrix Multiplication
          1 2 3          a b c
      A = 4 5 6 ,    B = d e f
          7 8 9          g h i
      AB = A_0 ◉ B_0 + A_1 ◉ B_1 + A_2 ◉ B_2,   ◉ : element-wise multiplication
        1 2 3   a e i     2 3 1   d h c     3 1 2   g b f
        5 6 4 ◉ d h c  +  6 4 5 ◉ g b f  +  4 5 6 ◉ a e i
        9 7 8   g b f     7 8 9   a e i     8 9 7   d h c
      Matrix multiplication = generation of the A_i, B_i & d homomorphic additions/multiplications.

  20. Generation of A_i
          1 2 3
      A = 4 5 6
          7 8 9
            1 2 3         2 3 1         3 1 2
      A_0 = 5 6 4   A_1 = 6 4 5   A_2 = 4 5 6
            9 7 8         7 8 9         8 9 7
      A_0 generation: O(d) homomorphic operations.
      A_i = ColumnShifting(A_0, i): O(1) for each.
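
A plaintext NumPy sketch of this step, reproducing the matrices shown above: A_0 takes each row i of A rotated left by i positions (A_0[i][j] = A[i][(i+j) mod d], which matches the 3x3 figure), and each A_i is A_0 with its columns shifted left by i. Homomorphically, the A_0 step is the O(d) permutation and each column shift is O(1).

```python
import numpy as np

d = 3
A = np.arange(1, d * d + 1).reshape(d, d)

# A_0[i][j] = A[i][(i+j) mod d]: row i of A rotated left by i (matches the figure above)
A0 = np.array([np.roll(A[i], -i) for i in range(d)])
# A_i = A_0 with columns shifted left by i
A_ = [np.roll(A0, -i, axis=1) for i in range(d)]

print(A0)     # [[1 2 3] [5 6 4] [9 7 8]]
print(A_[1])  # [[2 3 1] [6 4 5] [7 8 9]]
print(A_[2])  # [[3 1 2] [4 5 6] [8 9 7]]
```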

  21. Generation of B_i
          a b c
      B = d e f
          g h i
            a e i         d h c         g b f
      B_0 = d h c   B_1 = g b f   B_2 = a e i
            g b f         a e i         d h c
      B_0 generation: O(d) homomorphic operations.
      B_i = RowShifting(B_0, i): O(1) for each.
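
The matching plaintext sketch for B: B_0 takes each column j of B rotated up by j positions (B_0[i][j] = B[(i+j) mod d][j], matching the figure above), and each B_i is B_0 with its rows shifted up by i.

```python
import numpy as np

d = 3
B = np.array([['a', 'b', 'c'],
              ['d', 'e', 'f'],
              ['g', 'h', 'i']])

# B_0[i][j] = B[(i+j) mod d][j]: column j of B rotated up by j (matches the figure above)
B0 = np.column_stack([np.roll(B[:, j], -j) for j in range(d)])
# B_i = B_0 with rows shifted up by i
B_ = [np.roll(B0, -i, axis=0) for i in range(d)]

print(B0)     # [['a' 'e' 'i'] ['d' 'h' 'c'] ['g' 'b' 'f']]
print(B_[1])  # [['d' 'h' 'c'] ['g' 'b' 'f'] ['a' 'e' 'i']]
print(B_[2])  # [['g' 'b' 'f'] ['a' 'e' 'i'] ['d' 'h' 'c']]
```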

  22. Summary
      A, B : d * d matrices.
      AB = A_0 ◉ B_0 + A_1 ◉ B_1 + ... + A_{d-1} ◉ B_{d-1}
      - Generation of A_0, B_0: general permutation, O(d).
      - Generation of the A_i, B_i: column/row shifting from A_0, B_0, O(d).
      - Element-wise products and summation: O(d).
      Total complexity: O(d) homomorphic operations (optimal?).
      Depth: 2 (scalar mult) + 1 (homomorphic mult).
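
An end-to-end plaintext check of the whole recipe for a generic size (NumPy only, no HE): build A_0 and B_0, take the d shifted copies, and verify that the sum of element-wise products equals AB. The size d = 8 and the random data are illustrative.

```python
import numpy as np

# Plaintext check of AB = A_0 ◉ B_0 + A_1 ◉ B_1 + ... + A_{d-1} ◉ B_{d-1}
d = 8
rng = np.random.default_rng(0)
A = rng.integers(0, 10, (d, d))
B = rng.integers(0, 10, (d, d))

A0 = np.array([np.roll(A[i], -i) for i in range(d)])             # general permutation of A
B0 = np.column_stack([np.roll(B[:, j], -j) for j in range(d)])   # general permutation of B

C = sum(np.roll(A0, -k, axis=1) * np.roll(B0, -k, axis=0)        # column/row shifts,
        for k in range(d))                                       # element-wise products, sum
assert np.array_equal(C, A @ B)
```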

  23. Other Operations
      - Matrix transposition
        - Complexity O(d^0.5) + depth 1 (see the sketch after this slide).
      - Parallelization
        - When the number of plaintext slots > d^2.
        - Encrypt several matrices in a single ciphertext.
      - Multiplication between non-square matrices
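
In the packed view, transposition is a fixed permutation of slots: the entry in slot i*d + j moves to slot j*d + i. Below is a plaintext sketch of that permutation; the O(d^0.5) figure on the slide refers to the number of rotations needed to apply the corresponding linear map homomorphically (e.g. via a baby-step/giant-step evaluation, which is an assumption here, not stated on the slide).

```python
import numpy as np

d = 3
vec = np.arange(1, d * d + 1)     # packs [[1,2,3],[4,5,6],[7,8,9]] row-major

# Transposition = a fixed slot permutation: slot r*d + c receives the value from slot c*d + r.
perm = [(i % d) * d + (i // d) for i in range(d * d)]
vec_T = vec[perm]

print(vec_T.reshape(d, d))        # [[1 4 7] [2 5 8] [3 6 9]]
```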

  24. Implementation

  25. Experimental Results
      Based on the HEAAN library for fixed-point operations (n = 2^13). All numbers have 24-bit precision.

  26. Evaluation of Neural Networks
      1 convolution layer + 2 fully connected layers.
      Parallel evaluation on 64 images.
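
Below is a plaintext sketch of a network with this shape (one convolution layer, a nonlinearity, then two fully connected layers) applied to a batch of 64 inputs at once; homomorphically, the fully connected layers are exactly matrix-matrix products between the batched activations and the weight matrices, which is where the encrypted matrix multiplication is used. The input size (28x28), filter count and size, layer widths, and the squaring activation are assumptions for illustration; the slide only fixes the layer types and the batch size.

```python
import numpy as np

rng = np.random.default_rng(1)
batch, h, k = 64, 28, 7                      # hypothetical 28x28 inputs, 7x7 patches
images = rng.standard_normal((batch, h, h))
filters = rng.standard_normal((4, k, k))     # 4 hypothetical convolution filters (stride 7)

# Convolution layer: dot product of each filter with each non-overlapping 7x7 patch.
patches = images.reshape(batch, h // k, k, h // k, k).transpose(0, 1, 3, 2, 4)
conv = np.einsum('bpqij,fij->bpqf', patches, filters).reshape(batch, -1)

act1 = conv ** 2                                       # squaring activation (HE-friendly, assumed)
fc1 = act1 @ rng.standard_normal((conv.shape[1], 64))  # fully connected layer 1: matrix * matrix
act2 = fc1 ** 2
out = act2 @ rng.standard_normal((64, 10))             # fully connected layer 2: matrix * matrix
print(out.shape)                                       # (64, 10): one score vector per image
```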

  27. Comparison? Plain model (previous work) vs. Encrypted model (ours)

  28. Questions? Thanks for listening
