Privacy Preserving Data Mining (Moheeb Rajab)


  1. Privacy Preserving Data Mining Moheeb Rajab

  2. Agenda
  - Overview and Terminology
  - Motivation
  - Active Research Areas
  - Secure Multi-party Computation (SMC)
  - Randomization Approach
  - Limitations
  - Summary and Insights

  3. Overview
  - What is data mining? Extracting implicit, non-obvious patterns and relationships from warehoused data sets.
  - This information can increase an organization's efficiency and aid future planning.
  - Can be done at an organizational level, by establishing a data warehouse.
  - Can also be done at a global scale.

  4. Data Mining System Architecture
  [diagram: Entity I through Entity n send their data to a global aggregator, which produces the mined result]

  5. Distributed Data Mining Architecture
  - Lower-scale mining
  [diagram: each entity mines its own local data independently]

  6. Challenges
  - Privacy concerns
  - Proprietary information disclosure
  - Concerns about association breaches
  - Misuse of mining
  - These concerns provide the motivation for privacy preserving data mining solutions

  7. Approaches to preserving privacy
  - Restrict access to the data (protect individual records)
  - Protect both the data and its source:
    - Secure Multi-party Computation (SMC)
    - Input data randomization
  - No single solution fits all purposes

  8. SMC vs. Randomization
  [chart comparing SMC and randomization schemes along overhead, accuracy, and privacy; Pinkas et al.]

  9. Secure Multi-party Computation
  - Multiple parties share the burden of creating the data aggregate.
  - Final processing, if needed, can be delegated to any party.
  - A computation is considered secure if each party knows only its own input and the result of the computation.

  10. SMC
  [diagram: the parties jointly compute the aggregate from their local data]
  Each party knows its input and the result of the operation, and nothing else.

  11. Key Assumptions
  - The ONLY information that can leak is what can be inferred from the overall output of the computation (aggregation) process.
  - Users are not malicious, but may be honest-but-curious.
  - All users are assumed to abide by the SMC protocol.
  - Otherwise, the case of malicious participants is not easy to model! [Pinkas et al., Agrawal]

  12. "Tools for Privacy Preserving Distributed Data Mining", Clifton et al. [SIGKDD]
  - Secure Sum: given values x_1, x_2, ..., x_n belonging to n entities, we need to compute
      sum = x_1 + x_2 + ... + x_n
    such that each entity ONLY knows its own input and the result of the computation (the aggregate sum of the data).

  13. Example (Secure Sum)
  - The master site picks a random offset R (here R = 15) and sends it around a ring; each site adds its private value to the message it receives. With site values 10, 20, 15, 45, and 50, the circulating messages are R+10, R+30, R+45, R+90, and R+140.
  - Sum = (R + 140) - R = 140
  - Problem: colluding members
  - Solution: divide the values into shares and have each share traverse a disjoint permuted path (no site has the same neighbor twice)
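The ring protocol above can be sketched in Python. This is a single-process simulation, not a networked implementation, and the function name and modulus are illustrative choices:

```python
import random

def secure_sum(values, modulus=10**6):
    """Simulate the secure-sum ring protocol: the master adds a secret
    random offset R, each site adds its private value to the message it
    receives, and the master subtracts R at the end."""
    R = random.randrange(modulus)           # master's secret offset
    message = R
    for v in values:                        # the message travels around the ring
        message = (message + v) % modulus   # each site adds its own value
    return (message - R) % modulus          # master removes its offset

print(secure_sum([10, 20, 15, 45, 50]))     # prints 140, as in the slide
```

Working modulo a large number keeps each intermediate message uniformly random from a single observer's point of view, which is why no site learns another site's input.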

  14. Split-path solution
  - Each site splits its value into two shares (e.g., halves) and sends each share along a different, disjoint path, using offsets R1 = 15 and R2 = 12; each path accumulates half of the total (70).
  - Sum = (R1 + 70) - R1 + (R2 + 70) - R2 = 140
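The split-path defence can be sketched the same way. Here the two "paths" are simulated as a forward and a reverse traversal of the site list; in a real deployment the orderings would be chosen so no site has the same neighbor twice:

```python
import random

def secure_sum_split(values, modulus=10**6):
    """Split-path variant: each site splits its value into two random
    shares, and the shares travel two disjoint ring orderings, so a pair
    of colluding neighbours never sees a site's whole value."""
    s1 = [random.randrange(modulus) for _ in values]        # first shares
    s2 = [(v - a) % modulus for v, a in zip(values, s1)]    # complementary shares
    R1 = random.randrange(modulus)                          # offset for path 1
    R2 = random.randrange(modulus)                          # offset for path 2
    msg1, msg2 = R1, R2
    for share in s1:                    # path 1 visits sites in forward order
        msg1 = (msg1 + share) % modulus
    for share in reversed(s2):          # path 2 visits sites in a different order
        msg2 = (msg2 + share) % modulus
    return ((msg1 - R1) + (msg2 - R2)) % modulus

print(secure_sum_split([10, 20, 15, 45, 50]))   # prints 140
```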

  15. Secure Set Union
  - Consider n sets S_1, S_2, ..., S_n.
  - Compute U = S_1 ∪ S_2 ∪ ... ∪ S_n such that each entity ONLY knows U and nothing else.

  16. Secure Set Union
  - Uses the properties of commutative encryption.
  - For any two permutations i, j of the key order, the following holds:
      E_{K_i1}(... E_{K_in}(M) ...) = E_{K_j1}(... E_{K_jn}(M) ...)
  - And for distinct plaintexts M1 ≠ M2, the probability of a ciphertext collision is negligible:
      P( E_{K_i1}(... E_{K_in}(M1) ...) = E_{K_j1}(... E_{K_jn}(M2) ...) ) < ε
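One concrete commutative cipher is exponentiation modulo a prime, E_k(m) = m^k mod p (the SRA / Pohlig-Hellman construction). A toy demonstration of the two properties above; the prime and keys here are illustrative, not cryptographically sound parameter choices:

```python
# Toy commutative cipher: E_k(m) = m^k mod P, with P prime and each key
# coprime to P - 1 so decryption exists.  Exponents multiply, so the
# order in which keys are applied does not matter.
P = 2**127 - 1            # a Mersenne prime (illustrative only)

def E(key, m):
    return pow(m, key, P)

k1, k2 = 65537, 257       # both coprime to P - 1
m1, m2 = 42, 99           # two distinct plaintexts

assert E(k1, E(k2, m1)) == E(k2, E(k1, m1))   # key order is irrelevant
assert E(k1, E(k2, m1)) != E(k1, E(k2, m2))   # distinct plaintexts stay distinct
```

Because exponentiation with a fixed key is a bijection on the group when the key is coprime to P - 1, identical plaintexts always collide and distinct ones never do under the same key set.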

  17. Secure Set Union: building the global union set U
  - Each site encrypts its items, creates an array M[n], and adds it to U.
  - Upon receiving U, an entity encrypts all items in U that it did not encrypt before.
  - In the end, all entries are encrypted with all keys K_1, K_2, ..., K_n.
  - Remove the duplicates: identical plaintexts yield the same ciphertext regardless of the order in which the encryption keys were applied.
  - Decryption of U: done by all entities, in any order.

  18. Secure Set Union
  [diagram: each fully encrypted item E_{K_i1}(... E_{K_in}(M) ...) is shown next to an n-entry bit array (e.g., 0 1 0 ...) indicating which sites hold it]

  19. Example (three sites; site 1 holds {A}, site 2 holds {C}, site 3 holds {A}):
  - Site 1: U = {E1(A)}
  - Site 2: U = {E2(E1(A)), E2(C)}
  - Site 3: U = {E3(E2(E1(A))), E3(E2(C)), E3(A)}
  - Site 1 again: U = {E3(E2(E1(A))), E1(E3(E2(C))), E1(E3(A))}
  - Site 2 again: U = {E3(E2(E1(A))), E1(E3(E2(C))), E2(E1(E3(A)))}
  - By commutativity, E2(E1(E3(A))) = E3(E2(E1(A))), so the duplicate is removed.
  - Problem: computation overhead; the number of exchanged messages is O(n*m)
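The whole union protocol can be simulated in one process. Below is a sketch mirroring the three-site example, using the toy exponentiation cipher E_k(m) = m^k mod P from earlier; the integer encodings of A and C, and the keys, are assumptions of mine:

```python
P = 2**127 - 1                 # prime modulus for the toy commutative cipher
keys = [65537, 257, 40961]     # one key per site, each coprime to P - 1

def enc(key, c):
    return pow(c, key, P)

A, C = 1234567, 7654321        # integer encodings of the items
site_items = [{A}, {C}, {A}]   # site 1 holds {A}, site 2 {C}, site 3 {A}
n = len(site_items)

# U holds (ciphertext, set of sites that have already encrypted it).
U = [(enc(keys[i], x), {i}) for i, items in enumerate(site_items) for x in items]

# Keep passing U around the ring until every entry carries all n keys.
done = False
while not done:
    done = True
    for i in range(n):
        for idx, (c, owners) in enumerate(U):
            if i not in owners:
                U[idx] = (enc(keys[i], c), owners | {i})
                done = False

# Commutativity makes the two copies of A collide, so duplicates drop out.
union = {c for c, _ in U}
print(len(union))   # prints 2: the union {A, C} has two elements
```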

  20. Problems with SMC
  - Scalability
  - High overhead
  - Strong trust-model assumptions: users are honest and follow the protocol

  21. Randomization Approach
  - "Privacy Preserving Data Mining", Agrawal et al. [SIGMOD]
  - Generally applied to provide estimates of data distributions rather than single-point estimates.
  - A user is allowed to alter the value provided to the aggregator.
  - The alteration scheme should be known to the aggregator.
  - The aggregator estimates the overall global distribution of the input by removing the randomization from the aggregate data.

  22. Randomization Approach (contd.)
  - Assumptions:
    - Users are willing to divulge some form of their data.
    - The aggregator is not malicious but may be honest-but-curious (it follows the protocol).
  - Two main data perturbation schemes:
    - Value-class membership (discretization)
    - Value distortion

  23. Randomization Methods
  - Value distortion method: given a value x_i, the client is allowed to report a distorted value (x_i + r), where r is a random variable drawn from a known distribution.
    - Uniform distribution: r ~ U[-α, +α], with mean μ = 0
    - Gaussian distribution: r ~ N(μ = 0, σ)
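Both distortion schemes are one-liners over the standard library. A minimal sketch (the function and parameter names are mine):

```python
import random

def distort_uniform(x, alpha):
    """Report x + r with r drawn uniformly from [-alpha, +alpha] (mean 0)."""
    return x + random.uniform(-alpha, alpha)

def distort_gaussian(x, sigma):
    """Report x + r with r drawn from a Gaussian N(0, sigma^2)."""
    return x + random.gauss(0.0, sigma)

# The noise has zero mean, so aggregates survive the distortion on average:
random.seed(1)
reports = [distort_uniform(50, alpha=10) for _ in range(10_000)]
print(abs(sum(reports) / len(reports) - 50) < 1)   # prints True
```

Individual reports can be off by up to alpha, yet the sample mean stays close to the true value, which is exactly the distribution-level accuracy the approach relies on.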

  24. Quantifying the privacy of different randomization schemes
  Interval width guaranteed at each confidence level α:

    Distribution       50%         95%         99.9%
    Discretization     0.5 x W     0.95 x W    0.999 x W
    Uniform [-α, +α]   0.5 x 2α    0.95 x 2α   0.999 x 2α
    Gaussian           1.34 x σ    3.92 x σ    6.8 x σ

  The Gaussian distribution provides the best privacy at higher confidence levels.
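The table entries are widths of the interval that contains the randomized value at a given confidence. For Gaussian noise this is just a standard normal quantile, which can be checked with the standard library (a sketch; `statistics.NormalDist` requires Python 3.8+, and the function names are mine):

```python
from statistics import NormalDist

def uniform_width(conf, alpha):
    """Width of the conf-probability interval for U[-alpha, +alpha] noise."""
    return conf * 2 * alpha

def gaussian_width(conf, sigma):
    """Width of the symmetric conf-probability interval for N(0, sigma^2) noise."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)   # e.g. 1.96 for 95%
    return 2 * z * sigma

print(round(gaussian_width(0.95, 1.0), 2))   # prints 3.92, matching the table
```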

  25. Problem Statement
  [diagram: Entity 1 reports x_11+y_11, x_12+y_12, ..., x_1k+y_1k; Entity 2 reports x_21+y_21, ..., x_2k+y_2k; ...; Entity m reports x_m1+y_m1, ..., x_mk+y_mk. The global aggregator feeds these to an estimator that, knowing the noise density f_Y(a), estimates the original data density f_X(z).]

  26. Reconstruction of the Original Distribution
  - The reconstruction problem can be viewed in the general framework of "inverse problems".
  - Inverse problems: describing a system's internal structure from indirect, noisy data.
  - Bayesian estimation is an effective tool for such settings.

  27. Formal problem statement
  - Given a one-dimensional array of randomized data x_1 + y_1, x_2 + y_2, ..., x_n + y_n,
  - where the x_i's are iid random variables, each with the same distribution as the random variable X,
  - and the y_i's are realizations of a globally known random distribution Y with CDF F_Y.
  - Purpose: estimate F_X.
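The estimate can be computed with an iterative Bayesian reconstruction in the spirit of Agrawal and Srikant's algorithm, discretized over bins. This is my own simplified rendering under the stated assumptions (uniform noise, integer bins), not the paper's exact procedure:

```python
import random

def reconstruct(observed, noise_pdf, bins, iters=50):
    """Iteratively refine the estimate of f_X: each step replaces f_X(a)
    by the average, over all observations w_i, of the posterior
    P(X = a | X + Y = w_i) computed under the current estimate."""
    fX = [1.0 / len(bins)] * len(bins)        # start from a uniform prior
    for _ in range(iters):
        acc = [0.0] * len(bins)
        for w in observed:
            # Bayes' rule per observation: posterior over bins for this w.
            post = [noise_pdf(w - a) * fX[j] for j, a in enumerate(bins)]
            total = sum(post)
            if total > 0:
                for j, p in enumerate(post):
                    acc[j] += p / total
        fX = [v / len(observed) for v in acc]  # average posterior = new estimate
    return fX

# Toy check: every x_i is 5 and the noise is U[-2, +2]; the reconstructed
# distribution should concentrate on the bin at 5.
random.seed(0)
noise_pdf = lambda y: 0.25 if -2 <= y <= 2 else 0.0
observed = [5 + random.uniform(-2, 2) for _ in range(1000)]
fX = reconstruct(observed, noise_pdf, bins=list(range(10)))
print(max(range(10), key=lambda j: fX[j]))   # prints 5
```

Each iteration's output is a mixture of per-observation posteriors, so the estimate stays a valid probability distribution by construction.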

  28. Background: Bayesian Inference
  - An estimation method that collects observational data and uses it as a tool to adjust (either support or refute) a prior belief.
  - The previous knowledge (hypothesis) has an established probability, called the prior probability.
  - The adjusted hypothesis given the new observational data is called the posterior probability.

  29. Bayesian Inference
  - Let P(H_0) be the prior probability; then Bayes' rule states that the posterior probability of H_0 given an observation D is:
      P(H_0 | D) = P(D | H_0) P(H_0) / P(D)
  - Bayes' rule follows from the general form of the joint probability theorem:
      P(D, H_0) = P(H_0 | D) P(D) = P(D | H_0) P(H_0)

  30. Bayesian Inference (classical example)
  - Two boxes:
    - Box-I: 30 red balls and 10 white balls
    - Box-II: 20 red balls and 20 white balls
  - A person draws a red ball. What is the probability that the ball is from Box-I?
  - Prior probability: P(Box-I) = 0.5
  - From the data we know that:
    - P(Red | Box-I) = 30/40 = 0.75
    - P(Red | Box-II) = 20/40 = 0.5

  31. Example (contd.)
  - Given the new observation (the red ball), we want the posterior probability of Box-I, i.e., P(Box-I | Red):
      P(Box-I | Red) = P(Red | Box-I) P(Box-I) / P(Red)
      P(Red) = P(Red, Box-I) + P(Red, Box-II)
             = P(Red | Box-I) P(Box-I) + P(Red | Box-II) P(Box-II)
             = 0.75 x 0.5 + 0.5 x 0.5 = 0.625
      P(Box-I | Red) = (0.75 x 0.5) / 0.625 = 0.6
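The arithmetic of the box example is a direct transcription of Bayes' rule:

```python
# Posterior for the two-box example via Bayes' rule.
p_box1 = p_box2 = 0.5                      # prior probabilities of each box
p_red_given_box1 = 30 / 40                 # 0.75
p_red_given_box2 = 20 / 40                 # 0.5

# Total probability of drawing a red ball:
p_red = p_red_given_box1 * p_box1 + p_red_given_box2 * p_box2   # 0.625

# Posterior probability that the red ball came from Box-I:
posterior = p_red_given_box1 * p_box1 / p_red
print(posterior)   # prints 0.6
```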
