  1. Strong Key Derivation from Biometrics
Benjamin Fuller, Boston University / MIT Lincoln Laboratory
Privacy Enhancing Technologies for Biometrics, Haifa, January 15, 2015
Based on three works:
• Computational Fuzzy Extractors [FullerMengReyzin13]
• When are Fuzzy Extractors Possible? [FullerSmithReyzin14]
• Key Derivation from Noisy Sources with More Errors than Entropy [CanettiFullerPanethSmithReyzin14]

  2. Key Derivation from Noisy Sources
High-entropy sources (e.g., biometric data) are often noisy:
• Initial reading w0 ≠ later reading w1
• Consider sources w0 = a1, …, ak, each symbol ai over an alphabet Z
• Assume bounded distance: d(w0, w1) ≤ t, where d(w0, w1) = # of symbols that differ
Example: d(w0, w1) = 4
w0 = A B C A D B E F A A
w1 = A G C A B B E F C B
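The symbol-wise distance above is ordinary Hamming distance over the alphabet Z. A minimal sketch, using the two example readings from the slide:

```python
# Symbol-wise Hamming distance d(w0, w1): the number of positions
# at which two equal-length readings differ.
def hamming(w0, w1):
    assert len(w0) == len(w1)
    return sum(a != b for a, b in zip(w0, w1))

w0 = "ABCADBEFAA"
w1 = "AGCABBEFCB"
print(hamming(w0, w1))  # -> 4, matching the slide's example
```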

  3. Key Derivation from Noisy Sources
High-entropy sources (e.g., biometric data) are often noisy:
• Initial reading w0 ≠ later reading w1
• Consider sources w0 = a1, …, ak, each symbol ai over an alphabet Z
• Assume bounded distance: d(w0, w1) ≤ t
Goal: derive a stable, cryptographically strong output:
• Want w0 and w1 to map to the same output
• The output should look uniform to the adversary
Goal of this talk: produce good outputs for sources we couldn't handle before

  4. Biometrics
• Measure a unique physical phenomenon
• Unique, collectable, permanent, universal
• Repeated readings exhibit significant noise
• Uniqueness/noise vary widely
• Human iris believed to be "best"
Theoretic work, [Daugman04], [PrabhakarPankantiJain03], with the iris in mind

  5. Iris Codes [Daugman04]
Pipeline: locating the iris → unwrapping → filtering and 2-bit phase quantization → iris code → fuzzy extractor
• Iris code: sequence of quantized wavelets (computed at different positions)
• Daugman's transform is 2048 bits long
• Entropy estimate: 249 bits
• Error rate depends on conditions; roughly 10% in user applications

  6. Two Physical Processes
• w0 – create a new biometric, take the initial reading (uncertainty)
• w1 – take a new reading from a fixed person (errors)
Two readings may not be subject to the same noise; there is often less error in the original reading.

  7. Key Derivation from Noisy Sources: Interactive Protocols
[Wyner75], …, [BennettBrassardRobert85,88], … lots of work …
• Parties agree on a cryptographic key
• User must store the initial reading w0 at the server
• Not appropriate for a user authenticating to a device

  8. Fuzzy Extractors: Functionality
[JuelsWattenberg99], …, [DodisOstrovskyReyzinSmith08], …
• Enrollment algorithm Gen: take a measurement w0 from the source; use it to "lock up" a random r in a nonsecret value p
• Subsequent algorithm Rep: give the same output r if d(w0, w1) < t
• Security: r looks uniform even given p, when the source is good enough
Traditionally, the security definition is information-theoretic.
Gen(w0) → (r, p); Rep(w1, p) → r

  9. Fuzzy Extractors: Goals
• Goal 1: handle as many sources as possible (typically, any source in which w0 is 2^k-hard to guess)
• Goal 2: handle as much error as possible (typically, any w1 within distance t)
• Most previous approaches are analyzed in terms of t and k
• Traditional approaches do not support sources with t > k ("more errors than entropy"); t > k for the iris

  10. Contribution
• Lessons on how to construct fuzzy extractors when t > k [FMR13, FRS14]
• First fuzzy extractors for large classes of distributions where t > k [CFPRS14]
• First reusable fuzzy extractor for arbitrary correlations between repeated readings [CFPRS14]
• Preliminary results on the iris

  11. Fuzzy Extractors: Typical Construction
• Derive r using a randomness extractor (converts high-entropy sources to uniform outputs, e.g., via universal hashing [CarterWegman77])
• Correct errors using a secure sketch [DodisOstrovskyReyzinSmith08] (gives recovery of the original reading from a noisy one)
Gen: r = Ext(w0); Rep applies Ext to w1

  12. Fuzzy Extractors: Typical Construction
• Derive r using a randomness extractor (converts high-entropy sources to uniform outputs, e.g., via universal hashing [CarterWegman77])
• Correct errors using a secure sketch [DodisOstrovskyReyzinSmith08] (gives recovery of the original reading from a noisy one)
Gen: p = Sketch(w0), r = Ext(w0); Rep: w0 = Rec(w1, p), then r = Ext(w0)
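The extraction step can be illustrated with a classic Carter-Wegman universal hash family, h_{a,b}(x) = ((a·x + b) mod p) mod 2^m; by the leftover hash lemma the output is close to uniform when the source has enough min-entropy. This is a toy sketch, not the talk's construction, and the parameters and byte encoding are illustrative assumptions:

```python
import secrets

# Toy randomness extractor via a universal hash family [CarterWegman77]:
# h_{a,b}(x) = ((a*x + b) mod P) mod 2**m.  The seed (a, b) is public
# (it can be stored alongside p); only the reading w is secret.
P = (1 << 127) - 1  # a Mersenne prime, larger than any input used here

def ext(w, seed, m):
    a, b = seed
    x = int.from_bytes(w, "big")      # interpret the reading as an integer
    return ((a * x + b) % P) % (1 << m)

seed = (secrets.randbelow(P - 1) + 1, secrets.randbelow(P))
r = ext(b"initial reading w0", seed, 64)
# Same reading and same public seed always give the same key r:
assert r == ext(b"initial reading w0", seed, 64)
```

Note that Ext alone gives no error tolerance: even a one-bit change in w yields an unrelated output, which is why the secure sketch of the next slides is needed first.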

  13. Secure Sketches
Gen: p = Sketch(w0), r = Ext(w0); Rep: w0 = Rec(w1, p), then r = Ext(w0)
Code-offset sketch [JuelsWattenberg99]: p = c ⊕ w0, where c is a random codeword of C, an error-correcting code correcting t errors

  14. Secure Sketches
Code-offset sketch [JuelsWattenberg99]:
• Sketch: p = c ⊕ w0
• Rec: compute c* = p ⊕ w1, then c′ = Decode(c*); if decoding succeeds, w0 = c′ ⊕ p

  15. Secure Sketches
Code-offset sketch [JuelsWattenberg99]: p = c ⊕ w0; Rec computes p ⊕ w1 = c* (and likewise p ⊕ w′1 for another reading w′1)
Goal: minimize how much p informs on w0.
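The code-offset construction above can be sketched end to end. This toy version uses a 5x repetition code (majority decoding, correcting 2 bit errors per block) as a stand-in for a real error-correcting code; the reading length and error positions are illustrative:

```python
import secrets

# Code-offset sketch [JuelsWattenberg99] over GF(2):
#   Sketch: p = c XOR w0 for a random codeword c of the code C.
#   Rec:    c* = p XOR w1; decode c* to c'; output c' XOR p = w0.
REP = 5  # 5x repetition code: corrects up to 2 flipped bits per block

def encode(bits):                     # each message bit -> 5 copies
    return [b for bit in bits for b in [bit] * REP]

def decode(bits):                     # majority vote within each block
    return [int(sum(bits[i:i + REP]) > REP // 2)
            for i in range(0, len(bits), REP)]

def sketch(w0):
    c = encode([secrets.randbelow(2) for _ in range(len(w0) // REP)])
    return [ci ^ wi for ci, wi in zip(c, w0)]

def rec(w1, p):
    c_star = [pi ^ wi for pi, wi in zip(p, w1)]
    c_prime = encode(decode(c_star))  # c' = Decode(c*), re-encoded
    return [ci ^ pi for ci, pi in zip(c_prime, p)]

w0 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # toy 10-bit initial reading
p = sketch(w0)
w1 = list(w0); w1[0] ^= 1; w1[7] ^= 1  # noisy reading: 2 bit errors
assert rec(w1, p) == w0                # original reading recovered
```

Because p = c ⊕ w0 for a random codeword c, publishing p costs at most the entropy of C's syndrome; the slides' goal of "minimizing how much p informs on w0" corresponds to choosing a code with as many codewords as possible.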

  16. Outline
• Key Derivation from Noisy Sources
• Fuzzy Extractors
• Limitations of Traditional Approaches / Lessons
• New Constructions

  17. Is it possible to handle "more errors than entropy" (t > k)?
• This distribution (the support of w0) has 2^k points
• Why might we hope to extract from this distribution?
• Points are far apart
• No need to deconflict the original reading with w1

  18. Is it possible to handle "more errors than entropy" (t > k)?
Since t > k, there is a distribution v0 whose support lies entirely in a single ball of radius t.
The support of w0 and the support of v0 have the same number of points and the same error tolerance.

  19. Is it possible to handle "more errors than entropy" (t > k)?
• For w0: the likelihood of the adversary picking a point w1 close enough to recover r is low
• For v0: for any construction, the adversary learns r by running Rep with v1
Recall: the adversary can run Rep on any point.

  20. Is it possible to handle "more errors than entropy" (t > k)?
To distinguish between w0 and v0, one must consider more than just t and k.
Key derivation may be possible for w0 yet impossible for v0.

  21. Lessons
1. Exploit structure of the source beyond entropy – need to understand what structure is helpful

  22. Understand the structure of the source
• Minimum necessary condition for fuzzy extraction: the weight of the distribution inside any ball B_t must be small
• Let H_fuzz(W0) = log(1 / max wt(B_t))
• Big H_fuzz(W0) is necessary; it models security in an ideal world
• Q: Is big H_fuzz(W0) sufficient for fuzzy extractors?
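The quantity H_fuzz(W0) = -log of the heaviest radius-t Hamming ball can be computed by brute force for a toy distribution. The distribution and parameters below are illustrative assumptions, chosen so that the support points are pairwise far apart:

```python
from itertools import product
from math import log2

# Fuzzy min-entropy H_fuzz(W0) = -log2( max over centers z of
# Pr[W0 in B_t(z)] ): the adversary's best strategy is to submit
# the center of the heaviest radius-t ball to Rep.
n, t = 6, 1
support = {(0, 0, 0, 0, 0, 0), (1, 1, 1, 0, 0, 0),
           (0, 0, 0, 1, 1, 1), (1, 1, 1, 1, 1, 1)}
prob = {w: 1 / len(support) for w in support}  # uniform on 4 points

def ball_mass(center):
    return sum(p for w, p in prob.items()
               if sum(a != b for a, b in zip(w, center)) <= t)

h_fuzz = -log2(max(ball_mass(z) for z in product((0, 1), repeat=n)))
print(h_fuzz)  # -> 2.0: points are > 2t apart, so every ball holds one point
```

Here H_fuzz equals the full min-entropy log2(4) = 2 because the support points are more than 2t apart; packing the support inside a single ball (as with v0 on the earlier slides) would drive H_fuzz to 0.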

  23. Is big H_fuzz(W0) sufficient?
• Thm [FRS]: Yes, if the algorithms know the exact distribution of W0
• Imprudent to assume the construction and the adversary have the same view of W0:
  – Should assume the adversary knows more about W0
  – Deal with adversary knowledge by providing security for a family V of distributions; security should hold for the whole family
• Thm [FRS]: No, if W0 is only known to come from a family V
• A3: Yes, if security is computational (using obfuscation) [BitanskyCanettiKalaiPaneth14]
• A4: No, if security is information-theoretic
• A5: No, if you try to build a (computational) secure sketch
Will show the negative result for secure sketches (the negative result for fuzzy extractors is more complicated).

  24. Thm [FRS]: No, if W0 comes from a family V
• Describe a family of distributions V
• For any secure sketch (Sketch, Rec), for most W0 in V, few w* in W0 could produce p
• This implies W0 has little entropy conditioned on p

  25. Now we'll consider the family V
• Adversary specifies V; adversary's goal: for most W in V, make it impossible to have many augmented fixed points
• Our goal: build Sketch, Rec maximizing H(W | p) for all W in V
• First consider one distribution W:
  – For w0, Rec(w0, p) = w0
  – For nearby w1, Rec(w1, p) = w0
  – Call w0 an augmented fixed point when w0 = Rec(w0, p)
• To maximize H(W | p), make as many points of W augmented fixed points as possible
• Augmented fixed points are at least distance t apart (an exponentially small fraction of the space)

  26. • Adversary specifies V
• Goal: build Sketch, Rec maximizing H(W | p) for all W in V
• Sketch must create augmented fixed points based only on w0
• Build the family with many possible distributions for each w0
• Sketch can't tell W from w0

