
Solving LPN Using Large Covering Codes
Thom Wiggers, Radboud University, Nijmegen, The Netherlands
8th August 2019

Outline: Intro · Learning Parity with Noise · Breaking LPN · The covering-codes reduction · Covering Codes · Combinations of reductions


The LF1 algorithm [LF06]

1. Repeat until the problem is small enough:
   1.1 Sort queries into sets V_j that have the same b bits at the end.
   1.2 Pick one (a′, c′) from V_j and add it to all the other samples in V_j:

           0 0 1 1 0 0 1 0 | 0 1 0 1      (k bits; last b bits shared)
         ⊕ 1 0 0 1 1 1 0 1 | 0 1 0 1
         = 1 0 1 0 1 1 1 1 | 0 0 0 0      (k′ = k − b useful bits remain)

2. Apply mathematical magic (a Walsh-Hadamard transform) to recover s_1, …, s_b.
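As a rough illustration, here is a minimal Python sketch of one such reduction round. The representation is our own assumption, not from the talk: each sample is a pair (a, c) with the query a stored as a k-bit integer, and the "last b bits" taken to be the low-order bits.

    # Minimal sketch of one LF1 reduction round; integer bitmasks and
    # "low bits = last b bits" are representation assumptions.
    from collections import defaultdict

    def lf1_round(samples, b):
        """Partition samples by their last b bits, then XOR within partitions."""
        buckets = defaultdict(list)
        for a, c in samples:
            buckets[a & ((1 << b) - 1)].append((a, c))
        reduced = []
        for group in buckets.values():
            a_rep, c_rep = group[0]                # one (a', c') per set V_j
            for a, c in group[1:]:
                # the shared last b bits cancel, leaving a (k - b)-bit query
                reduced.append(((a ^ a_rep) >> b, c ^ c_rep))
        return reduced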

Generic solving algorithm

To solve an LPN_{k,τ} problem given n samples:

1. Apply a reduction algorithm to obtain an LPN_{k′,τ′} problem with n′ samples.
2. Apply a solving algorithm consuming the n′ samples → obtain information on s.

We may apply several reduction algorithms in sequence, as the sketch below shows.
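In code the pattern is just function composition; a trivial sketch where reduction_fns and solver are placeholder callables, not names from the talk:

    # Generic solve pattern: chain reduction algorithms, then run a solver.
    def solve_lpn(samples, reduction_fns, solver):
        for reduce_fn in reduction_fns:   # e.g. several reductions in sequence
            samples = reduce_fn(samples)  # LPN_{k,tau} -> LPN_{k',tau'}
        return solver(samples)            # recovers (information on) s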

Complexity

[Bar chart: time, samples, and memory for BKW and LF1 on LPN with k = 256 and τ = 1/5; vertical axis from 2^0 up to 2^88.]

The Gauss algorithm [EKM17]

Main idea: try to find an error-free set of samples and then simply apply Gaussian elimination.

1. Take k samples (a, c) as an invertible matrix A and vector c.
2. Compute s′ = A⁻¹ · c.
3. If Test(s′) confirms it is error-free, we're done; else go to 1.

The Test(s′) algorithm is as follows:

1. Take m samples (a, c) and write them as a matrix A_test and vector c_test.
2. Compute e′ = A_test · s′ + c_test.
   ◮ We know A_test · s + e = c_test.
   ◮ We also know e will have roughly m · τ bits flipped.
3. If e′ has approximately m · τ bits flipped, probably s′ = s.

A sketch of this loop follows.
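A compact Python sketch of this loop, under two assumptions of ours: an oracle(n) that returns n samples as a 0/1 integer numpy matrix and vector, and a concrete acceptance threshold for Test (the slides only say "approximately m · τ").

    # Sketch of the Gauss algorithm of [EKM17]; oracle(n) -> (A, c) over
    # GF(2) is assumed, and the Test threshold below is our own choice.
    import numpy as np

    def solve_gf2(A, c):
        """Gaussian elimination mod 2; returns s' with A s' = c, or None."""
        A, c = A.copy() % 2, c.copy() % 2
        k = len(c)
        for col in range(k):
            pivot = next((r for r in range(col, k) if A[r, col]), None)
            if pivot is None:
                return None                    # A singular: resample
            A[[col, pivot]], c[[col, pivot]] = A[[pivot, col]], c[[pivot, col]]
            for r in range(k):
                if r != col and A[r, col]:
                    A[r] ^= A[col]
                    c[r] ^= c[col]
        return c                               # A reduced to I, so c = s'

    def gauss(oracle, k, m, tau):
        A_test, c_test = oracle(m)             # fixed test samples
        bound = m * tau + 3 * (m * tau * (1 - tau)) ** 0.5  # ~ m*tau flips
        while True:
            A, c = oracle(k)                   # step 1: k fresh samples
            s = solve_gf2(A, c)                # step 2: s' = A^-1 c
            if s is None:
                continue
            e = (A_test @ s + c_test) % 2      # step 3: Test(s')
            if e.sum() <= bound:
                return s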

Variant: Pooled Gauss

1. Take n = k² log² k samples as a sample pool.
2. Randomly take k samples (a, c) from the pool as an invertible matrix A and vector c.
3. Compute s′ = A⁻¹ · c.
4. If Test(s′) confirms it is error-free, we're done; else go to 2.

✓ Needs far fewer samples.

This algorithm is an information-set decoding algorithm: it finds an error-free index set in the pool. Notably, it resembles the [Pra62] algorithm. A sketch follows.
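The pooled variant only changes where the samples come from. A sketch reusing solve_gf2 from the previous block; the pool size follows the slide, while the RNG handling is our choice:

    # Sketch of Pooled Gauss; reuses solve_gf2 and the Test idea above.
    import math
    import numpy as np

    def pooled_gauss(oracle, k, m, tau, rng=None):
        rng = rng or np.random.default_rng()
        n = int(k * k * math.log2(k) ** 2)     # pool of n = k^2 log^2 k
        A_pool, c_pool = oracle(n)             # query the oracle only once
        A_test, c_test = oracle(m)
        bound = m * tau + 3 * (m * tau * (1 - tau)) ** 0.5
        while True:
            idx = rng.choice(n, size=k, replace=False)  # random index set
            s = solve_gf2(A_pool[idx], c_pool[idx])
            if s is not None and ((A_test @ s + c_test) % 2).sum() <= bound:
                return s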

Complexities

To solve LPN_{k,τ}:

  Algorithm     | Samples n                                     | Time           | Memory
  BKW           | 20 · ln(4k) · 2^b · (1 − 2τ)^(−2^a)           | kan            | kn
  LF1           | (8b + 200) · (1 − 2τ)^(−2^a) + (a − 1) · 2^b  | kan + b · 2^b  | kn + b · 2^b
  Gauss         | k · I + m                                     | I · (k³ + km)  | k² + km
  Pooled Gauss  | k² log² k + m                                 | I · (k³ + km)  | k² log² k + km

Gauss needs I = O(log² k / (1 − τ)^k) iterations to find a solution.
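For intuition, plugging k = 256 and τ = 1/5 into the footnote's iteration count gives roughly 2^88 iterations; a back-of-the-envelope check, not a figure from the talk:

    # Rough size of I = O(log^2 k / (1 - tau)^k) for k = 256, tau = 1/5.
    import math

    k, tau = 256, 1 / 5
    I = math.log2(k) ** 2 / (1 - tau) ** k
    print(f"log2(I) ~ {math.log2(I):.1f}")   # ~ 88.4: dominated by (1-tau)^-k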

Complexity

[Bar chart: time, samples, and memory for BKW, LF1, Gauss, and Pooled Gauss on LPN with k = 256 and τ = 1/5; vertical axis from 2^0 up to 2^112.]

Covering Codes

[Figure: a grid of binary vectors illustrating a covering code: every vector in the space lies within the covering radius of some codeword.]

Covering-codes reduction

[Figure: a k-bit query is decoded to the nearest codeword of the covering code, leaving a k′-bit query.]

This allows us to reduce a k-sized LPN problem with noise τ to a k′-sized LPN problem with noise τ′. The new noise τ′ depends strongly on the code used. We measure this impact as bc (0 ≤ bc ≤ 1; larger is better). A sketch of the reduction follows.
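A minimal sketch of the reduction step; decode() is assumed to be supplied by the chosen [k, k′] code and to map a k-bit query a to the k′-bit message u of the nearest codeword:

    # Sketch of the covering-codes reduction: rewrite each k-bit sample
    # as a k'-bit sample via the decoder of the covering code.
    def covering_code_reduce(samples, decode):
        reduced = []
        for a, c in samples:
            u = decode(a)           # message of the codeword nearest to a
            reduced.append((u, c))  # noise tau' grows with decoding distance
        return reduced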

Finding codes for the reduction

◮ We need a code that allows us to reduce from k to k′.
◮ We could use random codes, but they are hard to decode.
◮ (Quasi-)perfect codes give the best bc, but only few are known.

Coded Gauss

1. Apply the covering-codes reduction to reduce the problem size from k to k′.
2. Recover the secret using Gauss.

The complexity of this algorithm:

◮ We will need I = O(log² k / (1 − τ′)^(k′)) attempts before we find k′ error-free samples.
◮ Gauss needs n = k′ · I + m samples.
◮ In each Gauss iteration we do k³ + k · m work.
◮ We will need to decode all n samples as well.

→ Time complexity O(n + (k³ + k · m) · I). The cost model below puts these pieces together.
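A direct transcription of this cost model into Python, following the slide's formulas; all inputs are parameters the attacker picks (k′, m) or estimates (τ′):

    # Cost model for Coded Gauss as stated on the slide.
    import math

    def coded_gauss_cost(k, k_prime, m, tau_prime):
        I = math.log2(k) ** 2 / (1 - tau_prime) ** k_prime  # iterations
        n = k_prime * I + m                                 # samples needed
        time = n + (k ** 3 + k * m) * I                     # decode + Gauss
        return I, n, time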

How 'good' should a code be?

Let's assume we have arbitrary [k, k′] codes. Coded Gauss is worthwhile whenever

    T_Gauss(k, τ) ≥ T_CodedGauss(k, k′, τ, bc)

[Figure: minimal bc before Coded Gauss is faster than applying Gauss to the full problem, for k = 512, τ = 1/8. (a) Lower bound for bc over 64 ≤ k′ ≤ 512; (b) a more detailed look at 0 < k′ ≤ 128. Above the curve Coded Gauss is faster; below it, plain Gauss is faster.]

The best-case scenario for bc

Assume we have arbitrary (quasi-)perfect codes. Then

    bc ≤ 2^(k′ − k) · Σ_{w=0}^{R} C(k, w) · (δ_s^w − δ_s^(R+1)) + δ_s^(R+1)

Here R is the covering radius, a property we can bound for quasi-perfect codes via the Hamming bound [Ham50], and δ_s = 1 − 2τ.
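A numerical evaluation of this bound as reconstructed above; since the formula was recovered from a garbled extraction, it is worth checking against the underlying thesis before relying on it:

    # Evaluates the reconstructed bc upper bound for a quasi-perfect
    # [k, k'] code with covering radius R; delta_s = 1 - 2*tau.
    import math

    def bc_upper_bound(k, k_prime, R, tau):
        d = 1 - 2 * tau
        total = sum(math.comb(k, w) * (d ** w - d ** (R + 1))
                    for w in range(R + 1))
        return 2.0 ** (k_prime - k) * total + d ** (R + 1)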

The best-case scenario for bc (cont.)

[Figure: minimal bc and the bc obtained at the Hamming bound, for (a) τ = 1/8 and (b) τ = 1/512; k = 512, δ = δ_s = 1 − 2τ.]

[Figure: minimal bc and the bc obtained at the Hamming bound, for (a) τ = 1/4 and (b) τ = 4 999 999/10 000 000; k = 512, δ = δ_s = 1 − 2τ.]

Coded Gauss doesn't work

In conclusion, the following:

1. Apply a covering code to reduce to a k′-sized problem.
2. Use Gauss to solve the problem.

is not faster than only applying step 2.

Note: our analysis was limited to the above algorithm. We have results that show the following may work:

1. Apply some reduction to reduce to a k′-sized problem with noise τ′.
2. Apply a covering code to reduce to a k′′-sized problem with noise τ′′.
3. Use Gauss to solve the problem.

However, we would also need to include the complexity of step 1 when analysing this combination.

Outline: Intro · Learning Parity with Noise · Breaking LPN · The covering-codes reduction · Covering Codes · Combinations of reductions · What we are working on

Improving the performance of the covering-codes reduction

The covering-codes reduction as originally proposed:

1. Apply the covering-codes reduction.
2. Recover information on s using the Walsh-Hadamard transform.

Picking codes is hard: much of the work around this attack has gone into finding the right codes to instantiate it.

Concatenated Codes

Current attacks use concatenations of small perfect codes to construct larger [k, k′] codes.

Example: we construct the following [12, 4] code from [3, 1] repetition codes with generator (1 1 1):

    1 1 1 0 0 0 0 0 0 0 0 0
    0 0 0 1 1 1 0 0 0 0 0 0
    0 0 0 0 0 0 1 1 1 0 0 0
    0 0 0 0 0 0 0 0 0 1 1 1

The bc of a concatenated code is the product of the bc of the smaller codes.

Concatenated Codes (cont.)

Decoding algorithm (for the same [12, 4] example):

1. Generate lookup tables for the small codes.
2. Split your vector along the small codes.
3. Look up the codewords for the individual pieces in the lookup tables.
4. Concatenate.
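For [3, 1] repetition blocks the "lookup" is just a majority vote, so the whole decoder fits in a few lines; a sketch with a bit-list representation of our choosing:

    # Sketch of decoding a concatenation of [3, 1] repetition codes:
    # split into 3-bit blocks, decode each block (majority vote stands
    # in for the lookup table), and concatenate the results.
    def decode_repetition_concat(bits, block=3):
        return [1 if sum(bits[i:i + block]) > block // 2 else 0
                for i in range(0, len(bits), block)]

    # e.g. decode_repetition_concat([1,1,0, 0,0,0, 1,1,1, 0,1,0]) == [1, 0, 1, 0]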

StGen codes

Samardjiska and Gligoroski proposed an improvement on these concatenations of codes: we add random noise on top of the blocks. The generator has the form G = (I_k | B), where B contains a staircase of blocks B_1, …, B_v (block B_i of size k_i × n_i), random bits B′_i above the staircase, and zeros below it. (1)

Simona proposed using these codes with the covering-codes reduction at a department lunch talk.

Decoding StGen Codes

Using the same generator G as in (1), the decoding algorithm sketch:

1. Set maximum error weights and limits.
2. Split the vector into pieces.
3. Produce all candidate codewords and error vectors for the first block B_1.
4. Multiply each of these by B′_2 to account for that random noise.
5. Generate all the candidates for B_2.
6. Increase the maximum weights if you have few candidates for the next round.

Complications of StGen codes

◮ Decoding is not trivial.
◮ The decoding algorithm is based on list decoding [SG17].
