1. Recent Advances in Decoding Random Binary Linear Codes — and Their Implications to Crypto. Alexander May, Horst Görtz Institute for IT-Security, Faculty of Mathematics, Ruhr-University Bochum. Lattice Coding and Crypto Meeting, May 2017, UCL.

2. Linear Codes and Distance
Definition (Linear Code). A linear code C is a k-dimensional subspace of F_2^n. Represent it via:
- Generator matrix: C = { xG ∈ F_2^n | x ∈ F_2^k }, where G ∈ F_2^{k×n}.
- Parity check matrix: C = { c ∈ F_2^n | Hc = 0 }, where H ∈ F_2^{(n−k)×n}.
Random code: G ∈_R F_2^{k×n}, respectively H ∈_R F_2^{(n−k)×n}.
- Random codes are hard instances for decoding.
- Crypto motivation: scramble a structured C into a "random"-looking SCT.
- Good generic hardness criterion.

3. Bounded and Full Distance Decoding
Definition (Distance). d = min_{c ≠ c′ ∈ C} ∆(c, c′), where ∆ is the Hamming distance.
Remark: unique decoding of c + e whenever ∆(e) ≤ (d−1)/2.
Definition (Bounded Distance Decoding, BD). Given: H and x = c + e with c ∈ C, ∆(e) ≤ (d−1)/2. Find: e, and thus c = x + e.
Syndrome decoding: the syndrome s := Hx = H(c + e) = Hc + He = He.
Bounded Distance is the usual case in crypto.
Definition (Full Distance Decoding, FD). Given: H and x ∈ F_2^n. Find: c with ∆(c, x) ≤ d.
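A tiny toy check of the syndrome identity s = Hx = He (the matrix and vectors here are a hypothetical illustration, not from the talk):

```python
# Toy parity check matrix H = (A | I_3) over F_2 with n = 5, k = 2.
H = [[1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [0, 1, 0, 0, 1]]
c = [1, 1, 1, 0, 1]                  # a codeword: Hc = 0
e = [0, 0, 1, 0, 0]                  # an error of weight 1
x = [a ^ b for a, b in zip(c, e)]    # received word x = c + e

syndrome = lambda v: [sum(h * vi for h, vi in zip(row, v)) % 2 for row in H]
assert syndrome(c) == [0, 0, 0]      # Hc = 0
assert syndrome(x) == syndrome(e)    # s = Hx = He
```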

4. On Running Times
The running time of any decoding algorithm is a function of (n, k, d).
Look at the map F_2^n → F_2^{n−k}, e ↦ He, restricted to ∆(e) ≤ d. The map is injective if C(n, d) < 2^{n−k}. Writing C(n, d) ≈ 2^{H(d/n)·n}, with H(·) the binary entropy function, this yields H(d/n) < 1 − k/n (Gilbert–Varshamov bound).
For random codes this bound is sharp. Hence we can directly link d to n, k, and the running time becomes a function of n, k only.
Since BD/FD decoding is NP-hard, we expect running time T(n, k) = 2^{f(k/n)·n}.
For simplicity, we are mainly interested in T(n) = max_k T(n, k).
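As a quick illustration of this link between d and (n, k), one can solve H(d/n) = 1 − k/n numerically. A minimal sketch (function names are ours):

```python
from math import log2

def H2(x):
    """Binary entropy function H(x)."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def gv_relative_distance(rate):
    """Largest d/n <= 1/2 with H(d/n) <= 1 - rate, by bisection."""
    lo, hi = 0.0, 0.5
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if H2(mid) <= 1 - rate else (lo, mid)
    return lo

print(gv_relative_distance(0.5))   # rate k/n = 1/2  ->  d/n ~ 0.110
```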

5. Running Time graphically
[Figure: plot of the running-time exponent; omitted.]

6. The Way to go

Running-time exponents c in T(n) = 2^{c·n}:

Algorithm    | FD    | BD
Prange (62)  | 0.121 | 0.0576
Stern (89)   | 0.117 | 0.0557
MMT (11)     | 0.112 | 0.0537
BJMM (12)    | 0.102 | 0.0494
MO (15)      | 0.097 | 0.0473

Figures: Full Distance decoding (FD) and Bounded Distance decoding (BD).

7. Let's just start.
Goal: solve He = s for e of small weight.
Assumption: w.l.o.g. we know ω := ∆(e).
Algorithm Exhaustive Search
INPUT: H, x, ω
1. For all e ∈ F_2^n with ∆(e) = ω: check whether He = s = Hx.
OUTPUT: e
Running time: T(n) = C(n, ω) ≤ 2^{0.386·n}.
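A minimal sketch of this brute-force step, assuming H is given as a list of rows over F_2 and s = Hx is precomputed:

```python
from itertools import combinations

def exhaustive_search(H, s, n, w):
    """Try all weight-w error vectors e and return one with He = s."""
    for support in combinations(range(n), w):          # positions of e's 1s
        # He over F_2 is the XOR of the columns of H indexed by support
        if all(sum(row[i] for i in support) % 2 == sj
               for row, sj in zip(H, s)):
            e = [0] * n
            for i in support:
                e[i] = 1
            return e
    return None
```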

8. Allowed Transformations
Linear algebra transformations for s = He:
1. Column permutation: s = He = (HP)(P⁻¹e) for some permutation matrix P ∈ F_2^{n×n}.
2. Elementary row operations: GHe = Gs =: s′ for some invertible matrix G ∈ F_2^{(n−k)×(n−k)}.
Easy special cases:
1. Quadratic case H ∈ F_2^{n×n}: compute e = H⁻¹s.
2. Any weight ∆(e): compute GHe = (H′ | I_{n−k})e = Gs.
Remark: hardness/uniqueness comes from the system being under-determined combined with the small-weight constraint.
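The second special case suggests the standard preprocessing used by all algorithms below: bring H into systematic form (H′ | I_{n−k}) by Gaussian elimination over F_2. A hedged sketch (the helper name systematize is ours; extra columns, e.g. an appended syndrome, are carried along by the same row operations):

```python
def systematize(rows, n):
    """rows: m = n-k rows of length >= n over F_2 (columns beyond n, such
    as an appended syndrome, are carried along). Pivot on columns
    n-m .. n-1 so the left n columns become (H' | I_{n-k})."""
    A = [row[:] for row in rows]                 # work on a copy
    m = len(A)
    for j in range(m):
        col = n - m + j                          # target pivot column
        pivot = next((i for i in range(j, m) if A[i][col]), None)
        if pivot is None:
            return None                          # unlucky permutation: retry
        A[j], A[pivot] = A[pivot], A[j]
        for i in range(m):                       # clear column everywhere else
            if i != j and A[i][col]:
                A[i] = [a ^ b for a, b in zip(A[i], A[j])]
    return A
```

If a pivot is missing one simply draws a fresh column permutation, which is exactly what the outer loops of the algorithms below do anyway.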

9. Prange's algorithm (1962)
Idea: (H′ | I_{n−k})(e_1 || e_2) = H′e_1 + e_2 = s′.
Algorithm Prange
INPUT: H, x, ω
REPEAT
1. Permute columns, construct systematic form (H′ | I_{n−k}). Fix p < ω.
2. For all e_1 ∈ F_2^k with ∆(e_1) = p: if ∆(H′e_1 + s′) = ω − p, success.
UNTIL success
OUTPUT: undo the permutation of e = (e_1 || H′e_1 + s′).
Running time: each outer iteration succeeds with probability C(k, p)·C(n−k, ω−p) / C(n, ω); the inner loop has running time C(k, p). Total: C(n, ω) / C(n−k, ω−p), optimal for p = 0.
Yields running time T(n) = 2^{n/17}, with constant memory.
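A toy sketch of Prange's loop for the optimal choice p = 0, reusing the hypothetical systematize() helper from the previous sketch (parameter handling is our illustration, not the talk's reference code):

```python
import random

def prange(H, s, n, k, w, max_iter=10**6):
    """Search for e of weight w with He = s; p = 0 variant."""
    for _ in range(max_iter):
        perm = random.sample(range(n), n)              # random column permutation
        rows = [[row[j] for j in perm] + [sj]          # permuted H, syndrome appended
                for row, sj in zip(H, s)]
        A = systematize(rows, n)                       # (H' | I_{n-k} | s') or None
        if A is None:
            continue                                   # singular: re-permute
        s_prime = [row[n] for row in A]                # transformed syndrome s'
        if sum(s_prime) == w:                          # then e = (0^k || s') works
            e = [0] * n
            for pos in range(n - k):
                e[perm[k + pos]] = s_prime[pos]        # undo the permutation
            return e
    return None
```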

10. Stern's algorithm (1989)
Meet in the Middle: (H_1 | H_2 | I_{n−k})(e_1 || e_2 || e_3) = H_1e_1 + H_2e_2 + e_3 = s′.
Algorithm Stern
INPUT: H, x, ω
REPEAT
1. Permute columns, construct systematic form (H_1 | H_2 | I_{n−k}). Fix p < ω.
2. For all e_1 ∈ F_2^{k/2} with ∆(e_1) = p/2: store H_1e_1 in a sorted list L_1.
3. For all e_2 ∈ F_2^{k/2} with ∆(e_2) = p/2: store H_2e_2 + s′ in a sorted list L_2.
4. Search for elements of L_1, L_2 that differ by ∆(e_3) = ω − p.
UNTIL success
OUTPUT: undo the permutation of e = (e_1 || e_2 || H_1e_1 + H_2e_2 + s′).
Step 4: look for vectors that match completely on ℓ coordinates.
T(n) = 2^{n/18}, but requires memory to store L_1, L_2.
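A hedged sketch of steps 2–4, bucketing the list values by their first ℓ coordinates instead of explicit sorting (function names and data layout are our assumptions):

```python
from itertools import combinations

def matvec_cols(H, support):
    """Sum (over F_2) of the columns of H indexed by support."""
    return tuple(sum(row[i] for i in support) % 2 for row in H)

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

def stern_collisions(H1, H2, s_prime, half_k, p, w, ell):
    """Find (e1, e2) with H1 e1 + H2 e2 + s' of weight w - p."""
    buckets = {}
    for sup1 in combinations(range(half_k), p // 2):   # build L1
        u = matvec_cols(H1, sup1)
        buckets.setdefault(u[:ell], []).append((sup1, u))
    for sup2 in combinations(range(half_k), p // 2):   # stream L2
        v = xor(matvec_cols(H2, sup2), s_prime)
        for sup1, u in buckets.get(v[:ell], []):       # exact match on ell coords
            e3 = xor(u, v)                             # candidate e3 = H1e1 + H2e2 + s'
            if sum(e3) == w - p:
                return sup1, sup2, e3
    return None
```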

11. Representation Technique (Howgrave-Graham, Joux)
Meet in the Middle: split e = (e_1 || e_2) with e_1, e_2 ∈ F_2^{k/2} of weight ∆(e_i) = p/2 each. Combination of e_1, e_2 is via concatenation. Unique representation of e in terms of e_1, e_2.
Representation [May, Meurer, Thomae 2011]: split e = e_1 + e_2 with e_1, e_2 ∈ F_2^k of weight ∆(e_i) = p/2 each. Combination of e_1, e_2 is via addition in F_2^k. e has many representations as e_1 + e_2.
Example for k = 8, p = 4:
(01101001) = (01100000) + (00001001)
           = (01001000) + (00100001)
           = (01000001) + (00101000)
           = (00101000) + (01000001)
           = (00100001) + (01001000)
           = (00001001) + (01100000)
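The k = 8, p = 4 example can be reproduced mechanically: every split of e's four 1-positions into two halves of size two gives one representation. A small illustrative script that prints exactly the six lines above:

```python
from itertools import combinations

e = [0, 1, 1, 0, 1, 0, 0, 1]                  # (01101001), weight p = 4
ones = [i for i, b in enumerate(e) if b]
for half in combinations(ones, len(ones) // 2):
    e1 = [1 if i in half else 0 for i in range(8)]
    e2 = [a ^ b for a, b in zip(e, e1)]       # e2 = e + e1 over F_2
    print(''.join(map(str, e1)), '+', ''.join(map(str, e2)))
```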

12. Pros and Cons of representations
Representation [MMT, Asiacrypt 2011]: split e = e_1 + e_2 with e_1, e_2 ∈ F_2^k of weight ∆(e_i) = p/2 each.
Disadvantages:
- The lengths of the lists L_1, L_2 increase from C(k/2, p/2) to C(k, p/2).
- The addition of e_1, e_2 usually yields Hamming weight smaller than p.
Advantage:
- e has C(p, p/2) =: R representations as e_1 + e_2.
Construct via divide & conquer only a 1/R-fraction of L_1, L_2. Since many solutions exist, it is easier to construct a special one.
Example: look only for H_1e_1 and H_2e_2 + s′ whose last log(R) coordinates are 0.
Advantage (may) dominate whenever C(k, p/2)/R < C(k/2, p/2).
Result: yields running time 2^{n/19}.
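A quick numeric check of the advantage condition, for two arbitrary (hypothetical) parameter pairs:

```python
from math import comb

for k, p in [(100, 10), (1000, 40)]:
    R = comb(p, p // 2)                       # number of representations
    holds = comb(k, p // 2) / R < comb(k // 2, p // 2)
    print(f"k={k}, p={p}: advantage dominates: {holds}")
```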

13. More representations (Becker, Joux, May, Meurer 2012)
Idea: choose e_1, e_2 ∈ F_2^k with weight ∆(e_i) = p/2 + ε each. Choose ε such that ε of the 1-positions cancel in expectation.
In MMT: C(p, p/2) representations of 1's as 1 = 1 + 0 = 0 + 1.
Now: additionally C(k−p, ε) representations of 0's as 0 = 1 + 1 = 0 + 0.
Paper subtitle: "How 1 + 1 = 0 Improves Information Set Decoding".
Yields T(n) = 2^{n/20}.
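The gain is visible by counting representations: a small illustration with hypothetical parameters, where BJMM multiplies MMT's count by the extra factor C(k−p, ε):

```python
from math import comb

k, p, eps = 1000, 40, 8                        # hypothetical parameters
R_mmt  = comb(p, p // 2)                       # splittings of 1 = 1+0 = 0+1
R_bjmm = comb(p, p // 2) * comb(k - p, eps)    # plus splittings of 0 = 1+1 = 0+0
print(f"R_MMT = {R_mmt:.3e}, R_BJMM = {R_bjmm:.3e}")
```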

14. How to construct special solutions
[Figure: illustration of the BJMM algorithm as a four-layer merging tree.
Layer 3: disjoint base lists B_{i,1} and B_{i,2} for i = 1, …, 4.
Layer 2: lists L_1^{(2)}, …, L_4^{(2)} of weight p_2 = p_1/2 + ε_2.
Layer 1: lists L_1^{(1)}, L_2^{(1)} of weight p_1 = p/2 + ε_1, each obtained by merging (⊲⊳) two layer-2 lists on r_2 coordinates.
Layer 0: final list L of weight p, obtained by merging the two layer-1 lists on r_1 coordinates.]

15. A word about memory

               Bounded Distance       Full Distance
               time      space        time      space
Prange         0.05752   –            0.1208    –
Stern          0.05564   0.0135       0.1167    0.0318
Ball-collision 0.05559   0.0148       0.1164    0.0374
MMT            0.05364   0.0216       0.1116    0.0541
BJMM           0.04934   0.0286       0.1019    0.0769

16. Stern's algorithm (1989), revisited
Recall step 4 of Stern's algorithm (slide 10): search for elements of L_1, L_2 that differ by ∆(e_3) = ω − p, by matching completely on ℓ coordinates. T(n) = 2^{n/18}, but requires memory to store L_1, L_2.

17. Nearest Neighbor Problem
Definition (Nearest Neighbor Problem). Given: L_1, L_2 ⊂_R F_2^n with |L_i| = 2^{λn}. Find: all (u, v) ∈ L_1 × L_2 with ∆(u, v) = γn.
Easy cases:
1. γ = 1/2: test every combination in L_1 × L_2; run time 2^{2λn(1+o(1))}.
2. γ = 0: sort the lists and find matching pairs; run time 2^{λn(1+o(1))}.
Theorem (May, Ozerov 2015). Nearest Neighbor can be solved in time 2^{(1/(1−γ))·λn·(1+o(1))}.
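The three run-time exponents on this slide, compared numerically (a trivial sketch; the λ and γ values are illustrative):

```python
def nn_exponents(lam, gamma):
    """Run-time exponents (as multiples of n) for the three approaches."""
    naive      = 2 * lam               # test all of L1 x L2
    sort_match = lam                   # only correct for gamma = 0
    may_ozerov = lam / (1 - gamma)     # Theorem (May, Ozerov 2015)
    return naive, sort_match, may_ozerov

print(nn_exponents(lam=0.1, gamma=0.25))   # (0.2, 0.1, 0.1333...)
```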

18. Main Idea of Nearest Neighbor
Observation: nearest neighbors are also locally near.
[Figure: lists L_1 ∋ u and L_2 ∋ v, each of size 2^{λn}; exponentially many sublist pairs L′_1, L′_2 are created by choosing random partitions P.]
For at least one sublist pair we have (u, v) ∈ L′_1 × L′_2 w.o.p.
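A toy sketch of this "sample and hope" filtering, heavily simplified: the actual May–Ozerov algorithm uses a recursive sequence of random partitions and weight constraints, whereas here one sublist pair is cut out per random coordinate subset (names and the balance criterion are our illustration):

```python
import random

def random_sublist_pair(L1, L2, n, a):
    """Keep only vectors that are balanced on a random coordinate subset A
    of size a. Repeating with fresh A's, a true nearest pair (u, v) lands
    in the same sublist pair with noticeable probability, and the exact
    matching on the (much shorter) sublists becomes cheaper."""
    A = random.sample(range(n), a)
    balanced = lambda u: sum(u[i] for i in A) == a // 2
    return [u for u in L1 if balanced(u)], [v for v in L2 if balanced(v)]
```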
