Separable Statistics and Multidimensional Linear Cryptanalysis



  1. Separable Statistics and Multidimensional Linear Cryptanalysis. Stian Fauskanger (FFI, Norway), Igor Semaev (Univ. of Bergen, Norway). 28 March 2019, FSE, Paris

  2. Briefly ◮ Matsui’s Linear Cryptanalysis is based on the distribution of x_1 ⊕ … ⊕ x_s ⊕ y_1 ⊕ … ⊕ y_t, where X = x_1, …, x_s are some plain-text bits and Y = y_1, …, y_t some cipher-text bits in Algorithm 1 ◮ In Algorithm 2 the bits X are inputs to the second round, and Y to the last round ◮ Starting point of our work: a method for computing the joint distribution of (X, Y) = (x_1, …, x_s, y_1, …, y_t) ◮ The distributions (both Matsui’s and ours) are approximate ◮ They depend on small sets of cipher key-bits or their linear combinations ◮ Algorithm 2-like cryptanalysis is then applied

  3. Outline ◮ Matsui’s Algorithm 2 and LLR statistic ◮ New Statistic Construction ◮ Optimisation Problem and Search Algorithm ◮ Implementation for 16-round DES ◮ Multidimensional Distributions in Feistel Ciphers ◮ Conclusions

  4. Outline ◮ Matsui’s Algorithm 2 and LLR statistic

  5. Round Cipher Cryptanalysis with Algorithm 2 [Diagram: plain-text enters the cipher and cipher-text leaves it; X are bits entering the second round, depending on key-bits key; Y are bits entering the last round, depending on key-bits Key]

  6. Logarithmic Likelihood Ratio (LLR) Statistic ◮ To distinguish two distributions with densities P(x), Q(x) ◮ by independent observations ν_1, …, ν_n ◮ Most powerful test (Neyman–Pearson lemma): ◮ Accept P(x) if Σ_{i=1}^n ln( P(ν_i) / Q(ν_i) ) > threshold ◮ The left-hand side is called the LLR statistic
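The test on this slide can be sketched in a few lines. A minimal illustration, where the two distributions P, Q over {0, 1} and the observed sequence are made-up example values, not taken from the slides:

```python
import math

# Two hypothetical distributions over a binary alphabet (assumed values).
P = {0: 0.6, 1: 0.4}
Q = {0: 0.5, 1: 0.5}

def llr(observations):
    """LLR statistic: sum of ln(P(v)/Q(v)) over independent observations."""
    return sum(math.log(P[v] / Q[v]) for v in observations)

obs = [0, 0, 1, 0, 0, 1, 0, 0]   # example sample: 6 zeros, 2 ones
threshold = 0.0
accept_P = llr(obs) > threshold  # Neyman-Pearson decision rule
print(accept_P)                  # True
```

With these values the statistic is 6·ln(0.6/0.5) + 2·ln(0.4/0.5) ≈ 0.65 > 0, so the test accepts P.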

  7. Algorithm 2 Cryptanalysis with LLR statistic ◮ The distribution of (X, Y) depends on key-bits key ◮ The observation on (X, Y) depends on key-bits Key ◮ The LLR statistic depends on key ∪ Key ◮ Distinguish correct and incorrect key ∪ Key with the LLR statistic ◮ by computing 2^|key ∪ Key| values of LLR ◮ For large (X, Y) the number of key-bits involved, |key ∪ Key|, may be too large ◮ Not efficient

  8. New Statistic ◮ Instead of 2^|key ∪ Key| computations of LLR-values ◮ Our work: ≪ 2^|key ∪ Key| (≈ 10^3 times faster in DES) ◮ by using a new statistic ◮ which reflects the structure of the round function ◮ There is a price to pay, but the trade-off is positive

  9. Outline ◮ New Statistic Construction

  10. LLRs for Projections ◮ (h_1, …, h_m) are some subvectors (projections) of (X, Y) such that ◮ the distribution and observation for h_i depend on a smaller number of key-bits key_i ∪ Key_i ◮ LLR_i is the LLR-statistic for h_i ◮ The vector (LLR_1, …, LLR_m) is asymptotically distributed as ◮ m-variate N(nμ, nC) if key ∪ Key is correct ◮ close to N(−nμ, nC) if key ∪ Key is incorrect ◮ Mean vector μ, covariance matrix C, number of plain-texts n

  11. LLR for Two Normal Distributions ◮ LLR statistic S to distinguish the two normal distributions N(nμ, nC) and N(−nμ, nC) ◮ S degenerates to a linear function: ◮ S(key ∪ Key, ν) = Σ_{i=1}^m S_i(key_i ∪ Key_i, ν_i), ◮ where S_i = ω_i LLR_i is a weighted LLR statistic for h_i ◮ ν is the observation on (X, Y) and ν_i the observation on h_i ◮ S is separable ◮ For polynomial distributions the theory of separable statistics was developed by Ivchenko, Medvedev and others in the 1970s

  12. Distribution ◮ S is distributed as 1-variate N(u, u) if key ∪ Key is correct ◮ close to N(−u, u) if incorrect ◮ for an explicit positive u

  13. Cryptanalysis ◮ Find key ∪ Key s.t. S(key ∪ Key, ν) > threshold ◮ without brute-forcing key ∪ Key ◮ This can be done since ◮ S(key ∪ Key, ν) = Σ_{i=1}^m S_i(key_i ∪ Key_i, ν_i) ◮ and |key_i ∪ Key_i| is much smaller than |key ∪ Key| ◮ |key ∪ Key| = 54 and |key_i ∪ Key_i| ≈ 20 in DES ◮ by efficiently solving an optimisation problem with a Search Algorithm

  14. Outline ◮ Optimisation Problem and Search Algorithm

  15. Optimisation Problem Example
  S_1(x_1 ⊕ x_3, x_2): S_1(0,0) = 0.1, S_1(0,1) = 0.2, S_1(1,0) = 0.3, S_1(1,1) = 0.1
  S_2(x_1 ⊕ x_2): S_2(0) = 0.5, S_2(1) = 0.1
  S_3(x_1, x_2 ⊕ x_3): S_3(0,0) = 0.4, S_3(0,1) = 0.5, S_3(1,0) = 0.7, S_3(1,1) = 0.1
  Find binary x_1, x_2, x_3 s.t. S(x_1, x_2, x_3) = S_1(x_1 ⊕ x_3, x_2) + S_2(x_1 ⊕ x_2) + S_3(x_1, x_2 ⊕ x_3) > 1.3
  Threshold is 1.3, solution 111
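This toy instance is small enough to check by enumerating all 2^3 assignments; a minimal sketch using the tables from the slide:

```python
from itertools import product

# Lookup tables of the toy separable statistic from the slide.
S1 = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.1}  # S1(x1^x3, x2)
S2 = {0: 0.5, 1: 0.1}                                      # S2(x1^x2)
S3 = {(0, 0): 0.4, (0, 1): 0.5, (1, 0): 0.7, (1, 1): 0.1}  # S3(x1, x2^x3)

def S(x1, x2, x3):
    """S(x1,x2,x3) = S1(x1^x3, x2) + S2(x1^x2) + S3(x1, x2^x3)."""
    return S1[(x1 ^ x3, x2)] + S2[x1 ^ x2] + S3[(x1, x2 ^ x3)]

threshold = 1.3
solutions = [x for x in product((0, 1), repeat=3) if S(*x) > threshold]
print(solutions)  # [(1, 1, 1)]
```

Only x_1 x_2 x_3 = 111 exceeds the threshold, with S(1,1,1) = 0.2 + 0.5 + 0.7 = 1.4.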

  16. Search Tree [Diagram: tree over successive fixations of x_1, then x_1, x_2, then x_1, x_2, x_3; branch x_1 = 0 is cut, branches 10 and 110 are cut, leaf 111 survives] ◮ One walks over the search tree and checks whether the inequality S_1(x_1 ⊕ x_3, x_2) + S_2(x_1 ⊕ x_2) + S_3(x_1, x_2 ⊕ x_3) > 1.3 ◮ is feasible under the current fixation ◮ Cut if not feasible, continue if feasible ◮ One checks 6 linear inequalities; brute force takes 8 evaluations ◮ In the same way one solves S(key ∪ Key, ν) = Σ_{i=1}^m S_i(key_i ∪ Key_i, ν_i) > threshold
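The tree walk on the same toy instance can be sketched as a depth-first search with cutting. The feasibility check below bounds each term S_i independently from above over all completions of the current fixation; this is an illustrative reconstruction of the idea, not the authors' implementation:

```python
from itertools import product

S1 = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.1}  # S1(x1^x3, x2)
S2 = {0: 0.5, 1: 0.1}                                      # S2(x1^x2)
S3 = {(0, 0): 0.4, (0, 1): 0.5, (1, 0): 0.7, (1, 1): 0.1}  # S3(x1, x2^x3)

def bound(prefix):
    """Upper-bound S by maximising each term separately over all
    completions of the partial assignment (x1, x2, ...)."""
    free = 3 - len(prefix)
    b1 = b2 = b3 = float("-inf")
    for tail in product((0, 1), repeat=free):
        x1, x2, x3 = tuple(prefix) + tail
        b1 = max(b1, S1[(x1 ^ x3, x2)])
        b2 = max(b2, S2[x1 ^ x2])
        b3 = max(b3, S3[(x1, x2 ^ x3)])
    return b1 + b2 + b3

def search(prefix, threshold, out):
    if bound(prefix) <= threshold:   # cut: inequality infeasible here
        return
    if len(prefix) == 3:             # leaf survived every check
        out.append(tuple(prefix))
        return
    for bit in (0, 1):               # extend the current fixation
        search(prefix + [bit], threshold, out)

solutions = []
search([], 1.3, solutions)
print(solutions)  # [(1, 1, 1)]
```

With these tables the branch x_1 = 0 is bounded by 0.3 + 0.5 + 0.5 = 1.3 and is cut immediately, matching the tree on the slide.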

  17. Success Probability & Number of (key ∪ Key)-candidates ◮ The search tree outputs (key ∪ Key)-candidates for the final brute force ◮ The distribution of S(key ∪ Key, ν) is known ◮ so one can compute the success probability and ◮ the number of wrong solutions, that is, of (key ∪ Key)-candidates

  18. Outline ◮ Implementation for 16-round DES

  19. Two 14-bit Vectors ◮ DES_K(X_0, X_1) = (X_17, X_16) ◮ Matsui’s best linear approximation X_2{24, 18, 7} ⊕ X_15{15} ⊕ X_16{24, 18, 7, 29} ◮ We use two 14-bit vectors: X_2[24, 18, 7, 29], X_15[16, 15, …, 11], X_16[24, 18, 7, 29] and X_1[24, 18, 7, 29], X_2[16, 15, …, 11], X_15[24, 18, 7, 29] ◮ They are considered independent as they incorporate different bits ◮ Computing their distributions took a few seconds

  20. Projections ◮ 28 projections: X_2[24, 18, 7, 29], X_15[i, j], X_16[24, 18, 7, 29] and X_1[24, 18, 7, 29], X_2[i, j], X_15[24, 18, 7, 29] ◮ For each projection the LLR depends on at most 21 key-bits ◮ 54 key-bits overall ◮ Two separable statistics for the two independent bunches of projections ◮ The Search Algorithm combines (≤ 21)-bit values to find 54-bit candidates ◮ Those candidates are brute-forced

  21. One Particular Projection ◮ Projection h_1: X_2[24, 18, 7, 29], X_15[16, 15], X_16[24, 18, 7, 29] ◮ key_1 ∪ Key_1 incorporates 20 unknowns: x_63, x_61, x_60, x_53, x_46, x_42, x_39, x_36, x_31, x_30, x_27, x_26, x_25, x_22, x_21, x_12, x_10, x_7, x_5, and x_57 + x_51 + x_50 + x_19 + x_18 + x_15 + x_14, where the x_i are bits of the 56-bit DES key ◮ 2^20 values of S_1 = ω_1 LLR_1 ◮ Similar for the other 27 projections

  22. Key-variables Order for the Search Tree ◮ One needs key ∪ Key ordered to run a tree search ◮ x_2 appears in 14 (the maximal number) of the key_i ∪ Key_i, etc.: x_2, x_19, x_60, x_34, x_10, x_17, x_59, x_36, x_42, x_27, x_25, x_52, x_11, x_33, x_51, x_9, x_23, x_28, x_5, x_55, x_46, x_22, x_62, x_15, x_37, x_47, x_7, x_54, x_39, x_31, x_29, x_20, x_61, x_63, x_30, x_38, x_26, x_50, x_1, x_57, x_18, x_14, x_35, x_44, x_3, x_21, x_41, x_13, x_4, x_45, x_53, x_6, x_12, x_43

  23. Search Tree Algorithm Run ◮ We fix the desired success rate 0.83 ◮ solve the equation n = |keys to brute force| for n ◮ getting n = 2^41.8 ◮ [Plot: the number of tree nodes, log_2 scale] ◮ |(key ∪ Key)-candidates| = 2^39.8, |keys to brute force| = 2^41.8 ◮ The number of nodes is 2^45.5 ≪ 2^54; constructing the nodes is faster (in bit operations) than the final brute force ◮ This improves Matsui’s result on DES (n = 2^43, success rate 0.85)

  24. Outline ◮ Multidimensional Distributions in Feistel Ciphers

  25. r-Round DES ◮ DES_K(X) = Y, where X is random and E is any event ◮ We want to compute Pr(E) in r-round DES; let’s formalise ◮ X_0, X_1, …, X_{r+1} are random, independently generated 32-bit blocks; the event C defines DES: X_{i−1} ⊕ X_{i+1} = F_i(X_i, K_i), i = 1, …, r ◮ K_1, …, K_r are fixed round keys; we need Pr(E | C) = Pr(EC) / Pr(C) = 2^{32r} Pr(EC) ◮ infeasible as C depends on all key-bits

  26. Relax C ◮ One chooses a larger event C_α (that is, C implies C_α): X_{i−1}[α_i] ⊕ X_{i+1}[α_i] = F_i(X_i, K_i)[α_i], i = 1, …, r ◮ where α = (α_1, …, α_r); then Pr(C_α) = 2^{−Σ_{i=1}^r |α_i|} ◮ Let’s accept Pr(E | C) ≈ Pr(E | C_α) = Pr(EC_α) / Pr(C_α) = 2^{Σ_{i=1}^r |α_i|} Pr(EC_α) ◮ C_α depends on a smaller number of key-bits; now feasible and may be computed exactly

  27. Regular Trails ◮ To compute the distribution of Z = X_0[α_1], X_1[α_2 ∪ β_1], X_r[α_{r−1} ∪ β_r], X_{r+1}[α_r] ◮ one chooses the event C_α, where α = (α_1, …, α_r), and the trail X_i[β_i], F_i[α_i], i = 1, …, r ◮ The trail is called regular if γ_i ∩ (α_{i−1} ∪ α_{i+1}) ⊆ β_i ⊆ γ_i, i = 1, …, r, where X_i[γ_i] are the input bits relevant to F_i[α_i] ◮ For a regular trail, Pr(Z = A | C_α) is computed with a convolution-type formula and only depends on the α_i
