# Randomness and Intractability in Kolmogorov Complexity (Igor Carboni Oliveira)

1. Randomness and Intractability in Kolmogorov Complexity. Igor Carboni Oliveira, University of Oxford. ICALP 2019.

2. Background and motivation

3. Structure versus Randomness
   - Given a string x ∈ {0,1}^n, is it "structured" or "random"?
   - This question is relevant to several fields, including:
     - Learning: detecting patterns/structure in data.
     - Crypto: encrypted strings must look random.
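As a quick illustration of the structured-versus-random dichotomy, the sketch below uses zlib's compressed length as a crude, computable proxy for structure. This is only an analogy: true Kolmogorov complexity is uncomputable, and `compression_ratio` is a name introduced here for illustration.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size: a crude, computable proxy
    for how 'structured' a string is (not Kolmogorov complexity)."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"01" * 4096          # highly patterned 8 KiB string
random_ish = os.urandom(8192)      # 8 KiB from the OS entropy source

# A patterned string compresses to a tiny fraction of its length,
# while random-looking data is essentially incompressible.
assert compression_ratio(structured) < 0.05
assert compression_ratio(random_ish) > 0.95
```

A learner or a cryptanalyst is, in spirit, trying to do better than such a generic compressor on its particular family of inputs.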

4. Complexity of strings
   - There are different ways of measuring the complexity of x.
   - This talk: the hardness of estimating complexity.
   - If provably secure cryptography exists, algorithms should not be able to estimate the "complexity" of strings.

6. Circuit complexity and Kolmogorov complexity
   - Circuit complexity:
     - View x as a boolean function f : {0,1}^ℓ → {0,1}.
     - complexity(x) = minimum size of a circuit computing f.
     - Deciding this complexity is exactly MCSP; showing it is hard implies P ≠ NP.
   - Kolmogorov complexity:
     - complexity(x) = minimum length of a TM that prints x.
     - Estimating the complexity of x is undecidable.
   - These two notions are "extremal"... Is there an intermediate notion that is useful?

9. Time-bounded Kolmogorov complexity
   - Introduced by L. Levin in 1984.
   - Takes into account both the description length and the running time of the TM:

     Kt(x) := min { |M| + log t : M is a TM that prints x in time t }

   - Kt(x) can be computed in exponential time (brute force).

   | Circuit complexity | Levin's (time-bounded) Kt | Kolmogorov complexity |
   |--------------------|---------------------------|-----------------------|
   | NP                 | EXP                       | undecidable           |

12. Why is Kt an interesting measure?
   - The log t term gives the "right" trade-off: there is a connection to optimal search.
   - Example: deterministic generation of n-bit prime numbers. The fastest known algorithm runs in time 2^{n/2} [Lagarias-Odlyzko, 1987].
   - Is there a sequence {p_n} of n-bit primes such that Kt(p_n) = o(n)? This is true if and only if there is deterministic prime generation in time 2^{o(n)}.
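For contrast with the 2^{n/2} bound above, here is the naive deterministic generator: scan upward from 2^{n-1} and test primality. The primality test is deterministic Miller-Rabin with a fixed base set that is known to be correct for all inputs below roughly 3.3 * 10^24, which more than covers this illustration; the worst-case running time of the scan is exponential in n.

```python
def is_prime(m: int) -> bool:
    """Deterministic Miller-Rabin; the fixed bases below are known to be
    correct for all m < 3.3 * 10**24, which covers this illustration."""
    if m < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if m % p == 0:
            return m == p
    d, r = m - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, m)
            if x == m - 1:
                break
        else:
            return False
    return True

def smallest_n_bit_prime(n: int) -> int:
    """Scan upward from 2^(n-1): simple and deterministic, but exponential
    in n in the worst case -- far from 2^(n/2), let alone 2^o(n)."""
    m = 1 << (n - 1)
    while not is_prime(m):
        m += 1
    return m

assert smallest_n_bit_prime(8) == 131
assert smallest_n_bit_prime(16) == 32771
```

A short machine witnessing Kt(p_n) = o(n) would replace this exhaustive scan with a search over far fewer candidate descriptions, which is exactly the connection to optimal search.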

15. How difficult is it to compute the complexity of a string?
   - Can we compute Kt(x) in polynomial time?
   - The question was explicitly posed in [ABK+06]. Note that we already know P ≠ EXP...
   - It is strongly connected to the power of learning algorithms.
   - If provably secure cryptography exists, the answer should be negative.

16. Main Result

17. Summary of main contribution
   - We introduce a randomized analogue of Levin's Kt complexity.
   - Main result: randomized Kt complexity cannot be estimated in BPP. (The problem can be solved in randomized exponential time.)
   - This is an unconditional lower bound for a natural problem.

18. Randomized Kt complexity
   - An adaptation of Levin's definition to randomized computation.
   - For x ∈ {0,1}^n, we consider randomized TMs that generate x with high probability:

     rKt(x) := min { |M| + log t : M is a randomized TM with Pr_M[M prints x in time t] ≥ 2/3 }

   - Intuition: the string can be probabilistically decompressed from a short representation.
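The "prints x with probability ≥ 2/3" condition can be made concrete with a toy sketch. The decompressor below is entirely hypothetical (a stand-in for the TM M, not a real scheme); it also illustrates why the 2/3 threshold is not special: repeating the machine and taking a majority vote drives the failure probability down exponentially.

```python
import random
from collections import Counter

random.seed(2019)

def noisy_decompress(seed: str) -> str:
    """Hypothetical randomized 'machine': from a short description it
    prints the intended string with probability 0.9, garbage otherwise.
    (A stand-in for the TM M in the rKt definition, not a real scheme.)"""
    intended = seed * 16                     # short description -> long string
    if random.random() < 0.9:
        return intended
    return "".join(random.choice("01") for _ in range(len(intended)))

# Amplification: run the machine many times and take the majority output.
runs = [noisy_decompress("0110") for _ in range(101)]
majority, count = Counter(runs).most_common(1)[0]

assert majority == "0110" * 16
assert count > 50                            # strict majority of 101 runs
```

Since the garbage outputs are essentially never repeated, the majority output identifies the intended string except with astronomically small probability.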

19. Remarks about rKt complexity
   - Recall: rKt(x) := min { |M| + log t : M is a randomized TM with Pr_M[M prints x in time t] ≥ 2/3 }.
   - The definition is robust.
   - It is connected to pseudodeterministic algorithms. In particular, it follows from a recent joint work with R. Santhanam that there is an infinite sequence {p_m}_m of m-bit primes such that rKt(p_m) ≤ m^{o(1)}.
   - Under standard derandomization assumptions, Kt(x) = Θ(rKt(x)).

22. How difficult is it to compute the complexity of a string?
   - Can we compute Kt(x) in polynomial time? (MKtP: the Minimum Kt Problem)
   - Can we compute rKt(x) in randomized polynomial time? (MrKtP: the Minimum rKt Problem)

23. Main result: MrKtP is hard
   - "rKt cannot be approximated in quasi-polynomial time."
   - Theorem 1. For every ε > 0, there is no randomized algorithm running in time n^{poly(log n)} that distinguishes between rKt(x) ≤ n^ε and rKt(x) ≥ .99 n, where n is the length of the input string x.
   - Remark: this problem can be solved in randomized exponential time.

24. Techniques

25. Preliminaries
   - Gap-MrKtP[n^ε, .99n] is the promise problem with
     - YES_n := { x ∈ {0,1}^n | rKt(x) ≤ n^ε }
     - NO_n := { x ∈ {0,1}^n | rKt(x) > .99 n }
   - An algorithm for Gap-MrKtP[n^ε, .99n] must distinguish these two cases.
   - Approach: indirect diagonalization, using techniques from the theory of pseudorandomness.
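The promise-problem structure can be spelled out in a few lines. Everything here is a sketch: `rkt` is a hypothetical oracle (no such efficient oracle exists, by the main theorem), and the toy stand-in at the bottom is invented purely to exercise the two thresholds.

```python
from typing import Callable, Optional

def gap_mrktp(x: str, rkt: Callable[[str], float], eps: float = 0.5) -> Optional[str]:
    """Classify x for Gap-MrKtP[n^eps, .99n] given a (hypothetical)
    oracle rkt estimating rKt. Inputs between the two thresholds fall
    outside the promise; any answer is acceptable for them."""
    n = len(x)
    if rkt(x) <= n ** eps:
        return "YES"       # x in YES_n: rKt(x) <= n^eps
    if rkt(x) > 0.99 * n:
        return "NO"        # x in NO_n:  rKt(x) > .99 n
    return None            # outside the promise

# Toy stand-in oracle: pretend all-zero strings have tiny rKt and
# everything else has nearly maximal rKt (for illustration only).
toy_rkt = lambda s: 2.0 if set(s) <= {"0"} else 0.995 * len(s)

assert gap_mrktp("0" * 256, toy_rkt) == "YES"   # 2.0 <= 256**0.5 = 16
assert gap_mrktp("0110" * 64, toy_rkt) == "NO"  # 254.7 > 0.99 * 256
```

The wide gap between n^ε and .99n is what makes the hardness result strong: even this coarse distinction cannot be made in randomized quasi-polynomial time.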

27. Main Lemmas
   - Lemma 1. For every ε > 0, BPE ≤_{P/poly} Gap-MrKtP[n^ε, .99n]. (A very strong non-uniform inclusion.)
   - Lemma 2. For every ε > 0, PSPACE ⊆ BPP^{Gap-MrKtP[n^ε, .99n]}. (A strong uniform inclusion.)
   - Lemma 3. If n ≤ s(n) ≤ 2^{o(n)}, then DSPACE[s^3] ⊄ Circuit[s]. (The nexus between the uniform and non-uniform inclusions.)
