Imperfect Gaps in Gap-ETH and PCPs


Imperfect Gaps in Gap-ETH and PCPs
Mitali Bafna (Harvard), Nikhil Vyas (MIT)

Table of contents
1. Introduction
2. Gap-ETH and Perfect Completeness
3. PCPs and Perfect Completeness

Introduction: Main Motivations. We study the role of perfect completeness in Gap-ETH and in PCPs.


Proof Sketch

Lemma. For a large enough constant k, there exists a randomized reduction from MAX 3-SAT(.99, .97) on n variables and O(n) clauses to MAX 3k-CSP(1, 1/2) on n variables and O(n) clauses, such that:
• YES instances reduce to YES instances with probability ≥ 2^{-n/k}.
• NO instances reduce to NO instances with probability ≥ 1 - 2^{-n}.

Proof sketch:
• MAX 3k-CSP(1, 1/2) on n variables and O(n) clauses can be converted to MAX 3-SAT(1, 1 - Ω_k(1)) on n′ = O_k(n) variables and clauses.
• Run the above reduction 2^{n/k} · n^2 times.
• Run the 2^{o(n′)} algorithm on the MAX 3-SAT(1, 1 - Ω_k(1)) instances produced, and output YES if the algorithm outputs YES on any of the instances.
• Total running time: 2^{n/k} · n^2 · 2^{o(n′)} = 2^{n/k + o(n)} ≤ 2^{δn} for a large enough constant k.
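As a sanity check on the repetition step above, the following toy computation (not from the slides; the helper name and the sample values of n and k are illustrative choices) bounds the failure probabilities of running the randomized reduction about 2^{n/k} · n^2 times: the YES side fails only if every repetition misses, and the NO side fails only via a union bound over repetitions.

```python
import math

def repetition_bounds(n: int, k: int):
    """Failure bounds for repeating the randomized reduction T times (toy)."""
    p_yes = 2.0 ** (-n / k)            # per-trial prob. a YES instance stays YES
    p_no_bad = 2.0 ** (-n)             # per-trial prob. a NO instance goes bad
    T = (2 ** math.ceil(n / k)) * n ** 2   # number of independent repetitions
    # YES side: every trial misses with prob. (1 - p_yes)^T <= exp(-p_yes * T).
    yes_failure = math.exp(-p_yes * T)
    # NO side: union bound over the T trials, <= T * 2^{-n}.
    no_failure = T * p_no_bad
    return T, yes_failure, no_failure

if __name__ == "__main__":
    for n in (60, 90, 120):                # illustrative instance sizes
        T, yf, nf = repetition_bounds(n, k=6)
        print(f"n={n}: T={float(T):.2e}, P[miss all YES] <= {yf:.2e}, "
              f"P[some false YES] <= {nf:.2e}")
```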

Derandomization using samplers
• One-sided derandomization using samplers. We use the Lovász Local Lemma (LLL) to handle the completeness case.
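A minimal illustration of the sampler property being used, with uniformly random reads standing in for the explicit samplers the slide refers to (and leaving the LLL step out entirely): a gate that reads d positions of a layer sees roughly the layer's true fraction of 1's, and the chance of a large misestimate drops quickly as d grows. The parameter values below are illustrative.

```python
import random

def sampler_failure_rate(m=20000, ones_frac=0.9, d=60, slack=0.1, trials=2000):
    """Empirical prob. that a size-d random sample of a layer with a
    `ones_frac` fraction of 1's misestimates that fraction by more than `slack`."""
    layer = [1] * int(m * ones_frac) + [0] * (m - int(m * ones_frac))
    bad = 0
    for _ in range(trials):
        sample = random.choices(layer, k=d)   # d reads into the layer
        if abs(sum(sample) / d - ones_frac) > slack:
            bad += 1
    return bad / trials

if __name__ == "__main__":
    for d in (20, 60, 120):
        print(f"fan-in d={d:3d}: empirical failure rate ~ "
              f"{sampler_failure_rate(d=d):.4f}")
```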

PCPs and Perfect Completeness

Definition of PCPs

PCP_{c,s}[r, q] with proof size n:
• YES (x ∈ L): ∃ Π, Pr_i[Q_i(Π) = 1] ≥ c
• NO (x ∉ L): ∀ Π, Pr_i[Q_i(Π) = 1] ≤ s
[Figure: the verifier's m = 2^r checks Q_1, Q_2, …, Q_j, …, Q_m, each reading q bits of the proof Π_1, …, Π_n.]
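A toy rendering of the definition above, to fix ideas: m = 2^r local checks Q_i, each reading q bits of a proof Π, with the acceptance probability Pr_i[Q_i(Π) = 1] computed by enumerating the checks. The parity predicate, the class and function names, and the parameter values are made up for illustration.

```python
import random
from typing import Callable, List, Sequence

class Check:
    """One local check Q_i: the q proof positions it reads and its predicate."""
    def __init__(self, positions: Sequence[int],
                 predicate: Callable[[Sequence[int]], bool]):
        self.positions = positions
        self.predicate = predicate

    def __call__(self, proof: Sequence[int]) -> bool:
        return self.predicate([proof[p] for p in self.positions])

def acceptance_probability(checks: List[Check], proof: Sequence[int]) -> float:
    """Pr over a uniformly random i that Q_i(proof) = 1, with m = 2^r checks."""
    return sum(check(proof) for check in checks) / len(checks)

if __name__ == "__main__":
    n, r, q = 64, 8, 3                     # proof size, randomness, query count
    proof = [random.randint(0, 1) for _ in range(n)]
    # Made-up checks: q random positions each, accepting iff their XOR is 0.
    checks = [Check(random.sample(range(n), q),
                    lambda bits: sum(bits) % 2 == 0)
              for _ in range(2 ** r)]
    print("Pr_i[Q_i(Pi) = 1] =", acceptance_probability(checks, proof))
```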

PCP results
• PCP theorem [ALMSS]: For some constant s < 1,
  NTIME[O(n)] ⊆ PCP_{1,s}[O(log n), O(1)].
• Almost-linear proofs [Ben-Sasson, Sudan] and [Dinur]:
  NTIME[O(n)] ⊆ PCP_{1,s}[log n + O(log log n), O(1)].
• Linear-sized PCP with long queries [BKKMS’13]:
  NTIME[O(n)] ⊆ PCP_{1,1/2}[log n + O_ϵ(1), n^ϵ], with an O_ϵ(n) proof size.

Linear-Sized PCP Conjecture
Conjecture (Linear-sized PCP conjecture). NTIME[O(n)] has linear-sized PCPs, i.e.
NTIME[O(n)] ⊆ PCP_{1,s}[log n + O(1), O(1)] for some constant s < 1.

Our Question
• What is the role of completeness in PCPs? Can one build better PCPs with imperfect completeness?
• Can we convert a PCP with imperfect completeness into one with perfect completeness in a black-box manner?

Ways to transfer gap
• One can just apply the best known PCPs for NTIME[O(n)]; for example,
  MAX 3-SAT(.99, .97) ∈ PCP_{1, 1-Ω(1)}[log n + O(log log n), O(1)].
• Bellare, Goldreich and Sudan [1] studied many such black-box reductions between PCP classes. Their result for transferring the gap to 1:
  PCP_{c,s}[r, q] ≤_R PCP_{1, rs/c}[r, qr/c].

Our Result
Gap-Transfer theorem. We show a black-box way to transfer a PCP with imperfect completeness to one with perfect completeness, incurring a small loss in query complexity while maintaining the other parameters of the original PCP.
From now on, we will take (c, s) = (9/10, 6/10). Let L have a PCP with c = 0.9, s = 0.6, with total verifier queries = m. We will show how to build a new proof system (specifying the proof bits and the verifier's queries) for L that has completeness 1 and soundness < 1.

A Robust Circuit using Thresholds
[Figure: the verifier's checks C_1, C_2, …, C_j, …, C_m read the proof Π_1, …, Π_n; above them sit log m layers of constant-fan-in Thr_{0.8} gates, with (Thr_{0.8})_1, …, (Thr_{0.8})_{m/2} in the first layer, feeding a single Thr_{0.8} gate at the top.]
We can derandomize this using samplers.
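A minimal sketch of the threshold tree, with uniformly random wiring standing in for the explicit samplers named on the slide: each Thr_{0.8} gate reads a constant number of wires from the layer below and fires iff at least a 0.8 fraction of them are 1, and the layer width halves until a single top gate remains. The fan-in of 10 and the leaf vectors in the demo are illustrative choices; with random wiring the top gate is 1 only with high probability in the completeness case, which is what the samplers and the LLL mentioned earlier are used to fix.

```python
import random

FAN_IN = 10          # illustrative constant fan-in ("O(1)" on the slide)
THRESHOLD = 0.8      # a Thr_0.8 gate fires iff >= 80% of its inputs are 1

def thr_gate(inputs):
    return 1 if sum(inputs) >= THRESHOLD * len(inputs) else 0

def eval_threshold_tree(leaf_bits, rng=random):
    """Evaluate log(m) layers of Thr_0.8 gates over the leaf bits C_1..C_m.

    Each gate reads FAN_IN randomly chosen wires from the layer below (the
    real construction replaces the random wires by explicit samplers).
    Returns the fraction of 1's at every layer and the top gate's value."""
    layer = list(leaf_bits)
    fractions = [sum(layer) / len(layer)]
    while len(layer) > 1:
        width = max(1, len(layer) // 2)        # halve the layer each level
        layer = [thr_gate(rng.choices(layer, k=FAN_IN)) for _ in range(width)]
        fractions.append(sum(layer) / len(layer))
    return fractions, layer[0]

if __name__ == "__main__":
    m = 4096
    good = [1] * int(0.9 * m) + [0] * (m - int(0.9 * m))  # completeness-like leaves
    bad = [1] * int(0.6 * m) + [0] * (m - int(0.6 * m))   # soundness-like leaves
    for name, leaves in (("90% ones", good), ("60% ones", bad)):
        fracs, top = eval_threshold_tree(leaves)
        print(f"{name}: layer fractions of 1's = "
              f"{[round(f, 3) for f in fracs]}, top gate = {top}")
```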

Increasing the fraction of 1’s
[Figure: the same threshold tree in the completeness case; at the leaves C_1, …, C_m the fraction of 0's is < .1, after the first Thr_{0.8} layer it is < .1/2, after layer i it is < .1/2^i, and at the top gate the fraction of 0's is 0.]

Maintaining the fraction of 1’s
[Figure: the same threshold tree in the soundness case; at the leaves C_1, …, C_m the fraction of 1's is < 6/10, and it stays < 7/10 at every Thr_{0.8} layer, up to and including the top gate.]
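A small numeric check of the two invariants pictured above, under the simplifying assumption that each Thr_{0.8} gate reads d independent wires from the layer below (the actual construction uses samplers to make this deterministic): starting from a 0.9 fraction of 1's, the fraction of 0's shrinks layer by layer, while starting from a 0.6 fraction of 1's it never climbs past 7/10. The fan-in d = 10 and the function names are illustrative.

```python
from math import comb

D = 10        # illustrative fan-in of each Thr_0.8 gate
T = 8         # the gate fires iff >= 0.8 * D = 8 of its D inputs are 1

def next_layer_fraction(p: float) -> float:
    """Expected fraction of 1's one layer up, if each gate reads D independent
    wires that carry a 1 with probability p (a binomial tail at 0.8 * D)."""
    return sum(comb(D, j) * p ** j * (1 - p) ** (D - j) for j in range(T, D + 1))

def trace(p0: float, layers: int = 12):
    fracs, p = [p0], p0
    for _ in range(layers):
        p = next_layer_fraction(p)
        fracs.append(p)
    return [round(x, 4) for x in fracs]

if __name__ == "__main__":
    print("start with 90% ones (completeness case):", trace(0.9))
    print("start with 60% ones (soundness case):   ", trace(0.6))
```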

Final PCP
[Figure: the threshold tree of Thr_{0.8} gates over the checks C_1, …, C_m, which read the proof Π_1, …, Π_n.]
In a single query, we will verify all included gates: check whether each gate’s output is consistent with its inputs, and that the top gate evaluates to 1.
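One plausible reading of this verifier, sketched below with several made-up details (the proof layout, the fan-in, and the names build_tree and verify_once are not from the slides): the new proof contains the original proof together with a claimed value for every gate; on one check the verifier reads the bits of one leaf check C_i, then walks the path to the root, reading each gate's claimed inputs and output, checking consistency, and finally requiring the top gate to be 1. This costs roughly q + O(1) · log m = q + O(log m) bits, matching the parameters slide.

```python
import random

FAN_IN = 4          # illustrative fan-in; "O(1)" on the slides
THRESHOLD = 0.8

def thr(bits):
    return 1 if sum(bits) >= THRESHOLD * len(bits) else 0

def build_tree(m):
    """Group each layer into blocks of FAN_IN nodes until one root remains.
    Returns children[gate] -> child node ids and parent[node] -> its gate."""
    children, parent = {}, {}
    layer, next_id = [("leaf", i) for i in range(m)], 0
    while len(layer) > 1:
        new_layer = []
        for b in range(0, len(layer), FAN_IN):
            gate, next_id = ("gate", next_id), next_id + 1
            kids = layer[b:b + FAN_IN]
            children[gate] = kids
            for kid in kids:
                parent[kid] = gate
            new_layer.append(gate)
        layer = new_layer
    return children, parent, layer[0]           # layer[0] is the top gate

def verify_once(leaf_values, gate_claims, children, parent, root):
    """One run of the toy final verifier: read one leaf check (standing in for
    the q proof bits of one C_i), then walk to the root, reading each gate's
    claimed inputs and output, checking consistency, and that the top is 1."""
    bits_read = 1
    node = ("leaf", random.randrange(len(leaf_values)))
    while node != root:
        gate = parent[node]
        kids = children[gate]
        kid_vals = [leaf_values[k[1]] if k[0] == "leaf" else gate_claims[k]
                    for k in kids]
        bits_read += len(kids) + 1              # the gate's inputs and its output
        if gate_claims[gate] != thr(kid_vals):  # gate inconsistent with its inputs
            return False, bits_read
        node = gate
    return gate_claims[root] == 1, bits_read    # top gate must evaluate to 1

if __name__ == "__main__":
    m = 256
    leaves = [1] * m     # an honest, fully satisfying pattern of check outcomes;
                         # with only a 0.9 fraction of 1's the wiring has to be
                         # chosen via samplers so no gate sees too many 0's
    children, parent, root = build_tree(m)
    claims = {}          # honest prover: fill in the true value of every gate
    def fill(node):
        if node[0] == "leaf":
            return leaves[node[1]]
        claims[node] = thr([fill(k) for k in children[node]])
        return claims[node]
    fill(root)
    accepted, bits = verify_once(leaves, claims, children, parent, root)
    print(f"accepted: {accepted}, bits read: {bits} (roughly q + O(log m))")
```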

Parameters of the Reduction
This gives us a PCP with the following properties:
• Completeness: 1
• Soundness: 9/10
• Queries: q + O(log m) = q + O(r) (since m = 2^r)
• Randomness complexity: r (stays the same)
• Size: O(m)

Main theorem
Theorem. For all constants c, s, s′ ∈ (0, 1) with s < c, we have that
PCP_{c,s}[r, q] ⊆ PCP_{1,s′}[r + O(1), q + O(r)].
Theorem. We have a similar “randomized reduction” between PCP classes, where the new randomness and query complexities have a better dependence on the initial r, q:
PCP_{c,s}[r, q] ≤_R PCP_{1,s′}[r + O(1), q + O(log r)].

Comparison to Best-Known PCPs
We get the following result for NTIME[O(n)]:
Corollary. For all constants c, s, s′, if NTIME[O(n)] ⊆ PCP_{c,s}[log n + O(1), q], then
NTIME[O(n)] ⊆ PCP_{1,s′}[log n + O(1), q + O(log n)].
While the current best known linear-sized PCP is:
NTIME[O(n)] ⊆ PCP_{1,s}[log n + O_ϵ(1), n^ϵ].

Conclusion
• Our results imply that building linear-sized PCPs with minimal queries for NTIME[O(n)] and perfect completeness should be nearly as hard (or easy!) as building linear-sized PCPs with minimal queries for NTIME[O(n)] and imperfect completeness.
• We show the equivalence of Gap-ETH under perfect and imperfect completeness, i.e. Max-3SAT with perfect completeness has 2^{o(n)} randomized algorithms iff Max-3SAT with imperfect completeness has 2^{o(n)} algorithms.

Open Problems
• Can we derandomize the reduction from Gap-ETH without perfect completeness to Gap-ETH?
• A query reduction on our result for PCPs, using [Dinur], gives the following.
  Corollary. If NTIME[O(n)] ⊆ PCP_{c,s}[log n, O(1)], then NTIME[O(n)] ⊆ PCP_{1,s′}[log n + O(log log n), O(1)].
  This is what one gets using the current PCPs for NTIME[O(n)]. Can one prove that
  PCP_{c,s}[log n + O(1), O(1)] ⊆ PCP_{1,s′}[log n + o(log log n), O(1)]?
• Black-box reductions to get better parameters for MAX k-CSP? Currently we know hardness for MAX k-CSP(1, 2^{O(k^{1/3})}/2^k) for satisfiable instances, whereas for unsatisfiable instances we know hardness for MAX k-CSP at soundness 2k/2^k (which is tight up to constant factors).
