Imperfect Gaps in Gap-ETH and PCPs. Mitali Bafna, Nikhil Vyas.


slide-1
SLIDE 1

Imperfect Gaps in Gap-ETH and PCPs

Mitali Bafna (Harvard), Nikhil Vyas (MIT)

slide-2
SLIDE 2

Table of contents

  • 1. Introduction
  • 2. Gap-ETH and Perfect Completeness
  • 3. PCPs and Perfect Completeness

1

slide-3
SLIDE 3

Introduction

slide-4
SLIDE 4

Main Motivations

We study the role of perfect completeness:

  • Hardness/easiness of finding approximate solutions to satisfiable CSPs as compared to unsatisfiable ones?
  • Is it easier to build PCPs with imperfect completeness as compared to perfect completeness?

2

slide-8
SLIDE 8

Gap-ETH and Perfect Completeness

slide-9
SLIDE 9

Constraint Satisfaction Problems (CSPs)

MAX k-CSP(c, s): Given a width-k Boolean CSP, the problem of deciding whether

  • there exists an assignment satisfying more than a c-fraction of the clauses, or
  • every assignment satisfies at most an s-fraction of the clauses.

We will also refer to this as Gap-k-CSP. For this presentation, we will think of Gap-CSPs on n variables and m = O(n) clauses.

3
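As a toy illustration of this promise problem (a sketch of my own, not part of the talk), one can brute-force the best satisfiable fraction on tiny instances and decide the gap; the clause encoding and function names below are illustrative assumptions.

```python
from itertools import product

def max_sat_fraction(n, clauses):
    # A clause is a list of (variable index, wanted value) literals; it is
    # satisfied iff at least one literal matches the assignment (SAT-style).
    best = 0.0
    for assignment in product([0, 1], repeat=n):
        sat = sum(
            any(assignment[v] == want for v, want in clause)
            for clause in clauses
        )
        best = max(best, sat / len(clauses))
    return best

def gap_csp_decide(n, clauses, c, s):
    # Promise decision for MAX k-CSP(c, s): YES if some assignment satisfies
    # more than a c-fraction of clauses, NO if every assignment satisfies at
    # most an s-fraction; instances in between violate the promise.
    frac = max_sat_fraction(n, clauses)
    if frac > c:
        return "YES"
    if frac <= s:
        return "NO"
    return "OUTSIDE PROMISE"
```

Brute force takes 2^n time, which is exactly what the Gap-ETH conjecture (next slides) says cannot be beaten by more than a subexponential factor.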


slide-15
SLIDE 15

Our Problems

Problem (1): Is MAX 3-SAT(1, .98) “easier” than MAX 3-SAT(.99, .97)?

4

slide-16
SLIDE 16

The Gap-ETH Conjecture

Conjecture (Gap-ETH, Dinur’16 and MR’17). For some constant τ > 0, MAX 3-SAT(1, 1 − τ) does not have a 2^{o(n)} randomized algorithm.

Conjecture (Gap-ETH without perfect completeness). For some constants ϵ > γ > 0, MAX 3-SAT(1 − γ, 1 − ϵ) does not have a 2^{o(n)} randomized algorithm.

5


slide-18
SLIDE 18

Equivalence of Gap-ETH conjectures

Theorem. The Gap-ETH conjecture is equivalent to the Gap-ETH conjecture without perfect completeness, i.e. for all constants τ > 0, MAX 3-SAT(1, 1 − τ) has a 2^{o(n)} time algorithm ⇐⇒ for all constants ϵ > γ > 0, MAX 3-SAT(1 − γ, 1 − ϵ) has a 2^{o(n)} time algorithm.

We will present:

Theorem. If for all constants τ > 0, MAX 3-SAT(1, 1 − τ) has a 2^{o(n)} time randomized algorithm, then for all constants δ > 0, MAX 3-SAT(.99, .97) has a 2^{δn} time randomized algorithm.

6


slide-20
SLIDE 20

Proof Sketch

Lemma. For a large enough constant k, there exists a randomized reduction from MAX 3-SAT(.99, .97) on n variables and O(n) clauses to MAX 3k-CSP(1, 1/2) on n variables and O(n) clauses, such that:

  • YES instances reduce to YES instances with probability ≥ 2^{−n/k}.
  • NO instances reduce to NO instances with probability ≥ 1 − 2^{−n}.

7

slide-21
SLIDE 21

Getting Perfect Completeness starting from a YES case

[Figure: the variables x1, …, xn feed the clauses C1, …, Cm; each output gate (Thr0.98)i applies a threshold Thr0.98 to k randomly sampled clauses.]

In a YES case the fraction of 1’s among the clause outputs is > .99, so each gate fails with probability Pr[Thr0.98 = 0] ≤ 2^{−Ω(k)}. Note that this gives us a 3k-CSP. With probability ≥ 2^{−n/k}, all gates evaluate to 1 (fraction of 1’s = 1).

8
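The threshold gadget above can be sanity-checked numerically. This Monte Carlo sketch is my own illustration (parameter values are assumptions): each Thr0.98 gate reads k sampled clause outcomes, and in the YES case each outcome is 1 with probability > .99.

```python
import random

def thr_gate(bits, tau=0.98):
    # Thr_tau outputs 1 iff at least a tau fraction of its inputs are 1.
    return int(sum(bits) >= tau * len(bits))

def gate_failure_rate(p_sat, k, trials=20000, seed=0):
    # Estimate Pr[Thr_0.98 = 0] when each of the k sampled clauses is
    # satisfied independently with probability p_sat.
    rng = random.Random(seed)
    fails = sum(
        thr_gate([int(rng.random() < p_sat) for _ in range(k)]) == 0
        for _ in range(trials)
    )
    return fails / trials
```

With p_sat = .995 and k = 400 the estimated failure rate is tiny, matching Pr[Thr0.98 = 0] ≤ 2^{−Ω(k)}; with p_sat = .5 the gate almost never fires, which is the behavior exploited in the NO case on the next slide.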


slide-27
SLIDE 27

Soundness starting from a NO case

[Figure: the same construction applied to a NO instance; each gate (Thr0.98)i reads k randomly sampled clauses.]

In a NO case the fraction of 1’s among the clause outputs is < .97, so each gate satisfies Pr[Thr0.98 = 1] ≤ 2^{−Ω(k)}. With probability ≥ 1 − 2^{−n}, the fraction of gates evaluating to 1 is < 1/2.

9


slide-32
SLIDE 32

Proof Sketch

Lemma. For a large enough constant k, there exists a randomized reduction from MAX 3-SAT(.99, .97) on n variables and O(n) clauses to MAX 3k-CSP(1, 1/2) on n variables and O(n) clauses, such that:

  • YES instances reduce to YES instances with probability ≥ 2^{−n/k}.
  • NO instances reduce to NO instances with probability ≥ 1 − 2^{−n}.
  • MAX 3k-CSP(1, 1/2) on n variables and O(n) clauses can be converted to MAX 3-SAT(1, 1 − Ω_k(1)) on n′ = O_k(n) variables and clauses.
  • Run the above reduction 2^{n/k} · n^2 times.
  • Run the 2^{o(n′)} algorithm on the MAX 3-SAT(1, 1 − Ω_k(1)) instances and output YES if the algorithm outputs YES on any of the produced instances.
  • Total running time: 2^{n/k} · n^2 · 2^{o(n′)} = 2^{n/k + o(n)} ≤ 2^{δn} for a large enough constant k.

10
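The repetition step above can be sketched schematically. In this sketch of mine, `reduce_once` and `gap_solver` are hypothetical stand-ins for the randomized reduction and the assumed 2^{o(n′)}-time algorithm, respectively.

```python
import math

def amplified_decide(instance, n, k, reduce_once, gap_solver):
    # Repeat the randomized reduction 2^{n/k} * n^2 times; a YES instance
    # is mapped to a perfectly-complete YES instance in some repetition
    # with high probability, while NO instances are (w.h.p.) never mapped
    # to YES instances, so one accepting run suffices.
    trials = math.ceil(2 ** (n / k)) * n * n
    return any(gap_solver(reduce_once(instance)) for _ in range(trials))
```

Since each repetition succeeds on a YES instance with probability ≥ 2^{−n/k}, running 2^{n/k} · n^2 independent trials drives the failure probability to e^{−Ω(n^2)}, at a total cost of 2^{n/k + o(n)} ≤ 2^{δn} time.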


slide-37
SLIDE 37

Derandomization using samplers

  • One-sided derandomization using samplers. We use the Lovász Local Lemma (LLL) to handle the completeness case.

11

slide-38
SLIDE 38

PCPs and Perfect Completeness

slide-39
SLIDE 39

Definition of PCPs

PCPc,s[r, q] with proof size n:

[Figure: a verifier with r random bits makes q queries into a proof Π1, …, Πn; the random string selects one of the m = 2^r local checks Q1, …, Qm.]

YES (x ∈ L): ∃ Π, Pri[Qi(Π) = 1] ≥ c.
NO (x ∉ L): ∀ Π, Pri[Qi(Π) = 1] ≤ s.

12
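The definition can be made concrete with a tiny exhaustive verifier model (my own encoding, not from the talk): enumerate all 2^r random strings, and for each one query the proof and apply the local predicate.

```python
from itertools import product

def acceptance_prob(proof, r, verifier):
    # verifier(coins) -> (positions, predicate): the q proof positions to
    # read and the local predicate applied to those proof bits. The
    # acceptance probability averages the predicate over all 2^r coins.
    accept = 0
    for coins in product([0, 1], repeat=r):
        positions, predicate = verifier(coins)
        accept += int(predicate([proof[i] for i in positions]))
    return accept / 2 ** r

def demo_verifier(coins):
    # Toy check on a 4-bit proof: read the single bit indexed by the two
    # coins and accept iff it equals 1 (so q = 1, r = 2).
    idx = 2 * coins[0] + coins[1]
    return [idx], lambda bits: bits[0] == 1
```

A language is in PCPc,s[r, q] if some verifier of this shape separates acceptance probability ≥ c (YES) from ≤ s (NO).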


slide-44
SLIDE 44

PCP results

  • PCP theorem [ALMSS]: For some constant s < 1, NTIME[O(n)] ⊆ PCP1,s[O(log n), O(1)].
  • Almost-linear proofs [Ben-Sasson, Sudan] and [Dinur]: NTIME[O(n)] ⊆ PCP1,s[log n + O(log log n), O(1)].
  • Linear-sized PCP with long queries [BKKMS’13]: NTIME[O(n)] ⊆ PCP1,1/2[log n + O_ϵ(1), n^ϵ], with an O_ϵ(n) proof size.

13


slide-48
SLIDE 48

Linear-Sized PCP conjecture

Conjecture (Linear-sized PCP conjecture) NTIME[O(n)] has linear-sized PCPs, i.e. NTIME[O(n)] ⊆ PCP1,s[log n + O(1), O(1)] for some constant s < 1.

14

slide-49
SLIDE 49

Our Question

  • What is the role of completeness in PCPs? Can one build better PCPs with imperfect completeness?
  • Can we convert an imperfect PCP to a perfect-completeness PCP in a blackbox manner?

15


slide-52
SLIDE 52

Ways to transfer gap

  • One can just apply the best known PCPs for NTIME[O(n)]; for example MAX 3-SAT(.99, .97) ∈ PCP1,1−Ω(1)[log n + O(log log n), O(1)].
  • Bellare, Goldreich, and Sudan [1] studied many such black-box reductions between PCP classes. Their result for transferring the gap to 1: PCPc,s[r, q] ≤R PCP1,rs/c[r, qr/c].

16


slide-56
SLIDE 56

Our Result

Gap-Transfer theorem. We show a blackbox way to transfer a PCP with imperfect completeness to one with perfect completeness, incurring a small loss in the query complexity while maintaining the other parameters of the original PCP. From now on, we will take (c, s) = (9/10, 6/10). Let L have a PCP with c = 0.9, s = 0.6, and total verifier queries m. We will show how to build a new proof system (specify proof bits and verifier queries) for L that has completeness 1 and soundness < 1.

17


slide-60
SLIDE 60

A Robust Circuit using Thresholds

[Figure: the proof bits Π1, …, Πn feed the verifier checks C1, …, Cm; a first layer of m/2 constant (O(1)) fan-in Thr0.8 gates reads the check outputs, and further Thr0.8 layers halve the width until a single output gate remains, after log m layers.]

We can derandomize this using samplers.

18
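The layered circuit can be sketched as follows. This is my own simplified, non-derandomized version (the fan-in of 10 is an illustrative assumption; the actual construction wires gates via samplers).

```python
def thr_gate(bits, tau=0.8):
    # Thr_0.8 outputs 1 iff at least a 0.8 fraction of its inputs are 1.
    return int(sum(bits) >= tau * len(bits))

def threshold_circuit(check_bits, fan_in=10):
    # Stack Thr_0.8 layers, each shrinking the number of wires by the
    # fan-in, until a single output gate remains (about log m layers).
    # Returns every layer's wire values, input layer first.
    layers = [list(check_bits)]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([
            thr_gate(prev[i:i + fan_in]) for i in range(0, len(prev), fan_in)
        ])
    return layers
```

The point of the robustness (next slides) is that a mostly-1 input layer drives the single output gate to 1, while a noticeably-corrupted input layer keeps it from reaching 1.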


slide-67
SLIDE 67

Increasing fraction of 1’s

[Figure: the threshold circuit from the previous slide, in the YES case.]

At the input layer the fraction of 0’s is < .1; after each Thr0.8 layer it halves, so after layer i the fraction of 0’s is < .1/2^i, and after the log m layers the fraction of 0’s is 0.

19


slide-73
SLIDE 73

Maintaining fraction of 1’s

[Figure: the threshold circuit, in the NO case.]

The fraction of 1’s stays bounded through the log m layers: starting from an input layer with fraction of 1’s < 6/10, each Thr0.8 layer keeps the fraction of 1’s < 7/10.

20


slide-78
SLIDE 78

Final PCP

In a single query, we will verify all included gates: check whether each gate’s output is consistent with its inputs and whether the top gate evaluates to 1.

[Figure: the proof Π1, …, Πn, the checks C1, …, Cm, and the layered Thr0.8 circuit above them.]

21


slide-80
SLIDE 80

Parameters of the Reduction

This gives us a PCP that has the following properties:

  • Completeness: 1
  • Soundness: 9/10
  • Queries: q + O(log m) = q + O(r)
  • Randomness complexity: r (stays the same)
  • Size: O(m)

22


slide-87
SLIDE 87

Main theorem

Theorem. For all constants c, s, s′ ∈ (0, 1) with s < c, we have that PCPc,s[r, q] ⊆ PCP1,s′[r + O(1), q + O(r)].

We have a similar “randomized reduction” between PCP classes where the new randomness and query complexities have better dependence on the initial r, q.

Theorem. For all constants c, s, s′ ∈ (0, 1) with s < c, we have that PCPc,s[r, q] ≤R PCP1,s′[r + O(1), q + O(log r)].
23


slide-91
SLIDE 91

Comparison to Best-Known PCPs

We get the following result for NTIME[O(n)]:

Corollary. For all constants c, s, s′, if NTIME[O(n)] ⊆ PCPc,s[log n + O(1), q], then NTIME[O(n)] ⊆ PCP1,s′[log n + O(1), q + O(log n)].

While the current best known linear-sized PCP is: NTIME[O(n)] ⊆ PCP1,s[log n + O_ϵ(1), n^ϵ].

24


slide-94
SLIDE 94

Conclusion

  • Our results imply that building linear-sized PCPs with minimal queries for NTIME[O(n)] and perfect completeness should be nearly as hard (or easy!) as building linear-sized PCPs with minimal queries for NTIME[O(n)] and imperfect completeness.
  • We show the equivalence of Gap-ETH under perfect and imperfect completeness, i.e. Max-3SAT with perfect completeness has 2^{o(n)} randomized algorithms iff Max-3SAT with imperfect completeness has 2^{o(n)} algorithms.

25


slide-97
SLIDE 97

Open Problems

  • A query reduction on our result for PCPs, using [Dinur], gives that:

Corollary. If NTIME[O(n)] ⊆ PCPc,s[log n, O(1)], then NTIME[O(n)] ⊆ PCP1,s′[log n + O(log log n), O(1)]. This is what one gets using the current PCPs for NTIME[O(n)]. Can one prove that PCPc,s[log n + O(1), O(1)] ⊆ PCP1,s′[log n + o(log log n), O(1)]?

  • Can we derandomize the reduction from Gap-ETH without perfect completeness to Gap-ETH?
  • Blackbox reductions to get better parameters for MAX k-CSP? Currently we know hardness of MAX k-CSP(1, 2^{O(k^{1/3})}/2^k) for satisfiable instances, whereas for unsatisfiable instances we know hardness of MAX k-CSP(1 − ϵ, 2k/2^k) (which is tight up to constant factors).

26


slide-102
SLIDE 102

Thanks! Questions?

26

slide-103
SLIDE 103

References i

  • M. Bellare, O. Goldreich, and M. Sudan. Free bits, PCPs, and nonapproximability: towards tight results. SIAM J. Comput., 27(3):804–915, 1998.

27