

SLIDE 1

Derandomizing Isolation in Space-Bounded Settings

Dieter van Melkebeek and Gautam Prakriya

University of Wisconsin-Madison

6 July 2017

SLIDE 2

Isolation


SLIDE 5

Isolation

◮ Computational Problem
  Π : instance x → set of solutions for x. E.g.: Satisfiability, Reachability...

◮ An isolation for Π is a map f that satisfies the following for all instances x:
  (1) |Π(f(x))| ≤ 1
  (2) |Π(f(x))| = 0 iff |Π(x)| = 0
  [(3) Π(f(x)) ⊆ Π(x)]

  With (3), f is called a pruning.


SLIDE 7

Motivation

◮ In algorithms:
  ◮ Avoiding cancellations in algebraic settings
  ◮ Coordination in parallel algorithms
  ◮ ...

◮ In complexity theory:
  Used to show problems become no easier when restricted to instances with unique solutions.

◮ All known generic isolations are randomized.


SLIDE 10

Space-Bounded Setting

◮ NL: languages accepted by non-deterministic logspace machines.

◮ UL: languages accepted by unambiguous logspace machines, i.e., non-deterministic machines with at most one accepting path on every input.

◮ Open: NL ⊆ UL?
  ⇐⇒ Reach ∈ UL
  ⇐⇒ Reach has a logspace isolation.


SLIDE 13

Results

Theorem 1
NL ⊆ UTISP(poly(n), O(log^{3/2} n)).

Compare to:
◮ NL ⊆ DSPACE(O(log^2 n)) [Savitch].
◮ Reach ∈ DTISP(poly(n), n/2^{√(log n)}) [Barnes et al.].

Theorem 2
NL ⊆ L/poly if there is a logspace pruning for Reach.

Similar results for LogCFL.


SLIDE 15

Min-Isolation

Min-isolating weight assignment
For G = (V, E), a weight assignment w : V → N is min-isolating if for all s, t ∈ V there is at most one min-weight s-t path.

The Isolation Lemma of Mulmuley, Vazirani and Vazirani implies:
For any graph G, a uniformly random choice of weights from {1, . . . , n^3} is min-isolating with high probability.

◮ Gal and Wigderson: NL ⊆ R · promise-UL
◮ Reinhardt and Allender: NL ⊆ R · (co-UL ∩ UL).
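The Isolation Lemma is easy to check empirically on a small instance. The sketch below is not the paper's construction: it brute-forces all s-t paths of a hypothetical 8-vertex DAG (two "diamonds" in series, so several paths tie in length), draws vertex weights uniformly from {1, . . . , n^3}, and counts how often the min-weight path is unique.

```python
import random

def min_weight_paths(adj, w, s, t):
    """Enumerate all s-t paths in a small DAG and keep those of minimum
    total vertex weight (brute force; fine for toy instances only)."""
    paths = []
    def dfs(v, path):
        if v == t:
            paths.append(path)
            return
        for u in adj.get(v, ()):
            dfs(u, path + [u])
    dfs(s, [s])
    weights = [sum(w[v] for v in p) for p in paths]
    best = min(weights)
    return [p for p, wt in zip(paths, weights) if wt == best]

# Toy DAG with several s-t paths of the same length.
n = 8
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4, 5], 4: [6], 5: [6], 6: [7]}

random.seed(0)
trials = 200
isolated = sum(
    1 for _ in range(trials)
    if len(min_weight_paths(adj, [random.randint(1, n**3) for _ in range(n)], 0, 7)) == 1
)
print(f"unique min-weight 0-7 path in {isolated}/{trials} trials")
```

With weights up to n^3 = 512, a tie requires two of the eight weights to collide, so isolation should fail in well under 1% of trials here.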

SLIDE 16

Derandomization

Key Lemma
There exists a logspace machine M that maps a random seed σ ∈ {0, 1}^{O(log^{3/2} n)} to a weight assignment w : [n] → [2^{O(log^{3/2} n)}] such that for every graph G on n vertices,

  Pr_σ [w is min-isolating for G] ≥ 1 − 1/n.


SLIDE 21

Proof of Key Lemma

[Figure: layered DAG of depth d, partitioned into blocks of 2^k consecutive layers.]

◮ Iteratively build w_0, w_1, . . . , w_{log d}.
◮ Invariant: w_k is
  ◮ min-isolating for blocks of length 2^k,
  ◮ non-zero only on internal vertices.
◮ Start with w_0 ≡ 0. Goal: w_{log d}.
◮ Relevant parameters:
  ◮ Number of random bits R_k for w_k.
  ◮ Maximum path weight W_k for w_k.
◮ Space used by the unambiguous algorithm for Reach is O(R + log W + log n), where R := R_{log d} and W := W_{log d}.


SLIDE 24

Iteration k → k + 1

[Figure: a block of length 2^{k+1} split into halves B_1 and B_2; two paths P_u and P_v run from s ∈ B_1 through distinct middle-layer vertices u, v ∈ M to t ∈ B_2.]

◮ w_{k+1}:
  ◮ Same as w_k on vertices internal to blocks of length 2^k.
  ◮ Additionally assigns weights to L_{k+1}, i.e., the vertices in the middle layers M of the length-2^{k+1} blocks.

◮ Disambiguation requirement:
  w_k(P_u) + w_{k+1}(u) ≠ w_k(P_v) + w_{k+1}(v)
  ∀s ∈ B_1, ∀t ∈ B_2, ∀u ≠ v ∈ M, ∀ blocks B = B_1 ∪ B_2.


SLIDE 26

Shifting k → k + 1

Requirement: w_k(P_u) + w_{k+1}(u) ≠ w_k(P_v) + w_{k+1}(v).

Define w_{k+1}(v) := index(v) · (W_k + 1) for v ∈ L_{k+1}.

In base (W_k + 1), the total w_{k+1}(P_u) = index(u) · (W_k + 1) + w_k(P_u) has high-order digit index(u) and low-order digit w_k(P_u); since index(u) ≠ index(v), the requirement holds.

◮ Parameters:
  R_{k+1} = R_k ⇒ R_k = 0.
  W_{k+1} ≤ n^2 · (W_k + 1) ⇒ W_k = n^{O(k)}.
  R + log W = O(log^2 n).
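The shifting step can be sanity-checked directly: because each middle-layer vertex contributes a distinct multiple of W_k + 1, no choice of old path weights in [0, W_k] makes two totals collide. A minimal sketch with made-up indices for two vertices u and v:

```python
def shift_weights(indices, Wk):
    """Shifting step: middle-layer vertex v gets weight index(v) * (Wk + 1),
    so distinct indices land in a 'digit' above the old weights."""
    return {v: idx * (Wk + 1) for v, idx in indices.items()}

Wk = 9                                       # old maximum path weight
shift = shift_weights({"u": 3, "v": 7}, Wk)  # hypothetical indices for u, v

# Exhaustive check: no pair of old path weights wk(Pu), wk(Pv) in [0, Wk]
# can make the two shifted totals equal.
collisions = [
    (a, b)
    for a in range(Wk + 1)
    for b in range(Wk + 1)
    if a + shift["u"] == b + shift["v"]
]
print(collisions)  # []
```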


SLIDE 30

Hashing k → k + 1

Requirement: w_k(P_u) + w_{k+1}(u) ≠ w_k(P_v) + w_{k+1}(v).

Define w_{k+1}(v) := h(v) for v ∈ L_{k+1}, where h : [n] → [r] is picked uniformly at random from a universal hash family.

◮ h satisfies the disambiguation requirement for a fixed choice of block and vertices s, t, u, v w.p. 1 − 1/r.
◮ h satisfies the requirement for all blocks and vertices s, t, u, v w.p. 1 − 1/n for large enough r = n^{Θ(1)}.
◮ Parameters:
  R_{k+1} = R_k + O(log n) ⇒ R_k = O(k log n).
  W_{k+1} = W_k + n^{O(1)} ⇒ W_k = k · n^{O(1)}.
  R + log W = O(log^2 n).
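The slides leave the universal hash family unspecified; one standard choice is the Carter-Wegman family h(x) = ((a·x + b) mod p) mod r. The sketch below (hypothetical vertices u, v and weight gap, with the tie condition compared mod r for simplicity) estimates how often a random h makes the two path weights tie, which should come out near 1/r.

```python
import random

def sample_hash(rng, p, r):
    """Sample h(x) = ((a*x + b) mod p) mod r from the Carter-Wegman
    universal family; p is a prime exceeding the universe size."""
    a = rng.randrange(1, p)
    b = rng.randrange(p)
    return lambda x: ((a * x + b) % p) % r

p, r = 10007, 100
rng = random.Random(1)

# Bad event for one fixed block and pair u != v: the random hash values make
# the two path weights tie, i.e. h(u) - h(v) hits the fixed gap
# wk(Pv) - wk(Pu). Over random h this happens with probability about 1/r.
u, v, gap = 17, 42, 5        # hypothetical vertices and weight gap
trials = 20000
bad = 0
for _ in range(trials):
    h = sample_hash(rng, p, r)
    if (h(u) - h(v)) % r == gap:
        bad += 1
print(f"empirical bad-event rate: {bad / trials:.4f}")
```

A union bound over the poly(n) many blocks and vertex tuples then drives the overall failure probability below 1/n once r = n^{Θ(1)} is large enough, matching the slide.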


SLIDE 36

Shifting and Hashing k → k′

[Figure: base-b digits of the path weight; the hash values from levels k+1, . . . , k′ occupy disjoint blocks of log b bits above the low-order block holding w_k(P).]

◮ For i ∈ [k′ − k],
  w_{k+i}(v) := h(v) · b^{i−1} for v ∈ L_{k+i},
  where b = n^{Θ(1)} and h is picked from a universal hash family.

◮ Parameters: a single hash function serves all k′ − k levels, so
  R_{k′} = R_k + O(log n).
  W_{k′} = W_k + n^{O(k′−k)}.

◮ Repeating with √(log d) hash functions and k′ − k = √(log d) gives
  R + log W = O(log n · √(log d)) = O(log^{3/2} n).
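The combined rule above can be sketched concretely: reuse one hash function across all levels of a phase, shifting level i's values by b^{i−1} so each level owns its own base-b digit. The vertex labels and level assignment below are hypothetical.

```python
import random

def sample_hash(rng, p, r):
    """h(x) = ((a*x + b) mod p) mod r from a universal family (p prime)."""
    a = rng.randrange(1, p)
    b = rng.randrange(p)
    return lambda x: ((a * x + b) % p) % r

def combined_weights(h, level_of, b):
    """Shifting-and-hashing phase: a middle-layer vertex v at level i of the
    phase gets weight h(v) * b**(i-1), so each level's hash values occupy
    their own block of base-b digits and levels cannot interfere."""
    return {v: h(v) * b ** (i - 1) for v, i in level_of.items()}

rng = random.Random(2)
p, b = 10007, 100   # hash range r = b, so one hash value fits one digit
h = sample_hash(rng, p, b)

# Hypothetical middle-layer vertices at levels 1..3 of one phase.
level_of = {10: 1, 11: 1, 20: 2, 21: 2, 30: 3, 31: 3}
w = combined_weights(h, level_of, b)

# Each level-i weight is a multiple of b**(i-1) and smaller than b**i,
# i.e. it only touches the i-th base-b digit.
ok = all(w[v] % b ** (i - 1) == 0 and w[v] < b ** i for v, i in level_of.items())
print(ok)  # True
```

Because levels sit in disjoint digits, a single O(log n)-bit seed covers a whole phase, which is where R_{k′} = R_k + O(log n) comes from.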


SLIDE 39

Proof of Theorem 1

Theorem 1
NL ⊆ UTISP(poly(n), O(log^{3/2} n)).

◮ Running through all possible choices of hash functions gives NL ⊆ USPACE(O(log^{3/2} n)).
◮ Reduce the running time to poly(n) by picking good hash functions one at a time.
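The time saving comes from fixing the hash functions sequentially instead of enumerating all seed combinations at once. A schematic sketch: `is_good` stands in for the (unambiguously verifiable) check that the partial weight assignment is min-isolating, and the toy predicate is invented purely for illustration.

```python
def pick_seeds(num_phases, seeds_per_phase, is_good):
    """Fix each phase's short seed by exhaustive search before moving on:
    num_phases * seeds_per_phase checks instead of
    seeds_per_phase ** num_phases for enumerating all tuples at once."""
    chosen = []
    for _ in range(num_phases):
        for s in range(seeds_per_phase):
            if is_good(chosen + [s]):
                chosen.append(s)
                break
        else:
            raise RuntimeError("no good seed found for this phase")
    return chosen

# Toy predicate, purely illustrative: a seed is "good" unless it equals the
# current phase index mod 7. The real check tests min-isolation on the graph.
good = lambda prefix: prefix[-1] % 7 != len(prefix) - 1
seeds = pick_seeds(num_phases=5, seeds_per_phase=16, is_good=good)
print(seeds)  # [1, 0, 0, 0, 0]
```

With √(log d) phases of 2^{O(log n)} seeds each, the sequential search stays polynomial, while the space stays O(log^{3/2} n) because only the chosen seeds are stored.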


SLIDE 41

Recap of main results

Theorem 1: NL ⊆ UTISP(poly(n), O(log^{3/2} n)).

Theorem 2: NL ⊆ L/poly if either of the following holds:
◮ there is a logspace pruning for Reach;
◮ there is a logspace algorithm that produces a polynomially bounded weight assignment w together with a number equal to the weight of the min-weight s-t path whenever an s-t path exists.

◮ Similar results for LogCFL.
◮ Open questions.

SLIDE 42

Thank You! Questions?