

SLIDE 1

Probabilistic Computation

Lecture 14: BPP, ZPP

SLIDES 2-6 (builds of one slide)

Zoo

[Diagram: the complexity-class zoo (L, NL, P, NP, Σ2P, PSPACE, NPSPACE, EXP, NEXP), with RP and then BPP added in successive builds]

SLIDES 7-14 (builds of one slide)

BPP-Complete Problem?

Not known!

L = { (M, x, 1^t) | M(x) = yes in time t with probability > 2/3 }?

• Is indeed BPP-hard.
• But is it in BPP? Just run M(x) for t steps and accept if it accepts?
• If (M, x, 1^t) ∈ L, we will indeed accept with probability > 2/3.
• But M may not have a bounded gap. Then, if (M, x, 1^t) ∉ L, we may still accept with probability arbitrarily close to 2/3.

SLIDES 15-18 (builds of one slide)

BPTIME-Hierarchy Theorem?

BPTIME(n) ⊊ BPTIME(n^100)?

Not known! But it is true for BPTIME(T)/1, i.e., with one bit of advice.

SLIDES 19-25 (builds of one slide)

Some Probabilistic Algorithmic Concepts

• Sampling to determine some probability
• Checking if the determinant of a symbolic matrix is zero: substitute random values for the variables and evaluate using Gaussian elimination in polynomial time
• Polynomial Identity Testing: the polynomial is given as an arithmetic circuit. Like above, but the values can be too large. So work over a random modulus.
• Random Walks (for sampling)
• Monte Carlo algorithms for calculations
• Reachability tests
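The random-substitution idea behind the determinant and identity-testing bullets can be sketched as follows. This is a minimal black-box version (the polynomial is a callable rather than an arithmetic circuit, and the large-value/random-modulus issue is ignored); the error bound is the Schwartz-Zippel lemma.

```python
import random

def is_probably_zero(poly, num_vars, degree, trials=20):
    """Schwartz-Zippel test: a nonzero polynomial of total degree d
    vanishes at a uniformly random point of S^n with probability <= d/|S|."""
    S = range(3 * degree + 1)     # |S| > 3d keeps each trial's error below 1/3
    for _ in range(trials):
        point = [random.choice(S) for _ in range(num_vars)]
        if poly(*point) != 0:
            return False          # a nonzero value: definitely not the zero polynomial
    return True                   # the zero polynomial, with high probability

# Toy inputs: (x + y)^2 - x^2 - 2xy - y^2 is identically zero; xy - 1 is not.
zero_poly = lambda x, y: (x + y)**2 - x**2 - 2*x*y - y**2
nonzero_poly = lambda x, y: x*y - 1
```

With 20 independent trials the one-sided error drops below (1/3)^20; the test never wrongly declares a nonzero witness.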

SLIDES 26-33 (builds of one slide)

Random Walks

Which nodes does the walk touch, and with what probability? How do these probabilities vary with the number of steps?

Analyzing a random walk:
• Probability vector: p
• Transition probability matrix: M
• One step of the walk: p' = Mp
• After t steps: p(t) = M^t p
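The update p' = Mp can be sketched in a few lines; the triangle graph below is a hypothetical example, with the column-stochastic convention M[i][j] = Pr[move to i | currently at j].

```python
def step(M, p):
    """One step of the walk: p'[i] = sum_j M[i][j] * p[j], i.e. p' = Mp."""
    n = len(p)
    return [sum(M[i][j] * p[j] for j in range(n)) for i in range(n)]

def distribution_after(M, p, t):
    """Distribution after t steps: M^t p, by t single steps."""
    for _ in range(t):
        p = step(M, p)
    return p

# Simple random walk on a triangle, started at node 0.
M = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
p2 = distribution_after(M, [1.0, 0.0, 0.0], 2)
```

After two steps the walk is back at node 0 with probability 1/2 and at each neighbor with probability 1/4.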

SLIDES 34-40 (builds of one slide)

Space-Bounded Probabilistic Computation

• PL, RL, BPL: logspace analogues of PP, RP, BPP
• Note: RL ⊆ NL, RL ⊆ BPL
• Recall NL ⊆ P (because PATH ∈ P)
• So RL ⊆ P
• In fact BPL ⊆ P

SLIDES 41-46 (builds of one slide)

BPL ⊆ P

• Consider the BPL algorithm, on input x, as a random walk over configurations.
• Construct the transition matrix M.
• The size of the configuration graph is poly(n); the probability values are 0, 1/2, and 1.
• Calculate M^t for t = max running time = poly(n).
• Accept if (M^t p_start)_accept > 2/3, where p_start is the probability distribution with all its weight on the start configuration.
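The matrix-powering step above can be sketched with exact arithmetic; repeated squaring needs only O(log t) multiplications, keeping the whole computation polynomial time. The 3-configuration machine below is a hypothetical toy (one coin flip into an accepting or rejecting sink), not taken from the lecture.

```python
from fractions import Fraction

def mat_mul(A, B):
    """Product of two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, t):
    """M^t by repeated squaring: O(log t) matrix multiplications."""
    n = len(M)
    R = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # identity
    while t:
        if t & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        t >>= 1
    return R

def accept_probability(M, start, accept, t):
    """(M^t p_start)_accept, where p_start is the indicator of `start`."""
    return mat_pow(M, t)[accept][start]

# Toy machine: from configuration 0, a coin flip leads to the accepting
# sink 1 or the rejecting sink 2; both sinks are absorbing.
half = Fraction(1, 2)
M = [[0, 0, 0],
     [half, 1, 0],
     [half, 0, 1]]
p = accept_probability(M, start=0, accept=1, t=10)
```

Exact fractions sidestep rounding; in the actual BPL ⊆ P argument the entries are 0, 1/2, and 1, so poly(n)-bit precision also suffices.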

SLIDES 47-55 (builds of one slide)

Zoo

[Diagram: the complexity-class zoo again (L, NL, P, NP, Σ2P, PSPACE, NPSPACE, EXP, NEXP), now with RP, RL, BPL, and BPP added in successive builds]

SLIDES 56-61 (builds of one slide)

Expected Running Time

• The running time is a random variable too, as is the yes/no outcome.
• We may ask for the running time to be polynomial only in expectation, or with high probability.
• Las Vegas algorithms: only the expected running time is polynomial, but whenever the algorithm terminates, it produces the correct answer.
• Zero error probability.

SLIDES 62-68 (builds of one slide)

Zero-Error Computation

e.g., a simple algorithm for finding the median in expected linear time. (There are non-trivial algorithms that do it in deterministic linear time; simple sorting takes O(n log n) time.)

Procedure Find-element(L, k), to find the k-th smallest element in list L:
• Pick a random element x in L. Scan L; divide it into L_{>x} (elements > x) and L_{<x} (elements < x); also determine the position m of x in L.
• If m = k, return x. If m > k, call Find-element(L_{<x}, k); else call Find-element(L_{>x}, k−m).

Correctness is obvious. Expected running time?
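The procedure above (randomized quickselect) can be sketched directly; as on the slide, this version assumes the list elements are distinct.

```python
import random

def find_element(L, k):
    """Return the k-th smallest element (1-indexed) of L.
    Las Vegas: the answer is always correct; only the running
    time is random, with expectation O(len(L))."""
    x = random.choice(L)                 # random pivot
    smaller = [y for y in L if y < x]    # L_{<x}
    larger = [y for y in L if y > x]     # L_{>x}
    m = len(smaller) + 1                 # position of x in sorted order
    if m == k:
        return x
    if m > k:
        return find_element(smaller, k)
    return find_element(larger, k - m)

# Median of a shuffled list of 1..101
L = list(range(1, 102))
random.shuffle(L)
median = find_element(L, 51)
```

Each call does one linear scan and recurses on one side only, which is what makes the expected time linear rather than O(n log n).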

SLIDES 69-76 (builds of one slide)

Zero-Error Computation

Let the expected running time (worst case over all lists of size n and all k) be T(n). The time for the non-recursive operations is linear: say bounded by cn. We will show inductively that T(n) ≤ 4cn (base case n = 1).

T(n) ≤ cn + (1/n) [ Σ_{n≥j>k} T(j) + Σ_{0<j<k} T(n−j) ]

T(n) ≤ cn + (4c/n) [ Σ_{j>k} j + Σ_{j<k} (n−j) ]   (by the inductive hypothesis)

Σ_{j>k} j + Σ_{j<k} (n−j) = Σ_{j>k} j + (k−1)n − Σ_{j<k} j ≤ Σ_j j + (k−1)n − 2 Σ_{j<k} j
≤ n²/2 + (k−1)n − k(k−1) < n²/2 + k(n−k) ≤ (3/4) n²

T(n) ≤ cn + 3cn = 4cn, as required.

SLIDES 77-81 (builds of one slide)

Zero-Error Computation

• Las Vegas algorithms: probabilistic algorithms with a deterministic outcome (but probabilistic running time).
• ZPTIME(T): the class of languages decided by a zero-error probabilistic TM with expected running time at most T.
• ZPP = ZPTIME(poly)
• ZPP = RP ∩ co-RP

SLIDES 82-90 (builds of one slide)

ZPP ⊆ RP

• Truncate after "long enough," and say "no."
• Do we still have bounded (one-sided) error?
• The run exceeds "long enough" only with small probability, because the expected running time is short: with high probability the running time does not exceed the expected running time by much.
• Markov's inequality: Pr[ X ≥ a·E[X] ] ≤ 1/a for non-negative X.
• So Pr[error] is at most 1/a if we truncate after a times the expected running time.
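The Markov bound can be checked empirically. The sketch below uses a hypothetical Las Vegas algorithm whose running time is geometric with expectation 2 rounds; for simplicity the simulation runs it to completion and then discards the answer when the run would have exceeded the budget, which has the same error statistics as truly stopping it.

```python
import random

def las_vegas(x):
    """Hypothetical zero-error algorithm: each round it halts with
    probability 1/2, so E[rounds] = 2. Its answer is always correct."""
    rounds = 1
    while random.random() >= 0.5:
        rounds += 1
    return x, rounds            # (correct answer, time taken)

def truncated(x, a):
    """RP-style version: answer "no" (False) whenever the run would
    exceed a * E[rounds]."""
    answer, rounds = las_vegas(x)
    return answer if rounds <= a * 2 else False

# On a yes-instance, Markov gives Pr[rounds > a*E[rounds]] <= 1/a, so at
# most a 1/a fraction of runs get truncated into a wrong "no".
errors = sum(truncated(True, a=3) is False for _ in range(10000))
```

Here the true truncation probability is (1/2)^6 ≈ 1.6%, comfortably inside the Markov bound of 1/3; "yes" answers are never wrong, so the error stays one-sided.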

SLIDES 91-92 (builds of one slide)

RP ∩ co-RP ⊆ ZPP

If L ∈ RP ∩ co-RP, then a ZPP algorithm for L:
• Run both the RP and the co-RP algorithms.
• If the former says yes, or the latter says no, output that answer.
• Else, i.e., if the former says no and the latter says yes, repeat.
Expected number of repeats = O(1).
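The repeat loop above can be sketched as follows. The two algorithms for the toy language {x : x is even} are hypothetical stand-ins: each has one-sided error 1/2 on its bad side and never errs on its trusted side.

```python
import random

def zpp_decide(x, rp_algo, corp_algo):
    """Zero-error decision from an RP and a co-RP algorithm for the same
    language: trust "yes" from the RP side and "no" from the co-RP side;
    on the inconclusive (no, yes) outcome, repeat. Expected rounds: O(1)."""
    while True:
        if rp_algo(x):          # RP: a "yes" answer is never wrong
            return True
        if not corp_algo(x):    # co-RP: a "no" answer is never wrong
            return False
        # RP said no and co-RP said yes: inconclusive, try again.

# Hypothetical one-sided algorithms for {x : x is even}: the RP one
# misses yes-instances half the time, the co-RP one misses no-instances
# half the time; neither ever errs in the trusted direction.
def rp_algo(x):
    return x % 2 == 0 and random.random() < 0.5

def corp_algo(x):
    return x % 2 == 0 or random.random() < 0.5
```

Each round is conclusive with probability at least 1/2, so the expected number of repeats is at most 2, and the answer, once produced, is always correct.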

SLIDES 93-94 (builds of one slide)

Today

• Zoo
• BPL ⊆ P
• Expected running time
• Zero-error probabilistic computation
• ZPP = RP ∩ co-RP