Randomness in Computing, Lecture 24 (PowerPoint presentation transcript)



SLIDE 1

4/21/2020

Randomness in Computing

LECTURE 24

Last time

  • Probabilistic method
  • Algorithmic LLL
  • Applications of LLL

Today

  • Markov chains

Sofya Raskhodnikova; Randomness in Computing; based on slides by Baranasuriya et al.

SLIDE 2

Drunkard's walk problem

๐‘ž๐‘˜ = Pr[Tipsy goes home he started at position ๐‘˜ ๐‘ž๐‘œ = 1 ๐‘ž0 = 0


Tipsy ๐’Œ ๐’ ๐’Œ โˆ’ ๐Ÿ ๐’Œ + ๐Ÿ

SLIDE 3

Drunkard's walk: probability

๐‘ž๐‘˜ = Pr[Tipsy goes home he started at position ๐‘˜ ๐‘ž๐‘œ = 1 ๐‘ž0 = 0


Tipsy ๐’Œ ๐’ ๐’Œ โˆ’ ๐Ÿ ๐’Œ + ๐Ÿ

SLIDE 4

Drunkard's walk: probability

๐‘ž๐‘˜ = Pr[Tipsy goes home he started at position ๐‘˜ ๐‘ž๐‘œ = 1 ๐‘ž0 = 0 for ๐‘˜ โˆˆ [1, ๐‘œ โˆ’ 1]: ๐‘ž๐‘˜ =

๐‘ž๐‘˜โˆ’1 2

+

๐‘ž๐‘˜+1 2


๐’Œ ๐’ ๐’Œ โˆ’ ๐Ÿ ๐’Œ + ๐Ÿ Tipsy

๐’’๐’Œ = ๐’Œ ๐’

SLIDE 5

Drunkard's walk: probability

Pr[Tipsy goes home | he started at position j] = j/n
Pr[Tipsy falls into the river | he started at position j] = (n − j)/n


Tipsy ๐’Œ ๐’ ๐’Œ โˆ’ ๐Ÿ ๐’Œ + ๐Ÿ

SLIDE 6

Drunkard's walk: expected time

s_j = expected number of steps to finish the walk, starting at position j
s_0 = 0,  s_n = 0
for j ∈ [1, n − 1]:  s_j = 1 + s_{j−1}/2 + s_{j+1}/2


๐’Œ ๐’ ๐’Œ โˆ’ ๐Ÿ ๐’Œ + ๐Ÿ Tipsy

Answer: s_j = j(n − j)
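A quick sanity check that s_j = j(n − j) satisfies both boundary conditions and the recurrence:

```python
# Verify that s_j = j*(n - j) solves: s_0 = s_n = 0 and
# s_j = 1 + (s_{j-1} + s_{j+1})/2 for all j in [1, n-1].
n = 50
s = [j * (n - j) for j in range(n + 1)]
assert s[0] == 0 and s[n] == 0
for j in range(1, n):
    assert s[j] == 1 + (s[j - 1] + s[j + 1]) / 2
print("recurrence holds for n =", n)
```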

SLIDE 7

Markov Chains

  • A (discrete time) stochastic process is a (finite or countably infinite) collection of random variables X_0, X_1, X_2, …
  • A discrete time stochastic process is a Markov chain if ∀t ≥ 1 and ∀ values a_0, a_1, …, a_t,

    Pr[X_t = a_t | X_{t−1} = a_{t−1}, X_{t−2} = a_{t−2}, …, X_0 = a_0] = Pr[X_t = a_t | X_{t−1} = a_{t−1}] = P_{a_{t−1},a_t}


Notes: the first equality is the Markov property (memoryless property); the last equality is the time-homogeneous property. Stochastic processes represent the evolution of some random process over time.

SLIDE 8

Terminology

  • State space: the set of values the RVs can take, e.g., {0, 1, 2, …}
  • X_0, X_1, …: the states visited by the chain
  • P_{a_{t−1},a_t}: the transition probability from a_{t−1} to a_t

Memoryless property:

  • X_t depends on X_{t−1}, but not on how the process arrived at state X_{t−1}.
  • It does not imply that X_t is independent of X_0, …, X_{t−2}
    (only that this dependency is captured by X_{t−1})

SLIDE 9
Representation: directed weighted graph

  • Set of vertices = state space
  • Directed edge (i, j) iff P_{i,j} > 0; the edge weight is P_{i,j}

[Figure: a four-state chain on states 0, 1, 2, 3 drawn as a directed weighted graph; the edge weights are the transition probabilities P_{i,j}]

SLIDE 10
Representation: transition matrix

  • Entry P_{i,j} of the matrix P is the transition probability from i to j
  • For every row i, the sum Σ_{j≥0} P_{i,j} = 1

        (  0    1/4    0    3/4 )
    P = ( 1/2    0    1/3   1/6 )
        (  0     0     1     0  )
        (  0    1/2   1/4   1/4 )

[Figure: the same chain drawn as a directed weighted graph]
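The matrix above can be written down exactly with Python's fractions module; the row-sum property is then a one-line check (an illustrative sketch, not part of the slides):

```python
from fractions import Fraction as F

# Transition matrix P of the four-state chain (states 0, 1, 2, 3).
P = [
    [F(0),    F(1, 4), F(0),    F(3, 4)],
    [F(1, 2), F(0),    F(1, 3), F(1, 6)],
    [F(0),    F(0),    F(1),    F(0)],
    [F(0),    F(1, 2), F(1, 4), F(1, 4)],
]

# Each row i is the distribution of the next state given current state i.
assert all(sum(row) == 1 for row in P)
```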

SLIDE 11

Distribution of states

  • Let ๐‘ž๐‘— ๐‘ข be the probability that the process is at state ๐‘— at time ๐‘ข.

By Law of Total Probability, ๐‘ž๐‘— ๐‘ข = เท

๐‘˜โ‰ฅ0

๐‘ž๐‘˜ ๐‘ข โˆ’ 1 โ‹… ๐‘„๐‘—,๐‘˜

  • Let าง

๐‘ž ๐‘ข = (๐‘ž0 ๐‘ข , ๐‘ž1 ๐‘ข , โ€ฆ ) be the vector giving the distribution

  • f the chain at time ๐‘ข.

าง ๐‘ž ๐‘ข = าง ๐‘ž ๐‘ข โˆ’ 1 ๐‘ธ

  • For all ๐‘› โ‰ฅ 0, we define the m-step transition probability

๐‘„๐‘—,๐‘˜

๐‘› = Pr ๐‘Œ๐‘ข+๐‘› = ๐‘˜ ๐‘Œ๐‘ข = ๐‘—]

  • Conditioning on the first transition from ๐‘—,

by Law of Total Probability, ๐‘„๐‘—,๐‘˜

๐‘› = เท ๐‘™โ‰ฅ0

๐‘„๐‘—,๐‘™๐‘„๐‘™,๐‘˜

๐‘›โˆ’1


SLIDE 12

Distribution of states at time t + m

P^m_{i,j} = Σ_{k≥0} P_{i,k} P^{m−1}_{k,j}

  • Let P^(m) be the matrix whose (i, j) entry is the m-step transition
    probability P^m_{i,j}. Then P^(m) = P · P^(m−1), and by induction on m, P^(m) = P^m.
  • For all t ≥ 0 and m ≥ 1,
    p̄(t + m) = p̄(t) P^m


SLIDE 13

What is the probability of going from state 0 to state 3 in exactly three steps?

Example

        (  0    1/4    0    3/4 )
    P = ( 1/2    0    1/3   1/6 )
        (  0     0     1     0  )
        (  0    1/2   1/4   1/4 )

[Figure: the chain drawn as a directed weighted graph]

SLIDE 14

What is the probability of going from state 0 to state 3 in exactly three steps?

Example


The four ways to go from state 0 to state 3 in three steps: 0-1-0-3, 0-1-3-3, 0-3-1-3, 0-3-3-3

SLIDE 15

Example

  • We calculate the probabilities of the four events:
  • 0-1-0-3 | Pr = 3/32
  • 0-1-3-3 | Pr = 1/96
  • 0-3-1-3 | Pr = 1/16
  • 0-3-3-3 | Pr = 3/64
  • Since they are mutually exclusive,
the total probability is Pr = 3/32 + 1/96 + 1/16 + 3/64 = 41/192
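The same number can be obtained by brute force: enumerate every length-3 path from 0 to 3 and multiply the transition probabilities along it (an illustrative sketch using the chain's transition matrix):

```python
from fractions import Fraction as F
from itertools import product

P = [
    [F(0),    F(1, 4), F(0),    F(3, 4)],
    [F(1, 2), F(0),    F(1, 3), F(1, 6)],
    [F(0),    F(0),    F(1),    F(0)],
    [F(0),    F(1, 2), F(1, 4), F(1, 4)],
]

total = F(0)
for u, v in product(range(4), repeat=2):   # intermediate states
    path = (0, u, v, 3)
    pr = F(1)
    for a, b in zip(path, path[1:]):
        pr *= P[a][b]
    if pr > 0:                             # prints exactly the four paths above
        print(path, pr)
    total += pr

assert total == F(41, 192)
```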



SLIDE 16

Alternatively, we can calculate P^3

Example

          ( 3/16    7/48    29/64    41/192 )
    P^3 = ( 5/48    5/24    79/144    5/36  )
          (  0       0        1        0    )
          ( 1/16   13/96   107/192   47/192 )

and find the entry (0,3)
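Computing P^3 exactly with fractions reproduces the matrix above; entry (0, 3) is 41/192 (illustrative sketch):

```python
from fractions import Fraction as F

P = [
    [F(0),    F(1, 4), F(0),    F(3, 4)],
    [F(1, 2), F(0),    F(1, 3), F(1, 6)],
    [F(0),    F(0),    F(1),    F(0)],
    [F(0),    F(1, 2), F(1, 4), F(1, 4)],
]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P3 = mat_mul(P, mat_mul(P, P))
assert P3[0][3] == F(41, 192)                          # entry (0, 3)
assert P3[0] == [F(3, 16), F(7, 48), F(29, 64), F(41, 192)]
assert P3[2] == [F(0), F(0), F(1), F(0)]               # state 2 is absorbing
```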

SLIDE 17

Example 2

What is the probability of ending up in state 3 after three steps if we start in a uniformly random state?

Solution:

  • Calculate
    (1/4, 1/4, 1/4, 1/4) · P^3 = (17/192, 47/384, 737/1152, 43/288)
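A direct check of this computation, multiplying the uniform starting distribution by P^3 (illustrative sketch):

```python
from fractions import Fraction as F

P = [
    [F(0),    F(1, 4), F(0),    F(3, 4)],
    [F(1, 2), F(0),    F(1, 3), F(1, 6)],
    [F(0),    F(0),    F(1),    F(0)],
    [F(0),    F(1, 2), F(1, 4), F(1, 4)],
]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P3 = mat_mul(P, mat_mul(P, P))
start = [F(1, 4)] * 4          # uniformly random initial state
result = [sum(start[i] * P3[i][j] for i in range(4)) for j in range(4)]
assert result == [F(17, 192), F(47, 384), F(737, 1152), F(43, 288)]
```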


Answer: 43/288 (the entry for state 3).

SLIDE 18

Application: Algorithm for 2SAT

Recall: A 2CNF formula is an AND of clauses

  • Each clause is an OR of literals.
  • Each literal is a Boolean variable or its negation.
  • E.g. (¬x1 ∨ x2) ∧ (¬x2 ∨ x3) ∧ (x3 ∨ ¬x4) ∧ (x1 ∨ x4) ∧ (¬x2 ∨ x4)

2SAT Problem (search version): Given a 2CNF formula, find a satisfying assignment if it is satisfiable.


SLIDE 19

Randomized Algorithm for 2SAT

Example: ๐œš = ๐‘ฆ1 โˆจ ๐‘ฆ2 โˆง ๐‘ฆ2 โˆจ ๐‘ฆ3 โˆง ๐‘ฆ3 โˆจ ๐‘ฆ4 โˆง ๐‘ฆ1 โˆจ ๐‘ฆ4 โˆง ๐‘ฆ2 โˆจ ๐‘ฆ4

  • Initial assignment: x1 = 0, x2 = 0, x3 = 0, x4 = 0
  • Unsatisfied clause: C = (x1 ∨ x4)
  • Pick x1 or x4 and flip its value: x1 = 0, x2 = 0, x3 = 0, x4 = 1
  • New unsatisfied clause: C = (x3 ∨ ¬x4)
  • Pick x3 or x4 and flip its value: x1 = 0, x2 = 0, x3 = 1, x4 = 1


Input: a 2CNF formula φ on n variables; a parameter R

  1. Start with an arbitrary truth assignment, e.g., all 0's.
  2. Repeat R times, terminating if φ is satisfied:
     a) Choose an arbitrary clause C that is not satisfied.
     b) Pick a uniformly random literal in C and flip its assignment.
  3. If a satisfying assignment is found, return it.
  4. Otherwise, return "unsatisfiable".
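The algorithm is short to implement. In this sketch a literal is encoded as an integer, +i for x_i and −i for ¬x_i; the encoding and the concrete clause list used to exercise it are assumptions of the sketch, not part of the slides:

```python
import random

def random_2sat(clauses, n, R, seed=0):
    """Steps 1-4 above: repeatedly flip a uniformly random literal of an
    unsatisfied clause, up to R times. Returns an assignment dict or None."""
    rng = random.Random(seed)
    x = {i: False for i in range(1, n + 1)}            # step 1: all 0's

    def sat(lit):
        return x[abs(lit)] == (lit > 0)

    for _ in range(R):                                 # step 2
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return x                                   # step 3: found one
        lit = rng.choice(unsat[0])                     # steps 2a + 2b
        x[abs(lit)] = not x[abs(lit)]
    return None                                        # step 4: "unsatisfiable"

# A hypothetical satisfiable 2CNF on 4 variables (one consistent reading
# of the example formula): (-1,2) encodes the clause (NOT x1 OR x2), etc.
phi = [(-1, 2), (-2, 3), (3, -4), (1, 4), (-2, 4)]
a = random_2sat(phi, n=4, R=10000)
assert a is not None
assert all(any(a[abs(l)] == (l > 0) for l in c) for c in phi)
```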

SLIDE 20

When can the algorithm fail?

  • Only if φ is satisfiable, but we did not find a satisfying assignment in R iterations (steps).
  • We will analyze the number of steps necessary.
  • Each step can be implemented to run in O(n²) time, since there are O(n²) clauses.


SLIDE 21

Analysis of the number of steps

  • Let ๐‘‡ = a satisfying assignment of ๐œš.
  • ๐ต๐‘— = an assignment to ๐œš after ๐‘— steps
  • ๐‘Œ๐‘— = number of variables that have the same value in ๐ต๐‘— and ๐‘‡

When ๐‘Œ๐‘— = ๐‘œ, the algorithm terminates with a satisfying assignment.

(It could do it before ๐‘Œ๐‘— = ๐‘œ if it finds another satisfying assignment.)

  • If ๐‘Œ๐‘— = 0 then ๐‘Œ๐‘—+1 = 1

Pr ๐‘Œ๐‘—+1 = 1 ๐‘Œ๐‘— = 0 = 1

  • If ๐‘Œ๐‘— โˆˆ [1, ๐‘œ โˆ’ 1] then ๐ต๐‘— disagrees with ๐‘‡ on 1 or 2 literals of ๐ท

Pr ๐‘Œ๐‘—+1 = ๐‘˜ + 1 ๐‘Œ๐‘— = ๐‘˜ โ‰ฅ 1/2 Pr ๐‘Œ๐‘—+1 = ๐‘˜ โˆ’ 1 ๐‘Œ๐‘— = ๐‘˜ โ‰ค 1/2

4/21/2020

Sofya Raskhodnikova; Randomness in Computing

1/2 or 1

๐‘Œ0, ๐‘Œ1, ๐‘Œ2, โ€ฆ is not necessarily a Markov chain, since the probability of ๐‘Œ๐‘—+1 > ๐‘Œ_๐‘— depends on whether ๐ต๐‘— and ๐‘‡ disagree on 1 or 2 literals of ๐ท (which could depend on previous choices, not just ๐‘Œ๐‘—)

SLIDE 22

Creating a true Markov chain

  • Define a Markov chain Y_0, Y_1, Y_2, … with
    Y_0 = X_0
    Pr[Y_{i+1} = 1 | Y_i = 0] = 1
    Pr[Y_{i+1} = j + 1 | Y_i = j] = 1/2
    Pr[Y_{i+1} = j − 1 | Y_i = j] = 1/2
  • This is a "pessimistic version" of the stochastic process X_0, X_1, X_2, …:
    the expected time to reach n is larger for Y_0, Y_1, Y_2, … than for X_0, X_1, X_2, …


SLIDE 23

Expected time to reach n

s_j = expected number of steps to reach position n, starting at position j
s_n = 0,  s_0 = s_1 + 1
for j ∈ [1, n − 1]:  s_j = 1 + s_{j−1}/2 + s_{j+1}/2

s_0 = n²


๐’Œ ๐’ ๐’Œ โˆ’ ๐Ÿ ๐’Œ + ๐Ÿ Tipsy

Answer: s_j = n² − j²
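The same kind of check as for the earlier walk: s_j = n² − j² satisfies the recurrence with a reflecting barrier at 0:

```python
# Verify s_j = n^2 - j^2: s_n = 0, s_0 = s_1 + 1, and
# s_j = 1 + (s_{j-1} + s_{j+1})/2 for all j in [1, n-1].
n = 40
s = [n * n - j * j for j in range(n + 1)]
assert s[n] == 0
assert s[0] == s[1] + 1          # reflecting barrier at position 0
for j in range(1, n):
    assert s[j] == 1 + (s[j - 1] + s[j + 1]) / 2
assert s[0] == n ** 2            # expected number of steps from 0 is n^2
```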

SLIDE 24

2SAT algorithm: correctness


Theorem

If the number of steps R = 2an² and φ is satisfiable, then the algorithm returns a satisfying assignment with probability at least 1 − 2^(−a).