

SLIDE 1

Advanced Algorithms (VI)

Shanghai Jiao Tong University

Chihao Zhang

April 13, 2020

SLIDE 2

Martingale

SLIDES 3–6

Martingale

Let {Xt}t≥0 be a sequence of random variables

Let {ℱt}t≥0 be a sequence of σ-algebras such that

ℱ0 ⊆ ℱ1 ⊆ ℱ2 ⊆ ⋯ (a filtration)

A martingale is a sequence of pairs {Xt, ℱt}t≥0 s.t.

  • for all t ≥ 0, Xt is ℱt-measurable
  • for all t ≥ 0, E[Xt+1 ∣ ℱt] = Xt
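As a quick sanity check (not part of the original slides), the partial sums of fair ±1 coin flips form a martingale with respect to the natural filtration. The Python sketch below checks the defining property E[Xt+1 ∣ ℱt] = Xt empirically: it groups many simulated trajectories by the current value of Xt and verifies that the average of Xt+1 within each group is close to that value. The variable names are ours, chosen to match the slides.

```python
import random

random.seed(0)

T, RUNS = 5, 200_000

# Group trajectories by the value of X_T and average X_{T+1} within each
# group.  For the +/-1 walk, X_{T+1} = X_T + Z_{T+1} with E[Z_{T+1}] = 0,
# so each conditional mean should be close to the group's X_T value.
sums = {}  # X_T value -> (running sum of X_{T+1}, count)
for _ in range(RUNS):
    x = sum(random.choice((-1, 1)) for _ in range(T))   # X_T
    x_next = x + random.choice((-1, 1))                 # X_{T+1}
    s, c = sums.get(x, (0, 0))
    sums[x] = (s + x_next, c + 1)

for x, (s, c) in sorted(sums.items()):
    if c > 1000:  # only well-sampled groups
        print(x, round(s / c, 2))  # conditional mean is close to x
```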

SLIDE 7

Stopping Time

SLIDES 8–10

Stopping Time

A stopping time is a random variable τ ∈ ℕ ∪ {∞} such that

  • [τ ≤ t] is ℱt-measurable for all t

“whether to stop can be determined by looking at the outcomes seen so far”

  • The first time a gambler wins five games in a row is a stopping time
  • The last time a gambler wins five games in a row is not (it depends on the future)
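The contrast between the two examples can be made concrete in code (an illustration of ours, not from the slides): the stopping rule below decides at each time t using only the outcomes seen so far, which is exactly the measurability condition. No analogous prefix-scanning function can compute the *last* such time, since that depends on the entire future.

```python
def first_five_wins_in_a_row(outcomes):
    """First time t at which the gambler has won 5 games in a row, or None.

    The decision "stop at time t" inspects only outcomes[:t], so this
    rule defines a stopping time.
    """
    streak = 0
    for t, won in enumerate(outcomes, start=1):
        streak = streak + 1 if won else 0
        if streak == 5:
            return t
    return None

print(first_five_wins_in_a_row([True] * 4 + [False] + [True] * 5))  # 10
```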
SLIDES 13–16

A basic property of a martingale {Xt, ℱt}t≥0 is that E[Xt] = E[X0] for any t ≥ 0

Proof. ∀t ≥ 1, E[Xt] = E[E[Xt ∣ ℱt−1]] = E[Xt−1]

Does E[Xτ] = E[X0] hold for a (randomized) stopping time τ?

Not true in general. Assume τ is the first time a gambler (playing fair games) has won $100: then Xτ = 100 with probability 1, so E[Xτ] = 100 ≠ 0 = E[X0]
SLIDE 17

Optional Stopping Theorem

SLIDES 18–19

Optional Stopping Theorem

For a stopping time τ, E[Xτ] = E[X0] holds if

  • Pr[τ < ∞] = 1
  • E[|Xτ|] < ∞
  • lim_{t→∞} E[Xt · 1[τ>t]] = 0

SLIDES 21–23

The following conditions are stronger, but easier to verify:

  1. There is a fixed n such that τ ≤ n a.s.
  2. Pr[τ < ∞] = 1 and there is a fixed M such that |Xt| ≤ M for all t ≤ τ
  3. E[τ] < ∞ and there is a fixed c such that |Xt+1 − Xt| ≤ c for all t < τ

OST applies when at least one of the above holds

SLIDE 24

Proof of the Optional Stopping Theorem

SLIDE 25

Applications of OST

SLIDES 26–30

Random Walk in 1-D

Let Zt ∈ {−1, +1} u.a.r. and Xt = Σ_{i=1}^t Zi

The random walk stops when it hits −a < 0 or b > 0

Let τ be the time it stops. τ is a stopping time

What is E[τ]?

SLIDES 32–36

The random walk stops when it arrives at one of the two ends.

We first determine pa, the probability that the walk ends at −a, using OST:

E[Xτ] = pa · (−a) + (1 − pa) · b = E[X0] = 0 ⟹ pa = b / (a + b)

SLIDES 38–40

Now define a random variable Yt = Xt² − t

Claim. {Yt}t≥0 is a martingale

E[Yt+1 ∣ ℱt] = E[(Xt + Zt+1)² − (t + 1) ∣ ℱt]
             = E[Xt² + 2Zt+1·Xt + Zt+1² − (t + 1) ∣ ℱt]
             = Xt² + 0 + 1 − (t + 1)      (since E[Zt+1 ∣ ℱt] = 0 and Zt+1² = 1)
             = Xt² − t = Yt

SLIDES 43–47

{Yt} satisfies the conditions for OST, so

E[Yτ] = E[Xτ²] − E[τ] = E[Y0] = 0

On the other hand, we have

E[Xτ²] = pa · a² + (1 − pa) · b² = ab

This implies E[τ] = ab
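Both conclusions, pa = b/(a+b) and E[τ] = ab, are easy to check by simulation. The following Python sketch (ours, not from the slides) runs the walk many times with a = 3 and b = 5, for which the theory predicts pa = 5/8 = 0.625 and E[τ] = 15.

```python
import random

random.seed(1)

def walk(a, b):
    """Run the +/-1 random walk until it hits -a or b; return (hit -a?, tau)."""
    x, t = 0, 0
    while -a < x < b:
        x += random.choice((-1, 1))
        t += 1
    return (x == -a), t

a, b, runs = 3, 5, 100_000
hits_a = taus = 0
for _ in range(runs):
    hit, tau = walk(a, b)
    hits_a += hit
    taus += tau

print(hits_a / runs)  # close to b/(a+b) = 0.625
print(taus / runs)    # close to a*b = 15
```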

SLIDE 48

Wald’s Equation

SLIDES 49–51

Wald's Equation

Recall in Week two, we considered the sum E[Σ_{i=1}^N Xi] where the {Xi} are independent with mean μ and N is a random variable

We are now ready to prove the general case!

SLIDES 53–55

Assume E[N] is finite and let Yt = Σ_{i=1}^t (Xi − μ)

{Yt} is a martingale and the stopping time N satisfies the conditions for OST

E[YN] = E[Σ_{i=1}^N (Xi − μ)] = E[Σ_{i=1}^N Xi] − E[Σ_{i=1}^N μ] = E[Σ_{i=1}^N Xi] − E[N] · μ = 0

Hence E[Σ_{i=1}^N Xi] = E[N] · μ (Wald's equation)
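Wald's equation is also easy to test numerically. In the sketch below (ours, not from the slides) the Xi are fair die rolls with μ = 3.5 and N is the first index with Xi = 6, which is a stopping time with E[N] = 6; Wald predicts E[Σ_{i=1}^N Xi] = 6 · 3.5 = 21.

```python
import random

random.seed(2)

# X_i are fair die rolls (mean mu = 3.5); N = first i with X_i == 6,
# a stopping time with E[N] = 6.  Wald: E[sum_{i=1}^N X_i] = E[N] * mu = 21.
runs, total = 200_000, 0.0
for _ in range(runs):
    s = 0
    while True:
        x = random.randint(1, 6)
        s += x
        if x == 6:
            break
    total += s

print(total / runs)  # close to 21
```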

SLIDE 56

Waiting Time for Patterns

SLIDES 57–60

Waiting Time for Patterns

Fix a pattern P = “00110”

How many fair coins does one need to toss to see P for the first time (in expectation)?

The number can be calculated using OST, via a martingale argument due to Shuo-Yen Robert Li (李碩彥)

SLIDES 62–66

Let the pattern P = p1p2…pk

We draw a random string B = b1b2b3…

Imagine for each j ≥ 1, there is a gambler Gj

At time j, Gj bets $1 on “bj = p1”. If he wins, he bets $2 on “bj+1 = p2”, …

He keeps doubling the money until he loses

SLIDES 68–72

The money of Gj is a martingale (w.r.t. B)

Let Xt be the total money of all gamblers at time t

{Xt}t≥1 is also a martingale

Let τ be the first time that we meet P in B

{Xt} and τ meet the conditions for OST, so E[Xτ] = 0

SLIDES 74–77

Now we can compute the money of each Gj at time τ

  • All gamblers before Gτ−k+1 must lose
  • The gambler Gτ−k+1 wins 2^k − 1
  • Can any other gamblers win?

A gambler Gτ−j+1 wins iff p1p2…pj = pk−j+1pk−j+2…pk

If Gτ−j+1 wins, he wins $(2^j − 1)

SLIDES 79–82

For any P = p1p2…pk and 1 ≤ j ≤ k, let χj be the indicator that p1…pj = pk−j+1…pk

Then Xτ = −(τ − Σ_{j=1}^k χj) + Σ_{j=1}^k χj · (2^j − 1)

where the first term is the contribution of the losers and the second the contribution of the winners.

Since E[Xτ] = 0, this implies E[τ] = Σ_{j=1}^k χj · 2^j
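The formula is straightforward to implement; the sketch below (ours, not from the slides) computes E[τ] = Σ_j χj · 2^j by checking which prefixes of P equal its suffixes, and cross-checks the pattern “00110” by direct simulation. For “00110”, only χ1 and χ5 are 1, giving E[τ] = 2 + 32 = 34.

```python
import random

def expected_waiting_time(P):
    """E[tau] = sum over j of chi_j * 2^j, where chi_j indicates that the
    length-j prefix of P equals its length-j suffix."""
    k = len(P)
    return sum(2 ** j for j in range(1, k + 1) if P[:j] == P[k - j:])

print(expected_waiting_time("00110"))  # 2^1 + 2^5 = 34
print(expected_waiting_time("11111"))  # 2 + 4 + 8 + 16 + 32 = 62

# Monte Carlo sanity check for "00110":
random.seed(3)
runs, total = 20_000, 0
for _ in range(runs):
    s = ""
    while not s.endswith("00110"):
        s += random.choice("01")
    total += len(s)
print(total / runs)  # close to 34
```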

SLIDES 83–85

Proof of OST

Show on Board

Read Chapter 8 of “Notes on Randomized Algorithms” for more details: https://arxiv.org/abs/2003.01902