Stochastic Processes

MATH5835, P. Del Moral, UNSW School of Mathematics & Statistics. Lecture Notes 2.
Consultations (RC 5112): Wednesday 3.30 pm to 4.30 pm & Thursday 3.30 pm to 4.30 pm.

1/34

Citation of the day

"As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." - Albert Einstein (1879-1955)

Personal question: X random variable ⇔ Law(X) = certain??

"Mathematics is a game played according to certain simple rules with meaningless marks on paper." - David Hilbert (1862-1943)

3/34

Some basic notation

For random variables X, Y taking values in {1, ..., d}:

$$
\forall\, 1 \le j \le d: \qquad
\underbrace{P(Y=j)}_{=\,p_Y(j)}
= \sum_{1 \le i \le d} \underbrace{P(X=i)}_{=\,p_X(i)}\;
\underbrace{P(Y=j \mid X=i)}_{=\,M(i,j)}
$$

Matrix notation:

$$
p_Y = [P(Y=1), \dots, P(Y=d)]
= \underbrace{[P(X=1), \dots, P(X=d)]}_{=\,p_X}
\times
\underbrace{\begin{pmatrix}
P(Y=1 \mid X=1) & P(Y=2 \mid X=1) & \dots & P(Y=d \mid X=1)\\
P(Y=1 \mid X=2) & P(Y=2 \mid X=2) & \dots & P(Y=d \mid X=2)\\
\vdots & \vdots & & \vdots\\
P(Y=1 \mid X=d) & P(Y=2 \mid X=d) & \dots & P(Y=d \mid X=d)
\end{pmatrix}}_{=\,M=(M(i,j))_{i,j}}
$$

Synthetic matrix notation:

$$p_Y = p_X M$$

4/34
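A quick numerical illustration, as a sketch: the identity p_Y = p_X M checked in numpy on a made-up 3-state example (the distribution pX and kernel M below are arbitrary, not from the slides).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
pX = np.array([0.2, 0.5, 0.3])             # law of X on {1, ..., d}
M = rng.random((d, d))
M /= M.sum(axis=1, keepdims=True)          # stochastic matrix: rows sum to 1

pY = pX @ M                                # p_Y = p_X M (row vector times matrix)
# direct computation of P(Y = j) = sum_i P(X = i) M(i, j)
pY_direct = np.array([sum(pX[i] * M[i, j] for i in range(d)) for j in range(d)])
assert np.allclose(pY, pY_direct)
print(pY)                                  # a probability vector: entries sum to 1
```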

Some basic notation

$$
E(f(Y) \mid X=i)
= \sum_{1 \le j \le d} \underbrace{P(Y=j \mid X=i)}_{=\,M(i,j)}\; f(j)
$$

Matrix notation:

$$
\underbrace{\begin{pmatrix}
P(Y=1 \mid X=1) & \dots & P(Y=d \mid X=1)\\
\vdots & & \vdots\\
P(Y=1 \mid X=d) & \dots & P(Y=d \mid X=d)
\end{pmatrix}}_{=\,M=(M(i,j))_{i,j}}
\underbrace{\begin{pmatrix} f(1)\\ f(2)\\ \vdots\\ f(d) \end{pmatrix}}_{=\,f}
= \underbrace{\begin{pmatrix}
E(f(Y) \mid X=1)\\ E(f(Y) \mid X=2)\\ \vdots\\ E(f(Y) \mid X=d)
\end{pmatrix}}_{=\,M(f)}
$$

Synthetic matrix notation:

$$E(f(Y) \mid X=i) = M(f)(i)$$

5/34
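The same in code: M(f) is just the matrix-vector product M f. A small sketch with an arbitrary kernel and test function (both made up), checking one entry by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
M = rng.random((d, d))
M /= M.sum(axis=1, keepdims=True)          # transition kernel of Y given X
f = rng.standard_normal(d)                 # a test function f on {1, ..., d}

Mf = M @ f                                 # M(f)(i) = E(f(Y) | X = i)
# Monte Carlo check of one entry, say i = 0
i = 0
samples = rng.choice(d, size=100_000, p=M[i])
print(Mf[i], f[samples].mean())            # the two numbers should be close
```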

Some basic notation

Markov chain = "sequence of r.v." X_0, X_1, ..., X_{n-1}, X_n with

$$
\underbrace{P(X_n=j)}_{=\,p_{X_n}(j)}
= \sum_{1 \le i \le d} \underbrace{P(X_{n-1}=i)}_{=\,p_{X_{n-1}}(i)}\;
\underbrace{P(X_n=j \mid X_{n-1}=i)}_{=\,M_n(i,j)}
$$

Synthetic matrix notation:

$$p_{X_n} = p_{X_{n-1}} M_n = \dots = p_{X_0} M_1 M_2 \cdots M_n$$

6/34
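In code, the distribution flow p_{X_n} = p_{X_0} M_1 ⋯ M_n is a loop of row-vector/matrix products; a sketch with made-up time-varying transition matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 3, 5
Ms = [m / m.sum(axis=1, keepdims=True) for m in rng.random((n, d, d))]

p = np.array([1.0, 0.0, 0.0])              # chain started at state 1: p_{X_0}
for M in Ms:                               # p_{X_k} = p_{X_{k-1}} M_k
    p = p @ M
print(p, p.sum())                          # law of X_n; still a probability vector
```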

Some basic notation

$$
\begin{aligned}
E(f(X_n) \mid X_0=i)
&= E\Big(\underbrace{E(f(X_n) \mid X_{n-1})}_{=\,M_n(f)(X_{n-1})} \,\Big|\, X_0=i\Big)\\
&= E(M_n(f)(X_{n-1}) \mid X_0=i)\\
&= E(M_{n-1}(M_n(f))(X_{n-2}) \mid X_0=i)\\
&= \dots\\
&= E\big((M_1 \dots (M_n(f)))(X_0) \mid X_0=i\big)\\
&= (M_1 M_2 \cdots M_n)(f)(i)
\end{aligned}
$$

7/34
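A sketch checking the identity E(f(X_n) | X_0 = i) = (M_1 ⋯ M_n)(f)(i) two ways, by matrix products and by simulating trajectories (made-up kernels, modest sample size, so expect agreement only to Monte Carlo accuracy):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, i0 = 3, 4, 0
Ms = [m / m.sum(axis=1, keepdims=True) for m in rng.random((n, d, d))]
f = rng.standard_normal(d)

# matrix side: (M_1 ... M_n)(f)(i0), applying M_n to f first
g = f.copy()
for M in reversed(Ms):
    g = M @ g
exact = g[i0]

# simulation side: average f(X_n) over trajectories started at X_0 = i0
trials, total = 20_000, 0.0
for _ in range(trials):
    x = i0
    for M in Ms:
        x = rng.choice(d, p=M[x])
    total += f[x]
print(exact, total / trials)               # the two numbers should be close
```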

Stabilizing populations - Migration processes

◮ 193 countries (UN report 2013): c_i, i = 1, ..., 193.
◮ q_n(i) = average population of country c_i at some time n (years/months/...).
◮ M_n(i, j) = proportion of migrants from c_i to c_j at time n.

Some questions:

◮ Stabilization: does there exist q_∞(i) invariant w.r.t. the migration process?
◮ What is the chance for two migrants to meet in some country?

8/34

Migration - Stochastic process

$$
\{\underbrace{I^1_{i,n},\, I^2_{i,n},\, I^3_{i,n},\, \dots,\, I^{m_n(i)}_{i,n}}_{\text{individuals}}\}
= \text{Country } c_i \text{ at time } n \text{ with pop. } m_n(i)
$$

During the migration process, each individual I^k_{i,n} chooses the index j of a destination country c_j ∼ M_n(i, j).

Simulation?

$$
m_{n+1}(i,j)
= \sum_{1 \le k \le m_n(i)} 1_j\big(I^k_{i,n}\big)
= \#\{\text{migrants } i \to j\}
\quad\Longrightarrow\quad
m_{n+1}(j) = \sum_{1 \le i \le 193} m_{n+1}(i,j)
$$

If no birth & death! Mean-average?

9/34
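A possible simulation of one migration step: since the m_n(i) individuals of country c_i choose destinations independently with law M_n(i, ·), the counts m_{n+1}(i, j) are multinomial. A sketch with 5 made-up countries standing in for the 193:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 5                                      # 5 countries instead of 193, for readability
m = rng.integers(100, 1000, size=d)        # m_n(i): current population counts
M = rng.random((d, d))
M /= M.sum(axis=1, keepdims=True)          # migration kernel M_n

# each of the m[i] individuals of country i picks a destination ~ M(i, .)
moves = np.array([rng.multinomial(m[i], M[i]) for i in range(d)])
m_next = moves.sum(axis=0)                 # m_{n+1}(j) = sum_i m_{n+1}(i, j)
assert m_next.sum() == m.sum()             # no birth & death: total is conserved
print(m, m_next)
```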

Migration - Stochastic process

$$
E(m_n(j)) = q_n(j)
\;\Longrightarrow\;
q_n(j) = \sum_{1 \le i \le 193} q_{n-1}(i)\, M_n(i,j)
\;\Longleftrightarrow\;
q_n = q_{n-1} M_n = q_0 M_1 M_2 \cdots M_n
$$

If the world population size N_n = N is fixed:

$$
\frac{q_n(i)}{N} := p_n(i) = \text{Proba on } \{1, \dots, 193\} := P(X_n = i)
$$

Stochastic model for a migrant X_n between countries:

$$
\underbrace{P(X_n=j)}_{=\,p_n(j)}
= \sum_{1 \le i \le 193} \underbrace{P(X_{n-1}=i)}_{=\,p_{n-1}(i)}\;
\underbrace{P(X_n=j \mid X_{n-1}=i)}_{=\,M_n(i,j)}
\;\Longleftrightarrow\;
p_n = p_{n-1} M_n
$$

10/34

Migration - Stabilization (M_n = M)

$$
p_n = p_{n-1} M \;\xrightarrow[n \uparrow \infty]{}\; p_\infty = p_\infty M
= \text{left eigenvector of } M
$$

◮ Stationary population: q_∞ = N × p_∞. If no birth & death!

◮ Power method: M^n(i, j) → p_∞(j) as n ↑ ∞, since

$$
p_n = p_{n-1} M = p_{n-2} M^2 = \dots = p_0 M^n
\quad\Longrightarrow\quad
p_n(j) = \sum_i p_0(i)\, \underbrace{M^n(i,j)}_{\to\, p_\infty(j)}
\;\xrightarrow[n \uparrow \infty]{}\; p_\infty(j)
$$

◮ Law of large numbers = Ergodic theorem (admitted today): by simulation, the proportion of visits to c_j satisfies

$$
\frac{1}{n} \sum_{1 \le k \le n} 1_{c_j}(X_k) \;\xrightarrow[n \uparrow \infty]{}\; p_\infty(j)
$$

11/34
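Both limits can be illustrated numerically; a sketch with a made-up strictly positive kernel (so the convergence assumptions hold):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 5
M = rng.random((d, d)) + 0.05              # strictly positive entries
M /= M.sum(axis=1, keepdims=True)

# power method: all rows of M^n converge to the same vector p_infinity
Mn = np.linalg.matrix_power(M, 200)
p_inf = Mn[0]
assert np.allclose(Mn, p_inf)              # every row is (numerically) identical
assert np.allclose(p_inf @ M, p_inf)       # left eigenvector: p = pM

# ergodic theorem: occupation frequencies of one simulated path
n, x = 100_000, 0
visits = np.zeros(d)
for _ in range(n):
    x = rng.choice(d, p=M[x])
    visits[x] += 1
print(p_inf, visits / n)                   # the two vectors should be close
```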

The evolution of 2 migrants

Walker X_n starting at X_0 = i & walker X'_n starting at X'_0 = i':

$$
p_n(j) = P(X_n = j) = (p_0 M^n)(j) \quad\text{with}\quad p_0(j) = 1_i(j)
$$
$$
p'_n(j) = P(X'_n = j) = (p'_0 M^n)(j) \quad\text{with}\quad p'_0(j) = 1_{i'}(j)
$$

Natural questions:

◮ Do they forget their initial state?
◮ Can we define/couple their random evolutions on the same probability space?
◮ What are their meeting time probabilities?

12/34

Forgetting their original country

p_n = p_{n-1} M ⊕ Hypothesis:

$$
M(i,j) \ge \epsilon\, \underbrace{\lambda(j)}_{=\,1/193}
$$

KEY ε-transition:

$$
M_\epsilon(i,j) = \frac{M(i,j) - \epsilon\, \lambda(j)}{1 - \epsilon}
\quad\Longleftrightarrow\quad
M(i,j) = (1-\epsilon)\, M_\epsilon(i,j) + \epsilon\, \lambda(j)
$$

$$
\Longrightarrow\quad pM = (1-\epsilon)\, pM_\epsilon + \epsilon \lambda
\quad\Longrightarrow\quad [p - p']\,M = (1-\epsilon)\, [p - p']\,M_\epsilon
$$

$$
\begin{aligned}
p_{n+1} - p'_{n+1} = [p_n - p'_n]\, M
&= (1-\epsilon)\, [p_n - p'_n]\, M_\epsilon\\
&= (1-\epsilon)^2\, [p_{n-1} - p'_{n-1}]\, M_\epsilon^2\\
&= (1-\epsilon)^{n+1}\, [p_0 - p'_0]\, M_\epsilon^{n+1}
\;\xrightarrow[n \uparrow \infty]{}\; 0
\end{aligned}
$$

13/34
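A numerical sketch of this contraction: build M = (1 − ε) M_ε + ε λ by hand, so the hypothesis holds by construction, and watch the total variation distance decay at rate (1 − ε)^n (made-up residual kernel M_ε; d = 5 instead of 193):

```python
import numpy as np

rng = np.random.default_rng(6)
d, eps = 5, 0.2
lam = np.full(d, 1.0 / d)                  # lambda = uniform, as on the slide
Me = rng.random((d, d))
Me /= Me.sum(axis=1, keepdims=True)        # residual kernel M_eps
M = (1 - eps) * Me + eps * lam             # then M(i, j) >= eps * lambda(j)

p = np.eye(d)[0]                           # two walkers started at states 1 and 2
q = np.eye(d)[1]
for n in range(1, 11):
    p, q = p @ M, q @ M
    tv = 0.5 * np.abs(p - q).sum()         # total variation distance
    print(n, tv, (1 - eps) ** n)           # tv stays below (1 - eps)^n
```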

Coupling the 2 migrations

◮ Coupling 2 r.v. ⇔ defining them using the "same" randomness.
◮ How to couple two individuals?

Why? An illustration:

$$
\begin{aligned}
P(X \in A) - P(Y \in A)
&= P(X = Y \in A,\, X = Y) + P(X \in A,\, X \ne Y)\\
&\quad - P(Y = X \in A,\, Y = X) - P(Y \in A,\, X \ne Y)\\
&= P(X \in A,\, X \ne Y) - P(Y \in A,\, X \ne Y)\\
&= \big[P(X \in A \mid X \ne Y) - P(Y \in A \mid X \ne Y)\big] \times P(X \ne Y)
\end{aligned}
$$

$$
\Longrightarrow\quad
\|\mathrm{Law}(X) - \mathrm{Law}(Y)\|_{\mathrm{tv}}
:= \sup_A \big|P(X \in A) - P(Y \in A)\big| \le P(X \ne Y)
$$

14/34

Coupling 2 migrations

$$(X_n, X'_n) = (i, i') \;\longrightarrow\; (X_{n+1}, X'_{n+1}) = (j, j')$$

recalling that

$$
M(i,j) = (1-\epsilon)\, M_\epsilon(i,j) + \epsilon\, \lambda(j)
\qquad
M(i',j') = (1-\epsilon)\, M_\epsilon(i',j') + \epsilon\, \lambda(j')
$$

KEY ε-coupling transition:

$$
M\big((i,i'), (j,j')\big)
:= (1-\epsilon)\, M_\epsilon(i,j)\, M_\epsilon(i',j') + \epsilon\, \lambda(j)\, 1_{j=j'}
$$

⇔ God flips an ε-Head coin to define their joint evolution!

Proof: integrating out the evolution of X'_n we have Σ_{j'} M((i,i'),(j,j')) = M(i,j), and vice versa.

$$
P(X_n \ne X'_n) \le P(\text{never Head in } n \text{ trials}) = (1-\epsilon)^n
$$

15/34
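A sketch of the ε-coupling as a simulation: flip the ε-Head coin, jump together to a common λ-distributed state on Head, move independently with M_ε on Tail, and estimate P(X_n ≠ X'_n). The kernel and parameters are made up; once the walkers meet they are kept together, consistent with the coupling.

```python
import numpy as np

rng = np.random.default_rng(7)
d, eps = 5, 0.2
lam = np.full(d, 1.0 / d)
Me = rng.random((d, d))
Me /= Me.sum(axis=1, keepdims=True)

def coupled_step(x, y):
    """One step of the epsilon-coupling: with prob. eps both walkers jump
    together to a common state ~ lambda; otherwise they move independently
    with the residual kernel M_eps."""
    if rng.random() < eps:                  # the eps-Head coin
        j = rng.choice(d, p=lam)
        return j, j
    return rng.choice(d, p=Me[x]), rng.choice(d, p=Me[y])

n, trials, not_met = 10, 20_000, 0
for _ in range(trials):
    x, y = 0, 1
    for _ in range(n):
        if x == y:                          # already coupled: they stay together
            break
        x, y = coupled_step(x, y)
    not_met += (x != y)
print(not_met / trials, (1 - eps) ** n)     # P(X_n != X'_n) <= (1 - eps)^n
```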

Birth and Death processes

population at time n after migration
  → (branching) → population at time n after birth and death
  → ((n+1)-th migration) → population at time (n+1) after migration

$$
\{\underbrace{I^1_{i,n},\, I^2_{i,n},\, I^3_{i,n},\, \dots,\, I^{m_n(i)}_{i,n}}_{\text{individuals}}\}
= \text{Country } c_i \text{ at time } n \text{ with pop. } m_n(i)
$$

Each I^k_{i,n} has N^k_{i,n} offspring I^{k,1}_{i,n}, I^{k,2}_{i,n}, ..., I^{k,N^k_{i,n}}_{i,n}, with branching rates depending on the attraction of country c_i at time n:

$$E\big(N^k_{i,n}\big) = G_n(i)$$

Simulation?

16/34

Birth and Death processes

N_n = Σ_{1≤i≤193} m_n(i) is random!

$$
m_{n+1}(j)
= \sum_{1 \le i \le 193}\; \sum_{1 \le k \le m_n(i)}\; \sum_{1 \le l \le N^k_{i,n}}
1_j\big(I^{k,l}_{i,n}\big)
= \#\{\text{children } l \text{ of migrants } k:\, i \to j\}
$$

Taking expectations E(·):

$$
q_{n+1}(j) = \sum_{1 \le i \le 193} q_n(i)\, G_n(i)\, M_{n+1}(i,j)
$$

$$
E(N_n) = \sum_j q_n(j)
= E(N_{n-1}) \times \sum_i \frac{q_{n-1}(i)}{\sum_j q_{n-1}(j)}\, G_{n-1}(i)
$$

World pop. size evolution? Super- and sub-critical!!

Worldometer check. Wolfram MathWorld.

17/34
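The expectation recursion q_{n+1}(j) = Σ_i q_n(i) G_n(i) M_{n+1}(i, j) is one line of numpy; a sketch with made-up populations and branching rates G_n (the range 0.8 to 1.2 is an arbitrary choice, mixing sub- and super-critical countries):

```python
import numpy as np

rng = np.random.default_rng(8)
d = 5
q = rng.random(d) * 1000                   # q_n: expected populations
G = rng.uniform(0.8, 1.2, size=d)          # G_n(i): mean offspring number in country i
M = rng.random((d, d))
M /= M.sum(axis=1, keepdims=True)          # migration kernel M_{n+1}

q_next = (q * G) @ M                       # q_{n+1}(j) = sum_i q_n(i) G_n(i) M_{n+1}(i, j)
print(q.sum(), q_next.sum())               # E(N_n): grows or shrinks with the G_n(i)
```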

The traps of reinforcement

◮ Reinforcement makes "positive" events more frequent.
◮ ⊂ learning processes, natural behavior, reward-based algorithms, ...
◮ All events are related to the past, to experience, ...

↓ A (real) story:

◮ A French tourist visits every night one of the 2100 hotel pubs, taverns and bars in Sydney.
◮ He is attracted to pubs visited in the past.

18/34

The traps of reinforcement

◮ What is the stochastic model?
◮ How to simulate it?
◮ Are there some mathematical formulae?

19/34

The traps of reinforcement - Stochastic model

Ingredients:

◮ Uniform r.v. U_n on {1, ..., d}, with d = 2100 pubs.
◮ A "coin" with Head probability ε = reinforcement rate.
◮ X_n = pub visited on the n-th evening.

⇓ Self-reinforced model: given the pubs X_0, X_1, ..., X_{n-1} visited up to time (n-1),

$$
X_n \sim \epsilon\; \frac{1}{n} \sum_{0 \le p < n} \delta_{X_p} + (1-\epsilon)\; \mathrm{Law}(U_n)
$$

Simulation & Analysis?

20/34
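A sketch of the self-reinforced model as a simulation: flip the ε-coin; on Head revisit a uniformly chosen past pub, on Tail draw a fresh uniform pub U_n. The slide leaves X_0 implicit; starting it uniform is an assumption here, as is reading the plotted α_ε(n) as the occupation frequency of the most visited pub.

```python
import numpy as np

rng = np.random.default_rng(9)
d, eps, n = 2100, 0.5, 10_000              # d pubs, reinforcement rate eps

visits = [int(rng.integers(d))]            # X_0 uniform (an assumption)
for _ in range(n):
    if rng.random() < eps:                 # Head: revisit a uniform past pub
        visits.append(visits[int(rng.integers(len(visits)))])
    else:                                  # Tail: a fresh uniform pub U_n
        visits.append(int(rng.integers(d)))

counts = np.bincount(visits, minlength=d)
print("most visited pub frequency:", counts.max() / len(visits))
```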

The traps of reinforcement - ε = 10%

[Figure: α_{1/10}(n) against the time axis]

21/34

The traps of reinforcement - ε = 50%

[Figure: α_{1/2}(n) against the time axis]

22/34

The traps of reinforcement - ε = 90%

[Figure: α_{9/10}(n) against the time axis, up to n ≈ 10^9]

Conclusion of the day, by Henry David Thoreau (1817-1862): "Never look back unless you are planning to go that way."

23/34

Casino roulette - Double or Nothing

◮ Ashley Revell (after "some" beers in a London pub): Double or Nothing in Vegas.
◮ Chances to win on the red color (18 red + 18 black = 36 numbers)?

⇓

US wheel: 18/(36 + 2) ≈ 0.474 < European wheel: 18/(36 + 1) ≈ 0.486 < 0.5

24/34

Casino roulette - Predictions?

◮ Starting with $1 ≤ x < $100: what is the chance to win $100 before ruin?
◮ How long does it take?

Wolfram MathWorld ⊕ Martingale betting systems = Project No 5:

◮ The St. Petersburg martingale
◮ The Grand Martingale
◮ The d'Alembert martingale
◮ The Whittaker martingale

25/34

Casino roulette - some predictions

[Figure: fortune probabilities against the initial bet]

26/34

Casino roulette - some predictions

[Figure: fortune probabilities against the initial bet]

27/34

Casino roulette - some predictions

[Figure: mean game duration against the initial bet]

28/34

Proofs ⊂ Martingale theory

A gambling model = random walk!

$$
Y_n = Y_0 + X_1 + \dots + X_n
\quad\Longleftrightarrow\quad
\Delta Y_n = Y_n - Y_{n-1} = X_n
$$

with some initial fortune Y_0 = y_0, and independent (⊥) bettor's profits per unit of time

$$
P(X_n = +1) = p \quad\text{and}\quad P(X_n = -1) = q = 1 - p \,\in\, ]0,1[
$$

Information at time n encoded in F_n = σ(X_1, ..., X_n):

$$
E(\Delta Y_n \mid F_{n-1}) = E(X_n) = p - q = \rho
\;\begin{cases}
= 0 & \text{when } p = 1/2 = q \;\Leftrightarrow\; \text{martingale}\\
> 0 & \text{when } p > q \;\Leftrightarrow\; \text{sub-martingale}\\
< 0 & \text{when } p < q \;\Leftrightarrow\; \text{super-martingale}
\end{cases}
$$

29/34
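A sketch checking the drift ρ = p − q by simulation, for the US roulette red bet p = 18/38 (a super-martingale, since p < q):

```python
import numpy as np

rng = np.random.default_rng(10)
p, n, N = 18 / 38, 200, 50_000             # US roulette red bet: p = 18/38
X = np.where(rng.random((N, n)) < p, 1, -1)  # profits per unit of time
Y = X.cumsum(axis=1)                       # N simulated fortune paths, Y_0 = 0
print(Y[:, -1].mean(), (p - (1 - p)) * n)  # E(Y_n) = rho * n: the house edge drift
```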

Some martingale properties

∀ r.v. M_n = M_0 + ΔM_1 + ... + ΔM_n with ΔM_n = M_n − M_{n−1}:

M_n is a martingale w.r.t. some filtration of the information F_n = σ(X_0, ..., X_n), with M_n = φ_n(X_0, ..., X_n), when

$$E(\Delta M_n \mid F_{n-1}) = 0$$

$$
\begin{aligned}
&\Longrightarrow\; E(M_n \mid F_{n-1}) = M_{n-1} + E(\Delta M_n \mid F_{n-1}) = M_{n-1}\\
&\Longrightarrow\; E(M_n \mid F_{n-2})
= E\Big(\underbrace{E(M_n \mid F_{n-1})}_{=\,M_{n-1}} \,\Big|\, F_{n-2}\Big) = M_{n-2}\\
&\Longrightarrow\; \dots\\
&\Longrightarrow\; E(M_n \mid F_p) = M_p \;\;(p < n)
\;\Longrightarrow\; E(M_n) = E(M_0)
\end{aligned}
$$

⇓ Theorem (Doob's optional stopping): E(M_T) = E(M_0) for regular "stopping times" T.

30/34

Fair game martingales (1/2)

◮ Martingale Y_n: ΔY_n = X_n with P(X_n = +1) = P(X_n = −1) = 1/2.

◮ Martingale Z_n = Y_n² − n:

$$
Y_n^2 - n = (Y_{n-1} + \Delta Y_n)^2 - n
= \big[Y_{n-1}^2 - (n-1)\big] + 2\, Y_{n-1}\, \Delta Y_n + (\Delta Y_n)^2 - 1
$$

$$
\Longrightarrow\; \Delta Z_n = 2\, Y_{n-1}\, \Delta Y_n + X_n^2 - 1
\quad\Longrightarrow\quad E(\Delta Z_n \mid F_{n-1}) = 0
$$

◮ Stopping time: T_{a,b} = first time Y_n hits the boundaries {a, b} (e.g. [a, b] = [0, 100] ∋ Y_0).

31/34

Fair game martingales (2/2)

◮ Martingale Y_n:

$$
y_0 = E(Y_{T_{a,b}})
= b\, P(Y_{T_{a,b}} = b) + a\, \big(1 - P(Y_{T_{a,b}} = b)\big)
\quad\Longrightarrow\quad
P(Y_{T_{a,b}} = b) = \frac{y_0 - a}{b - a}
$$

◮ Martingale Z_n = Y_n² − n:

$$
y_0^2 - 0 = E\big(Y_{T_{a,b}}^2\big) - E(T_{a,b})
= b^2\, P(Y_{T_{a,b}} = b) + a^2\, \big(1 - P(Y_{T_{a,b}} = b)\big) - E(T_{a,b})
$$

$$
\Longrightarrow\quad
E(T_{a,b}) = b^2\, \frac{y_0 - a}{b - a} + a^2\, \frac{b - y_0}{b - a} - y_0^2
= \dots = (b - y_0)(y_0 - a)
$$

32/34
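A sketch checking both fair-game formulas by simulating the walk until it exits [a, b] (a deliberately small trial count, since each path takes about 2100 steps on average here):

```python
import numpy as np

rng = np.random.default_rng(11)
a, b, y0, trials = 0, 100, 30, 2_000
wins, total_t = 0, 0
for _ in range(trials):
    y, t = y0, 0
    while a < y < b:                       # play until ruin (a) or target (b)
        y += 1 if rng.random() < 0.5 else -1
        t += 1
    wins += (y == b)
    total_t += t
print(wins / trials, (y0 - a) / (b - a))        # P(Y_T = b) = 0.3
print(total_t / trials, (b - y0) * (y0 - a))    # E(T) = 2100
```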

Unfair game martingales (1/2)

◮ Martingale

$$
\overline{Y}_n = Y_n - (p - q)\, n
\qquad
\Delta \overline{Y}_n = X_n - E(X_n) = X_n - (p - q)
$$

◮ Martingale Z_n = (q/p)^{Y_n}:

$$
(q/p)^{Y_n} = (q/p)^{Y_{n-1} + \Delta Y_n} = (q/p)^{Y_{n-1}}\, (q/p)^{X_n}
\;\Longrightarrow\;
\Delta Z_n = (q/p)^{Y_{n-1}} \big((q/p)^{X_n} - 1\big)
\;\Longrightarrow\; E(\Delta Z_n \mid F_{n-1}) = 0
$$

since E((q/p)^{X_n}) = p (q/p) + q (p/q) = q + p = 1.

33/34

Unfair game martingales (2/2)

◮ Martingale Z_n = (q/p)^{Y_n}:

$$
(q/p)^{y_0} = E\big((q/p)^{Y_{T_{a,b}}}\big)
= (q/p)^b\, P(Y_{T_{a,b}} = b) + (q/p)^a\, \big(1 - P(Y_{T_{a,b}} = b)\big)
$$

$$
\Longrightarrow\quad
P(Y_{T_{a,b}} = b) = \frac{(q/p)^{y_0} - (q/p)^a}{(q/p)^b - (q/p)^a}
$$

◮ Martingale \overline{Y}_n = Y_n − (p − q) n:

$$
y_0 - (p - q) \times 0
= E\big(Y_{T_{a,b}}\big) - (p - q)\, E(T_{a,b})
= b\, P(Y_{T_{a,b}} = b) + a\, \big(1 - P(Y_{T_{a,b}} = b)\big) - (p - q)\, E(T_{a,b})
$$

$$
\Longrightarrow\quad
(p - q)\, E(T_{a,b}) = (b - y_0)\, P(Y_{T_{a,b}} = b) + (a - y_0)\, \big(1 - P(Y_{T_{a,b}} = b)\big)
$$

34/34
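The two closed formulas are easy to evaluate; a sketch packaging them as functions and applying them to the US roulette red bet (p = 18/38, $1 stakes), which is what the prediction plots on pages 26 to 28 presumably display:

```python
def win_prob(y0, a, b, p):
    """P(Y_T = b): reach b before a, with profits +1 w.p. p and -1 w.p. q = 1 - p."""
    q = 1 - p
    if p == 0.5:
        return (y0 - a) / (b - a)          # fair-game formula
    r = q / p
    return (r ** y0 - r ** a) / (r ** b - r ** a)

def mean_duration(y0, a, b, p):
    """E(T_{a,b}), from the martingale Y_n - (p - q) n."""
    q = 1 - p
    if p == 0.5:
        return (b - y0) * (y0 - a)         # fair-game formula
    P = win_prob(y0, a, b, p)
    return ((b - y0) * P + (a - y0) * (1 - P)) / (p - q)

# chance of turning $50 into $100 before ruin at US roulette: roughly 0.005
print(win_prob(50, 0, 100, 18 / 38))
print(mean_duration(50, 0, 100, 18 / 38))  # and the expected number of spins
```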