

SLIDE 1

Limit laws of anticipated rejection and related algorithms

Axel Bacher Coauthors: Olivier Bodini, Alice Jacquot, Andrea Sportiello

Université Paris Nord

October 9th, 2017

SLIDE 2

Outline

1. Anticipated rejection
2. “Recovering” algorithms
3. Density of the limit laws
4. Perspectives

SLIDE 3

Florentine algorithm

[Barcucci, Pinzani, Sprugnoli 1994]

Complexity: O(√n) tries, cost O(√n) per try ⇒ O(n). Limit law analysis [Louchard 1999]. Motivation: directed animal random generation.
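The anticipated-rejection scheme behind these complexity counts can be sketched as follows. This is a minimal sketch with ±1 steps and a nonnegativity constraint (the actual Florentine algorithm uses the step distribution tied to directed animals): draw steps one by one and reject as soon as the constraint fails, typically long before size n is reached.

```python
import random

def florentine_try(n):
    """One try: draw n random ±1 steps, aborting (anticipated rejection)
    as soon as the walk dips below 0."""
    height, walk = 0, []
    for _ in range(n):
        step = random.choice((1, -1))
        height += step
        if height < 0:
            return None            # reject immediately, do not finish the try
        walk.append(step)
    return walk

def florentine(n):
    """Repeat tries until one survives; the survivor is uniform among
    nonnegative walks of length n.  A try survives with probability
    ~ c/sqrt(n), and failed tries die early, giving O(n) total cost."""
    while True:
        walk = florentine_try(n)
        if walk is not None:
            return walk
```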


SLIDE 30

Florentine algorithms in the quarter-plane

Number of tries O(n^{3/4}), cost of a try O(n^{1/4}): complexity O(n). Number of tries O(n^{2/3}), cost of a try O(n^{1/3}): complexity O(n). Efficient random generation of a wider set of quarter-plane walks

[Lumbroso, Mishna, Ponty 2016].

Other families of walks: walks in a cone, d dimensions, etc.

SLIDE 31

Binary trees

[Diagram: grafting, then repointing]

Random binary tree [B., Bodini, Jacquot 2013]

Start from a pointed leaf and repeat n times:
- graft a new leaf to the left or right (flip a coin) and point it;
- flip a coin; if tails, repoint;
- if repointing failed, delete the tree and start over.

At each iteration, the tree is uniformly distributed. Complexity in random bits: O(√n) × O(√n) = O(n). This is a variant of Rémy’s algorithm, which has complexity O(n log n).
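For reference, the classical Rémy algorithm that this variant improves on can be sketched as follows (uniform binary trees by node insertion; the O(n log n) bit cost comes from the uniform node choice at each step). This is a sketch, not the grafting/repointing variant itself.

```python
import random

def remy(n):
    """Rémy's algorithm: grow a uniform binary tree with n internal nodes.
    Nodes are array indices; left[v]/right[v] are children (None for a
    leaf).  At each step, a uniform existing node v (2k-1 choices at step
    k) is pushed down under a fresh internal node u, with a fresh leaf
    grafted on a uniformly chosen side."""
    left, right, parent = [None], [None], [None]   # node 0: the initial leaf
    def new_node():
        left.append(None); right.append(None); parent.append(None)
        return len(left) - 1
    for _ in range(n):
        v = random.randrange(len(left))            # uniform existing node
        u, leaf = new_node(), new_node()
        p = parent[v]
        if p is not None:                          # splice u where v was
            if left[p] == v: left[p] = u
            else:            right[p] = u
        parent[u], parent[v], parent[leaf] = p, u, u
        if random.random() < 0.5:
            left[u], right[u] = v, leaf
        else:
            left[u], right[u] = leaf, v
    return left, right
```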

SLIDE 35

Limit law of anticipated rejection

Let (X_i)_{i≥0} be i.i.d. positive random variables such that, for x > 0:
P[X ≥ xt] / P[X ≥ t] → x^{−α} as t → ∞, with 0 < α < 1.
Let, for t > 0: i(t) = min{ i | X_i ≥ t } and S(t) = X_0 + ··· + X_{i(t)−1}.

Theorem [B., Sportiello 2015]

The random variable S(t)/t tends in distribution to D_α, with:
E[e^{z D_α}] = ( 1 − Σ_{n≥1} α/(n−α) · z^n/n! )^{−1}.

If α ≥ 1, the scaling factor is superlinear and the limit law exponential. The law D_α is the Darling–Mandelbrot law. [Darling 1952, Lew 1994]
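The theorem can be illustrated numerically. The sketch below uses Pareto variables with P[X ≥ x] = x^{−α} for x ≥ 1, which satisfy the regular-variation hypothesis exactly; S(t)/t then approximates a D_α sample (for α = 1/2, E[D_α] = α/(1−α) = 1).

```python
import random

def sample_S_over_t(alpha, t):
    """Accumulate i.i.d. Pareto(alpha) variables until one reaches the
    threshold t; return S(t)/t, the rescaled total size of the failed
    tries (the successful variable X_{i(t)} itself is excluded)."""
    s = 0.0
    while True:
        u = 1.0 - random.random()          # uniform in (0, 1]
        x = u ** (-1.0 / alpha)            # Pareto: P[X >= x] = x**(-alpha)
        if x >= t:
            return s / t
        s += x
```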

SLIDE 39

Second round of rejection

A second round of rejection may occur when the size n is reached, with probability tending to p. If p = β/(1 + β), the complexity has limit law D_{α,β}, with:
E[e^{z D_{α,β}}] = ( 1 − Σ_{n≥1} (α + βn)/(n − α) · z^n/n! )^{−1}.

SLIDE 41

Recovering algorithm for binary trees

[Diagram: grafting, then repointing]

Random binary tree [B., Bodini, Jacquot 2013]

Start from a pointed leaf and repeat n times:
- graft a new leaf to the left or right (flip a coin) and point it;
- flip a coin; if tails, repoint;
- if repointing failed, pick a new point uniformly at random.

Average cost in random bits: 2n + O(log² n) (entropic algorithm). Does not work on unary-binary trees (uniformity is lost).

SLIDE 44

Recovering algorithm for Dyck prefixes

[Diagram: unfolding]

Random Dyck prefix [B. 2016]

Start from the empty path and repeat n times:
- add a random step to P;
- if P is not a Dyck prefix, pick a point uniformly at random and unfold.

At each iteration, the path is uniformly distributed. Cost: n + O(log² n) random bits, O(n) time. Possible extension to m-Dyck paths (+1/−m), entropic if we have an entropic source of Bernoulli(1/(1+m)). Does not work on Motzkin or Schröder paths.
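The loop can be sketched as follows. A loud assumption: the precise “unfolding” move is the one of [B. 2016]; here it is implemented as reverse-and-negate of the suffix after a uniformly chosen point, which provably restores the Dyck-prefix property (all intermediate heights stay nonnegative). The uniformity claim rests on the actual bijection of the paper, not on this sketch.

```python
import random

def random_dyck_prefix(n):
    """Grow a Dyck prefix step by step; on failure, recover by unfolding
    instead of restarting.  Unfolding (sketched, see lead-in): pick a
    uniform position j and reverse-and-negate the suffix after j."""
    steps, height = [], 0
    for _ in range(n):
        s = random.choice((1, -1))
        steps.append(s)
        height += s
        if height < 0:                              # not a Dyck prefix any more
            j = random.randrange(len(steps))
            steps[j:] = [-x for x in reversed(steps[j:])]
            height = sum(steps)                     # new final height: 2*h(j)+1
    return steps
```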

SLIDE 49

Limit laws

Let B_n and M_n be the cost in random bits and memory accesses of the “recoveries” in the Dyck prefix algorithm.

Theorem

The variable B_n tends to a Gaussian law, with:
E[B_n] ∼ log² n / (4 log 2),  V[B_n] ∼ log³ n / (6 log² 2).
The variable M_n/n tends to L_{1/2}, where the law L_α is defined by:
L_α = Σ_{x ∈ Poisson(0,1], intensity α dx/x} Unif[0, x],
E[e^{z L_α}] = exp( Σ_{n≥1} α/(n(n+1)) · z^n/n! ).
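The Poisson construction of L_α gives a direct sampler. A sketch: the intensity α dx/x on (0,1] becomes, under the change of variables x = e^{−t}, a homogeneous rate-α Poisson process on [0, ∞), so its points can be generated by cumulating exponential inter-arrival times, truncating once the remaining terms are negligible. (For α = 1/2 the mean is E[L_α] = α/2 = 1/4.)

```python
import math
import random

def sample_L(alpha):
    """Sample L_alpha: points x_k = exp(-T_k) of a Poisson process on
    (0,1] with intensity alpha*dx/x (T_k = arrivals of a rate-alpha
    Poisson process), each contributing an independent Unif[0, x_k]."""
    total, t = 0.0, 0.0
    while True:
        t += random.expovariate(alpha)   # next arrival of the rate-alpha process
        x = math.exp(-t)
        if x < 1e-12:                    # remaining points contribute negligibly
            return total
        total += random.uniform(0.0, x)
```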
SLIDE 50

Density of the law D_α

E[e^{z D_α}] = ( 1 − Σ_{n≥1} α/(n−α) · z^n/n! )^{−1}

[Plot: density of D_{1/2}]

The Laplace transform of D_α takes the form:
E[e^{−z D_α}] = A(z) / (1 − B(z)),
A(z) = z^{−α} / Γ(1 − α),  B(z) = ∫_z^∞ e^{−y} y^{−1−α} / Γ(−α) dy.

Its density is therefore:
f(x) = Σ_{k≥0} a ∗ b^{∗k}(x),
a(x) = sin(απ)/π · x^{α−1},  b(x) = −sin(απ)/π · (x−1)^α/x · 1_{x>1},
and satisfies: x f′(x) + (1 − α) f(x) = −α (f ∗ f)(x − 1).

SLIDE 54

Density of the law L_{1/2}

E[e^{z L_{1/2}}] = exp( Σ_{n≥1} 1/(2n(n+1)) · z^n/n! )

[Plot: density of L_{1/2}]

The Laplace transform of L_{1/2} takes the form:
E[e^{−z L_{1/2}}] = A(z) exp(B(z)),
A(z) = e^{(1−γ)/2} e^{−1/(2z)} z^{−1/2},  B(z) = ∫_z^∞ e^{−y} / (2y²) dy.

Its density f(x) is therefore:
f(x) = Σ_{k≥0} a ∗ b^{∗k}(x) / k!,
a(x) = e^{(1−γ)/2} cos(√(2x)) / √(πx),  b(x) = (x − 1)/(2x) · 1_{x>1},
and satisfies: 2x f″(x) + 3f′(x) + f(x) = f(x − 1).

SLIDE 58

Distribution tails

[Plots: densities of D_{1/4}, D_{1/2}, D_{3/4}, L_{1/2}, L_1]

The tails of D_α and L_α are of the form [Lew 1994]:
P[D_α ≥ x] = (e^{−a_0}/α) e^{−a_0 x} + O(e^{−a_1 x}),
P[L_α ≥ x] = ( αe / (x log² x) )^x · e^{o(x)}.

SLIDE 59

Perspectives

[Plots: densities of D_{1/2} and L_{1/2}]

Can we make the “recovery” idea work with other walks or trees? (Motzkin, Schröder, +a/−b, etc.) Are there other interesting distributions with similar properties? (ex: Dickman function in number theory)