SLIDE 1

Finite Horizon Dynamic Programming: Getting Value from Spending Symmetry

  • J. Michael Steele

University of Pennsylvania, The Wharton School, Department of Statistics

Stochastic Processes and Applications, Buenos Aires, August 8, 2014

  • J. M. Steele (UPenn, Wharton)

On-line Selection August 2015 1


SLIDE 4

Introduction

Some History and Motivation

Famous combinatorial problems with a long mathematical history concern sequences of n real numbers, or permutations of the integers 1, . . . , n:

◮ Erdős and Szekeres (1935): monotone subsequences
◮ Fan Chung (1980): unimodal subsequences
◮ Euler (cf. Stanley, 2010): alternating permutations

Probabilistic version (full information):

◮ Longest monotone subsequences: Hammersley (1972), Kingman (1973), Logan and Shepp (1977), Veršik and Kerov (1977), . . .
◮ Longest unimodal subsequences: Steele (1981)
◮ Longest alternating subsequences: Widom (2006), Pemantle (cf. Stanley, 2007), Stanley (2008), Houdré and Restrepo (2010)

Now ... study the sequential (on-line) version of these problems:

◮ Objective: maximize the expected length (number of selections) of monotone, unimodal, and alternating subsequences

[Figure: three plots of n = 100 uniform values on [0, 1] with the selected subsequences highlighted]


SLIDE 7

Introduction

Full-information vs. on-line — Increasing

n = 100, I_n = 15, I^o_n(π*_n) = 14

[Figure: n = 100 uniform values on [0, 1] with the chosen increasing subsequence highlighted]
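The full-information value I_n is a benchmark one can compute for any realized sequence, e.g. by patience sorting. A minimal sketch in Python (sample size and trial count are arbitrary illustrative choices):

```python
import bisect
import random

def lis_length(xs):
    """Length of the longest increasing subsequence, via patience sorting."""
    tails = []  # tails[k] = smallest tail value among increasing subsequences of length k + 1
    for x in xs:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence found so far
        else:
            tails[i] = x      # x improves the best tail at length i + 1
    return len(tails)

random.seed(1)
n, trials = 100, 200
avg = sum(lis_length([random.random() for _ in range(n)]) for _ in range(trials)) / trials
print(f"average full-information LIS length for n = {n}: {avg:.1f} (first-order rate 2*sqrt(n) = 20)")
```

For moderate n the observed average sits somewhat below the first-order rate 2√n, consistent with the later discussion of lower-order corrections.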


SLIDE 10

Introduction

Full-information vs. on-line — Unimodal

n = 100, U_n = 22, U^o_n(π*_n) = 21

[Figure: n = 100 uniform values on [0, 1] with the chosen unimodal subsequence highlighted]


SLIDE 19

Introduction

Summary View of Means in Some On-Line Selection Problems

How Much Better Does a "Prophet" Do Asymptotically?

              Full Information    Real-Time Info. Only    Realized Bonus
Increasing    2√n                 √(2n)                   29%
Unimodal      2√(2n)              2√n                     29%
Alternating   2n/3                (2 − √2)n               12%

Question: Can one get more detailed information?

  • More precise asymptotics of the means?
  • Any second-order information, i.e. what about the variances?
  • Is there hope for a CLT or other distributional result?

There is a CLT for the On-Line Alternating Subsequence Problem (briefly noted in the next frame). There has been much further work on the On-Line Selection of a Monotone Increasing Subsequence, the original motivating problem; this will get most of our attention.
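The bonus column follows from the leading-order rates by elementary arithmetic. A quick check in Python, reading "realized bonus" as the fraction of the prophet's rate given up by real-time play (an interpretation that reproduces the slide's percentages):

```python
import math

n = 10 ** 6  # any large n: the percentages are free of n
rates = [
    # problem, full-information rate, real-time (on-line) rate
    ("Increasing",  2 * math.sqrt(n),      math.sqrt(2 * n)),
    ("Unimodal",    2 * math.sqrt(2 * n),  2 * math.sqrt(n)),
    ("Alternating", 2 * n / 3,             (2 - math.sqrt(2)) * n),
]
for name, full, online in rates:
    bonus = 100 * (1 - online / full)  # fraction of the prophet's rate lost on-line
    print(f"{name:11s} realized bonus = {bonus:.0f}%")
```

Note that in both monotone cases the ratio is 1/√2, which is why the same 29% appears twice.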


SLIDE 26

Introduction: CLT for Alternating

Sequentially Selected Alternating Series — A CLT

Theorem (Arlotto & Steele, AAP 2014)

There is a constant σ > 0 such that

    (A^o_n(π*_n) − n(2 − √2)) / (σ √n) ⇒ N(0, 1).

The Mysterious σ? Its existence is proved, but the value is not yet known.

A Candidate σ? Yes, but not yet in the bag.

Path to Proof? A^o_n(π*_n) can be written as a (reverse, inhomogeneous) Markov additive functional.

Appropriate Tools? Dobrushin (long ago) and Sethuraman and Varadhan (more recently) have an elegant approach to the CLT for inhomogeneous Markov additive processes.

Conditions to Check? These are surprisingly concrete L² calculations (variance bounds).

Source of Juice? Very detailed analytical understanding of the acceptance threshold functions.


SLIDE 35

On-Line Selection of Increasing Subsequences

On-Line LIS Problem: First Some More on the Means

Theorem (On-Line Monotone)

There is a policy π* ∈ Π(n) such that

    E[I^o_n(π*)] = sup_{π ∈ Π(n)} E[I^o_n(π)],

and for such an optimal policy and all n ≥ 1 one has

    E[I^o_n(π*)] ~ (2n)^{1/2}  as n → ∞,

or, more precisely,

    (2n)^{1/2} − O(n^{1/4}) < E[I^o_n(π*)] < (2n)^{1/2}.

  • Asymptotic behavior: Samuels and Steele (1981)
  • Upper bound: Bruss and Robertson (1991), Gnedin (1999)
  • Lower bound: Rhee and Talagrand (1991)
  • Bigger Steps: How about variance asymptotics or even a CLT?
  • Puzzle: A CLT is far from a sure thing. For the off-line problem one does NOT have a CLT — one has the famous Tracy-Widom law.
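One can see the (2n)^{1/2} scale emerge from even a naive on-line rule with a fixed acceptance window of width √(2/n). The sketch below is an illustrative heuristic, not the optimal policy π* of the theorem; by the theorem's upper bound, its average number of selections must sit below (2n)^{1/2}:

```python
import math
import random

def online_increasing(xs, window):
    """Greedy on-line selection: accept x if it lies in (last, last + window]."""
    last, count = 0.0, 0
    for x in xs:
        if last < x <= last + window:
            last, count = x, count + 1
    return count

random.seed(2)
n, trials = 2000, 300
window = math.sqrt(2.0 / n)  # heuristic width: balances arrivals in the window vs. budget in [0, 1]
avg = sum(online_increasing([random.random() for _ in range(n)], window)
          for _ in range(trials)) / trials
print(f"fixed-window policy: {avg:.1f} selections on average; upper bound (2n)^(1/2) = {math.sqrt(2 * n):.1f}")
```

The width √(2/n) equates the expected number of arrivals landing in the window, n·√(2/n), with the number of mean-size jumps that fit in [0, 1], both of order (2n)^{1/2}.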


SLIDE 40

On-Line Selection of Increasing Subsequences: Poissonization and a CLT

Poissonization: A Homogenizing Trick with Benefits

If one takes a sample size N(t) that is Poisson with mean t, there are several benefits: (a) optimal policies are stationary (no horizon effects), and (b) one gets the machinery of infinitesimal generators, the Dynkin martingale, etc. There is a long history of such applications, perhaps starting with Lucien Le Cam.

Theorem (Bruss & Delbaen, 2001 and 2004)

For the on-line Poisson LIS problem, one has

    (2t)^{1/2} − O(log t) < E[L^o_{N(t)}] < (2t)^{1/2},

    (1/3)(2t)^{1/2} − O(1) < Var[L^o_{N(t)}] < (1/3)(2t)^{1/2} + O(log t),

and

    3^{1/2} (L^o_{N(t)} − (2t)^{1/2}) / (2t)^{1/4} ⇒ N(0, 1).


SLIDE 47

On-Line Selection of Increasing Subsequences: Poissonization and a CLT

Finite Horizon On-Line LIS: De-Poissonization or What?

Can one prove the FINITE horizon analog of the Bruss-Delbaen CLT for the On-Line Poisson LIS? Is this a routine de-Poissonization, or is there something special?

De-Poissonization in General — and for the LIS CLT in particular:

◮ De-Poissonization is a Tauberian process, i.e. one moves from "average behavior" to "individual behavior".
◮ There are situations where this process is now a well-known, relatively easy part of Tauberian theory.
◮ De-Poissonization of a Decision Problem is a whole new kettle of fish.
◮ Only "one of the five steps" of the proof of the CLT for the finite horizon LIS uses what one could call classical de-Poissonization.


SLIDE 55

On-Line Selection of Increasing Subsequences: Poissonization and a CLT

Out of Five: Only One for Free

The CLT of Bruss and Delbaen has five parts:

  1. Mean lower bound: (2t)^{1/2} − O(log t)
  2. Mean upper bound: (2t)^{1/2}
  3. Variance lower bound: (1/3)(2t)^{1/2} − O(1)
  4. Variance upper bound: (1/3)(2t)^{1/2} + O(log t)
  5. The CLT itself

Only one of these steps has what one can properly call a de-Poissonization. De-Poissonization gives us the mean lower bound for the finite horizon problem — and leaves us four steps to go.


SLIDE 61

On-Line Selection of Increasing Subsequences: Poissonization and a CLT

De-Poissonization of the Mean Lower Bound: One Proof

In the Poisson model, one knows the Poisson parameter t, and one makes optimal selections from a sequence of random size N(t). If, ex post, we are told that N(t) = j, our expected reward is E[L^o_{N(t)} | N(t) = j].

The Poisson strategy is a suboptimal strategy for a problem where one knows ex ante that the sample has size j, so we have

    E[L^o_{N(t)} | N(t) = j] ≤ E[L^o_j].

If we now take total expectations, we have

    E[L^o_{N(t)}] ≤ Σ_{j=0}^∞ e^{−t} (t^j / j!) E[L^o_j].

We may now seem stuck: no conventional Tauberian theory comes to our aid. But we have another property: the map φ(j) = E[L^o_j] is concave. Taking t = n, Jensen's inequality then gives

    E[L^o_{N(n)}] ≤ Σ_{j=0}^∞ e^{−n} (n^j / j!) E[L^o_j] ≤ E[L^o_n].

Thus we have lossless transference of any mean lower bound from the Poisson model to the finite horizon model.
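The Jensen step, E[φ(N(n))] ≤ φ(n) for concave φ, is easy to check numerically. Here φ(j) = √(2j) is a concave stand-in chosen purely for illustration; the true φ(j) = E[L^o_j] is only claimed to share this concave shape:

```python
import math

def poisson_pmf(j, mean):
    """P(N = j) for N ~ Poisson(mean), computed in log space to avoid underflow."""
    return math.exp(j * math.log(mean) - mean - math.lgamma(j + 1))

def poisson_average(phi, mean):
    """E[phi(N)] for N ~ Poisson(mean), with the sum truncated far beyond the bulk."""
    hi = int(mean + 10 * math.sqrt(mean) + 25)
    return sum(poisson_pmf(j, mean) * phi(j) for j in range(hi))

phi = lambda j: math.sqrt(2 * j)  # concave stand-in for j -> E[L^o_j]
for n in (10, 100, 1000):
    print(f"n = {n:4d}   E[phi(N(n))] = {poisson_average(phi, n):9.4f}"
          f"   <=   phi(n) = {math.sqrt(2 * n):9.4f}")
```

The gap between the two sides shrinks as n grows, which is exactly the "lossless" character of the transference at the (2n)^{1/2} scale.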

slide-66
SLIDE 66

On-Line Selection of Increasing Subsequences Poissonization and a CLT

The Shape of E[Lo

n] and the Shape of Value Functions

The transference of the lower bounds is exceptional — but suggestive. Question: where does one get concavity of φ(j) = E[Lo

j ]? It’s no real help that

E[Lo

n] ∼ (2n)1/2.

Ultimately we get concavity of of j → φ(j) from the Bellman equation: vk(s) = F(s)vk−1(s) + ∞

s

max{vk−1(s), 1 + vk−1(x)}f (x) dx What other “shape” properties can one extract from the Bellman equation? If we take the uniform distribution on [0, 1], the Bellman equation and induction can be used to prove the concavity of s → vk(s) for all k.

  • J. M. Steele (UPenn, Wharton)

On-line Selection August 2015 11

slide-67
SLIDE 67

On-Line Selection of Increasing Subsequences Poissonization and a CLT

The Shape of E[Lo

n] and the Shape of Value Functions

The transference of the lower bounds is exceptional — but suggestive. Question: where does one get concavity of φ(j) = E[Lo

j ]? It’s no real help that

E[Lo

n] ∼ (2n)1/2.

Ultimately we get concavity of of j → φ(j) from the Bellman equation: vk(s) = F(s)vk−1(s) + ∞

s

max{vk−1(s), 1 + vk−1(x)}f (x) dx What other “shape” properties can one extract from the Bellman equation? If we take the uniform distribution on [0, 1], the Bellman equation and induction can be used to prove the concavity of s → vk(s) for all k. This gives a path to the proof of the lower bound of Var[Lo

n]. It is not easy but it is

direct; no passage through the bound of Bruss and Delbean.

  • J. M. Steele (UPenn, Wharton)

On-line Selection August 2015 11

slide-68
SLIDE 68

The Shape of E[L^o_n] and the Shape of the Value Functions

The transference of the lower bounds is exceptional, but suggestive. Question: where does one get the concavity of φ(j) = E[L^o_j]? It is no real help that E[L^o_n] ∼ (2n)^{1/2}.

Ultimately we get the concavity of j → φ(j) from the Bellman equation:

v_k(s) = F(s) v_{k−1}(s) + ∫_s^∞ max{v_{k−1}(s), 1 + v_{k−1}(x)} f(x) dx.

What other "shape" properties can one extract from the Bellman equation? If we take the uniform distribution on [0, 1], the Bellman equation and induction can be used to prove the concavity of s → v_k(s) for all k.

This gives a path to the proof of the lower bound on Var[L^o_n]. It is not easy, but it is direct; there is no passage through the bound of Bruss and Delbaen.

How about the upper bound for Var[L^o_n]? Alessandro and I were stuck here for a long time.
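For the uniform case the recursion is concrete enough to iterate numerically. A minimal sketch, not from the talk (the grid size, trapezoidal integration, and tolerances are illustrative choices): with f ≡ 1 and F(s) = s on [0, 1], iterate v_k(s) = s v_{k−1}(s) + ∫_s^1 max{v_{k−1}(s), 1 + v_{k−1}(x)} dx on a grid and check discrete concavity in s at every step.

```python
import numpy as np

m = 400                          # grid resolution (illustrative)
s = np.linspace(0.0, 1.0, m + 1)
h = 1.0 / m
v = np.zeros(m + 1)              # v_0(s) = 0: no observations remain

for k in range(1, 21):
    v_new = np.empty_like(v)
    for i in range(m + 1):
        # integrand max{v_{k-1}(s_i), 1 + v_{k-1}(x)} on [s_i, 1]
        g = np.maximum(v[i], 1.0 + v[i:])
        # trapezoid rule by hand (portable across NumPy versions)
        integral = h * (g.sum() - 0.5 * (g[0] + g[-1]))
        v_new[i] = s[i] * v[i] + integral
    # discrete concavity in s, on a coarse subgrid to dampen
    # discretization noise; tolerance accounts for grid error
    d2 = np.diff(v_new[::20], 2)
    assert d2.max() <= 1e-4, f"concavity fails at k={k}"
    v = v_new

print(v[0])  # v_k(0) = E[L^o_k] for k = 20 observations
```

As a sanity check, the first iterate reproduces v_1(s) = 1 − s, and v_k(0) grows roughly like (2k)^{1/2}, consistent with E[L^o_n] ∼ (2n)^{1/2}.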

slide-74
SLIDE 74

Breaking Symmetry

A Simple but Critical Observation: The distribution of L^o_n does not depend on f, but the value function s → v_k(s) does depend on f. This means that we spend symmetry when we make a specific choice of f.

If one takes the exponential distribution, then with a sustained analysis the Bellman equation can be used to show that s → v_k(s) is convex. This came as a surprise to us, but we knew why we wanted such a result.

Arguments like the one given for the lower bound on Var[L^o_n] could now be used to get an upper bound, again without passage through the bounds of Bruss and Delbaen.

The floodgate is opened, and more analysis of the same flavor (but with plenty of details) leads us through the martingale CLT to a CLT for the Finite Horizon Selection Problem for the LIS:

3^{1/2} (L^o_n − (2n)^{1/2}) / (2n)^{1/4} ⇒ N(0, 1).
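The mean centering (2n)^{1/2} in the CLT can be illustrated by simulation. This sketch is not from the talk: it runs a hypothetical one-step-lookahead policy built from the rough value approximation v(k, s) ≈ (2k(1 − s))^{1/2} for uniform observations, a heuristic stand-in for the true optimal policy.

```python
import math
import random

def online_lis(n, rng):
    """Select an increasing subsequence on-line from n i.i.d.
    Uniform(0,1) values, accepting x when one-step lookahead under
    the approximation v(k, s) ~ sqrt(2 k (1 - s)) says to accept."""
    s, count = 0.0, 0
    for k in range(n, 0, -1):       # k = observations still to come
        x = rng.random()
        if x > s:                   # only values above s are usable
            keep = 1.0 + math.sqrt(2 * (k - 1) * (1.0 - x))
            skip = math.sqrt(2 * (k - 1) * (1.0 - s))
            if keep >= skip:
                s, count = x, count + 1
    return count

rng = random.Random(1)
n, trials = 500, 400
counts = [online_lis(n, rng) for _ in range(trials)]
mean = sum(counts) / trials
print(mean, math.sqrt(2 * n))  # mean is close to, but below, sqrt(2n)
```

Since the policy is suboptimal, its mean sits below E[L^o_n], which in turn is at most (2n)^{1/2}, so the simulated mean approaches the CLT centering from below.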

slide-83
SLIDE 83

On-Line Selection of Increasing Subsequences Final Slide

Quick Glance Back: What Can You Take Away?

  • Problems of sequential selection: rich in history, connections, problems, and techniques.
  • Poissonization is very powerful!
  • De-Poissonization may be easy, or almost impossible.
  • Given any "invariance" (or symmetry), ask: "Does this break someplace?" and "What do we buy if we spend our symmetry?"
  • Here we bought a lot, but we always needed our workhorses: the Bellman equation, shape, and submodularity.
  • Enough for today? ... Almost certainly, but with some left for tomorrow.

¡Gracias por su atención!

slide-84
SLIDE 84

References

References I

  • F. Thomas Bruss and James B. Robertson. "Wald's lemma" for sums of order statistics of i.i.d. random variables. Adv. in Appl. Probab., 23(3):612–623, 1991.
  • F. R. K. Chung. On unimodal subsequences. J. Combin. Theory Ser. A, 29(3):267–279, 1980.
  • P. Erdős and G. Szekeres. A combinatorial problem in geometry. Compositio Math., 2:463–470, 1935.
  • Alexander V. Gnedin. Sequential selection of an increasing subsequence from a sample of random size. J. Appl. Probab., 36(4):1074–1085, 1999.
  • J. M. Hammersley. A few seedlings of research. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. I: Theory of Statistics, pages 345–394, Berkeley, CA, 1972. Univ. California Press.
  • C. Houdré and R. Restrepo. A probabilistic approach to the asymptotics of the length of the longest alternating subsequence. Electron. J. Combin., 17(1):Research Paper 168, 1–19, 2010.
  • J. F. C. Kingman. Subadditive ergodic theory. Ann. Probability, 1:883–909, 1973. With discussion by D. L. Burkholder, Daryl Daley, H. Kesten, P. Ney, Frank Spitzer and J. M. Hammersley, and a reply by the author.

slide-85
SLIDE 85

References

References II

  • B. F. Logan and L. A. Shepp. A variational problem for random Young tableaux. Advances in Math., 26(2):206–222, 1977.
  • WanSoo Rhee and Michel Talagrand. A note on the selection of random variables under a sum constraint. J. Appl. Probab., 28(4):919–923, 1991.
  • Stephen M. Samuels and J. Michael Steele. Optimal sequential selection of a monotone sequence from a random sample. Ann. Probab., 9(6):937–947, 1981.
  • Richard P. Stanley. Increasing and decreasing subsequences and their variants. In International Congress of Mathematicians. Vol. I, pages 545–579. Eur. Math. Soc., Zürich, 2007.
  • Richard P. Stanley. Longest alternating subsequences of permutations. Michigan Math. J., 57:675–687, 2008. Special volume in honor of Melvin Hochster.
  • Richard P. Stanley. A survey of alternating permutations. Contemp. Math., 531:165–196, 2010.
  • J. Michael Steele. Long unimodal subsequences: a problem of F. R. K. Chung. Discrete Math., 33(2):223–225, 1981.
  • A. M. Veršik and S. V. Kerov. Asymptotic behavior of the Plancherel measure of the symmetric group and the limit form of Young tableaux. Dokl. Akad. Nauk SSSR, 233(6):1024–1027, 1977.

slide-86
SLIDE 86

References

References III

  • Harold Widom. On the limiting distribution for the length of the longest alternating sequence in a random permutation. Electron. J. Combin., 13(1):Research Paper 25, 1–7, 2006.