SLIDE 1

Roots of Polynomials Under Repeated Differentiation

Stefan Steinerberger

UCLA/Caltech, October 2020

SLIDE 6

Outline of the Talk

  • 1. Roots of Polynomials
  • 2. A Nonlinear PDE
  • 3. Hermite Polynomials out of thin air (with Jeremy Hoskins)
  • 4. Free Probability, Random Matrices, ...
  • 5. A Nonlinear PDE in the Complex Plane (with Sean O’Rourke)
SLIDE 10

p_n will be a polynomial of degree n having n distinct roots.

The Gauss-Lucas theorem (1830s). The roots of p_n′ are contained in the convex hull of the roots of p_n.

  • Proof. The ‘electrostatic interpretation’:

$$\frac{p_n'(z)}{p_n(z)} = \sum_{k=1}^{n} \frac{1}{z - z_k}.$$

If you are outside the convex hull, the charges ‘push you away’.

SLIDE 12

$$\frac{p_n'(z)}{p_n(z)} = \sum_{k=1}^{n} \frac{1}{z - z_k}.$$

SLIDE 16

Suppose µ is a probability measure on C and suppose

$$p_n(z) = \prod_{k=1}^{n} (z - z_k),$$

where z_1, . . . , z_n ∼ µ are i.i.d. random variables distributed according to µ. What is the distribution of the critical points of p_n (the roots of p_n′)?

Conjecture (Rivin-Pemantle), Theorem (Kabluchko, 2015)

The critical points are also distributed according to µ.

Note: The critical points are w_1, . . . , w_{n−1} ∈ C. The statement really says that

$$\frac{1}{n-1} \sum_{k=1}^{n-1} \delta_{w_k} \rightharpoonup \mu.$$
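This is easy to try numerically. Below is a minimal Python sketch (an editor's illustration, not from the talk): it samples i.i.d. complex Gaussian roots, computes the critical points with numpy, checks that each one is a zero of the electrostatic field from the Gauss-Lucas proof, and compares simple statistics of the two point sets.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    z = rng.normal(size=n) + 1j * rng.normal(size=n)      # i.i.d. roots z_k ~ mu
    p = np.polynomial.Polynomial.fromroots(z)
    w = p.deriv().roots()                                 # the n - 1 critical points

    # each critical point is a zero of the 'electrostatic field' sum_k 1/(w - z_k)
    field = np.array([np.sum(1.0 / (wk - z)) for wk in w])
    print(np.abs(field).max())                            # ~ 0 up to round-off

    # Rivin-Pemantle/Kabluchko: the two empirical distributions should be close
    print(z.mean(), w.mean())                             # means agree (exactly, by Vieta)
    print(np.abs(z).mean(), np.abs(w).mean())             # radial statistics are close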

SLIDE 20

Here’s a heuristic why this is not too surprising. Suppose µ is absolutely continuous and compactly supported. Then the roots look a bit like this:

(figure: the roots of p_n, with typical spacing ∼ n^{−1/2})

SLIDE 24

Every root of the derivative satisfies p_n′(z) = 0, which means

$$\frac{1}{z - z_\ell} = - \sum_{\substack{k=1 \\ k \neq \ell}}^{n} \frac{1}{z - z_k}.$$

The right-hand side is typically of size ∼ n. So the root of the derivative has to be at distance ∼ n^{−1} from one of the existing roots, and the existing roots are ∼ n^{−1/2} separated, so no two of them are very close.

SLIDE 27

What this argument tells us is roughly the following: if µ is a sufficiently nice measure, then for each (random) root p_n(z_k) = 0, we would expect that there is a critical point p_n′(z) = 0 at distance at most ∼ n^{−1} nearby. This is roughly correct, and there are recent papers by Sean O’Rourke and Noah Williams in this direction.
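A hedged numerical companion to this heuristic (my illustration, not from the O’Rourke-Williams papers): pair each root with its nearest critical point and look at the distances rescaled by n.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    z = rng.normal(size=n) + 1j * rng.normal(size=n)
    w = np.polynomial.Polynomial.fromroots(z).deriv().roots()

    # distance from each root to the nearest critical point, rescaled by n
    d = np.abs(z[:, None] - w[None, :]).min(axis=1)
    print(np.median(d) * n)                               # O(1): pairing distance ~ 1/n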

SLIDE 28

(picture from O’Rourke and Williams (2018))

SLIDE 29

Theorem (O’Rourke and Williams)

Under reasonable assumptions on the measure,

$$W_1(\mu_n, \mu_n') \lesssim \frac{(\log n)^{10}}{n}.$$

SLIDE 31

(picture from O’Rourke and Williams (2018), with a point marked where $\sum_{k \neq \ell} \frac{1}{z - z_k}$ vanishes)

SLIDE 33

In fact, the bijective relationship has to fail somewhere: there are n roots and n − 1 critical points. The unpaired root is frequently close to the root of

$$V(z) = \sum_{k=1}^{n} \frac{1}{z - z_k}.$$

SLIDE 34

(picture from O’Rourke and Williams (2018)) This looks almost like a flow of particles captured at nearby times.

SLIDE 40

Let’s now return to the one-dimensional setting. Things are slightly different here:

$$p_n(x) = \prod_{k=1}^{n} (x - x_k).$$

Between any two roots there is a maximum or a minimum, thus a root of p_n′. Moreover, there are n − 1 intervals between the n roots, so each interval contains exactly one root of p_n′. Thus the roots of p_n and the roots of p_n′ interlace.

SLIDE 44

Conjecture (Rivin-Pemantle), Theorem (Kabluchko, 2015)

The critical points are also distributed according to µ.

Fact for real-rooted polynomials

The roots of $p_n^{(n/\log n)}$ are also distributed according to µ as n → ∞.

  • Sketch. Each root moves roughly ±n^{−1} under one step of differentiation, so after n/log n steps the roots have moved at most ∼ 1/log n → 0.

Main Question

What about the roots of $p_n^{(t \cdot n)}$ where 0 < t < 1?

SLIDE 48

Main Question

What about the roots of $p_n^{(t \cdot n)}$ where 0 < t < 1?

(figures: root histograms at t = 0, t = 0.5, t = 0.99)

SLIDE 53

Main Question

What about the roots of $p_n^{(t \cdot n)}$ where 0 < t < 1?

Some History.

  • 1. The question hasn’t been studied very much.
  • 2. Polya asked a number of questions in the setting of real entire functions.
  • 3. The smallest gap grows under differentiation. Denoting the smallest gap of a polynomial p_n having n real roots {x_1, . . . , x_n} by

$$G(p_n) = \min_{i \neq j} |x_i - x_j|,$$

we have (Riesz, Sz.-Nagy, Walker, 1920s) G(p_n′) ≥ G(p_n).
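This monotonicity is easy to watch in practice; a minimal Python check (my sketch, not from the talk):

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.sort(rng.normal(size=30))                      # 30 distinct real roots
    p = np.polynomial.Polynomial.fromroots(x)

    gap = lambda r: np.diff(np.sort(r)).min()
    print(gap(x), gap(p.deriv().roots().real))            # the second value is never smaller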

SLIDE 57

Main Question

What about the roots of $p_n^{(t \cdot n)}$ where 0 < t < 1?

Let us denote the answer by u(t, x). Here, the idea is that u(t, x) is the limiting behavior as n → ∞. In particular, µ = u(0, x)dx and

$$\int_{\mathbb{R}} u(t, x)\, dx = 1 - t.$$

What can one say about u(t, x)?

SLIDE 64

Main Question

What about the roots of $p_n^{(t \cdot n)}$ where 0 < t < 1? Let us denote the answer by u(t, x).

$$1. \quad \int_{\mathbb{R}} u(t, x)\, dx = 1 - t.$$

$$2. \quad \int_{\mathbb{R}} u(t, x)\, x\, dx = (1 - t) \int_{\mathbb{R}} u(0, x)\, x\, dx.$$

$$3. \quad \int_{\mathbb{R}} \int_{\mathbb{R}} u(t, x)(x - y)^2 u(t, y)\, dx\, dy = (1 - t)^3 \int_{\mathbb{R}} \int_{\mathbb{R}} u(0, x)(x - y)^2 u(0, y)\, dx\, dy.$$

This means: the distribution shrinks linearly in mass, its mean is preserved, and the mass is spread over a region of size ∼ √(1 − t).
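All three identities can be checked empirically by differentiating a random polynomial t·n times. A Python sketch (mine; n is finite, so agreement is only approximate):

    import numpy as np

    rng = np.random.default_rng(3)
    n, t = 100, 0.5
    x = rng.normal(size=n)
    r = np.polynomial.Polynomial.fromroots(x).deriv(int(t * n)).roots().real

    pairvar = lambda s: ((s[:, None] - s[None, :]) ** 2).mean()
    print(len(r) / n, 1 - t)                 # 1.: mass shrinks to 1 - t
    print(r.mean(), x.mean())                # 2.: the (normalized) mean is preserved
    print(pairvar(r) / pairvar(x), 1 - t)    # 3.: normalized pair variance shrinks like 1 - t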

SLIDE 67

An Equation (S. 2018)

There’s some good heuristic reasoning for

$$\frac{\partial u}{\partial t} + \frac{1}{\pi} \frac{\partial}{\partial x} \arctan\left(\frac{Hu}{u}\right) = 0 \quad \text{on supp}(u),$$

where

$$Hf(x) = \mathrm{p.v.} \frac{1}{\pi} \int_{\mathbb{R}} \frac{f(y)}{x - y}\, dy$$

is the Hilbert transform. The argument is actually fun and I can give it in full. But before, let’s explore this strange equation.

SLIDE 72

A nice way to understand a PDE is through explicit closed-form solutions (if they exist). So the relevant question is: are there nice special solutions that we can construct? For this we need polynomials p_n whose roots have a nice distribution and whose derivatives p_n^{(k)} also have a nice distribution.

  • 1. Hermite polynomials
  • 2. (associated) Laguerre polynomials

Presumably there are many others(?)

SLIDE 75

Hermite Polynomials

Hermite polynomials H_n : R → R satisfy a nice recurrence relation

$$\frac{d^m}{dx^m} H_n(x) = \frac{2^m n!}{(n - m)!} H_{n-m}(x).$$

Moreover, the roots of H_n converge, in a suitable sense, to

$$\mu = \frac{1}{\pi} \sqrt{2n - x^2}\, dx.$$

This suggests that

$$u(t, x) = \frac{2}{\pi} \sqrt{1 - t - x^2} \cdot \chi_{|x| \leq \sqrt{1 - t}}$$

for t ≤ 1 should be a solution of the PDE (and it is).
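The recurrence is quick to sanity-check with numpy's physicists' Hermite class (an editor's sketch):

    import numpy as np
    from math import factorial

    n, m = 12, 4
    lhs = np.polynomial.Hermite.basis(n).deriv(m)         # d^m/dx^m H_n
    rhs = (2 ** m * factorial(n) // factorial(n - m)) * np.polynomial.Hermite.basis(n - m)
    xs = np.linspace(-2.0, 2.0, 7)
    print(np.allclose(lhs(xs), rhs(xs)))                  # True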

SLIDE 76

Hermite Polynomials

$$u(t, x) = \frac{2}{\pi} \sqrt{1 - t - x^2} \cdot \chi_{|x| \leq \sqrt{1 - t}} \quad \text{for } t \leq 1.$$
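One can also verify this solution symbolically. The sketch below (mine, not from the slides) uses the classical fact that, on the support, the Hilbert transform of this semicircle profile is linear, Hu = 2x/π; with that input, sympy confirms the PDE.

    import sympy as sp

    t, x = sp.symbols('t x')
    u = (2 / sp.pi) * sp.sqrt(1 - t - x ** 2)   # semicircle profile on |x| < sqrt(1 - t)
    Hu = 2 * x / sp.pi                          # known Hilbert transform of this profile
    pde = sp.diff(u, t) + sp.diff(sp.atan(Hu / u), x) / sp.pi
    print(sp.simplify(pde))                     # 0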

SLIDE 79

Laguerre Polynomials

(Associated) Laguerre polynomials $L_n^{(\alpha)} : \mathbb{R} \to \mathbb{R}$ satisfy the recurrence relation

$$\frac{d^k}{dx^k} L_n^{(\alpha)}(x) = (-1)^k L_{n-k}^{(\alpha + k)}(x).$$

The roots converge in distribution to the Marchenko-Pastur distribution

$$v(c, x) = \frac{\sqrt{(x_+ - x)(x - x_-)}}{2\pi x}\, \chi_{(x_-, x_+)}\, dx \qquad \text{where } x_\pm = (\sqrt{c + 1} \pm 1)^2.$$

Indeed,

$$u_c(t, x) = v\left(\frac{c + t}{1 - t}, \frac{x}{1 - t}\right)$$

is a solution of the PDE.
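The Laguerre recurrence can be checked numerically with scipy (my sketch):

    import numpy as np
    from scipy.special import genlaguerre

    n, alpha, k = 10, 2.0, 3
    lhs = genlaguerre(n, alpha).deriv(k)                  # d^k/dx^k of L_n^(alpha)
    rhs = (-1) ** k * genlaguerre(n - k, alpha + k)
    xs = np.linspace(0.1, 5.0, 7)
    print(np.allclose(lhs(xs), rhs(xs)))                  # True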
SLIDE 80

Laguerre Polynomials

$$u_c(t, x) = v\left(\frac{c + t}{1 - t}, \frac{x}{1 - t}\right).$$

Figure: Marchenko-Pastur solutions u_c(t, x): c = 1 (left) and c = 15 (right) shown for t ∈ {0, 0.2, 0.4, 0.6, 0.8, 0.9, 0.95, 0.99}.

SLIDE 84

A Bonus Solution

There are several classical orthogonal polynomials on [−1, 1] (Gegenbauer, Jacobi, ...). For fairly general classes of such polynomials (the Erdős-Freud theorem), the distribution of roots is asymptotically given by

$$\mu = \frac{1}{\pi} \frac{dx}{\sqrt{1 - x^2}}.$$

As it turns out,

$$u(t, x) = \frac{c}{\sqrt{1 - x^2}}$$

is indeed a stationary solution of the equation.

Theorem (Tricomi?)

Let f : (−1, 1) → R≥0. If Hf ≡ 0 in (−1, 1), then

$$f = \frac{c}{\sqrt{1 - x^2}}.$$

SLIDE 89

$$\frac{\partial u}{\partial t} + \frac{1}{\pi} \frac{\partial}{\partial x} \arctan\left(\frac{Hu}{u}\right) = 0 \quad \text{on supp}(u)$$

Sketch of the Derivation. Crystallization as key assumption.

(figure: the density u(t, x))

SLIDE 93

At a root x of the derivative, $\sum_{k=1, k \neq \ell}^{n} \frac{1}{x - x_k} = 0$. Split the sum:

$$\sum_{k=1}^{n} \frac{1}{x - x_k} = \sum_{|x_k - x| \text{ large}} \frac{1}{x - x_k} + \sum_{|x_k - x| \text{ small}} \frac{1}{x - x_k}.$$

The far-field sum behaves like an integral against the density:

$$\sum_{|x_k - x| \text{ large}} \frac{1}{x - x_k} \sim n \int_{\mathbb{R}} \frac{u(t, y)}{x - y}\, dy = \pi n \cdot [Hu](t, x).$$

It thus remains to understand the behavior of the local term.

SLIDE 97

The local term is

$$\sum_{|x_k - x| \text{ small}} \frac{1}{x - x_k}.$$

Crystallization means that the roots form, locally, an arithmetic progression, and thus

$$\sum_{|x_k - x| \text{ small}} \frac{1}{x - x_k} \sim \sum_{\ell \in \mathbb{Z}} \frac{1}{x - \left( x_k + \frac{\ell}{u(t, x)\, n} \right)}.$$

We are in luck: this sum has a closed-form expression due to Euler,

$$\pi \cot(\pi x) = \frac{1}{x} + \sum_{n=1}^{\infty} \left( \frac{1}{x + n} + \frac{1}{x - n} \right) \qquad \text{for } x \in \mathbb{R} \setminus \mathbb{Z}.$$
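Euler's formula is easy to confirm numerically (my sketch):

    import numpy as np

    x, N = 0.3, 200000
    n = np.arange(1, N + 1)
    partial = 1 / x + np.sum(1 / (x + n) + 1 / (x - n))
    print(partial, np.pi / np.tan(np.pi * x))             # agree to ~ 1/N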
SLIDE 99

The Local Field

We can then predict the behavior of the roots of the derivative: they are in places where the local (near) field and the global (far) field cancel out. This leads to the desired equation.

SLIDE 101

A Fast Numerical Algorithm

Jeremy Hoskins (U Chicago) used the electrostatic interpretation to produce an algorithm that can compute all derivatives of polynomials up to degree ∼ 100,000.

(figure: semicircles)

SLIDE 107

Theorem (J. Hoskins and S, 2020)

Let X be a random variable on R such that all moments are finite and EX = 0 as well as VX = 1. Let p_n be a random polynomial whose roots are i.i.d. copies of X and fix ℓ ∈ N. Then, as n → ∞,

$$\frac{n^{\ell/2}\, \ell!}{n!} \cdot p_n^{(n - \ell)}\left(\frac{x}{\sqrt{n}}\right) \sim (1 + o(1)) \cdot He_\ell(x + \gamma_n),$$

where γ_n ∼ N(0, 1) and He_ℓ is the ℓ-th Hermite polynomial.

Remarks.

  • 1. The roots of the Hermite polynomial have a semicircle density.
  • 2. If x_1, x_2, . . . , x_n ∼ X, then (x_1 + · · · + x_n)/√n ∼ N(0, 1), and the mean of the roots is preserved under differentiation (hence the random shift).
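A quick numerical illustration (an editor's sketch; n = 50 is far from the limit, so agreement is only approximate). Here He_ℓ is the probabilists' Hermite polynomial, numpy's HermiteE, and the normalization n^{ℓ/2} ℓ!/n! makes the left-hand side monic, so dividing by the leading coefficient is equivalent.

    import numpy as np

    rng = np.random.default_rng(4)
    n, ell = 50, 3
    roots = rng.normal(size=n)                            # EX = 0, VX = 1
    q = np.polynomial.Polynomial.fromroots(roots).deriv(n - ell)

    # rescale x -> x/sqrt(n) and normalize to a monic degree-ell polynomial
    c = q.coef / np.sqrt(float(n)) ** np.arange(ell + 1)
    c /= c[-1]
    gamma = roots.sum() / np.sqrt(n)

    He = np.polynomial.HermiteE.basis(ell)                # probabilists' He_ell
    print(np.sort(np.polynomial.Polynomial(c).roots()))
    print(np.sort(He.roots() - gamma))                    # should be close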

SLIDE 111

Ideas behind the proof

$$p_n(x) = x^n + a_{n-1} x^{n-1} + \cdots + a_0$$

Coefficients are preserved under differentiation; they are simply multiplied by degrees.

$$\prod_{i=1}^{n} (x - x_i) = \sum_{k=0}^{n} (-1)^k e_k(x_1, \ldots, x_n)\, x^{n-k}.$$

We need elementary symmetric polynomials:

$$e_0(x_1, \ldots, x_n) = 1, \qquad e_1(x_1, \ldots, x_n) = x_1 + \cdots + x_n,$$

$$e_2(x_1, \ldots, x_n) = \sum_{i < j} x_i x_j, \qquad e_3(x_1, \ldots, x_n) = \sum_{i < j < k} x_i x_j x_k.$$

SLIDE 113

Ideas behind the proof

Given x_1, . . . , x_n ∼ X, what do we know about e_0, e_1, e_2, e_3, . . . ? As it turns out: e_1 determines everything else.

SLIDE 115

Ideas behind the proof

e_k has ∼ n^k terms, which means we expect it to be of size n^{k/2}.

Lemma

Let m ∈ N and let x_1, . . . , x_n be i.i.d. random variables sampled from a distribution on R with EX = 0, EX² = 1 and E|X|^m < ∞. Then, as n → ∞,

$$\mathbb{E} \left| e_m - \sum_{k=0}^{\lfloor m/2 \rfloor} (-1)^k \frac{1}{k!\,(m - 2k)!\, 2^k} \cdot e_1^{m - 2k}\, n^k \right| \lesssim_X n^{\frac{m-1}{2}}.$$
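The lemma can be sampled directly. The Python sketch below (mine) computes e_0, . . . , e_m exactly via the standard one-root-at-a-time recursion and compares e_m with its prediction from e_1 alone.

    import math
    import numpy as np

    rng = np.random.default_rng(5)
    n, m = 2000, 3
    x = rng.normal(size=n)                                # EX = 0, EX^2 = 1

    # e_0..e_m via e_k(x_1..x_i) = e_k(x_1..x_{i-1}) + x_i * e_{k-1}(x_1..x_{i-1})
    e = np.zeros(m + 1)
    e[0] = 1.0
    for xi in x:
        e[1:] += xi * e[:-1]

    e1 = x.sum()
    pred = sum((-1) ** k * e1 ** (m - 2 * k) * n ** k
               / (math.factorial(k) * math.factorial(m - 2 * k) * 2 ** k)
               for k in range(m // 2 + 1))
    print(e[m], pred)                                     # e_m and its prediction from e_1
    print(abs(e[m] - pred) / n ** ((m - 1) / 2))          # normalized error, stays O(1)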

SLIDE 124

(screenshot of a paper posted Sep 3, 2020)

The same PDE in a supposedly different context is presumably not a coincidence. The paper is concerned with free convolution of measures µ ⊞ ν as introduced by Voiculescu in the 1980s. It is an analogue of classical convolution in the non-commutative setting. In particular, we expect that u(t, x) is given by a fractional free convolution µ^{⊞k} with k ≥ 1. As t → 1, we have k → ∞.

An Optimistic Conjecture

Under some reasonable assumptions,

$$\mu^{\boxplus k} = u\left(1 - \frac{1}{k}, \frac{x}{k}\right) dx.$$
SLIDE 129

An Optimistic Conjecture

Under some reasonable assumptions,

$$\mu^{\boxplus k} = u\left(1 - \frac{1}{k}, \frac{x}{k}\right) dx.$$

This would have a large number of implications.

◮ Fractional free convolution preserves free cumulants

$$\kappa_1(\mu) = \int_{\mathbb{R}} x\, d\mu, \qquad \kappa_2(\mu) = \int_{\mathbb{R}} x^2\, d\mu - \left( \int_{\mathbb{R}} x\, d\mu \right)^2, \qquad \ldots$$

since κ_n(µ^{⊞k}) = k · κ_n(µ). Infinitely many conserved quantities.

SLIDE 132

Conjecture

$$\mu^{\boxplus k} = u\left(1 - \frac{1}{k}, \frac{x}{k}\right) dx$$

would have a large number of implications.

Voiculescu’s Free Central Limit Theorem

µ ⊞ µ ⊞ · · · ⊞ µ → semicircle.

This would then imply that u(t, x) should be a semicircle for t close to 1.

SLIDE 139

(screenshot of a paper posted Sep 4, 2020)

which proves that, in a certain setting, the crystallization assumption for roots is justified in the bulk, and a couple of weeks later

(screenshot of a second paper)

which establishes a connection between Bessel processes and free convolution. So I think we are pretty close to having completely rigorous arguments for most things.

SLIDE 143

What’s left to do?

◮ Can the PDE be useful? Linearization seems really nice?
◮ Is Jeremy Hoskins’ algorithm a useful method to compute µ^{⊞k}?
◮ What about the complex case?

SLIDE 144

What’s left to do?

(picture from O’Rourke and Williams (2018))

SLIDE 146

The Complex Case

One can derive the same sort of PDE in the complex case. The derivation is actually simpler:

$$\frac{1}{z - z_\ell} = - \sum_{\substack{k=1 \\ k \neq \ell}}^{n} \frac{1}{z - z_k}.$$

SLIDE 149

A Nonlocal Transport Equation

Sean O’Rourke and I tried to see whether the equation simplifies if we assume that the initial distribution is radial around the origin. If the radial density is ψ(t, x), then

$$\frac{\partial \psi}{\partial t} = \frac{\partial}{\partial x} \left[ \left( \frac{1}{x} \int_0^x \psi(t, s)\, ds \right)^{-1} \psi(t, x) \right].$$

SLIDE 151

$$\frac{\partial \psi}{\partial t} = \frac{\partial}{\partial x} \left[ \left( \frac{1}{x} \int_0^x \psi(t, s)\, ds \right)^{-1} \psi(t, x) \right]$$

has a nice closed-form solution

$$u(t, x) = \chi_{0 \leq x \leq 1 - t}.$$

This corresponds to Random Taylor Polynomials.

(figure: roots of random Taylor polynomials in the complex plane)

SLIDE 154

Random Taylor polynomials are defined by

$$p_n = \sum_{k=0}^{n} \gamma_k \frac{z^k}{k!}, \qquad \gamma_k \sim N(0, 1).$$

They are preserved under differentiation.

Theorem (Kabluchko & Zaporozhets)

$$\frac{1}{n} \sum_{k=1}^{n} \delta_{z_k/n} \to \frac{\chi_{|z| \leq 1}}{2\pi |z|} \qquad \text{as } n \to \infty.$$
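A quick way to see the limit numerically (my sketch; degree 150 keeps 1/k! inside double precision): the radial CDF of the limiting density χ_{|z|≤1}/(2π|z|) is P(|z| ≤ r) = r, so the rescaled moduli |z_k|/n should look roughly uniform on [0, 1].

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(6)
    n = 150
    coef = np.array([rng.normal() / factorial(k) for k in range(n + 1)])
    z = np.polynomial.Polynomial(coef).roots()

    u = np.sort(np.abs(z) / n)                            # moduli rescaled by n
    print(u[len(u) // 4], u[len(u) // 2], u[3 * len(u) // 4])   # near 0.25, 0.5, 0.75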

SLIDE 156

A final pretty fact: when trying to study L²-stability of the solution, one runs into the following beautiful inequality.

Lemma

For f : (0, ∞) → R≥0,

$$\int_0^\infty \frac{f(x)}{x^2} \left( \int_0^x f(y)\, dy \right) dx \leq \int_0^\infty \frac{f(x)^2}{x}\, dx.$$

  • Proof. Follows easily from a general Hardy inequality.
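The inequality is easy to probe on concrete test functions (my sketch, a crude Riemann-sum discretization):

    import numpy as np

    dx = 1e-4
    x = np.arange(dx, 50.0, dx)
    f = x ** 2 * np.exp(-x)                               # any nonnegative test function
    F = np.cumsum(f) * dx                                 # running integral of f
    lhs = np.sum(f / x ** 2 * F) * dx
    rhs = np.sum(f ** 2 / x) * dx
    print(lhs, rhs, lhs <= rhs)                           # ~0.25  ~0.375  True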
SLIDE 157

Thank you!