SLIDE 1 Roots of Polynomials Under Repeated Differentiation
Stefan Steinerberger
UCLA/Caltech, October 2020
SLIDES 2–6 Outline of the Talk
- 1. Roots of Polynomials
- 2. A Nonlinear PDE
- 3. Hermite Polynomials out of thin air (with Jeremy Hoskins)
- 4. Free Probability, Random Matrices, ...
- 5. A Nonlinear PDE in the Complex Plane (with Sean O’Rourke)
SLIDES 7–10 p_n will be a polynomial of degree n having n distinct roots.
The Gauss–Lucas theorem (1830s). The roots of p′_n are contained in the convex hull of the roots of p_n.
- Proof. The ‘electrostatic interpretation’:
p′_n(z)/p_n(z) = Σ_{k=1}^{n} 1/(z − z_k).
If you are outside the convex hull, the charges ‘push you away’.
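A minimal numerical sketch of the identity (added for illustration, not part of the talk; it assumes only numpy): the critical points are computed as the roots of p′_n, and at each of them the ‘charges’ Σ_k 1/(w − z_k) balance out to zero.

```python
# Sketch: check p'_n(w)/p_n(w) = sum_k 1/(w - z_k) = 0 at the critical points w
# of a polynomial with random roots in C.
import numpy as np
P = np.polynomial.polynomial

rng = np.random.default_rng(0)
n = 20
roots = rng.normal(size=n) + 1j * rng.normal(size=n)   # n distinct random roots

coeffs = P.polyfromroots(roots)                         # p_n(z) = prod_k (z - z_k)
crit = P.polyroots(P.polyder(coeffs))                   # the n - 1 critical points

for w in crit:
    print(abs(np.sum(1.0 / (w - roots))))               # numerically ~ 0 at every critical point
```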
SLIDE 11 p′_n(z)/p_n(z) = Σ_{k=1}^{n} 1/(z − z_k).
SLIDES 13–16 Suppose µ is a probability measure on C and suppose p_n(z) = Π_{k=1}^{n} (z − z_k), where z_1, . . . , z_n ∼ µ are i.i.d. random variables distributed according to µ. What is the distribution of the critical points of p_n (the roots of p′_n)?
Conjecture (Rivin–Pemantle), Theorem (Kabluchko, 2015)
The critical points are also distributed according to µ.
Note: The critical points are w_1, . . . , w_{n−1} ∈ C. The statement really says that (1/(n − 1)) Σ_{k=1}^{n−1} δ_{w_k} ⇀ µ.
SLIDES 17–20 Here’s a heuristic why this is not too surprising. Suppose µ is absolutely continuous and compactly supported. Then the roots look a bit like this:
[figure: roots of p_n, with typical spacing ∼ n^{−1/2} between neighboring roots]
SLIDES 21–24 Every root of the derivative satisfies p′_n(z) = 0, which means
1/(z − z_ℓ) = − Σ_{k≠ℓ}^{n} 1/(z − z_k).
The right-hand side is typically of size ∼ n. So the root of the derivative has to be at distance ∼ n^{−1} from one of the existing roots, and the existing roots are ∼ n^{−1/2} separated, so no two of them are very close.
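The heuristic can be probed numerically; the following sketch (added for illustration, not from the slides) measures the distance from each critical point to the nearest root for roots drawn uniformly from the unit disk.

```python
# Sketch: distance from critical points to the nearest root for i.i.d. roots
# uniform on the unit disk; the heuristic predicts a typical distance ~ 1/n.
import numpy as np
P = np.polynomial.polynomial

rng = np.random.default_rng(1)
for n in (100, 200, 400):
    z = np.sqrt(rng.random(n)) * np.exp(2j * np.pi * rng.random(n))
    crit = P.polyroots(P.polyder(P.polyfromroots(z)))
    d = np.min(np.abs(crit[:, None] - z[None, :]), axis=1)   # nearest-root distances
    print(n, np.median(d), n * np.median(d))                  # last column should stay roughly constant
```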
SLIDES 25–27 What this argument tells us is roughly the following: if µ is a sufficiently nice measure, then for each (random) root p_n(z_k) = 0, we would expect that there is a critical point p′_n(z) = 0 at most distance ∼ n^{−1} away. This is roughly correct, and there are recent papers by Sean O’Rourke and Noah Williams in this direction.
SLIDE 28
picture from O’Rourke and Williams (2018)
SLIDE 29 Theorem (O’Rourke and Williams)
Under reasonable assumptions on the measure, W_1(µ_n, µ′_n) ≲ (log n)^{10}/n.
SLIDE 30
picture from O’Rourke and Williams (2018)
?
SLIDE 31 picture from O’Rourke and Williams (2018): Σ_{k=1, k≠ℓ}^{n} 1/(z − z_k) vanishes
SLIDES 32–33 In fact, the bijective relationship has to fail somewhere: there are n roots and n − 1 critical points. The unpaired root is frequently close to the root of V(z) = Σ_{k=1}^{n} 1/(z − z_k).
SLIDE 34
picture from O’Rourke and Williams (2018) This looks almost like a flow of particles captured at nearby times.
SLIDES 35–40 Let’s now return to the one-dimensional setting. Things are slightly different here:
p_n(x) = Π_{k=1}^{n} (x − x_k).
Between any two roots there is a maximum or a minimum, thus a root of p′_n. Moreover, there are n − 1 intervals between the n roots, so each interval contains exactly one root of p′_n. Thus the roots of p_n and the roots of p′_n interlace.
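A small check of the interlacing statement (added for illustration, assuming numpy):

```python
# Sketch: for a real-rooted polynomial, the roots of p_n and p'_n interlace.
import numpy as np
P = np.polynomial.polynomial

rng = np.random.default_rng(2)
x = np.sort(rng.normal(size=12))                       # 12 distinct real roots
y = np.sort(P.polyroots(P.polyder(P.polyfromroots(x))).real)

# each critical point lies strictly between two consecutive roots
print(all(x[i] < y[i] < x[i + 1] for i in range(len(y))))   # True
```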
SLIDES 41–44 Conjecture (Rivin–Pemantle), Theorem (Kabluchko, 2015)
The critical points are also distributed according to µ.
Fact for real-rooted polynomials
The roots of p_n^{(n/log n)} are also distributed according to µ as n → ∞.
- Sketch. Each root moves roughly ±n^{−1} under one step of differentiation.
Main Question
What about the roots of p_n^{(t·n)} where 0 < t < 1?
SLIDES 45–48 Main Question
What about the roots of p_n^{(t·n)} where 0 < t < 1?
[figures: the roots of p_n^{(t·n)} for t = 0, t = 0.5, t = 0.99]
SLIDES 49–53 Some History.
- 1. The question hasn’t been studied very much.
- 2. Pólya asked a number of questions in the setting of real entire functions.
- 3. The smallest gap grows under differentiation. Denoting the smallest gap of a polynomial p_n having n real roots {x_1, . . . , x_n} by G(p_n) = min_{i≠j} |x_i − x_j|, we have (Riesz, Sz.-Nagy, Walker, 1920s): G(p′_n) ≥ G(p_n).
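The monotonicity of the smallest gap is easy to observe numerically; a minimal sketch (added for illustration, not from the talk):

```python
# Sketch: the smallest gap between the (real) roots does not decrease when we differentiate.
import numpy as np
P = np.polynomial.polynomial

def min_gap(coeffs):
    r = np.sort(P.polyroots(coeffs).real)
    return np.min(np.diff(r))

rng = np.random.default_rng(3)
coeffs = P.polyfromroots(rng.normal(size=15))    # 15 i.i.d. real roots
gaps = []
for _ in range(10):                              # p_n, p_n', ..., p_n^{(9)}
    gaps.append(min_gap(coeffs))
    coeffs = P.polyder(coeffs)
print(np.all(np.diff(gaps) >= 0))                # True: G(p') >= G(p) at every step
```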
SLIDES 54–57 Let us denote the answer by u(t, x); the idea is that u(t, x) describes the limiting behavior as n → ∞. In particular, µ = u(0, x) dx and ∫_R u(t, x) dx = 1 − t. What can one say about u(t, x)?
SLIDES 58–64 Let us denote the answer by u(t, x).
- 1. ∫_R u(t, x) dx = 1 − t
- 2. ∫_R u(t, x)·x dx = (1 − t) ∫_R u(0, x)·x dx
- 3. ∫_R ∫_R u(t, x)(x − y)² u(t, y) dx dy = (1 − t)³ ∫_R ∫_R u(0, x)(x − y)² u(0, y) dx dy
This means: the distribution shrinks linearly in mass, its mean is preserved, and the mass is distributed over an area ∼ √(1 − t).
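These three statements can be checked empirically by differentiating a random real-rooted polynomial ⌊tn⌋ times; a sketch (added for illustration), using the empirical measure (1/n) Σ δ_{x_i} as a stand-in for u:

```python
# Sketch: mass, mean and pairwise second moment of the roots of p_n^{(tn)}.
import numpy as np
P = np.polynomial.polynomial

rng = np.random.default_rng(4)
n, t = 100, 0.5
x0 = rng.normal(size=n)                       # i.i.d. roots (mean 0, variance 1)
coeffs = P.polyfromroots(x0)
for _ in range(int(t * n)):                   # differentiate tn times
    coeffs = P.polyder(coeffs)
x1 = P.polyroots(coeffs).real

print(len(x1) / n)                            # mass: ~ 1 - t
print(x0.mean(), x1.mean())                   # the mean of the roots is preserved
q0 = (np.subtract.outer(x0, x0) ** 2).sum() / n**2
q1 = (np.subtract.outer(x1, x1) ** 2).sum() / n**2
print(q1 / q0, (1 - t) ** 3)                  # the pairwise moment scales roughly like (1 - t)^3
```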
SLIDES 65–67 An Equation (S. 2018)
There’s some good heuristic reasoning for
∂u/∂t + (1/π) ∂/∂x [arctan(Hu/u)] = 0,
where Hf(x) = p.v. (1/π) ∫ f(y)/(x − y) dy is the Hilbert transform. The argument is actually fun and I can give it in full. But before, let’s explore this strange equation.
SLIDES 68–72 A nice way to understand a PDE is through explicit closed-form solutions (if they exist). So the relevant question is: are there nice special solutions that we can construct? For this we need polynomials p_n whose roots have a nice distribution and whose derivatives p_n^{(k)} also have a nice distribution.
- 1. Hermite polynomials
- 2. (associated) Laguerre polynomials
Presumably there are many others(?)
SLIDES 73–75 Hermite Polynomials
Hermite polynomials H_n : R → R satisfy a nice recurrence relation:
d^m/dx^m H_n(x) = 2^m · n!/(n − m)! · H_{n−m}(x).
Moreover, the roots of H_n converge, in a suitable sense, to µ = (1/π) √(2 − x²) dx (a semicircle). This suggests that u(t, x) = (1/π) √(2(1 − t) − x²) for t ≤ 1 should be a solution of the PDE (and it is).
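The derivative identity is easy to verify with numpy’s Hermite module (physicists’ convention); a minimal sketch added for illustration:

```python
# Sketch: check d^m/dx^m H_n(x) = 2^m * n!/(n-m)! * H_{n-m}(x).
import numpy as np
from math import factorial
H = np.polynomial.hermite

n, m = 12, 5
lhs = H.hermder([0] * n + [1], m)                # m-th derivative of H_n, in the Hermite basis
rhs = np.zeros(n - m + 1)
rhs[n - m] = 2**m * factorial(n) / factorial(n - m)
print(np.allclose(lhs, rhs))                     # True
```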
SLIDE 76 Hermite Polynomials
[figure: the semicircle solutions u(t, x) = (1/π) √(2(1 − t) − x²) for t ≤ 1]
SLIDES 77–79 Laguerre Polynomials
(Associated) Laguerre polynomials L_n^{(α)} : R → R satisfy the recurrence relation
d^k/dx^k L_n^{(α)}(x) = (−1)^k L_{n−k}^{(α+k)}(x).
The roots converge in distribution to the Marchenko–Pastur distribution
v(c, x) = √((x₊ − x)(x − x₋)) / (2πx) · χ_{(x₋, x₊)} dx, where x± = (√(c + 1) ± 1)².
Indeed, u_c(t, x) = v((c + t)/(1 − t), x/(1 − t)) is a solution of the PDE.
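The Laguerre recurrence can be checked the same way (a sketch added for illustration; it assumes scipy is available, whose genlaguerre returns numpy poly1d objects):

```python
# Sketch: check d^k/dx^k L_n^{(a)}(x) = (-1)^k L_{n-k}^{(a+k)}(x).
import numpy as np
from scipy.special import genlaguerre

n, a, k = 10, 2.0, 3
lhs = genlaguerre(n, a).deriv(k)
rhs = (-1) ** k * genlaguerre(n - k, a + k)
x = np.linspace(0.0, 5.0, 7)
print(np.allclose(lhs(x), rhs(x)))     # True
```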
SLIDE 80 Laguerre Polynomials
u_c(t, x) = v((c + t)/(1 − t), x/(1 − t))
Figure: Marchenko–Pastur solutions u_c(t, x): c = 1 (left) and c = 15 (right) shown for t ∈ {0, 0.2, 0.4, 0.6, 0.8, 0.9, 0.95, 0.99}.
SLIDES 81–84 A Bonus Solution
There are several classical orthogonal polynomials on [−1, 1] (Gegenbauer, Jacobi, ...). For fairly general classes of such polynomials (Erdős–Turán theorem), the distribution of roots is asymptotically given by µ = (1/π) dx/√(1 − x²). As it turns out, u(t, x) = c/√(1 − x²) is indeed a stationary solution of the equation.
Theorem (Tricomi?)
Let f : (−1, 1) → R≥0. If Hf ≡ 0 in (−1, 1), then f = c/√(1 − x²).
SLIDES 85–89 ∂u/∂t + (1/π) ∂/∂x [arctan(Hu/u)] = 0
Sketch of the Derivation. Crystallization as the key assumption.
[figure: the density u(t, x)]
SLIDES 90–93 [figure: the roots x_k; at a critical point x, Σ_{k=1, k≠ℓ}^{n} 1/(x − x_k) = 0]
Split the sum into a local part and a global part:
Σ_{k=1}^{n} 1/(x − x_k) = Σ_{x_k near x} 1/(x − x_k) + Σ_{x_k far from x} 1/(x − x_k).
The global (far) part behaves like
Σ_{x_k far from x} 1/(x − x_k) ∼ n ∫ u(t, y)/(x − y) dy = nπ · [Hu](t, x).
It thus remains to understand the behavior of the local term.
SLIDES 94–97 The local term is Σ_{x_k near x} 1/(x − x_k). Crystallization means that the roots form, locally, an arithmetic progression, and thus
Σ_{x_k near x} 1/(x − x_k) ∼ Σ_{ℓ} 1/(x − ℓ/(u(t, x)·n)).
We are in luck: this sum has a closed-form expression due to Euler,
π cot(πx) = 1/x + Σ_{n=1}^{∞} (1/(x + n) + 1/(x − n)).
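Euler’s partial-fraction expansion of the cotangent is easy to check numerically (a sketch added for illustration):

```python
# Sketch: pi*cot(pi*x) = 1/x + sum_{n>=1} ( 1/(x + n) + 1/(x - n) ), checked by truncation.
import numpy as np

x = 0.3
m = np.arange(1, 200001)
partial = 1.0 / x + np.sum(1.0 / (x + m) + 1.0 / (x - m))
print(partial, np.pi / np.tan(np.pi * x))        # the two values agree to ~5 digits
```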
SLIDE 99
The Local Field
We can then predict the behavior of the roots of the derivative: they are in places where the local (near) field and the global (far) field cancel out. This leads to the desired equation.
SLIDES 100–101 A Fast Numerical Algorithm
Jeremy Hoskins (U Chicago) used the electrostatic interpretation to produce an algorithm that can compute all derivatives of polynomials up to degree ∼ 100,000. [figure: semicircles]
SLIDES 102–107 Theorem (J. Hoskins and S., 2020)
Let X be a random variable on R such that all moments are finite and EX = 0 as well as VX = 1. Let p_n be a random polynomial whose roots are i.i.d. copies of X and fix ℓ ∈ N. Then, as n → ∞,
n^{ℓ/2} · (ℓ!/n!) · p_n^{(n−ℓ)}(x/√n) ∼ (1 + o(1)) · He_ℓ(x + γ_n),
where γ_n ∼ N(0, 1) and He_ℓ is the ℓ-th Hermite polynomial.
Remarks.
- 1. The roots of the Hermite polynomial have a semicircle density.
- 2. If x_1, x_2, . . . , x_n ∼ X, then (x_1 + · · · + x_n)/√n ∼ N(0, 1), and the mean of the roots is preserved under differentiation (hence the random shift).
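The appearance of Hermite polynomials can be seen directly in a small experiment (added for illustration, not the paper’s method): differentiate a random-rooted polynomial n − ℓ times, rescale the roots by √n, and remove the CLT fluctuation of the root mean; what remains should be close to the root set of He_ℓ.

```python
# Sketch: after n - l differentiations the rescaled roots resemble the roots of He_l.
import numpy as np
P = np.polynomial.polynomial

rng = np.random.default_rng(7)
n, l = 60, 4
x = rng.uniform(-np.sqrt(3), np.sqrt(3), size=n)      # i.i.d. roots, mean 0, variance 1
coeffs = P.polyfromroots(x)
for _ in range(n - l):
    coeffs = P.polyder(coeffs)                         # p_n^{(n-l)} has degree l

scaled = np.sort(P.polyroots(coeffs).real) * np.sqrt(n) - x.sum() / np.sqrt(n)
he_roots = np.sort(np.polynomial.hermite_e.hermeroots([0] * l + [1]))
print(np.round(scaled, 2))                             # close to the He_4 roots below
print(np.round(he_roots, 2))                           # [-2.33, -0.74, 0.74, 2.33]
```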
SLIDES 108–111 Ideas behind the proof
p_n(x) = x^n + a_{n−1} x^{n−1} + · · · + a_0. Coefficients are preserved under differentiation; they are simply multiplied by factors coming from the degrees.
Π_{i=1}^{n} (x − x_i) = Σ_{k=0}^{n} (−1)^k e_k(x_1, . . . , x_n) x^{n−k}.
We need elementary symmetric polynomials:
e_0(x_1, . . . , x_n) = 1
e_1(x_1, . . . , x_n) = x_1 + · · · + x_n
e_2(x_1, . . . , x_n) = Σ_{i<j} x_i x_j
e_3(x_1, . . . , x_n) = Σ_{i<j<k} x_i x_j x_k
SLIDES 112–113 Ideas behind the proof
Given x_1, . . . , x_n ∼ X, what do we know about
e_0(x_1, . . . , x_n) = 1
e_1(x_1, . . . , x_n) = x_1 + · · · + x_n
e_2(x_1, . . . , x_n) = Σ_{i<j} x_i x_j
e_3(x_1, . . . , x_n) = Σ_{i<j<k} x_i x_j x_k ?
As it turns out: e_1 determines everything else.
SLIDES 114–115 Ideas behind the proof
e_3(x_1, . . . , x_n) = Σ_{i<j<k} x_i x_j x_k; e_k has ∼ n^k terms, which means we expect it to be of size n^{k/2}.
Lemma
Let m ∈ N and let x_1, . . . , x_n be i.i.d. random variables sampled from a distribution on R with EX = 0, EX² = 1 and E|X|^m < ∞. Then, as n → ∞,
E | e_m(x_1, . . . , x_n) − Σ_{k=0}^{⌊m/2⌋} (−1)^k · n^k/(k! (m − 2k)! 2^k) · e_1^{m−2k} | ≲ n^{(m−1)/2}.
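Based on the reconstruction of the Lemma above (so the precise form of the error term is an assumption), the leading-order relation is e_m ≈ (n^{m/2}/m!) · He_m(e_1/√n), i.e. e_1 determines e_m; a numerical sketch added for illustration:

```python
# Sketch: compare e_m with the prediction (n^{m/2}/m!) * He_m(e_1/sqrt(n)).
import numpy as np
from math import factorial
P = np.polynomial.polynomial

rng = np.random.default_rng(8)
n, m = 1000, 3
x = rng.uniform(-np.sqrt(3), np.sqrt(3), size=n)          # mean 0, variance 1
c = P.polyfromroots(x)                                     # prod (t - x_i)
e = [(-1) ** k * c[n - k] for k in range(m + 1)]           # e_0, ..., e_m

he_m = np.polynomial.hermite_e.hermeval(e[1] / np.sqrt(n), [0] * m + [1])
predicted = n ** (m / 2) / factorial(m) * he_m
print(e[m], predicted)                                     # same order, relative error ~ n^{-1/2}
```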
SLIDES 116–119 [screenshot: a paper posted Sep 3, 2020]
The same PDE in a supposedly different context is presumably not a coincidence.
SLIDES 120–124 The Sep 3, 2020 paper is concerned with free convolution of measures µ ⊞ ν as introduced by Voiculescu in the 1980s. It is an analogue of classical convolution in the non-commutative setting. In particular, we expect that u(t, x) is given by a fractional free convolution µ^{⊞k} with k ≥ 1. As t → 1, we have k → ∞.
An Optimistic Conjecture
Under some reasonable assumptions, µ^{⊞k} = u(1 − 1/k, x/k).
SLIDES 125–129 An Optimistic Conjecture
Under some reasonable assumptions, µ^{⊞k} = u(1 − 1/k, x/k).
This would have a large number of implications.
◮ Fractional free convolution preserves free cumulants (up to scaling),
κ_1(µ) = ∫ x dµ, κ_2(µ) = ∫ x² dµ − (∫ x dµ)², . . .
since κ_n(µ^{⊞k}) = k · κ_n(µ). Infinitely many conserved quantities.
SLIDES 130–132 The conjecture µ^{⊞k} = u(1 − 1/k, x/k) would have a large number of implications.
Voiculescu’s Free Central Limit Theorem
µ ⊞ µ ⊞ · · · ⊞ µ → semicircle.
This would then imply that u(t, x) should be a semicircle for t close to 1.
SLIDE 133 [screenshot: the Sep 3, 2020 paper]
SLIDES 134–139 [screenshot: a paper posted Sep 4, 2020], which proves that, in a certain setting, the crystallization assumption for roots is justified in the bulk; and, a couple of weeks later, [another paper] which establishes a connection between Bessel processes and free convolution. So I think we are pretty close to having completely rigorous arguments for most things.
SLIDES 140–143 What’s left to do?
◮ Can the PDE be useful? Linearization seems really nice.
◮ Is Jeremy Hoskins’ algorithm a useful method to compute µ^{⊞k}?
◮ What about the complex case?
SLIDE 144
What’s left to do?
picture from O’Rourke and Williams (2018)
SLIDES 145–146 The Complex Case
One can derive the same sort of PDE in the complex case. The derivation is actually simpler:
1/(z − z_ℓ) = − Σ_{k≠ℓ}^{n} 1/(z − z_k),
where the roots are typically ∼ n^{−1/2} separated.
SLIDES 147–149 A Nonlocal Transport Equation
Sean O’Rourke and I tried to see whether the equation simplifies if we assume that the initial distribution is radial around the origin. If the density is ψ(t, x), then
∂ψ/∂t = ∂/∂x [ ((1/x) ∫_0^x ψ(t, s) ds)^{−1} ψ(t, x) ].
SLIDES 150–151 The equation
∂ψ/∂t = ∂/∂x [ ((1/x) ∫_0^x ψ(t, s) ds)^{−1} ψ(t, x) ]
has a nice closed-form solution: ψ(t, x) = χ_{0 ≤ x ≤ 1−t}. This corresponds to Random Taylor Polynomials.
[figures: root sets of random Taylor polynomials in the complex plane]
SLIDES 152–154 Random Taylor polynomials are defined by p_n(z) = Σ_{k=0}^{n} γ_k z^k/k!, where γ_k ∼ N(0, 1). They are preserved under differentiation.
Theorem (Kabluchko & Zaporozhets)
If z_1, . . . , z_n are the roots of p_n, then (1/n) Σ_{k=1}^{n} δ_{z_k/n} → χ_{|z|≤1}/(2π|z|) as n → ∞.
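This is again easy to see in a small experiment (added for illustration): differentiate a random Taylor polynomial ⌊tn⌋ times; the roots, rescaled by 1/n, should spread out roughly uniformly in radius over [0, 1 − t].

```python
# Sketch: roots of the (tn)-th derivative of a random Taylor polynomial, rescaled by 1/n.
import numpy as np
P = np.polynomial.polynomial

rng = np.random.default_rng(9)
n, t = 200, 0.5
coeffs = np.empty(n + 1)
c = 1.0
for k in range(n + 1):                        # coefficients of p_n(n*w): gamma_k * n^k / k!
    coeffs[k] = rng.normal() * c
    c *= n / (k + 1)

for _ in range(int(t * n)):                   # differentiate tn times
    coeffs = P.polyder(coeffs)
    coeffs /= np.max(np.abs(coeffs))          # harmless rescaling, keeps coefficients finite

radii = np.abs(P.polyroots(coeffs))           # |roots|/n, since we work in w = z/n
print(np.percentile(radii, [10, 50, 90]))     # roughly 0.05, 0.25, 0.45 for t = 0.5
```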
SLIDES 155–156 A final pretty fact: when trying to study L²-stability of the solution, one runs into the following beautiful inequality.
Lemma
For f : (0, ∞) → R≥0,
∫_0^∞ (f(x)/x²) (∫_0^x f(y) dy) dx ≤ ∫_0^∞ f(x)²/x dx.
- Proof. Follows easily from a general Hardy inequality.
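For the inequality as reconstructed above, a quick numerical comparison of the two sides (a sketch added for illustration, assuming scipy; the test function is arbitrary):

```python
# Sketch: compare both sides of the inequality for f(x) = x * exp(-x).
import numpy as np
from scipy import integrate

f = lambda x: x * np.exp(-x)
inner = lambda x: integrate.quad(f, 0.0, x)[0]
lhs = integrate.quad(lambda x: f(x) / x**2 * inner(x), 0.0, np.inf)[0]
rhs = integrate.quad(lambda x: f(x) ** 2 / x, 0.0, np.inf)[0]
print(lhs, rhs, lhs <= rhs)                    # ~0.193, 0.25, True
```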
SLIDE 157
Thank you!