Roots of Polynomials Under Repeated Differentiation

Stefan Steinerberger
UCLA/Caltech, October 2020

Outline of the Talk
1. Roots of Polynomials
2. A Nonlinear PDE


  1–4. Main Question. What about the roots of $p_n^{(t\cdot n)}$, where $0 < t < 1$?

Some History.
1. The question has not been studied very much.
2. Pólya asked a number of questions in the setting of real entire functions.
3. The smallest gap grows under differentiation. Denoting the smallest gap of a polynomial $p_n$ having $n$ real roots $\{x_1, \dots, x_n\}$ by
$$G(p_n) = \min_{i \neq j} |x_i - x_j|,$$
we have (Riesz, Sz.-Nagy, Walker, 1920s)
$$G(p_n') \ge G(p_n).$$
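The gap-monotonicity theorem is easy to see in action numerically. A minimal numpy sketch (the particular roots are an arbitrary illustrative choice):

```python
import numpy as np

# A polynomial with distinct real roots (illustrative choice).
roots = np.array([0.0, 1.0, 3.0, 7.0])
p = np.poly(roots)                       # coefficients of prod_i (x - x_i)
droots = np.roots(np.polyder(p)).real    # roots of p'

def smallest_gap(xs):
    """G(p) = min_{i != j} |x_i - x_j|, computed from sorted roots."""
    xs = np.sort(xs)
    return np.min(np.diff(xs))

# Riesz / Sz.-Nagy / Walker: the smallest gap grows under differentiation.
print(smallest_gap(roots), smallest_gap(droots))
```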

  5–8. Main Question. What about the roots of $p_n^{(t\cdot n)}$, where $0 < t < 1$?

Let us denote the answer by $u(t,x)$. Here, the idea is that $u(t,x)$ is the limiting behavior as $n \to \infty$. In particular
$$\mu = u(0,x)\,dx \qquad \text{and} \qquad \int_{\mathbb{R}} u(t,x)\,dx = 1 - t.$$
What can one say about $u(t,x)$?
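The object in the question can be simulated directly: take a polynomial with $n$ real roots, differentiate $\lfloor t n \rfloor$ times, and look at the surviving roots. A small numpy sketch (the degree, the value of $t$, and the choice of Chebyshev points as starting roots are all illustrative):

```python
import numpy as np
from numpy.polynomial import Polynomial

n, t = 30, 0.4
k = int(t * n)                         # number of differentiations, ~ t*n
# Start from n real roots in [-1, 1] (Chebyshev points, an illustrative choice).
x = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
p = Polynomial.fromroots(x)
r = p.deriv(k).roots()                 # roots of the (t*n)-th derivative

# The k-th derivative has exactly n - k roots, and they remain real
# (Rolle's theorem: differentiation preserves real-rootedness).
print(len(r), np.max(np.abs(np.asarray(r).imag)))
```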

  9–15. Main Question. What about the roots of $p_n^{(t\cdot n)}$, where $0 < t < 1$?

Let us denote the answer by $u(t,x)$.
1. $\int_{\mathbb{R}} u(t,x)\,dx = 1 - t$.
2. $\int_{\mathbb{R}} u(t,x)\,x\,dx = (1-t)\int_{\mathbb{R}} u(0,x)\,x\,dx$.
3. $\int_{\mathbb{R}}\int_{\mathbb{R}} u(t,x)\,(x-y)^2\,u(t,y)\,dx\,dy = (1-t)^3 \int_{\mathbb{R}}\int_{\mathbb{R}} u(0,x)\,(x-y)^2\,u(0,y)\,dx\,dy$.

This means: the distribution shrinks linearly in mass, its mean is preserved, and the mass is distributed over an interval of length $\sim \sqrt{1-t}$.
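The mean identity has an exact finite-$n$ counterpart: by Vieta's formulas, the sum of the roots of $p'$ is $\frac{n-1}{n}$ times the sum of the roots of $p$, so the average root is preserved exactly under a single differentiation. A quick numpy check (the roots are an arbitrary illustrative choice):

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.3, 1.1, 4.0])   # roots of p (illustrative)
p = np.poly(x)
xp = np.roots(np.polyder(p)).real            # roots of p'

# Vieta: the average root is preserved exactly under differentiation.
print(np.mean(x), np.mean(xp))
```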

  16–18. An Equation (S. 2018). There is some good heuristic reasoning for
$$\frac{\partial u}{\partial t} + \frac{1}{\pi}\frac{\partial}{\partial x}\arctan\left(\frac{Hu}{u}\right) = 0 \quad \text{on } \operatorname{supp}(u),$$
where
$$Hf(x) = \frac{1}{\pi}\,\mathrm{p.v.}\int_{\mathbb{R}} \frac{f(y)}{x-y}\,dy$$
is the Hilbert transform. The argument is actually fun and I can give it in full. But before that, let us explore this strange equation.

  19–23. A nice way to understand a PDE is through explicit closed-form solutions (if they exist). So the relevant question is: are there nice special solutions that we can construct? For this we need polynomials $p_n$ whose roots have a nice distribution and whose derivatives $p_n^{(k)}$ also have a nice distribution:
1. Hermite polynomials
2. (associated) Laguerre polynomials
Presumably there are many others(?)

  24–26. Hermite Polynomials. Hermite polynomials $H_n : \mathbb{R} \to \mathbb{R}$ satisfy a nice recurrence relation
$$\frac{d^m}{dx^m} H_n(x) = \frac{2^m\, n!}{(n-m)!}\, H_{n-m}(x).$$
Moreover, the roots of $H_n$ converge, in a suitable sense, to
$$\mu = \frac{1}{\pi}\sqrt{2n - x^2}\,dx.$$
This suggests that
$$u(t,x) = \frac{2}{\pi}\sqrt{1-t-x^2}\cdot \chi_{|x|\le\sqrt{1-t}} \qquad \text{for } t \le 1$$
should be a solution of the PDE (and it is).
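The Hermite derivative recurrence can be checked with numpy's (physicists') Hermite module: differentiating the basis coefficient vector of $H_n$ $m$ times should leave a single Hermite coefficient, $2^m n!/(n-m)!$, at index $n-m$. The degree and order below are illustrative:

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite as H

n, m = 9, 4
e_n = np.zeros(n + 1)
e_n[n] = 1.0                                  # H_n in the Hermite basis
d = H.hermder(e_n, m)                         # m-th derivative, Hermite basis
c = 2**m * factorial(n) / factorial(n - m)    # predicted coefficient of H_{n-m}

# d should equal c * H_{n-m}: one nonzero Hermite coefficient at index n - m.
print(d[n - m], c)
```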

  27. Hermite Polynomials.
$$u(t,x) = \frac{2}{\pi}\sqrt{1-t-x^2}\cdot \chi_{|x|\le\sqrt{1-t}} \qquad \text{for } t \le 1.$$
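That the semicircle family solves the PDE can be verified in two lines, using the classical fact that the Hilbert transform of a semicircle density is linear on its support (for this normalization, $Hu(t,x) = 2x/\pi$ on $|x| \le \sqrt{1-t}$):

```latex
% Write s = 1 - t. On the support, using Hu = 2x/\pi:
\[
\arctan\!\left(\frac{Hu}{u}\right)
  = \arctan\!\left(\frac{x}{\sqrt{s - x^2}}\right)
  = \arcsin\!\left(\frac{x}{\sqrt{s}}\right),
\]
\[
\frac{1}{\pi}\,\frac{\partial}{\partial x}\arcsin\!\left(\frac{x}{\sqrt{s}}\right)
  = \frac{1}{\pi\sqrt{s - x^2}},
\qquad
\frac{\partial u}{\partial t}
  = \frac{\partial}{\partial t}\,\frac{2}{\pi}\sqrt{s - x^2}
  = -\frac{1}{\pi\sqrt{s - x^2}},
\]
% so the two terms cancel and the equation holds for |x| < sqrt(1-t).
```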

  28–30. Laguerre Polynomials. (Associated) Laguerre polynomials $L_n^{(\alpha)} : \mathbb{R} \to \mathbb{R}$ satisfy the recurrence relation
$$\frac{d^k}{dx^k} L_n^{(\alpha)}(x) = (-1)^k L_{n-k}^{(\alpha+k)}(x).$$
The roots converge in distribution to the Marchenko–Pastur distribution
$$v(c,x) = \frac{\sqrt{(x_+ - x)(x - x_-)}}{2\pi x}\,\chi_{(x_-,x_+)}\,dx, \qquad \text{where } x_\pm = (\sqrt{c+1} \pm 1)^2.$$
Indeed,
$$u_c(t,x) = v\left(\frac{c+t}{1-t},\, \frac{x}{1-t}\right)$$
is a solution of the PDE.
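The Laguerre recurrence (for one derivative, $\frac{d}{dx} L_n^{(\alpha)} = -L_{n-1}^{(\alpha+1)}$) can be checked from the standard closed-form coefficients; the degree and integer $\alpha$ below are illustrative:

```python
import numpy as np
from math import comb, factorial

def genlaguerre_coeffs(n, a):
    """Coefficients (ascending powers of x) of L_n^{(a)} from the closed form
    L_n^{(a)}(x) = sum_i (-1)^i C(n+a, n-i) x^i / i!  (a a nonnegative integer here)."""
    return np.array([(-1)**i * comb(n + a, n - i) / factorial(i)
                     for i in range(n + 1)])

n, a = 6, 2
L = genlaguerre_coeffs(n, a)
dL = L[1:] * np.arange(1, n + 1)    # coefficients of d/dx L_n^{(a)}

# Recurrence: d/dx L_n^{(a)} = - L_{n-1}^{(a+1)}
print(np.allclose(dL, -genlaguerre_coeffs(n - 1, a + 1)))
```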

  31. Laguerre Polynomials.
$$u_c(t,x) = v\left(\frac{c+t}{1-t},\, \frac{x}{1-t}\right).$$
Figure: Marchenko–Pastur solutions $u_c(t,x)$: $c = 1$ (left) and $c = 15$ (right), shown for $t \in \{0, 0.2, 0.4, 0.6, 0.8, 0.9, 0.95, 0.99\}$.

  32–35. A Bonus Solution. There are several classical orthogonal polynomials on $[-1,1]$ (Gegenbauer, Jacobi, ...). For fairly general classes of such polynomials (Erdős–Freud theorem), the distribution of roots is asymptotically given by
$$\mu = \frac{1}{\pi}\,\frac{dx}{\sqrt{1-x^2}}.$$
As it turns out,
$$u(t,x) = \frac{c}{\sqrt{1-x^2}}$$
is indeed a stationary solution of the equation.

Theorem (Tricomi?). Let $f : (-1,1) \to \mathbb{R}_{\ge 0}$. If $Hf \equiv 0$ in $(-1,1)$, then
$$f = \frac{c}{\sqrt{1-x^2}}.$$
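The stationarity of the arcsine density can be seen concretely with Chebyshev polynomials: the roots of $T_n$ are $\cos\!\big(\tfrac{(2k-1)\pi}{2n}\big)$ and the roots of $T_n'$ (a multiple of $U_{n-1}$) are $\cos\!\big(\tfrac{k\pi}{n}\big)$, so both are cosines of equispaced angles and hence arcsine-distributed. A numpy check (the degree is illustrative):

```python
import numpy as np
from numpy.polynomial import Chebyshev

n = 40
T = Chebyshev.basis(n)                         # T_n
r = np.sort(T.deriv().roots().real)            # T_n' = n * U_{n-1}
expected = np.sort(np.cos(np.arange(1, n) * np.pi / n))  # roots of U_{n-1}

# Differentiation maps one arcsine-distributed root set to another.
print(np.max(np.abs(r - expected)))
```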

  36–40.
$$\frac{\partial u}{\partial t} + \frac{1}{\pi}\frac{\partial}{\partial x}\arctan\left(\frac{Hu}{u}\right) = 0 \quad \text{on } \operatorname{supp}(u).$$
Sketch of the Derivation. Crystallization is the key assumption. [Figure: evolution of $u(t,x)$.]

  41–44. The roots of $p'$ are the solutions $x \notin \{x_k\}$ of
$$\sum_{k=1}^{n} \frac{1}{x - x_k} = 0.$$
Split the sum into a far field and a near field:
$$\sum_{k=1}^{n} \frac{1}{x - x_k} = \sum_{|x_k - x| \text{ large}} \frac{1}{x - x_k} + \sum_{|x_k - x| \text{ small}} \frac{1}{x - x_k}.$$
The far field is governed by the density:
$$\sum_{|x_k - x| \text{ large}} \frac{1}{x - x_k} \sim n\,\mathrm{p.v.}\int_{\mathbb{R}} \frac{u(t,y)}{x - y}\,dy = \pi n\,[Hu](t,x).$$
It thus remains to understand the behavior of the local term.
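The starting identity is just the logarithmic derivative, $p'(x)/p(x) = \sum_k \frac{1}{x - x_k}$: the roots of $p'$ are exactly the points where the electrostatic field of the roots vanishes. A quick numpy check (the roots are an arbitrary illustrative choice):

```python
import numpy as np

x = np.array([0.0, 1.0, 3.0, 7.0, 12.0])      # roots of p (illustrative)
p = np.poly(x)
crit = np.roots(np.polyder(p)).real           # roots of p'

# At each root of p', the field sum_k 1/(x - x_k) vanishes.
field = np.array([np.sum(1.0 / (c - x)) for c in crit])
print(np.max(np.abs(field)))
```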

  45–48. The local term is
$$\sum_{|x_k - x| \text{ small}} \frac{1}{x - x_k}.$$
Crystallization means that the roots locally form an arithmetic progression, with spacing $1/(n\,u(t,x))$, and thus
$$\sum_{|x_k - x| \text{ small}} \frac{1}{x - x_k} \sim \sum_{\ell \in \mathbb{Z}} \frac{1}{x - x_k + \frac{\ell}{n\,u(t,x)}}.$$
We are in luck: this sum has a closed-form expression due to Euler,
$$\pi \cot(\pi x) = \frac{1}{x} + \sum_{n=1}^{\infty} \left(\frac{1}{x+n} + \frac{1}{x-n}\right) \qquad \text{for } x \in \mathbb{R}\setminus\mathbb{Z}.$$
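Euler's partial-fraction expansion of the cotangent is easy to verify numerically; the tail of the truncated sum decays like $2x/N$, so a large cutoff gives good agreement (the evaluation point and cutoff below are illustrative):

```python
import numpy as np

def cot_partial_fractions(x, N):
    """Truncation of Euler's formula: 1/x + sum_{n=1}^{N} (1/(x+n) + 1/(x-n))."""
    n = np.arange(1, N + 1)
    return 1.0 / x + np.sum(1.0 / (x + n) + 1.0 / (x - n))

x = 0.3
approx = cot_partial_fractions(x, 200000)
exact = np.pi / np.tan(np.pi * x)
print(approx, exact)
```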

  49–50. The Local Field. We can then predict the behavior of the roots of the derivative: they lie where the local (near) field and the global (far) field cancel out. This leads to the desired equation.
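This cancellation picture places exactly one root of $p'$ strictly between consecutive roots of $p$ (which is also just Rolle's theorem). A numpy check (the roots are an arbitrary illustrative choice):

```python
import numpy as np

x = np.sort(np.array([-3.0, -1.0, 0.5, 2.0, 6.0]))   # roots of p (illustrative)
crit = np.sort(np.roots(np.polyder(np.poly(x))).real)

# Interlacing: x_1 < c_1 < x_2 < c_2 < ... < x_n
interlaced = all(x[i] < crit[i] < x[i + 1] for i in range(len(crit)))
print(interlaced)
```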

  51. A Fast Numerical Algorithm. Jeremy Hoskins (U Chicago) used the electrostatic interpretation to produce an algorithm that can compute all derivatives of polynomials up to degree $\sim 100{,}000$.
