1. An asymptotic lower bound for the norm of the Laplace operator on a space of polynomials
   Christian Rebs, Technische Universität Chemnitz, 17.8.2017

2. Introduction
   A. Böttcher, C. Rebs: On the constants in Markov inequalities for the Laplace operator on polynomials with the Laguerre norm, Asymptotic Analysis 101 (2017), 227-239.
   For $n, N \in \mathbb{N}$ we define $P_n^N$ as the finite-dimensional linear space of all complex polynomials $f$ of the form
   $$ f(t_1, \ldots, t_N) = \sum_{(i_1/n, \ldots, i_N/n) \in [0,1]^N} f_{i_1,\ldots,i_N}\, t_1^{i_1} \cdots t_N^{i_N}, \qquad f_{i_1,\ldots,i_N} \in \mathbb{C}. $$
   We equip this space with the Laguerre norm
   $$ \|f\|^2 := \int_{(0,\infty)^N} |f(t_1, \ldots, t_N)|^2\, e^{-t_1} \cdots e^{-t_N}\, dt_1 \cdots dt_N. $$
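For the case $N = 1$ the Laguerre norm can be evaluated exactly by Gauss-Laguerre quadrature. A minimal Python sketch, assuming NumPy; the helper name laguerre_norm_1d is illustrative:

```python
import numpy as np

def laguerre_norm_1d(coeffs, quad_deg=50):
    """Squared Laguerre norm ||f||^2 = int_0^inf |f(t)|^2 e^{-t} dt for N = 1,
    with coeffs[k] the coefficient of t^k."""
    # Gauss-Laguerre quadrature integrates p(t) e^{-t} exactly for deg p <= 2*quad_deg - 1.
    nodes, weights = np.polynomial.laguerre.laggauss(quad_deg)
    values = np.polyval(coeffs[::-1], nodes)
    return np.sum(weights * np.abs(values) ** 2)

# Example: f(t) = 1 - t gives ||f||^2 = 1 - 2 + 2 = 1.
print(laguerre_norm_1d(np.array([1.0, -1.0])))   # ~1.0
```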

3. The Laplace operator
   $$ \Delta = \frac{\partial^2}{\partial t_1^2} + \ldots + \frac{\partial^2}{\partial t_N^2} : P_n^N \to P_n^N $$
   is a bounded linear operator on $P_n^N$. There exists a constant $C = C(n, N)$ such that $\|\Delta f\| \le C \|f\|$ holds for all $f \in P_n^N$. The best constant with this property is $C = \|\Delta\|$.
   Aim: calculate (asymptotic) bounds for $\|\Delta\|$.

4. Main result
   Theorem 1. Let $n, N \in \mathbb{N}$ and let $\omega_0$ be the positive solution of the equation $2 + 2\cosh(\omega) - \omega\sinh(\omega) = 0$. Then we have for the operator norm of the Laplace operator on $P_n^N$ with respect to the Laguerre norm
   $$ \|\Delta\| \ge \frac{N}{\omega_0^2}\,(n+1)^2 + o\!\left((n+1)^2\right) \qquad \text{for } n \to \infty. $$
   With $2/\omega_0^2 = 0.34741\ldots$ this implies
   $$ \liminf_{n \to \infty} \frac{\|\Delta\|}{N n^2} \ge 0.1737. $$
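A short Python sketch, assuming NumPy and SciPy, that solves the defining equation for $\omega_0$ numerically and recovers the constants $2/\omega_0^2 \approx 0.34741$ and $1/\omega_0^2 \approx 0.1737$:

```python
import numpy as np
from scipy.optimize import brentq

# omega_0: the positive root of 2 + 2 cosh(w) - w sinh(w) = 0
f = lambda w: 2 + 2 * np.cosh(w) - w * np.sinh(w)
omega0 = brentq(f, 1.0, 10.0)          # f is strictly decreasing on (0, inf), sign change on [1, 10]
print(omega0)                          # ~2.3994
print(2 / omega0 ** 2)                 # ~0.34741
print(1 / omega0 ** 2)                 # ~0.1737
```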

5. The matrix representation of $\Delta$
   The norm $\|\Delta\|$ is the largest singular value of the matrix representation $[\Delta]$ of $\Delta$ in an orthonormal basis of $P_n^N$. We set $P_n := P_n^1$. For $k \in \mathbb{N}_0$ we set
   $$ L_k(t) := 1 - \binom{k}{1}\frac{t}{1!} + \binom{k}{2}\frac{t^2}{2!} - \cdots + (-1)^k \binom{k}{k}\frac{t^k}{k!} $$
   and get an orthonormal basis $\{L_0, L_1, \ldots, L_n\}$ of $P_n$.
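These are the classical Laguerre polynomials, so their orthonormality with respect to the weight $e^{-t}$ can be checked directly. A small sketch, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.special import eval_laguerre

n = 5
nodes, weights = np.polynomial.laguerre.laggauss(n + 1)    # exact for degree <= 2n + 1
V = np.array([eval_laguerre(k, nodes) for k in range(n + 1)])
# Gram matrix G_{ij} = int_0^inf L_i(t) L_j(t) e^{-t} dt
G = (V * weights) @ V.T
print(np.allclose(G, np.eye(n + 1)))   # True: {L_0, ..., L_n} is orthonormal in P_n
```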

6. The ordinary differential operator $D^2$ is defined by $D^2 : P_n \to P_n$, $f \mapsto f''$.
   [1] L. F. Shampine: Some $L^2$ Markoff inequalities, J. Res. Nat. Bur. Standards 69B (1965), 155-158:
   The matrix representation of $D^2$ is the $(n+1) \times (n+1)$ matrix
   $$ [D^2] = \begin{pmatrix}
   0 & 0 & 1 & 2 & \cdots & n-2 & n-1 \\
     & 0 & 0 & 1 & \cdots & n-3 & n-2 \\
     &   & \ddots & \ddots & \ddots & \vdots & \vdots \\
     &   &        & \ddots & \ddots & 1 & 2 \\
     &   &        &        & \ddots & 0 & 1 \\
     &   &        &        &        & 0 & 0 \\
     &   &        &        &        &   & 0
   \end{pmatrix}. $$
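Reading the entries off this matrix, $[D^2]_{ij} = j - i - 1$ for $j \ge i + 2$ and $0$ otherwise (0-based indices). A Python sketch, assuming NumPy, that builds the matrix and cross-checks its columns against the Laguerre-series coefficients of $L_k''$:

```python
import numpy as np
from numpy.polynomial import laguerre as Lag

def D2_matrix(n):
    """Matrix of f -> f'' in the Laguerre basis: entry (i, j) = j - i - 1 for j >= i + 2."""
    i, j = np.indices((n + 1, n + 1))
    return np.where(j >= i + 2, j - i - 1, 0).astype(float)

n = 6
D2 = D2_matrix(n)
for k in range(n + 1):
    e_k = np.zeros(n + 1); e_k[k] = 1.0     # Laguerre coefficients of L_k
    c = Lag.lagder(e_k, m=2)                # Laguerre coefficients of L_k''
    col = np.zeros(n + 1); col[:len(c)] = c
    assert np.allclose(col, D2[:, k])       # column k of [D^2] encodes L_k''
print(D2_matrix(4))
```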

7. We identify $P_n^N$ with $\bigotimes_{j=1}^N P_n$. The polynomials $L_{j_1} \otimes \cdots \otimes L_{j_N}$ with $j_1, \ldots, j_N \in \{0, \ldots, n\}$ form an orthonormal basis in $P_n^N$. The matrix representation of the Laplace operator in this basis is
   $$ \begin{aligned}
   [\Delta] &= \left[ D^2 \otimes I \otimes \ldots \otimes I + I \otimes D^2 \otimes I \otimes \ldots \otimes I + \ldots + I \otimes \ldots \otimes I \otimes D^2 \right] \\
   &= [D^2] \otimes I_{n+1} \otimes \ldots \otimes I_{n+1} + I_{n+1} \otimes [D^2] \otimes I_{n+1} \otimes \ldots \otimes I_{n+1} + \ldots + I_{n+1} \otimes \ldots \otimes I_{n+1} \otimes [D^2] \\
   &= \sum_{k=1}^{N} I_{(n+1)^{k-1}} \otimes [D^2] \otimes I_{(n+1)^{N-k}}.
   \end{aligned} $$
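A minimal Python sketch, assuming NumPy, that assembles $[\Delta]$ from exactly this Kronecker-product formula; the helper name laplace_matrix is illustrative:

```python
import numpy as np
from functools import reduce

def laplace_matrix(n, N):
    """[Delta] = sum_{k=1}^N I_{(n+1)^{k-1}} kron [D^2] kron I_{(n+1)^{N-k}}."""
    i, j = np.indices((n + 1, n + 1))
    D2 = np.where(j >= i + 2, j - i - 1, 0).astype(float)   # [D^2] from the previous slide
    I = np.eye(n + 1)
    return sum(reduce(np.kron, [I] * (k - 1) + [D2] + [I] * (N - k))
               for k in range(1, N + 1))

print(laplace_matrix(3, 2).shape)   # ((n+1)^N, (n+1)^N) = (16, 16)
```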

8. Proof of the main result
   For an $n \times n$ matrix $A$ we set $H(A) := \frac{1}{2}(A + A^*)$.
   [2] R. A. Horn, C. R. Johnson: Topics in Matrix Analysis, Cambridge University Press, Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, Sao Paulo, 8th printing, 2007:
   Theorem 2 [Corollary 3.1.5 in [2]]. For an $n \times n$ matrix $A$ we denote by $\sigma_1(A) \ge \sigma_2(A) \ge \ldots \ge \sigma_n(A)$ the singular values of $A$ and by $\lambda_1(H(A)) \ge \ldots \ge \lambda_n(H(A))$ the eigenvalues of $H(A)$. Then we have the estimate $\sigma_k(A) \ge \lambda_k(H(A))$ for all $k = 1, \ldots, n$.
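A quick numerical illustration of Theorem 2 on a random real matrix, assuming NumPy (for real $A$ we have $H(A) = \frac{1}{2}(A + A^T)$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
sigma = np.linalg.svd(A, compute_uv=False)               # sigma_1 >= ... >= sigma_n
lam = np.sort(np.linalg.eigvalsh((A + A.T) / 2))[::-1]   # lambda_1(H(A)) >= ... >= lambda_n(H(A))
print(np.all(sigma >= lam - 1e-12))                      # True: sigma_k(A) >= lambda_k(H(A))
```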

9. For an $n \times n$ matrix $A$ and an $m \times m$ matrix $B$ we define the Kronecker sum
   $$ A \oplus B := (I_m \otimes A) + (B \otimes I_n). $$
   Theorem 3 [Theorem 4.4.5 in [2]]. If $\lambda$ is an eigenvalue of $A$ and if $\mu$ is an eigenvalue of $B$, then $\lambda + \mu$ is an eigenvalue of $A \oplus B$. If $\nu$ is an eigenvalue of $A \oplus B$, then there exist eigenvalues $\lambda$ of $A$ and $\mu$ of $B$ such that $\nu = \lambda + \mu$.
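A small sketch, assuming NumPy, that verifies Theorem 3 for random matrices: the spectrum of $A \oplus B$ consists exactly of the pairwise sums $\lambda + \mu$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))          # n = 3
B = rng.standard_normal((4, 4))          # m = 4
kron_sum = np.kron(np.eye(4), A) + np.kron(B, np.eye(3))   # A + B as Kronecker sum: (I_m kron A) + (B kron I_n)
ev = np.sort_complex(np.linalg.eigvals(kron_sum))
sums = np.sort_complex(np.array([lam + mu
                                 for lam in np.linalg.eigvals(A)
                                 for mu in np.linalg.eigvals(B)]))
print(np.allclose(ev, sums))             # True (generically; exact ties could reorder the sort)
```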

10. We need Kronecker sums with more than two summands: For matrices $A_1$ of order $n_1$, ..., $A_N$ of order $n_N$ we set
    $$ A_1 \oplus \ldots \oplus A_N := A_1 \oplus \big( A_2 \oplus ( A_3 \oplus ( \ldots \oplus ( A_{N-1} \oplus A_N ) \ldots ) ) \big). $$
    For the Kronecker sum with $N$ summands we get the formula
    $$ A_1 \oplus \ldots \oplus A_N = \sum_{j=1}^{N} I_{\prod_{k=j+1}^{N} n_k} \otimes A_j \otimes I_{\prod_{k=1}^{j-1} n_k}, $$
    where we set $\prod_{k=j}^{l} n_k = 1$ for $l < j$.
    Eigenvalues: For eigenvalues $\lambda_1$ of $A_1$, $\lambda_2$ of $A_2$, ..., $\lambda_N$ of $A_N$, the sum $\lambda_1 + \ldots + \lambda_N$ is an eigenvalue of $A_1 \oplus \ldots \oplus A_N$. Every eigenvalue $\lambda$ of $A_1 \oplus \ldots \oplus A_N$ is a sum $\lambda = \lambda_1 + \ldots + \lambda_N$ of eigenvalues $\lambda_k$ of $A_k$, $k = 1, \ldots, N$.

11. Now we get an estimate for the norm of the Laplace operator:
    $$ \begin{aligned}
    \|\Delta\| = \sigma_{\max}([\Delta]) &\ge \lambda_{\max}\!\left( \tfrac{1}{2}\big( [\Delta] + [\Delta]^T \big) \right) \\
    &= \tfrac{1}{2}\, \lambda_{\max}\!\left( \sum_{k=1}^{N} I_{(n+1)^{k-1}} \otimes [D^2] \otimes I_{(n+1)^{N-k}} + \sum_{k=1}^{N} I_{(n+1)^{k-1}} \otimes [D^2]^T \otimes I_{(n+1)^{N-k}} \right) \\
    &= \tfrac{1}{2}\, \lambda_{\max}\!\left( \sum_{k=1}^{N} I_{(n+1)^{k-1}} \otimes \big( [D^2]^T + [D^2] \big) \otimes I_{(n+1)^{N-k}} \right) \\
    &= \tfrac{1}{2}\, \lambda_{\max}\!\left( \big( [D^2]^T + [D^2] \big) \oplus \ldots \oplus \big( [D^2]^T + [D^2] \big) \right) \\
    &= \tfrac{N}{2}\, \lambda_{\max}\!\left( [D^2]^T + [D^2] \right).
    \end{aligned} $$
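A numerical sanity check of this chain for small sizes (the choice $n = 6$, $N = 3$ is arbitrary), assuming NumPy:

```python
import numpy as np
from functools import reduce

n, N = 6, 3
i, j = np.indices((n + 1, n + 1))
D2 = np.where(j >= i + 2, j - i - 1, 0).astype(float)      # [D^2]
I = np.eye(n + 1)
Delta = sum(reduce(np.kron, [I] * (k - 1) + [D2] + [I] * (N - k))
            for k in range(1, N + 1))                       # [Delta]

sigma_max = np.linalg.norm(Delta, 2)                        # ||Delta|| = sigma_max([Delta])
lower = N / 2 * np.linalg.eigvalsh(D2 + D2.T)[-1]           # (N/2) lambda_max([D^2]^T + [D^2])
print(sigma_max >= lower, sigma_max, lower)                 # True, with an explicit gap
```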

12. We have to calculate the largest eigenvalue of
    $$ [D^2] + [D^2]^T = \begin{pmatrix}
    0 & 0 & 1 & 2 & \cdots & n-2 & n-1 \\
    0 & 0 & 0 & 1 & \cdots & n-3 & n-2 \\
    1 & 0 & 0 & 0 & \cdots & n-4 & n-3 \\
    2 & 1 & 0 & 0 & \cdots & n-5 & n-4 \\
    \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
    n-2 & n-3 & n-4 & n-5 & \cdots & 0 & 0 \\
    n-1 & n-2 & n-3 & n-4 & \cdots & 0 & 0
    \end{pmatrix}. $$

13. [3] J. M. Bogoya, A. Böttcher, S. M. Grudsky: Eigenvalues of Hermitian Toeplitz matrices with polynomially increasing entries, Journal of Spectral Theory 2 (2012), 267-292:
    For $\alpha > 0$ we define the integral operator $K_\alpha$ on $L^2(0,1)$ as
    $$ (K_\alpha f)(x) := \int_0^1 |x - y|^\alpha f(y)\, dy $$
    for $x \in (0,1)$. We denote by $\mu_1(K_\alpha) \ge \mu_2(K_\alpha) \ge \ldots > 0$ the positive eigenvalues of $K_\alpha$ and by $L_+$ the index set of these eigenvalues.
    Theorem 4 [Theorem 1.2 in [3]]. The operator $K_1$ has only one positive eigenvalue,
    $$ \mu_1(K_1) = \frac{2}{\omega_0^2} = 0.34741\ldots . $$
    Here $\omega_0$ is the positive solution of the equation $2 + 2\cosh(\omega) - \omega\sinh(\omega) = 0$.
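Theorem 4 can be illustrated by a simple Nyström discretization of $K_1$ on a midpoint grid, assuming NumPy (the grid size 2000 is an arbitrary choice):

```python
import numpy as np

m = 2000
x = (np.arange(m) + 0.5) / m                    # midpoint grid on (0, 1)
K = np.abs(x[:, None] - x[None, :]) / m         # discretized kernel |x - y| with weight 1/m
mu = np.linalg.eigvalsh(K)                      # eigenvalues in ascending order
print(mu[-1])    # ~0.34741 = 2/omega_0^2: the single positive eigenvalue of K_1
print(mu[-2])    # close to zero: no second positive eigenvalue of comparable size
```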

14. We denote by $T_n[a_0, \ldots, a_{n-1}]$ the Hermitian Toeplitz matrix
    $$ T_n = T_n[a_0, \ldots, a_{n-1}] = \begin{pmatrix}
    a_0 & a_1 & \cdots & a_{n-1} \\
    \bar{a}_1 & a_0 & \cdots & a_{n-2} \\
    \vdots & \vdots & \ddots & \vdots \\
    \bar{a}_{n-1} & \bar{a}_{n-2} & \cdots & a_0
    \end{pmatrix}. $$
    For the eigenvalues $\tilde{\lambda}_1(T_n) \le \tilde{\lambda}_2(T_n) \le \ldots \le \tilde{\lambda}_n(T_n)$ of $T_n$ we have:
    Theorem 5 [Theorem 1.1 in [3]]. We assume $a_k = k^\alpha + o(k^\alpha)$ for $k \to \infty$, $\alpha > 0$. Then the eigenvalues of $T_n = T_n[a_0, \ldots, a_{n-1}]$ satisfy, as $n \to \infty$,
    $$ \tilde{\lambda}_{n+1-l}(T_n) = \mu_l(K_\alpha)\, n^{\alpha+1} + o(n^{\alpha+1}) \qquad \text{for } l \in L_+. $$
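A quick check of Theorem 5 for the matrix that appears here, $T_{n+1}[0, 0, 1, \ldots, n-1]$ (so $\alpha = 1$, $l = 1$), assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import toeplitz

n = 1500
col = np.concatenate(([0.0, 0.0], np.arange(1.0, n)))    # a_0, ..., a_n = 0, 0, 1, ..., n-1
T = toeplitz(col)                                         # T_{n+1}[0, 0, 1, ..., n-1]
lam_max = np.linalg.eigvalsh(T)[-1]
print(lam_max / (n + 1) ** 2)   # tends to mu_1(K_1) = 2/omega_0^2 ~ 0.34741 as n grows
```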

15. We have $[D^2] + [D^2]^T = T_{n+1}[0, 0, 1, \ldots, n-1] = T_{n+1}[a_0, \ldots, a_n]$ with $a_k = k + o(k)$ for $k \to \infty$. Now we apply Theorem 1.1 from [3] and get for the largest eigenvalue of $[D^2] + [D^2]^T$
    $$ \begin{aligned}
    \lambda_{\max}\!\left( [D^2]^T + [D^2] \right) &= \tilde{\lambda}_{n+1}\!\left( T_{n+1}[0, 0, 1, \ldots, n-1] \right) \\
    &= \tilde{\lambda}_{n+2-1}\!\left( T_{n+1}[0, 0, 1, \ldots, n-1] \right) \\
    &= \mu_1(K_1)(n+1)^2 + o\!\left((n+1)^2\right) \\
    &= \frac{2}{\omega_0^2}(n+1)^2 + o\!\left((n+1)^2\right) \qquad (n \to \infty).
    \end{aligned} $$
    This leads for all $N \in \mathbb{N}$ to the estimate
    $$ \|\Delta\| \ge \frac{N}{2}\, \lambda_{\max}\!\left( [D^2]^T + [D^2] \right) = \frac{N}{\omega_0^2}(n+1)^2 + o\!\left((n+1)^2\right) $$
    for $n \to \infty$.

16. An upper bound
    We have
    $$ \|\Delta\| = \left\| \sum_{k=1}^{N} I_{(n+1)^{k-1}} \otimes [D^2] \otimes I_{(n+1)^{N-k}} \right\| \le N\, \big\| [D^2] \big\|. $$
    From [1] L. F. Shampine: Some $L^2$ Markoff inequalities, J. Res. Nat. Bur. Standards 69B (1965), 155-158, we know
    $$ \frac{\big\| [D^2] \big\|}{n^2} \to \frac{1}{\mu^2} = 0.28441\ldots, $$
    where $\mu$ is the smallest positive solution of $1 + \cos\mu \cosh\mu = 0$. This leads to
    $$ 0.1737 \le \liminf_{n \to \infty} \frac{\|\Delta\|}{N n^2} \le \limsup_{n \to \infty} \frac{\|\Delta\|}{N n^2} \le 0.2845. $$
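A short sketch, assuming NumPy and SciPy, that computes $\mu$ and compares $\|[D^2]\|/n^2$ with $1/\mu^2$ for a moderate $n$ (the choice $n = 200$ is arbitrary):

```python
import numpy as np
from scipy.optimize import brentq

# mu: smallest positive solution of 1 + cos(mu) cosh(mu) = 0
g = lambda m: 1 + np.cos(m) * np.cosh(m)
mu = brentq(g, 1.0, 3.0)
print(1 / mu ** 2)                        # 0.28441...

n = 200
i, j = np.indices((n + 1, n + 1))
D2 = np.where(j >= i + 2, j - i - 1, 0).astype(float)
print(np.linalg.norm(D2, 2) / n ** 2)     # close to 1/mu^2 (cf. the N = 1 row of the table below)
```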

17. Numerical values of $\|\Delta\| / (N n^2)$:

    N = 1:   n                      10       50       100      500      1000
             $\|\Delta\|/n^2$       0.2829   0.2844   0.2844   0.2844   0.2844

    N = 2:   n                      10       50       100      500      1000
             $\|\Delta\|/(2n^2)$    0.2107   0.2175   0.2183   0.2189   0.2190

    N = 3:   n                      10       20       30       50       100
             $\|\Delta\|/(3n^2)$    0.1950   0.1993   0.2008   0.2020   0.2029

    N = 4:   n                      10       20       30       50       100
             $\|\Delta\|/(4n^2)$    0.1855   0.1906   0.1923   0.1937   0.1947

    N = 5:   n                      10       20       30       50
             $\|\Delta\|/(5n^2)$    0.1804   0.1857   0.1875   0.1889

    N = 6:   n                      10       20       30
             $\|\Delta\|/(6n^2)$    0.1770   0.1824   0.1843

    N = 7:   n                      10
             $\|\Delta\|/(7n^2)$    0.1745
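One of these entries can be reproduced directly (here the value 0.2107 for $N = 2$, $n = 10$), assuming NumPy:

```python
import numpy as np

n, N = 10, 2
i, j = np.indices((n + 1, n + 1))
D2 = np.where(j >= i + 2, j - i - 1, 0).astype(float)
I = np.eye(n + 1)
Delta = np.kron(D2, I) + np.kron(I, D2)          # [Delta] for N = 2
print(np.linalg.norm(Delta, 2) / (N * n ** 2))   # ~0.2107, as in the table
```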
