Optimal adaptive wavelet methods

Optimal adaptive wavelet methods for linear operator equations. T. Gantumur, R. Stevenson.



  1. Optimal adaptive wavelet methods for linear operator equations. T. Gantumur, R. Stevenson. Numerical Colloquium, 17 June.

  2. Overview. Linear operator equation Au = g with A : H → H′. Riesz basis Ψ = {ψ_λ} of H, with u = Σ_λ u_λ ψ_λ. Equivalent infinite-dimensional matrix-vector system Au = g, with u = (u_λ)_λ and A : ℓ₂ → ℓ₂. Convergent iterations such as u^(i+1) = u^(i) + α[g − Au^(i)]. We can approximate Au^(i) by a finitely supported vector. How cheaply can we compute this approximation? The answer will depend on A and Ψ.

  3. Outline. (1) Continuous problem, discretization, and convergent iterations: linear operator equations; discretization; convergent iterations in discrete space. (2) Complexity analysis: uniform methods (convergence, complexity); nonlinear approximation; optimal complexity; computability. (3) An adaptive Galerkin method: optimal complexity with coarsening; optimal complexity without coarsening.

  4. Linear Operator Equations. Let H be a separable Hilbert space and H′ its dual. A : H → H′ is boundedly invertible, and g ∈ H′ is a linear functional. Problem: find u ∈ H such that Au = g. For v ∈ H and h ∈ H′, ⟨h, v⟩ = h(v) denotes the duality pairing.

  5. Sobolev Spaces. Let Ω be an n-dimensional domain or smooth manifold. H = H^t ⊂ H^t(Ω) is a closed subspace, and H′ = H^{−t} is the dual space.

  6. Linear Differential Operators. Partial differential operators of order 2t: ⟨Au, v⟩ = Σ_{|α|,|β|≤t} ⟨a_{αβ} ∂^β u, ∂^α v⟩. Example: the reaction-diffusion equation (t = 1), ⟨Au, v⟩ = ∫_Ω ∇u · ∇v + κ² uv.

  7. Boundary Integral Operators. ⟨Au, v⟩ = ∫_Ω ∫_Ω v(x) K(x, y) u(y) dΩ_y dΩ_x, with the kernel K(x, y) singular at x = y. Example: the single layer operator for the Laplace BVP on a 3-d domain (t = −1/2), with K(x, y) = 1/(4π|x − y|).

  8. Convergent Iterations in Continuous Space. Gradient iterations: u^(i+1) = u^(i) + B_i(g − Au^(i)), with B_i : H′ → H. Then u − u^(i+1) = u − u^(i) − B_i A(u − u^(i)) = (I − B_i A)(u − u^(i)), so ‖u − u^(i+1)‖_H ≤ ‖I − B_i A‖_{H→H} ‖u − u^(i)‖_H. Convergence requires ρ_i := ‖I − B_i A‖_{H→H} < 1.
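Spelling out the convergence claim (a small worked step not written on the slide): applying the contraction bound recursively gives geometric decay of the error whenever the factors ρ_i stay uniformly below 1.

```latex
\|u - u^{(i)}\|_H \;\le\; \Bigl(\prod_{k=0}^{i-1}\rho_k\Bigr)\,\|u - u^{(0)}\|_H
\;\le\; \rho^{\,i}\,\|u - u^{(0)}\|_H
\qquad \text{if } \rho_k \le \rho < 1 \text{ for all } k.
```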

  9. Normal Equations. Observation: let R : H′ → H be self-adjoint, ⟨Rf, h⟩ = ⟨f, Rh⟩ for f, h ∈ H′, and H′-elliptic, i.e. ⟨Rf, f⟩ ≥ α‖f‖²_{H′} for some α > 0 and all f ∈ H′. Then A′RA : H → H′ is self-adjoint and H-elliptic. Normal equation: Au = g ⟹ A′RAu = A′Rg. Assumption: A is self-adjoint and H-elliptic.
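A brief check of the observation (my own spelling-out, using only the stated properties of R and the bounded invertibility of A):

```latex
\langle A'RAv, w\rangle = \langle RAv, Aw\rangle = \langle Av, RAw\rangle = \langle A'RAw, v\rangle,
\qquad
\langle A'RAv, v\rangle = \langle RAv, Av\rangle \;\ge\; \alpha\,\|Av\|_{H'}^2 \;\gtrsim\; \|v\|_H^2,
```

where the last step uses ‖Av‖_{H′} ≳ ‖v‖_H, a consequence of the bounded invertibility of A.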

  10. Riesz bases. Ψ = {ψ_λ : λ ∈ ∇} is a Riesz basis for H: each v ∈ H has a unique expansion v = Σ_{λ∈∇} d_λ(v) ψ_λ such that c‖v‖²_H ≤ Σ_{λ∈∇} |d_λ(v)|² ≤ C‖v‖²_H. Here d_λ ∈ H′ and d_λ(ψ_μ) = δ_{λμ}. {d_λ : λ ∈ ∇} is a Riesz basis for H′, and Ψ̃ = {ψ̃_λ} := {d_λ} is the dual basis: ⟨ψ̃_λ, ψ_μ⟩ = δ_{λμ}. For v ∈ H, we have v = {v_λ} := {d_λ(v)} ∈ ℓ₂(∇).

  11. Wavelet bases. Ψ is a Riesz basis for H = H^t, with nested index sets ∇_0 ⊂ ∇_1 ⊂ … ⊂ ∇_j ⊂ … ⊂ ∇, S_j = span{ψ_λ : λ ∈ ∇_j} ⊂ H and S̃_j = span{ψ̃_λ : λ ∈ ∇_j} ⊂ H′. Locality: diam(supp ψ_λ) = O(2^{−j}) if λ ∈ ∇_j \ ∇_{j−1}. Polynomial exactness: all polynomials of degree d − 1 are contained in S_0, P_{d−1} ⊂ S_0, and likewise P_{d̃−1} ⊂ S̃_0 with respect to ⟨·,·⟩_{L₂}, so {S_j} has a good approximation property. Vanishing moments: if λ ∈ ∇ \ ∇_0, we have ⟨P_{d̃−1}, ψ_λ⟩_{L₂} = 0, the cancellation property.

  12. Equivalent Discrete Problem [Cohen, Dahmen, DeVore ’02]. Wavelet basis Ψ = {ψ_λ : λ ∈ ∇}. Stiffness matrix A = (⟨Aψ_λ, ψ_μ⟩)_{λ,μ} and load vector g = (⟨g, ψ_λ⟩)_λ. Linear equation in ℓ₂(∇): Au = g, with A : ℓ₂(∇) → ℓ₂(∇) SPD and g ∈ ℓ₂(∇). The solution u = (u_λ)_λ of the discrete system yields the solution u = Σ_λ u_λ ψ_λ of the original equation Au = g, and ‖u − v‖_{ℓ₂(∇)} ≂ ‖u − v‖_H with v = Σ_λ v_λ ψ_λ: a good ℓ₂-approximation of the coefficient vector induces a good approximation of u in H. Ψ defines a topological isomorphism between H and ℓ₂(∇).
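Why the ℓ₂-norm of the coefficient error controls the H-norm of the function error: this is the Riesz basis inequality from slide 10 applied to u − v (a short worked step, with c, C the Riesz constants):

```latex
c\,\|u - v\|_H^2 \;\le\; \sum_{\lambda\in\nabla} |u_\lambda - v_\lambda|^2 \;=\; \|\mathbf{u} - \mathbf{v}\|_{\ell_2(\nabla)}^2 \;\le\; C\,\|u - v\|_H^2 .
```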

  13. Convergent Iterations in Discrete Space. Richardson's iteration: u^(0) = 0, u^(i+1) = u^(i) + α[g − Au^(i)], i = 0, 1, …. Then u − u^(i+1) = u − u^(i) − αA(u − u^(i)) = (I − αA)(u − u^(i)), so ‖u − u^(i+1)‖_{ℓ₂} ≤ ‖I − αA‖_{ℓ₂→ℓ₂} ‖u − u^(i)‖_{ℓ₂}. Convergence requires ρ := ‖I − αA‖_{ℓ₂→ℓ₂} < 1. But g and Au^(i) are infinitely supported; approximate them by finitely supported sequences.
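A minimal Python sketch of damped Richardson on a finite section of such a system, assuming an SPD matrix A_fin and right-hand side g_fin are given (these names and the random test problem are mine, not from the talk); the damping α = 2/(λ_min + λ_max), which minimizes ‖I − αA‖, is the standard choice.

```python
import numpy as np

def richardson(A, g, tol=1e-10, max_iter=10_000):
    """Damped Richardson iteration u <- u + alpha*(g - A@u) for an SPD matrix A."""
    eigs = np.linalg.eigvalsh(A)              # eigenvalues of the SPD matrix, ascending
    alpha = 2.0 / (eigs[0] + eigs[-1])        # optimal damping: rho = (kappa-1)/(kappa+1) < 1
    u = np.zeros_like(g)
    for _ in range(max_iter):
        r = g - A @ u                         # exact residual (finite system)
        if np.linalg.norm(r) <= tol:
            break
        u = u + alpha * r
    return u

# toy usage on a random well-conditioned SPD system
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A_fin = B @ B.T + 50 * np.eye(50)
g_fin = rng.standard_normal(50)
u_fin = richardson(A_fin, g_fin)
print(np.linalg.norm(A_fin @ u_fin - g_fin))  # residual norm, close to tol
```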

  14. Approximate Iterations. Approximate right-hand side: RHS[g, ε] → g_ε satisfying ‖g − g_ε‖_{ℓ₂} ≤ ε. Approximate application of the matrix: APPLY[A, v, ε] → w_ε satisfying ‖Av − w_ε‖_{ℓ₂} ≤ ε. Approximate Richardson iteration: ũ^(0) = 0, ũ^(i+1) = ũ^(i) + α[RHS[g, ε_i] − APPLY[A, ũ^(i), ε_i]], i = 0, 1, …. Choose ε_i such that ‖u^(i) − ũ^(i)‖ ≲ ‖u − u^(i)‖.
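A hedged toy version of the two routines for a finitely supported g and a finite matrix (the genuinely adaptive APPLY of Cohen, Dahmen, DeVore exploits matrix compression; the sketch below only enforces the advertised error bounds in the simplest possible way, so the names and strategy are illustrative only):

```python
import numpy as np

def coarsen(v, eps):
    """Zero the smallest-in-modulus entries of v as long as the dropped tail has l2-norm <= eps."""
    order = np.argsort(np.abs(v))               # indices, ascending by modulus
    tail = np.sqrt(np.cumsum(v[order] ** 2))    # l2-norm of the k smallest entries
    k = np.searchsorted(tail, eps, side='right')
    w = v.copy()
    w[order[:k]] = 0.0                          # drop the k smallest entries
    return w

def RHS(g, eps):
    """Return g_eps with ||g - g_eps||_l2 <= eps."""
    return coarsen(g, eps)

def APPLY(A, v, eps):
    """Return w_eps with ||A v - w_eps||_l2 <= eps (toy version: coarsen v, apply A exactly)."""
    norm_A = np.linalg.norm(A, 2)               # spectral norm of A
    v_eps = coarsen(v, eps / norm_A)            # ||v - v_eps|| <= eps / ||A||
    return A @ v_eps                            # hence ||A v - A v_eps|| <= eps
```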

  15. Convergence. RICHARDSON[ũ^(0), ε_fin] → ũ^(i):
for i = 0, 1, …
    ε_i := C ρ^i;
    r̃^(i) := RHS[g, ε_i] − APPLY[A, ũ^(i), ε_i];
    if ‖r̃^(i)‖_{ℓ₂} + 2ε_i ≤ ε_fin then terminate;
    ũ^(i+1) := ũ^(i) + α r̃^(i);
endfor
Lemma: RICHARDSON[ũ^(0), ε] → ũ terminates with ‖g − Aũ‖_{ℓ₂} ≤ ε. What is the computational cost of RICHARDSON[ũ^(0), ε], depending on ε?
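Putting the pieces together, a runnable sketch of the routine above, reusing the toy RHS and APPLY from the previous sketch; the constants C and ρ would come from the convergence analysis, and the values below are placeholders of my choosing.

```python
import numpy as np  # RHS and APPLY as defined in the previous sketch

def RICHARDSON(A, g, alpha, eps_fin, C=1.0, rho=0.5, max_iter=1000):
    """Inexact Richardson iteration with tolerances eps_i = C*rho**i (placeholder constants)."""
    u = np.zeros_like(g)
    for i in range(max_iter):
        eps_i = C * rho ** i
        r = RHS(g, eps_i) - APPLY(A, u, eps_i)   # inexact residual
        if np.linalg.norm(r) + 2 * eps_i <= eps_fin:
            break                                # then ||g - A@u|| <= ||r|| + 2*eps_i <= eps_fin
        u = u + alpha * r
    return u
```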

  16. Uniform Refinement Galerkin Methods. Wavelet basis Ψ_j := {ψ_λ : λ ∈ ∇_j} of S_j, stiffness matrix A_j = (⟨Aψ_λ, ψ_μ⟩)_{λ,μ∈∇_j}, load vector g_j = (⟨g, ψ_λ⟩)_{λ∈∇_j}. Linear equation in ℓ₂(∇_j): A_j u_j = g_j, with A_j : ℓ₂(∇_j) → ℓ₂(∇_j) SPD and g_j ∈ ℓ₂(∇_j). Then u_j = Σ_λ [u_j]_λ ψ_λ ∈ S_j approximates the solution of Au = g. With the orthogonal projector P_j : ℓ₂(∇) → ℓ₂(∇_j), the above equation is equivalent to P_j A u_j = P_j g.

  17. Convergence and Complexity. If u ∈ H^{t+ns} for some s ∈ (0, (d−t)/n], then ε_j := ‖u − u_j‖_{H^t} ≤ C inf_{v∈S_j} ‖u − v‖_{H^t} ≤ O(2^{−jns}). Since N_j = dim S_j = O(2^{jn}), this gives ε_j ≤ O(N_j^{−s}). Solving A_j u_j = g_j with cascadic CG gives complexity O(N_j). Similar estimates hold for FEM.
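The arithmetic behind the last bound, spelled out: since N_j ≂ 2^{jn},

```latex
\varepsilon_j \;\lesssim\; 2^{-jns} \;=\; \bigl(2^{jn}\bigr)^{-s} \;\eqsim\; N_j^{-s}.
```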

  18. Best N-term Approximation. Given u = (u_λ)_λ ∈ ℓ₂, approximate u using N nonzero coefficients: ℵ_N := ⋃_{Λ⊂∇, #Λ=N} ℓ₂(Λ). ℵ_N is a nonlinear manifold. Let u_N be such that ‖u − u_N‖_{ℓ₂} ≤ ‖u − v‖_{ℓ₂} for all v ∈ ℵ_N; then u_N is a best approximation of u with # supp u_N ≤ N. u_N can be constructed by picking the N largest-in-modulus coefficients of u.
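A minimal sketch of this construction for a finitely supported coefficient vector (the infinite-dimensional case works the same way on the nonzero entries); the names are mine.

```python
import numpy as np

def best_n_term(u, N):
    """Best N-term approximation: keep the N largest-in-modulus coefficients of u."""
    u_N = np.zeros_like(u)
    keep = np.argsort(np.abs(u))[-N:]   # indices of the N largest entries in modulus
    u_N[keep] = u[keep]
    return u_N

u = np.array([0.01, -3.0, 0.5, 2.0, -0.1, 0.7])
print(best_n_term(u, 3))                # keeps -3.0, 2.0 and 0.7, zeros the rest
```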

  19. Nonlinear vs. linear approximation. Nonlinear approximation: if u ∈ B^{t+ns}_τ(L_τ) with 1/τ = 1/2 + s for some s ∈ (0, (d−t)/n), then ε_N = ‖u_N − u‖ ≤ O(N^{−s}). Linear approximation: if u ∈ H^{t+ns} for some s ∈ (0, (d−t)/n], uniform refinement gives ε_j = ‖u_j − u‖ ≤ O(N_j^{−s}). H^{t+ns} is a proper subset of B^{t+ns}_τ(L_τ). [Dahlke, DeVore]: u ∈ B^{t+ns}_τ(L_τ) is a much milder condition than u ∈ H^{t+ns}.
