  1. Efficient Numerical Methods for Fractional Laplacian and Time-Fractional PDEs
  Jie Shen, Purdue University
  Collaborators: Sheng Chen and Changtao Sheng
  ICERM Workshop on Fractional PDEs, June 18-22, 2018

  2. Two main difficulties with fractional PDEs:
  - Fractional derivatives are non-local operators, which are much more difficult and expensive to deal with than local operators.
  - Solutions of fractional PDEs have weak singularities at t = 0 and/or at the boundaries.
  The following two situations will be considered:
  Part I. Solving the fractional Laplacian using the Caffarelli-Silvestre extension.
  Part II. Space-time Petrov-Galerkin method for time-fractional diffusion equations.

  3. Part I: Fractional Laplacian equations in bounded domains
  We consider the fractional Laplacian equation in a bounded domain Ω:
  $$(-\Delta)^s u(x) = f(x), \quad x = (x_1, x_2, \ldots, x_d) \in \Omega, \qquad u|_{\partial\Omega} = 0,$$
  where 0 < s < 1 and the fractional Laplacian operator is defined through the spectral decomposition of the Laplace operator.
  Two review papers: "What is the fractional Laplacian?" by Lischke et al.; "Numerical methods for fractional diffusion" by Bonito et al.
  Three approaches:
  - Using the discrete eigenfunctions of the Laplacian;
  - Using the Dunford-Taylor formula
  $$u = (-\Delta)^{-s} f = \frac{\sin(s\pi)}{\pi}\int_0^\infty \mu^{-s}(\mu I - \Delta)^{-1} f \, d\mu;$$
  - Using the Caffarelli-Silvestre extension (cf. Stinga & Torrea '10).
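  As a concrete illustration of the first approach (a sketch, not from the slides), the snippet below applies the spectral definition on Ω = (0,1) with homogeneous Dirichlet conditions, where the eigenpairs of −Δ are sin(kπx) and (kπ)². The fractional order s, the right-hand side f, and the truncation/quadrature parameters are hypothetical choices.

```python
import numpy as np

# Eigenpairs of -Laplacian on (0,1) with Dirichlet BC: sin(k*pi*x), (k*pi)^2, so
#   u = (-Delta)^{-s} f = sum_k (k*pi)^{-2s} f_k sin(k*pi*x),  f_k = 2 * int_0^1 f sin(k*pi*x) dx
s = 0.7                                   # hypothetical fractional order
f = lambda x: x * (1.0 - x)               # hypothetical right-hand side

xq = (np.arange(2000) + 0.5) / 2000       # midpoint-rule nodes for the sine coefficients
x = np.linspace(0.0, 1.0, 11)             # points where u is evaluated
u = np.zeros_like(x)
for k in range(1, 200):                   # truncated eigenfunction expansion
    fk = 2.0 * np.mean(f(xq) * np.sin(k * np.pi * xq))
    u += (k * np.pi) ** (-2.0 * s) * fk * np.sin(k * np.pi * x)
print(u)
```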

  4. Caffarelli-Silvestre extension
  To overcome the difficulty associated with non-local operators, Caffarelli-Silvestre '07 (see Stinga & Torrea '10 for the bounded case) introduced an extension problem in d+1 dimensions with local differential operators:
  $$\begin{cases} \nabla\cdot\big(y^\alpha \nabla U(x,y)\big) = 0, & \text{in } D = \Omega\times(0,\infty),\\ U(x,y) = 0, & \text{on } \partial_L D = \partial\Omega\times[0,\infty),\\ \displaystyle\lim_{y\to 0} y^\alpha U_y(x,y) = -d_s f(x), \quad \lim_{y\to\infty} U(x,y) = 0, \end{cases}$$
  where α = 1 − 2s and d_s = 2^{1-2s} Γ(1−s)/Γ(s). Then, the solution of the fractional Laplacian equation can be expressed as u(x) = U(x, 0). Hence, one only needs to solve the above (d+1)-dimensional problem with local differential operators.

  5. Results by using finite elements
  Nochetto, Otarola & Salgado (2016) made a systematic study of the finite element approximation to the extension problem.
  [Figure: Q1 FEM on a quasi-uniform mesh T_Y in y (Nochetto et al. 2016). Computed rate of convergence of the H^1(y^α) error vs. degrees of freedom, approximately #(T_Y)^{-s/(n+1)} = DOFs^{-0.1} for s = 0.2, n = 1.]

  6. Improved convergence rate with a graded mesh (in y)
  [Figure: Computational rate of convergence for the approximate solution of the fractional Laplacian over a square, with graded meshes in the extended direction. Left panel: s = 0.2; right panel: s = 0.8. In both cases the H^1(y^α) error decays like (#T_Y)^{-1/3} in the degrees of freedom, in agreement with Theorem 5.4 and Remark 5.5.]
  Q. Can we further improve the convergence rate in the extended direction?

  7. Galerkin approximation with a Laguerre spectral method in y
  The particular weight function y^α in the extension problem calls for the use of generalized Laguerre polynomials {L^α_k(y)}, which are mutually orthogonal with respect to the weight y^α e^{-y}. Let us denote
  $$Y^\alpha_N = \mathrm{span}\big\{\hat L^\alpha_k(y) := L^\alpha_k(y)\,e^{-y/2},\ k = 0, 1, \cdots, N\big\}.$$
  For the x-directions, one can use your favorite approximation space X_K, e.g., FEM or a spectral method. The Galerkin approximation for the extension problem is to find u_{NK} ∈ X_{N,K} = Y^α_N × X_K such that
  $$(y^\alpha \nabla u_{NK}, \nabla v)_D = d_s\,(f, v(x,0))_\Omega, \qquad \forall\, v \in X_{N,K}.$$
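  A small sanity check of this basis (a sketch, not from the slides) using SciPy: it verifies numerically that the functions \hat L^α_k are mutually orthogonal in the y^α-weighted inner product, with (\hat L^α_n, \hat L^α_n)_{y^α} = Γ(n+α+1)/n!. The value of s and the truncation N are hypothetical choices.

```python
import numpy as np
from scipy.special import roots_genlaguerre, eval_genlaguerre, gamma

s = 0.3                              # hypothetical fractional order
alpha = 1.0 - 2.0 * s
N = 8

# Gauss quadrature for the weight y^alpha * exp(-y); since
# Lhat_m * Lhat_n = L_m * L_n * exp(-y), the y^alpha-weighted inner products of the
# Laguerre *functions* reduce to this rule applied to the *polynomials*.
y, w = roots_genlaguerre(N + 2, alpha)
L = np.array([eval_genlaguerre(n, alpha, y) for n in range(N + 1)])

G = (L * w) @ L.T                    # G_{mn} = (y^alpha Lhat_m, Lhat_n)
D = np.diag([gamma(n + alpha + 1) / gamma(n + 1) for n in range(N + 1)])
print(np.max(np.abs(G - D)))         # ~ 0: orthogonal, with the expected norms
```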

  8. Fast solvers
  Let {ψ_j(x)}_{1 ≤ j ≤ K} be a set of basis functions in X_K. We write u_{NK} = Σ_{k=1}^N Σ_{j=1}^K ũ_{kj} φ_k(y) ψ_j(x) and U = (ũ_{kj}). Let us denote
  $$S^y_{kj} = \big(y^\alpha \phi_j'(y), \phi_k'(y)\big)_{(0,\infty)}, \qquad M^y_{kj} = \big(y^\alpha \phi_j(y), \phi_k(y)\big)_{(0,\infty)},$$
  $$S^x_{kj} = \big(\nabla_x \psi_j(x), \nabla_x \psi_k(x)\big)_\Omega, \qquad M^x_{kj} = \big(\psi_j(x), \psi_k(x)\big)_\Omega.$$
  Then, the linear system for the Galerkin approximation is
  $$S^y U M^x + M^y U S^x = F.$$
  Choice of basis functions for Y^α_N: with the convention \hat L^α_{-1}(y) = 0, we set φ_k(y) := \hat L^α_{k-1}(y) − \hat L^α_k(y). Then Y^α_N = span{φ_k(y) : k = 0, 1, ···, N}, and we have
  $$\partial_y \phi_k(y) = \tfrac{1}{2}\big(\hat L^\alpha_{k-1}(y) + \hat L^\alpha_k(y)\big).$$
  Thanks to the orthogonality of the generalized Laguerre functions, M^y and S^y are both symmetric penta-diagonal.
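  To make the structure concrete, here is a sketch (under the basis convention reconstructed above, with hypothetical s and N) that assembles M^y and S^y by generalized Gauss-Laguerre quadrature and checks that entries more than a few diagonals away from the main diagonal vanish, so both matrices are symmetric and banded.

```python
import numpy as np
from scipy.special import roots_genlaguerre, eval_genlaguerre

s = 0.3                                   # hypothetical fractional order
alpha = 1.0 - 2.0 * s
N = 12                                    # basis functions phi_0, ..., phi_{N-1} (illustrative)

# quadrature for the weight y^alpha * exp(-y); the e^{-y/2} factors of the basis
# functions combine into exactly this weight, so only polynomial parts are evaluated
y, w = roots_genlaguerre(N + 4, alpha)

def L(k):                                 # L^alpha_k at the nodes, with L^alpha_{-1} := 0
    return eval_genlaguerre(k, alpha, y) if k >= 0 else np.zeros_like(y)

P  = np.array([L(k - 1) - L(k) for k in range(N)])          # polynomial part of phi_k
dP = np.array([(L(k - 1) + L(k)) / 2.0 for k in range(N)])  # polynomial part of phi_k'

My = (P  * w) @ P.T                       # M^y_{kj} = (y^alpha phi_j,  phi_k)
Sy = (dP * w) @ dP.T                      # S^y_{kj} = (y^alpha phi_j', phi_k')

# entries beyond a few off-diagonals vanish (up to roundoff): the matrices are banded
print(np.max(np.abs(np.triu(My, 3))), np.max(np.abs(np.triu(Sy, 3))))
```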

  9. Fast solvers: continued
  The above linear system can be solved efficiently by using the matrix-diagonalization method. Let S^y E = M^y E Λ, where (E, Λ) consists of the eigenvectors and eigenvalues of the generalized eigenvalue problem S^y x̄ = λ M^y x̄. With the change of variable U = E V, we can reduce the matrix system to a sequence of N problems in the x-direction:
  $$(\lambda_j M^x + S^x)\,\bar v_j = \big(E^{t}(M^y)^{-1}F\big)_j, \qquad j = 1, 2, \cdots, N.$$
  Since usually N ≪ K, this procedure is very efficient, and it is not intrusive, as your favorite elliptic solver can be used.
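  Below is a minimal sketch of this matrix-diagonalization step (not the authors' code), using random symmetric positive definite stand-ins for S^y, M^y, S^x, M^x. scipy.linalg.eigh solves the generalized eigenproblem, and the residual check confirms that the N decoupled x-direction solves reproduce the full system.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
def spd(n):                              # random SPD stand-ins for the assembled matrices
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

N, K = 8, 40
Sy, My, Sx, Mx = spd(N), spd(N), spd(K), spd(K)
F = rng.standard_normal((N, K))

lam, E = eigh(Sy, My)                    # Sy E = My E diag(lam)
G = np.linalg.solve(My @ E, F)           # G = E^{-1} (My)^{-1} F
V = np.empty_like(G)
for j in range(N):                       # N decoupled K x K problems in the x-direction
    V[j] = np.linalg.solve(lam[j] * Mx + Sx, G[j])
U = E @ V

print(np.linalg.norm(Sy @ U @ Mx + My @ U @ Sx - F))   # ~ 0
```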

  10. Error estimates for the Laguerre spectral methods
  Error estimates with generalized Laguerre functions:
  $$\min_{v_N\in Y^\alpha_N}\ \big\|\hat\partial_y^{\,l}(u - v_N)\big\|_{y^{\alpha+l}} \lesssim N^{(l-m)/2}\, \big\|\hat\partial_y^{\,m} u\big\|_{y^{\alpha+m}}, \qquad 0 \le l \le m,$$
  where \hat\partial_y = \partial_y + 1/2. Then, for the problem
  $$-\partial_y\big(y^\alpha \partial_y u\big) = f, \qquad u(0) = 0, \quad \lim_{y\to\infty} u(y) = 0,$$
  the generalized Laguerre-Galerkin method in Y^α_N leads to
  $$\big\|(u - u_N)_y\big\|_{y^\alpha} \lesssim N^{(1-m)/2}\, \big\|\hat\partial_y^{\,m} u\big\|_{y^{\alpha+m-1}}.$$

  11. Error estimates for the extension problem
  The error estimate for the Galerkin approximation of the extension problem in X_{N,K} is:
  $$\|U - U_{NK}\|_{1,\,y^\alpha} \lesssim N^{(1-m)/2}\, \big\|\hat\partial_y^{\,m} U\big\|_{y^{\alpha+m-1}} + \min_{v_K\in X_K}\big\|\nabla_x\big(U(x,0) - v_K\big)\big\|.$$
  The first part is a typical result for spectral approximation. Unfortunately, the solution is singular at y = 0, so that \|\hat\partial_y^{\,m} u\|_{y^{\alpha+m-1}} is only bounded for m = [2s] + 1. So the Laguerre spectral method converges very slowly in the y-direction.
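  The slow convergence can be seen on a toy model (a sketch, not from the slides): project the singular function u(y) = y^{2s} e^{-y/2}, chosen here to mimic the y^{2s} behavior of the extension solution, onto span{\hat L^α_0, ..., \hat L^α_N} in the y^α-weighted L² norm. The coefficient integrals reduce to Gauss quadrature with Laguerre parameter α + 2s, which makes them exact; s is again a hypothetical value.

```python
import numpy as np
from scipy.special import roots_genlaguerre, eval_genlaguerre, gamma

s = 0.3
alpha = 1.0 - 2.0 * s

# model function u(y) = y^{2s} e^{-y/2};  weighted norm  ||u||^2_{y^alpha} = Gamma(2 + 2s)
u_norm2 = gamma(2 + 2 * s)

# projection coefficients: c_n = (y^alpha u, Lhat_n) / gamma_n, gamma_n = Gamma(n+alpha+1)/n!;
# the inner product is int y^{alpha+2s} e^{-y} L^alpha_n(y) dy, exact for the rule below
for N in (8, 16, 32, 64):
    yq, wq = roots_genlaguerre(N + 2, alpha + 2 * s)
    err2 = u_norm2
    for n in range(N + 1):
        gam_n = gamma(n + alpha + 1) / gamma(n + 1)
        c_n = np.sum(wq * eval_genlaguerre(n, alpha, yq)) / gam_n
        err2 -= c_n**2 * gam_n                 # Parseval: subtract ||projection||^2
    print(N, np.sqrt(max(err2, 0.0)))          # error decays only algebraically in N
```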

  12. L^2 Error
  [Figure: log10 of the L^2 error vs. N for λ = 2 and s = 0.5, 0.31, 0.82.]

  13. Form of the singularity at y = 0
  A careful look at the extension problem reveals that the singularity can be explicitly identified, so it is possible to use special basis functions to represent the singular behavior at y = 0 well. Using a separation-of-variables approach, one finds that the solution of the extension problem can be expressed as
  $$U(x,y) = \sum_{n=1}^{\infty} U_n\, \varphi_n(x)\, \psi_n(y),$$
  where ψ_n(y) is the solution of (Stinga & Torrea '10)
  $$\begin{cases} -\psi_n''(y) - \dfrac{1-2s}{y}\,\psi_n'(y) + \lambda_n \psi_n(y) = 0, & y \in \Lambda = (0,\infty),\\ \psi_n(0) = 1, \quad \displaystyle\lim_{y\to\infty} \psi_n(y) = 0, \end{cases}$$
  which can be expressed in terms of the modified Bessel function of the second kind K_s(z):
  $$\psi_n(y) = c_s\,\big(\sqrt{\lambda_n}\,y\big)^s K_s\big(\sqrt{\lambda_n}\,y\big), \qquad c_s = 2^{1-s}/\Gamma(s).$$
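  A quick numerical confirmation (a sketch with hypothetical values of s and λ_n) that this Bessel-function formula satisfies the two-point problem above: it checks ψ_n(0) = 1 at a tiny y and evaluates the ODE residual by central differences, using scipy.special.kv for K_s.

```python
import numpy as np
from scipy.special import kv, gamma

s, lam = 0.3, 4.0                          # hypothetical s and eigenvalue lambda_n
c_s = 2**(1 - s) / gamma(s)
psi = lambda y: c_s * (np.sqrt(lam) * y)**s * kv(s, np.sqrt(lam) * y)

print(psi(1e-8))                           # ~ 1, consistent with psi_n(0) = 1

# residual of  -psi'' - (1-2s)/y * psi' + lam * psi  via central differences
y0, h = 0.7, 1e-5
d1 = (psi(y0 + h) - psi(y0 - h)) / (2 * h)
d2 = (psi(y0 + h) - 2 * psi(y0) + psi(y0 - h)) / h**2
print(-d2 - (1 - 2 * s) / y0 * d1 + lam * psi(y0))   # ~ 0, up to finite-difference error
```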

  14. Form of the singularity at y = 0: continued
  We have
  $$K_s(z) = \frac{\pi}{2\sin(s\pi)}\big(I_{-s}(z) - I_s(z)\big), \qquad I_s(z) = \sum_{j=0}^{\infty} \frac{1}{j!\,\Gamma(j+s+1)}\Big(\frac{z}{2}\Big)^{2j+s}.$$
  So we can derive
  $$\psi_n(y) = c_s\big(\sqrt{\lambda_n}\,y\big)^s K_s\big(\sqrt{\lambda_n}\,y\big) = \frac{\pi c_s}{2\sin(s\pi)}\Big[\big(\sqrt{\lambda_n}\,y\big)^s I_{-s}\big(\sqrt{\lambda_n}\,y\big) - \big(\sqrt{\lambda_n}\,y\big)^s I_s\big(\sqrt{\lambda_n}\,y\big)\Big]$$
  $$= \frac{\pi c_s}{2\sin(s\pi)}\left[\,\sum_{j=0}^{\infty}\frac{2^{s}(\lambda_n/4)^{j}}{j!\,\Gamma(j+1-s)}\,y^{2j} \;-\; \lambda_n^{s}\sum_{j=0}^{\infty}\frac{2^{-s}(\lambda_n/4)^{j}}{j!\,\Gamma(j+1+s)}\,y^{2j+2s}\right] = g_{1,n}(y) + y^{2s}\, g_{2,n}(y),$$
  where g_{1,n}(y), g_{2,n}(y) are smooth functions.
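  The y^{2s} component can also be observed numerically (same hypothetical s and λ_n as in the previous sketch): since g_{1,n}(0) = ψ_n(0) = 1 and g_{1,n} is smooth, the ratio (ψ_n(y) − 1)/y^{2s} should approach the finite constant g_{2,n}(0) as y → 0.

```python
import numpy as np
from scipy.special import kv, gamma

s, lam = 0.3, 4.0                          # hypothetical s and eigenvalue lambda_n
c_s = 2**(1 - s) / gamma(s)
psi = lambda y: c_s * (np.sqrt(lam) * y)**s * kv(s, np.sqrt(lam) * y)

# (psi(y) - 1) / y^{2s} tends to g_{2,n}(0), exposing the y^{2s} singular term
for y in (1e-2, 1e-3, 1e-4, 1e-5):
    print(y, (psi(y) - 1.0) / y**(2 * s))
```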

  15. Enriched spectral method
  It is natural to add some singular functions to the approximation space in the y-direction:
  $$Y^{\alpha,k}_N = Y^\alpha_N \oplus \big\{\,y^{2s}\hat L^\alpha_j(y) : j = 0, 1, \cdots, k\,\big\},$$
  and the new approximation space for the extension problem is
  $$X^k_{N,K} = Y^{\alpha,k}_N \times X_K.$$
  We have the following error estimate with the new approximation space:
  $$\big\|u - U^k_{NK}(x,0)\big\|_{H^s(\Omega)} \lesssim N^{-\frac{[2s]}{2}-k} + \min_{v_K\in X_K}\|\nabla(u - v_K)\|.$$

  16. Solution of the linear system
  One can apply the same matrix-diagonalization process as before, but (S^y, M^y) are usually severely ill-conditioned, since the added singular functions are "similar at y = 0" and have no orthogonality relation with the Laguerre functions. This approach can therefore only be used for small k (which is usually enough).
  [Figure: Error behaviors with the enriched spectral method (1-D), s = 0.2: L^2 error at N = 30 vs. M for the plain spectral method (SM) and the enriched method (ESM) with k = 1, ..., 5.]

  17. [Figure: Smooth solution with a spectral method in Ω = (−1,1)^2; error at N = 80 vs. M_1 = M_2 for SM and ESM with k = 1, 2. Left: s = 0.2; right: s = 0.8.]
