SLIDE 1

ADAPTIVE MESHES AND EMBEDDED BOUNDARY INTEGRAL METHODS

Travis Askham (University of Washington) March 15, 2018. ICERM workshop on “Fast Algorithms for Static and Dynamically Changing Point Configurations”

SLIDE 2

EMBEDDED BOUNDARY INTEGRAL METHODS

Collaborators: Leslie Greengard, Antoine Cerfon, Manas Rachh, Mary Catherine Kropinski, Ludvig af Klinteberg, Bryan Quaife

Funding from the Air Force Office of Scientific Research (FA9550-10-1-0180 and FA9550-15-1-0385).

SLIDE 3

INTEGRAL EQUATION METHODS FOR FLUIDS

Why integral equation methods?
- Geometric flexibility
- Well-conditioned formulations
- Existence of fast algorithms (FMM)

[Malhotra et al., 2017]

Figure: the error in U(z) over a patch of the complex plane (log₁₀ error between −15.5 and −12.5).

[Ojala, 2012] Hoskins, Rachh, Serkh

SLIDE 5

NAVIER-STOKES TO MODIFIED STOKES

Navier-Stokes:

∂u/∂t + (u · ∇)u = −∇p + (1/Re) Δu ,  x ∈ Ω ,
∇ · u = 0 ,  x ∈ Ω ,
u = f ,  x ∈ ∂Ω .

IMEX (Euler) discretization:

(u^{N+1} − u^N)/δt − (1/Re) Δu^{N+1} + ∇p^{N+1} = F ,  x ∈ Ω ,
∇ · u^{N+1} = 0 ,  x ∈ Ω ,
u^{N+1} = f ,  x ∈ ∂Ω .


SLIDE 7

NAVIER-STOKES TO MODIFIED STOKES (CONT.)

Let u^{N+1} = v + u_H.

Particular solution (v):

v − (δt/Re) Δv + δt ∇p_V = δt F + u^N ,  x ∈ Ω ,
∇ · v = 0 ,  x ∈ Ω .

Boundary correction (u_H) — Modified Stokes equation:

u_H − (δt/Re) Δu_H + ∇p_H = 0 ,  x ∈ Ω ,
∇ · u_H = 0 ,  x ∈ Ω ,
u_H = f − v ,  x ∈ ∂Ω .


SLIDE 9

THE MODIFIED STOKESLET

Let λ = √(Re/δt). The fundamental solution of the modified Stokes equations is the Modified Stokeslet

𝔾(x, y) = (−∇⊥ ⊗ ∇⊥) G(x, y) ,

where G is the Modified Biharmonic Green's Function

G(x, y) = − (1/(2πλ²)) ( log|x − y| + K₀(λ|x − y|) ) .

Particular solution:

v(x) = ∫_Ω 𝔾(x, y) (δt F(y) + u^N(y)) dV(y)

is a particular solution.
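A quick numerical sanity check on the Green's function (a sketch; λ and the evaluation radius are arbitrary choices): since log|x − y| is harmonic away from the source and (Δ − λ²)K₀(λr) = 0, applying the radial Laplacian Δ = d²/dr² + (1/r) d/dr to G should give ΔG = −K₀(λr)/(2π), from which Δ(Δ − λ²)G = 0 away from x = y.

```python
import math
from scipy.special import k0

lam = 2.0

def G(r):
    # modified biharmonic Green's function (radial part)
    return -(math.log(r) + k0(lam * r)) / (2.0 * math.pi * lam**2)

# radial Laplacian d^2/dr^2 + (1/r) d/dr via central differences
r, h = 0.7, 1e-4
lap = ((G(r + h) - 2.0 * G(r) + G(r - h)) / h**2
       + (G(r + h) - G(r - h)) / (2.0 * h * r))

expected = -k0(lam * r) / (2.0 * math.pi)
print(lap, expected)   # agree up to finite-difference truncation error
```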

SLIDE 10

DOUBLE LAYER POTENTIAL

We represent the boundary correction u_H as a Double Layer Potential

u_H(x) = ∫_{∂Ω} D(x, y) σ(y) ds(y) ,

where

D(x, y) = ∇G_L(x, y) ⊗ ν + ∇⊥ ⊗ ∇⊥ (∂_ν G(x, y)) + ∇⊥ ⊗ ∇ (∂_τ G(x, y)) .

We get a second kind integral equation (SKIE) for σ. This is a good thing!

SLIDE 11

EVALUATING THE BOUNDARY CORRECTION


SLIDE 15

BOUNDARY INTEGRAL EQUATIONS

For good performance, need:
- High-order accurate quadrature for singular integrals (e.g. generalized Gaussian quadrature)
- Fast solution methods for structured, dense linear systems (e.g. HSS, HODLR, GMRES)
- Fast, accurate layer potential evaluation, including at near-singular points (e.g. quadrature by expansion)

Figure: visualization of the QBX idea. Taken from Klöckner et al., 2012.


SLIDE 17

FAST, STABLE SUMS

To implement an integral equation method (both fast solvers and fast QBX), we need to be able to compute sums of the form

u(x_i) = Σ_{j=1}^{n} q_j ∂_{v_j} ∂_{w_j} G(x_i, s_j)

(and its derivatives) quickly and stably.

Let A be the m × n matrix with entries

A_{ij} = ∂_{v_j} ∂_{w_j} G(x_i, s_j) ,  i = 1, …, m ,  j = 1, …, n .


SLIDE 21

LOW-RANK INTERACTIONS

A = [ ∂_{v_j} ∂_{w_j} G(x_i, s_j) ]_{i=1,…,m; j=1,…,n} , with sources well separated from targets.

Figure: singular values of A for various values of m and n.

- The rank is low, independent of the number of sources and targets.
- For certain kernels, low-rank decompositions are known analytically.
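The low-rank claim is easy to check numerically. The sketch below forms the interaction matrix for the plain 2-D Laplace kernel log|x − s| (a stand-in for the modified biharmonic kernel, which behaves similarly for well-separated clusters) and counts singular values above a relative threshold; the cluster sizes and positions are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def interaction_rank(m, n, tol=1e-12):
    # sources clustered near the origin, targets clustered near (5, 0):
    # the two groups are well separated
    src = 0.1 * rng.standard_normal((n, 2)) / 3.0
    trg = np.array([5.0, 0.0]) + 0.1 * rng.standard_normal((m, 2)) / 3.0
    diff = trg[:, None, :] - src[None, :, :]
    A = np.log(np.linalg.norm(diff, axis=2))    # A_ij = log|x_i - s_j|
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

r1 = interaction_rank(60, 60)
r2 = interaction_rank(240, 240)
print(r1, r2)   # nearly identical, and far below min(m, n)
```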


SLIDE 26

NUMERICAL INSTABILITY

G(x, y) = − (1/(2πλ²)) ( log|x − y| + K₀(λ|x − y|) ) .

Why not use existing tech for log and K₀ and add together?

Figure: source and target configurations on [−1, 1]².

Numerical experiment:

u(x; λ) = Σ_{j=1}^{n_s} q_j ∂_{v_j} ∂_{w_j} G(x, s_j) ,

u_L(x; λ) = − (1/(2πλ²)) Σ_{j=1}^{n_s} q_j ∂_{v_j} ∂_{w_j} log|x − s_j| ,

u_K(x; λ) = (1/(2πλ²)) Σ_{j=1}^{n_s} q_j ∂_{v_j} ∂_{w_j} K₀(λ|x − s_j|) .

What is the error (in floating point) in evaluating u as u = u_L − u_K?
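The cancellation is already visible in the radial derivative of the kernel. Up to the −1/(2πλ²) prefactor, ∂_r (log r + K₀(λr)) = 1/r − λK₁(λr), and for small λr the two terms agree to leading order, so their floating-point difference retains only a few digits. The sketch below (parameter values are arbitrary) compares the naive difference against the small-argument series 1/r − λK₁(λr) ≈ −(λ²r/2) ln(λr/2) + (λ²r/4)(1 − 2γ).

```python
import math
import numpy as np
from scipy.special import k1

lam, r = 1e-6, 1.0

# naive evaluation: two terms of size ~1/r that nearly cancel
term1 = 1.0 / r
term2 = lam * k1(lam * r)
naive = term1 - term2          # ~7e-12: about 11 digits cancel

# small-argument series for the same quantity (leading two terms)
z = lam * r
series = (-(lam**2 * r / 2.0) * math.log(z / 2.0)
          + (lam**2 * r / 4.0) * (1.0 - 2.0 * np.euler_gamma))

print(naive, series)
```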

SLIDE 27

NUMERICAL INSTABILITY (CONT.)

G(x, y) = − (1/(2πλ²)) ( log|x − y| + K₀(λ|x − y|) ) .

Why not use existing tech for log and K₀ and add together?

Figure: error in the potential, gradient, and Hessian as a function of λR for (a) the interior problem and (b) the exterior problem.

The error increases as the product of λ = √(Re/δt) and the radius of the disc R goes to zero.

SLIDE 33

THE MEANING OF λR

Note that λ = √(Re/δt).

The value of λR is small if:
- The Reynolds number is small (viscous fluids)
- The grid is fine
- Time steps are relatively long

Note that λR < 1 when δt > Re R², i.e. when the CFL condition is violated. This regime is important for implicit methods for viscous fluids.
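The equivalence is just algebra — λR < 1 ⇔ R √(Re/δt) < 1 ⇔ Re R² < δt — but it is cheap to spot-check (the parameter triples below are arbitrary):

```python
import math

def lamR_small(Re, dt, R):
    lam = math.sqrt(Re / dt)    # lambda = sqrt(Re / dt)
    return lam * R < 1.0

def cfl_violated(Re, dt, R):
    return dt > Re * R**2       # time step exceeds the diffusive CFL scale

# the two conditions coincide for any positive parameters
for Re, dt, R in [(1.0, 1e-2, 1e-2), (100.0, 1e-3, 0.5), (1e-3, 0.1, 0.1)]:
    assert lamR_small(Re, dt, R) == cfl_violated(Re, dt, R)
print("ok")
```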


SLIDE 35

OUR GOAL

Our goal: analytical formulas for the low rank interaction between well separated points which are stable for any λR. Go back to basics: look at the separation of variables problem for the modified biharmonic equation.


SLIDE 38

SEPARATION OF VARIABLES

Let Ω be the interior or exterior of a disc of radius R and consider the modified biharmonic equation:

Δ(Δ − λ²)u = 0 ,  x ∈ Ω ,
u = f ,  ∂_n u = g ,  x ∈ ∂Ω .

Separation of variables representation:

u(r, θ) = Σ_{n=−∞}^{∞} u_n(r) e^{inθ} .

ODE for u_n(r):

( d²/dr² + (1/r) d/dr − n²/r² ) ( d²/dr² + (1/r) d/dr − n²/r² − λ² ) u_n(r) = 0 .
SLIDE 41

SEPARATION OF VARIABLES (CONT.)

ODE for u_n(r):

( d²/dr² + (1/r) d/dr − n²/r² ) ( d²/dr² + (1/r) d/dr − n²/r² − λ² ) u_n(r) = 0 .

Four linearly independent solutions: r^{|n|}, I_n(λr), r^{−|n|}, and K_n(λr).

Interior problem: by imposing continuity at r = 0, the functions r^{|n|} and I_n(λr) are a basis for the interior problem.

Exterior problem: by imposing decay conditions as r → ∞, the functions r^{−|n|} and K_n(λr) are a basis for the exterior problem.


SLIDE 44

A BAD BASIS (EXT.)

For the exterior problem, we have u_n(r) = α_n r^{−|n|} + β_n K_n(λr).

Coefficient recovery problem:

[ R^{−|n|}            K_n(λR)                                 ] [ α_n ]   [ f_n ]
[ −|n| R^{−|n|−1}     −(λ/2) (K_{n−1}(λR) + K_{n+1}(λR))      ] [ β_n ] = [ g_n ] .

This problem is ill-conditioned for small λR. Intuitively, this is because K_n(λr) and r^{−|n|} are similar functions for small r.

Asymptotic expansion for K_n(λr):

K_n(λr) = (1/2) (λr/2)^{−|n|} Σ_{k=0}^{|n|−1} ((|n| − k − 1)!/k!) (−(λr)²/4)^k
        + (−1)^{|n|+1} ln(λr/2) I_n(λr)
        + (−1)^{|n|} (1/2) (λr/2)^{|n|} Σ_{k=0}^{∞} (ψ(k + 1) + ψ(|n| + k + 1)) ((λr)²/4)^k / (k!(|n| + k)!) .


SLIDE 50

A BETTER BASIS (EXT.)

We can define a new basis function for the exterior problem which is not asymptotically similar to r^{−|n|} and K_n.

Definition of Q_n:

Q_n(r) = K_n(λr) − 2^{|n|−1} (|n| − 1)! / (λ^{|n|} r^{|n|}) .

- Q_n has a different leading order term for small λ and R.
- The pair (Q_n, K_n) is a better conditioned basis than (r^{−|n|}, K_n) in the small λR regime.
- Q_n is still a solution of the ODE for u_n because it is a linear combination of r^{−|n|} and K_n.
- It is simple to evaluate Q_n with tweaks to existing software.
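The conditioning claim can be checked directly on the 2×2 coefficient-recovery matrix. The sketch below uses arbitrary choices R = 1, n = 2, λR = 10⁻⁴, and forms Q_n by naive subtraction — itself mildly inaccurate for very small λR, which is exactly why evaluating Q_n wants the "tweaks to existing software" mentioned above.

```python
import math
import numpy as np
from scipy.special import kn

lam, R, n = 1e-4, 1.0, 2
c = 2.0**(n - 1) * math.factorial(n - 1) / lam**n   # coefficient of r^{-n} in Q_n

def Kp(m, z):
    # d/dz K_m(z) = -(K_{m-1}(z) + K_{m+1}(z)) / 2
    return -0.5 * (kn(m - 1, z) + kn(m + 1, z))

# bad basis (r^{-n}, K_n): rows hold values and radial derivatives at r = R
bad = np.array([[R**(-n),          kn(n, lam * R)],
                [-n * R**(-n - 1), lam * Kp(n, lam * R)]])

# better basis (Q_n, K_n), with Q_n(r) = K_n(lam r) - c * r^{-n}
good = np.array([[kn(n, lam * R) - c * R**(-n),              kn(n, lam * R)],
                 [lam * Kp(n, lam * R) + c * n * R**(-n - 1), lam * Kp(n, lam * R)]])

print(np.linalg.cond(bad), np.linalg.cond(good))   # bad is vastly worse
```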


SLIDE 53

A BAD BASIS (INT.)

For the interior problem, we have u_n(r) = α_n r^{|n|} + β_n I_n(λr).

Coefficient recovery problem:

[ R^{|n|}            I_n(λR)                                 ] [ α_n ]   [ f_n ]
[ |n| R^{|n|−1}      (λ/2) (I_{n−1}(λR) + I_{n+1}(λR))       ] [ β_n ] = [ g_n ] .

This problem is again ill-conditioned for small λR.

Asymptotic expansion for I_n(λr):

I_n(λr) = Σ_{k=0}^{∞} (λr/2)^{2k+|n|} / (k!(k + |n|)!)
        = (λr)^{|n|} / (2^{|n|} |n|!) + (λr)^{|n|+2} / (2^{|n|+2} (|n| + 1)!) + · · ·


SLIDE 59

A BETTER BASIS (INT.)

Again, we can define a new basis function for the interior problem which is not asymptotically similar to r^{|n|} and I_n.

Definition of P_n:

P_n(r) = I_n(λr) − (λr/2)^{|n|} / |n|! .

- P_n has a different leading order term for small λ and R.
- The pair (r^{|n|}, P_n) is a better conditioned basis than (r^{|n|}, I_n) in the small λR regime.
- P_n is still a solution of the ODE for u_n because it is a linear combination of r^{|n|} and I_n.
- It is simple to evaluate P_n with tweaks to existing software.
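The claimed change of leading-order term is easy to confirm numerically: by the series above, I_n(λr) − (λr/2)^{|n|}/|n|! starts at the (λr/2)^{|n|+2} term. A small sketch (n = 2 and λr = 10⁻³ are arbitrary choices):

```python
import math
from scipy.special import iv

n, z = 2, 1e-3    # z = lam * r

# P_n(r) = I_n(lam r) - (lam r / 2)^n / n!  removes the leading series term
P = iv(n, z) - (z / 2.0)**n / math.factorial(n)

# next term of the series: (z/2)^{n+2} / (1! * (n+1)!)
lead = (z / 2.0)**(n + 2) / math.factorial(n + 1)
print(P, lead)    # agree to many digits
```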

SLIDE 60

NUMERICAL RESULTS (CONT.)

Question What is the practical effect of the condition number of the coefficient recovery problem on the accuracy of the solution?


SLIDE 65

NUMERICAL RESULTS (CONT.)

Figure: source and target configurations on [−1, 1]².

Numerical experiment:

u(x; λ) = Σ_{j=1}^{n_s} [ λ² c_j G(x, s_j) + λ d_j ∂_{v_{j,1}} G(x, s_j) + q_j ∂_{v_{j,2}} ∂_{v_{j,3}} G(x, s_j) ] .

For several values of λ and R:
- Evaluate u and ∂_n u on ∂Ω
- Solve the corresponding separation of variables problem (order N = 50, using 100 points on ∂Ω) with the new and old basis functions
- Evaluate the error in the potential, gradient, and Hessian
- Should be good to about machine precision, with some precision loss in the derivatives

SLIDE 66

NUMERICAL RESULTS (CONT.)

Errors for the exterior problem: (r−|n|, Kn) vs (Qn, Kn). Top row: λ → 0. Bottom row: R → 0.

Figure: errors in the potential (E_u), gradient (E_g), and Hessian (E_h) versus λR for the (Q_n, K_n) and (r^{−n}, K_n) bases.

SLIDE 67

NUMERICAL RESULTS (CONT.)

Errors for the interior problem: (r|n|, In) vs (r|n|, Pn). Top row: λ → 0. Bottom row: R → 0.

Figure: errors in the potential (E_u), gradient (E_g), and Hessian (E_h) versus λR for the (r^{n}, P_n) and (r^{n}, I_n) bases.


SLIDE 69

REALITY CHECK

How is this a decomposition? Recall

u(x_i) = Σ_{j=1}^{n} q_j ∂_{v_j} ∂_{w_j} G(x_i, s_j) ,

A = [ ∂_{v_j} ∂_{w_j} G(x_i, s_j) ]_{i=1,…,m; j=1,…,n} , with sources well separated from targets.


SLIDE 73

ANALYTICAL DECOMPOSITION

A = L Rᵀ. The form of L is straightforward: row i of L is

( Q₀(|x_i − c|) , K₀(λ|x_i − c|) , … , Q_p(|x_i − c|) e^{ipθ_i} , K_p(λ|x_i − c|) e^{ipθ_i} ) ,  i = 1, …, m .

What is Rᵀ? It is the map from the sources to the coefficients:
- evaluate both u and ∂_n u on the disc boundary,
- separate the modes with the FFT,
- for each mode, solve a 2 × 2 system for the coefficients.

Note that there is an analytical formula for Rᵀ [Askham, 2017].


SLIDE 77

WHAT OF EFFICIENCY?

Because the formulas for L and Rᵀ are known, forming these matrices is O((m + n)p).
- The SVD, on the other hand, is O(mn²).
- Even randomized methods for the SVD are O(mnp).
- It is not always the case that sources are well separated from targets. Can we make a stable FMM with the above?

SLIDE 82

AN FMM

The preceding provides the pieces of a stable fast multipole method. A fast multipole method is based on:

1. a formula for representing the sum due to a localized subset of the points (a multipole expansion): (Q_n, K_n)
2. a formula for representing the sum due to points outside of a disc (a local expansion): (r^{|n|}, P_n)
3. formulas for translating between these representations (translation operators): see the preprint!
4. a hierarchical organization of source and target points in space

SLIDE 83

COMPUTING THE PARTICULAR SOLUTION


SLIDE 88

EVALUATING THE PARTICULAR SOLUTION

To compute the particular solution, we need to evaluate integrals of the form

v(x) = Vf(x) := ∫_Ω K(x, y) f(y) dy ,

where K(x, y) = log|x − y| or K(x, y) = K₀(λ|x − y|).
- No solve, just apply
- Weakly singular integrand
- Expensive on an unstructured discretization (adaptive quadrature, etc.)
- Fast methods for regular domains: disc solvers; “box codes” (Ethridge and Greengard, Cheng et al., Langston and Zorin)
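To see why the weak singularity matters, the 1-D sketch below evaluates ∫_{−1}^{1} log|y| dy (exact value −2) with adaptive quadrature; telling the integrator where the singularity sits is what makes this affordable, and doing so cell by cell over an unstructured 2-D mesh is the expense the slide alludes to. The integrand choice is mine, for illustration only.

```python
import math
from scipy.integrate import quad

# 1-D analogue of applying the log kernel to the density f = 1:
# the integrand log|y| is weakly singular at y = 0 but integrable.
val, abserr = quad(lambda y: math.log(abs(y)), -1.0, 1.0, points=[0.0])

print(val)    # close to the exact value -2
```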

slide-92
SLIDE 92

BOX CODES, IN BRIEF

Box codes (typically) work on level-restricted trees and are very efficient (density f defined on leaves): Limited number of possible local interactions (precomputation

  • f integrals to near machine precision)

(plane wave) FMM for far-field Very fast, even on adaptive grids

SLIDE 93

BOX CODE SPEED

slide-94
SLIDE 94

BOX CODE ERROR ESTIMATE

The application of a bounded operator is easy to analyze

slide-95
SLIDE 95

BOX CODE ERROR ESTIMATE

The application of a bounded operator is easy to analyze The box code computes the volume integral at collocation nodes to a specified precision.

slide-96
SLIDE 96

BOX CODE ERROR ESTIMATE

The application of a bounded operator is easy to analyze The box code computes the volume integral at collocation nodes to a specified precision. Notation: ˜ f : approximation to f by polynomials on each leaf ˜ V ˜ f (x): value of V ˜ f (x) computed using box code ǫ: precision of FMM

slide-97
SLIDE 97

BOX CODE ERROR ESTIMATE

The application of a bounded operator is easy to analyze The box code computes the volume integral at collocation nodes to a specified precision. Notation: ˜ f : approximation to f by polynomials on each leaf ˜ V ˜ f (x): value of V ˜ f (x) computed using box code ǫ: precision of FMM From multipole estimates: | ˜ V ˜ f (x) − V ˜ f (x)| ≤ ǫ˜ f 1 ,

slide-98
SLIDE 98

BOX CODE ERROR ESTIMATE

The application of a bounded operator is easy to analyze The box code computes the volume integral at collocation nodes to a specified precision. Notation: ˜ f : approximation to f by polynomials on each leaf ˜ V ˜ f (x): value of V ˜ f (x) computed using box code ǫ: precision of FMM From multipole estimates: | ˜ V ˜ f (x) − V ˜ f (x)| ≤ ǫ˜ f 1 , From triangle inequality and boundedness of V : | ˜ V ˜ f (x) − Vf (x)| ˜ f ∞ ≤ ǫ|Ω| + C(Ω)f − ˜ f ∞ ˜ f ∞ .

SLIDE 99

BOX CODE ERROR ESTIMATE

The application of a bounded operator is easy to analyze. The box code computes the volume integral at collocation nodes to a specified precision.

Notation:
- f̃ : approximation to f by polynomials on each leaf
- Ṽf̃(x) : value of Vf̃(x) computed using the box code
- ε : precision of the FMM

From multipole estimates:

|Ṽf̃(x) − Vf̃(x)| ≤ ε ‖f̃‖₁ .

From the triangle inequality and boundedness of V:

|Ṽf̃(x) − Vf(x)| / ‖f̃‖_∞ ≤ ε |Ω| + C(Ω) ‖f − f̃‖_∞ / ‖f̃‖_∞ .

Gives an a priori error estimate (similar for ∇V).

SLIDE 100

EMBEDDING IN A BOX

Figure: the domain Ω with an adaptive tree structure overlaying it.

Let Ω be contained in a box Ω_B and let f_e, with f_e|_Ω = f, be defined on all of Ω_B. Then

Vf_e(x) = ∫_{Ω_B} G_L(x, y) f_e(y) dy

is another particular solution, and Vf_e can be computed using a box code.

SLIDE 101

FUNCTION EXTENSION

Figure: the domain Ω with an adaptive tree structure overlaying it.

What if a smooth extension f_e is not readily available? It must be computed in some way.


SLIDE 106

COMPUTING THE EXTENDED FUNCTION

Figure: example of a “cut-cell”.

- Extend by zero [Ethridge and Greengard, 2001]
- Local function extension [Ethridge, 2000; Langston, 2012]
- Global extension by layer potential [Askham, 2016] (C⁰) and [Rachh and Askham, 2017] (C¹)
- Globalized local extension [Fryklund et al., 2017] (PUX)

slide-109
SLIDE 109

EXTENSION WITH LAYER POTENTIALS

Let f be defined on Ω with boundary Γ. Then define a function w on R² \ Ω as the solution of

∆w = 0 in R² \ Ω ,  w = f|Γ on Γ .

Then fe = f on Ω and fe = w outside is a globally continuous extension of f.

w can be computed using the same numerical tools as for uH (generalized Gaussian quadratures, fast solvers, QBX).
Smoother extensions can be obtained as solutions of polyharmonic problems.
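The talk computes w with layer potentials; for intuition, here is a minimal sketch of the same idea in the special case where Γ is the unit circle (my own construction, not the slides' solver; the boundary data g is an assumption). There the exterior harmonic extension can be written as a Fourier series with decaying terms r^(−n), and continuity across Γ is immediate.

```python
import math

# Fourier coefficients of boundary data g(theta) on the unit circle.
# The trapezoid rule on a periodic function is spectrally accurate.
def fourier_coeffs(g, nmax, m=512):
    a = [0.0] * (nmax + 1)
    b = [0.0] * (nmax + 1)
    for k in range(m):
        t = 2.0 * math.pi * k / m
        gt = g(t)
        for n in range(nmax + 1):
            a[n] += gt * math.cos(n * t) * 2.0 / m
            b[n] += gt * math.sin(n * t) * 2.0 / m
    a[0] *= 0.5
    return a, b

def w(r, theta, a, b):
    # Exterior harmonic extension: Delta w = 0 for r > 1, w = g on r = 1.
    total = a[0]
    for n in range(1, len(a)):
        total += r ** (-n) * (a[n] * math.cos(n * theta) + b[n] * math.sin(n * theta))
    return total

g = lambda t: 1.0 + math.cos(2.0 * t) + 0.5 * math.sin(3.0 * t)  # assumed data
a, b = fourier_coeffs(g, nmax=8)
```

On r = 1 the series reproduces g, so gluing f inside and w outside yields a C⁰ extension, matching the claim on the slide.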

slide-112
SLIDE 112

ERROR ESTIMATE FOR NON-SMOOTH fe

Recall the a priori error bound

|Ṽf̃e(x) − Vfe(x)| / ‖f̃e‖∞ ≤ ε |Ω| + C(Ω) ‖fe − f̃e‖∞ / ‖f̃e‖∞ .

Implied convergence rates (the zero extension is discontinuous, so the bound predicts no convergence):

Extension       | Conv. order Vf | Conv. order ∇Vf
zero extension  | 0              | 0
C⁰ extension    | 1              | 1
C¹ extension    | 2              | 2

These aren't amazing. What rate do we observe?
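The first row of the table can be seen in miniature (my own stdlib-only sketch; the function cos, the cut leaf [0.95, 1.05], and the interpolation degree are assumptions): on a leaf straddling the boundary at x = 1, a polynomial interpolant of the zero-extended f cannot resolve the jump, while interpolating a smooth extension is spectrally accurate.

```python
import math

def cheb_nodes(a, b, n):
    # n Chebyshev nodes on [a, b]
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * (2 * k + 1) / (2 * n))
            for k in range(n)]

def lagrange_eval(xs, ys, x):
    # direct Lagrange interpolation (fine for small n)
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

def interp_err(f, a, b, n=8):
    # max-norm error of the degree-(n-1) interpolant, sampled on a fine grid
    xs = cheb_nodes(a, b, n)
    ys = [f(x) for x in xs]
    samples = [a + (b - a) * k / 200.0 for k in range(201)]
    return max(abs(f(x) - lagrange_eval(xs, ys, x)) for x in samples)

# leaf [0.95, 1.05] is cut by the boundary point x = 1 of Omega = (-inf, 1]
f_zero = lambda x: math.cos(x) if x <= 1.0 else 0.0   # zero extension: O(1) jump
f_smooth = lambda x: math.cos(x)                       # a smooth extension

err_zero = interp_err(f_zero, 0.95, 1.05)
err_smooth = interp_err(f_smooth, 0.95, 1.05)
```

The zero extension leaves an O(1) interpolation error on cut leaves no matter how small they are, which is exactly why its predicted order in the bound is 0.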

slide-113
SLIDE 113

POISSON EQUATION EXAMPLES

Figure: The domain Ω with an adaptive tree structure overlaying it.

∆u = f in Ω ,  u = ub on Γ .

We set f and ub so that the solution u is given by

u(x) = sin(10(x₁ + x₂)) + x₁² − 3x₂ + 8 .
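A quick finite-difference sanity check of the corresponding forcing (my own sketch, stdlib only): for this manufactured u, the Laplacian works out by hand to f = ∆u = −200 sin(10(x₁ + x₂)) + 2.

```python
import math

def u(x1, x2):
    # manufactured solution from the slide
    return math.sin(10.0 * (x1 + x2)) + x1 ** 2 - 3.0 * x2 + 8.0

def f(x1, x2):
    # Laplacian of u, computed by hand:
    # each second derivative of sin(10(x1+x2)) contributes -100*sin, x1^2 contributes 2
    return -200.0 * math.sin(10.0 * (x1 + x2)) + 2.0

def laplacian_fd(func, x1, x2, h=1e-4):
    # standard 5-point finite-difference Laplacian, O(h^2) accurate
    return (func(x1 + h, x2) + func(x1 - h, x2)
            + func(x1, x2 + h) + func(x1, x2 - h)
            - 4.0 * func(x1, x2)) / (h * h)

# residual at an arbitrary interior point
residual = abs(laplacian_fd(u, 0.13, -0.27) - f(0.13, -0.27))
```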

slide-114
SLIDE 114

EXTENDED f

We extend f using the method and tools described above.

slide-116
SLIDE 116

CONVERGENCE RATE (UNIFORM GRID)

Figure: Absolute error in the potential (left) and in the gradient (right) vs. number of discretization points, for the zero, C⁰, and C¹ extensions. Reference slopes: 2nd, 3rd, and 4th order for the potential; 1st, 2nd, and 3rd order for the gradient.

Extension       | u: predicted | u: observed | ∇u: predicted | ∇u: observed
zero extension  | 0            | 2           | 0             | 1
C⁰ extension    | 1            | 3           | 1             | 2
C¹ extension    | 2            | 4           | 2             | 3

slide-120
SLIDE 120

OBSERVED CONVERGENCE RATE

To see that you gain 1 order: v(x) = − 1 2π

  • log x−yf (y) dy , ∇v(x) = − 1

  • x − y

x − y2 f (y) dy

x

Local contribution gets weighted by area of a cell (gain h2 for log r and h for 1/r) For the far-field, only O(1/h) of the boxes are irregular (have to add up carefully for gradient) and each is area h2 The gain of 2 orders for u is somewhat mysterious!
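The counting step in the argument above can be checked directly (a sketch under the assumption that Ω is the unit disk in [−1, 1]²): halving h roughly doubles the number of irregular (cut) cells, i.e. there are O(1/h) of them.

```python
def count_cut_cells(n):
    # Classify each cell of an n-by-n grid on [-1,1]^2 by its corners;
    # a cell is "irregular" if the unit circle separates its corners.
    h = 2.0 / n
    cut = 0
    for i in range(n):
        for j in range(n):
            corners = [((-1.0 + (i + di) * h) ** 2
                        + (-1.0 + (j + dj) * h) ** 2) <= 1.0
                       for di in (0, 1) for dj in (0, 1)]
            if any(corners) and not all(corners):
                cut += 1
    return cut

# doubling the resolution should roughly double the cut-cell count
ratio = count_cut_cells(128) / count_cut_cells(64)
```

So the irregular boxes contribute O(1/h) · h² = O(h) of the area, which is where the extra order in the far field comes from.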

slide-126
SLIDE 126

ADAPTIVE GRIDDING STRATEGIES

What are good (a priori) strategies for adaptive grids? Recall that f̃e is the local polynomial interpolant on each box.

1. Enforce ‖fe − f̃e‖ ≤ tol on each leaf
2. Enforce h² ‖fe − f̃e‖ ≤ tol on each leaf
3. Enforce h ‖fe − f̃e‖ ≤ tol on each leaf
4. Hybrid: enforce one criterion on irregular boxes and another on regular boxes (these perform best)

Note that by storing the local expansions and QBX expansions from a QBX FMM, the QBX method gives you an oracle for fe.
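A 1D sketch of criteria 1 and 2 (my own, not the talk's 2D box code; the test function, interpolation degree, and tolerance are assumptions): bisect a leaf until its weighted interpolation error meets the tolerance. The h² weighting tolerates a larger interpolation error on small leaves, so it refines less aggressively near a sharp feature.

```python
import math

def cheb_nodes(a, b, n):
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * (2 * k + 1) / (2 * n))
            for k in range(n)]

def lagrange_eval(xs, ys, x):
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

def leaf_err(f, a, b, n=8):
    # max-norm error of the local degree-(n-1) interpolant on the leaf [a, b]
    xs = cheb_nodes(a, b, n)
    ys = [f(x) for x in xs]
    samples = [a + (b - a) * k / 40.0 for k in range(41)]
    return max(abs(f(x) - lagrange_eval(xs, ys, x)) for x in samples)

def refine(f, a, b, tol, weight, leaves):
    # split until weight(h) * leaf_err <= tol (with a small-leaf safety stop)
    h = b - a
    if weight(h) * leaf_err(f, a, b) <= tol or h < 1e-5:
        leaves.append((a, b))
    else:
        m = 0.5 * (a + b)
        refine(f, a, m, tol, weight, leaves)
        refine(f, m, b, tol, weight, leaves)

f = lambda x: math.exp(-(50.0 * x) ** 2) + math.sin(10.0 * x)  # sharp peak at x = 0
leaves1, leaves2 = [], []
refine(f, -0.5, 0.5, 1e-6, lambda h: 1.0, leaves1)    # criterion 1
refine(f, -0.5, 0.5, 1e-6, lambda h: h * h, leaves2)  # criterion 2
```

Criterion 1 produces the deeper tree: near the peak it keeps splitting after criterion 2 has already stopped.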

slide-127
SLIDE 127

ADAPTIVE PERFORMANCE

Results for hybrid schemes

Figure: Absolute error in the potential and in the gradient vs. number of discretization points for the hybrid schemes, compared with uniform refinement: "1 on reg, h on irreg" and "1 on reg, h² on irreg" in the top row; "h on reg, h on irreg" and "h on reg, h² on irreg" in the bottom row.

slide-128
SLIDE 128

MORE DIFFICULT PROBLEM

Figure: Adaptive box structure.

∆u = f in Ω ,  u = ub on Γ .

We set f and ub so that the solution u is given by

u(x) = sin(10(x₁ + x₂)) + x₁² − 3x₂ + 8 + e^(−(500x₁)²) ,

which requires lots of refinement near the x₂ axis.

slide-129
SLIDE 129

ERROR (ADAPTIVE PERFORMANCE)

Figure: Error in the potential vs. number of discretization nodes (cts-uniform vs. cts-adaptive, with a 3rd-order reference slope).

Figure: Error in the gradient vs. number of discretization nodes (cts-uniform vs. cts-adaptive, with a 2nd-order reference slope).

slide-130
SLIDE 130

FUTURE WORK

Some plans:
Apply the modified biharmonic FMM to Navier-Stokes integral equation methods
Release a wrapped solver with the latest and greatest QBX implementation
Implement an adaptive-friendly version of the biharmonic code

slide-131
SLIDE 131

THANK YOU

Thank you.

slide-132
SLIDE 132

BIBLIOGRAPHY I

[Askham, 2016] Askham, T. (2016). Integral-equation methods for inhomogeneous elliptic partial differential equations in complex geometry. PhD thesis, New York University.
[Askham, 2017] Askham, T. (2017). A stabilized separation of variables method for the modified biharmonic equation. arXiv preprint arXiv:1710.05408.
[Askham and Cerfon, 2017] Askham, T. and Cerfon, A. J. (2017). An adaptive fast multipole accelerated Poisson solver for complex geometries. Journal of Computational Physics, 344:1–22.
[Biros et al., 2002] Biros, G., Ying, L., and Zorin, D. (2002). The embedded boundary integral method for the incompressible Navier-Stokes equations. In Proceedings of the International Association for Boundary Element Methods 2002 Symposium.
[Cheng et al., 2006] Cheng, H., Huang, J., and Leiterman, T. J. (2006). An adaptive fast solver for the modified Helmholtz equation in two dimensions. Journal of Computational Physics, 211(2):616–637.
[Ethridge and Greengard, 2001] Ethridge, F. and Greengard, L. (2001). A new fast-multipole accelerated Poisson solver in two dimensions. SIAM Journal on Scientific Computing, 23(3):741–760.
[Ethridge, 2000] Ethridge, J. F. (2000). Fast algorithms for volume integrals in potential theory. PhD thesis, New York University.
[Fryklund et al., 2017] Fryklund, F., Lehto, E., and Tornberg, A.-K. (2017). Partition of unity extension of functions on complex domains. arXiv preprint arXiv:1712.08461.

slide-133
SLIDE 133

BIBLIOGRAPHY II

[Greengard and Rokhlin, 1987] Greengard, L. and Rokhlin, V. (1987). A fast algorithm for particle simulations. Journal of Computational Physics, 73(2):325–348.
[Langston, 2012] Langston, M. H. (2012). An Adaptive Fast Multipole Method-Based PDE Solver in Three Dimensions. PhD thesis, New York University.
[Malhotra et al., 2017] Malhotra, D., Rahimian, A., Zorin, D., and Biros, G. (2017). A parallel algorithm for long-timescale simulation of concentrated vesicle suspensions in three dimensions. Preprint.
[Mayo, 1984] Mayo, A. (1984). The fast solution of Poisson's and the biharmonic equations on irregular regions. SIAM Journal on Numerical Analysis, 21(2):285–299.
[McKenney et al., 1995] McKenney, A., Greengard, L., and Mayo, A. (1995). A fast Poisson solver for complex geometries. Journal of Computational Physics, 118(2):348–355.
[Ojala, 2012] Ojala, R. (2012). A robust and accurate solver of Laplace's equation with general boundary conditions on general domains in the plane. Journal of Computational Mathematics, 30(4):433–448.
[Rachh and Askham, 2017] Rachh, M. and Askham, T. (2017). Integral equation formulation of the biharmonic Dirichlet problem. Journal of Scientific Computing, pages 1–20.