Treating Time as Just Another Space Variable
Randolph E. Bank


SLIDE 1

Treating Time as Just Another Space Variable

Randolph E. Bank

Department of Mathematics, University of California, San Diego
With Panayot Vassilevski and Ludmil Zikatanov
Space-Time Methods for PDEs, RICAM, November 7, 2016

UCSD Center for Computational Mathematics Slide 1/19, November 7, 2016

SLIDE 2

Outline of Talk

1. Overview
2. Parallel Adaptive Meshing Paradigm
3. Domain Decomposition Solver


SLIDE 3

Time as a Space Variable

Think of this

    Lu = u_t − ∇·(a∇u) + b·∇u + cu = f

as this

    Lu = −∇̃·(A∇̃u) + B·∇̃u + cu = f,    ∇̃u = (∇u, u_t)ᵀ

where

    A = ( a  0 )        B = ( b )
        ( 0  0 ),           ( 1 ).

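The equivalence of the two forms of L is easy to sanity-check numerically. The sketch below (an illustration, not from the talk) takes constant coefficients a = b = 1, c = 0, a sample u(x, t) = sin(πx)e^(−t) with known derivatives, and A = diag(a, 0), B = (b, 1):

```python
import numpy as np

# Check that Lu = -div~(A grad~ u) + B . grad~ u + c u, with grad~ u = (u_x, u_t),
# A = diag(a, 0), B = (b, 1), reproduces u_t - a u_xx + b u_x + c u.
a, b, c = 1.0, 1.0, 0.0
x = np.linspace(0.0, 1.0, 101)
t = np.linspace(0.0, 2.0, 101)
X, T = np.meshgrid(x, t)

# u = sin(pi x) e^{-t} and its analytic derivatives
u    = np.sin(np.pi * X) * np.exp(-T)
u_x  = np.pi * np.cos(np.pi * X) * np.exp(-T)
u_xx = -np.pi**2 * u
u_t  = -u
u_tt = u

form1 = u_t - a * u_xx + b * u_x + c * u                       # parabolic form
form2 = -(a * u_xx + 0.0 * u_tt) + b * u_x + 1.0 * u_t + c * u # space-time form
assert np.allclose(form1, form2)
```

The zero (2,2) entry of A is what removes any u_tt term, so time enters only through the first-order convection entry of B.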

SLIDE 4

Why?

This provides new and expanded opportunities for
• Discretization
• Adaptivity
• Parallel Computation
but... one now has a (d + 1)-dimensional problem.


SLIDE 5

PLTMG examples

• Discretization: artificial diffusion inspired by the Scharfetter-Gummel discretization.
• Adaptivity: hp adaptivity based on interpolation error estimates and recovered derivatives.
• Parallel Computation: the Bank-Holst parallel adaptive meshing paradigm.


SLIDE 6

Example I

    u_t − u_xx + 100 sin(2πt) u_x = 1   in Ω = (0, 1) × (0, 2)
    u(0, t) = u(1, t) = 0   for 0 ≤ t ≤ 2
    u(x, 0) = 0   for 0 ≤ x ≤ 1

Weak form: find u_h ∈ S_h such that B(u_h, v) = (1, v) for all v ∈ S_h, where

    B(u, v) = ∫_Ω u_x v_x + ε u_t v_t + 100 sin(2πt) u_x v + u_t v   and   ε = 10⁻⁶.
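To see the "time as space" idea in action, the sketch below discretizes Example I on the whole space-time rectangle with finite differences and solves it as ONE linear system, rather than time-stepping. This is a minimal analogue, not the talk's PLTMG finite element code; first-order upwinding in x and t stands in for the artificial-diffusion stabilization of Slide 5:

```python
import numpy as np

# u_t - u_xx + 100 sin(2 pi t) u_x = 1 on (0,1) x (0,2), zero initial/boundary data.
nx, nt = 19, 40                       # interior x-points, time levels t > 0
hx, ht = 1.0 / (nx + 1), 2.0 / nt
t = ht * np.arange(1, nt + 1)

N = nx * nt
A = np.zeros((N, N))
f = np.ones(N)                        # right-hand side f = 1
idx = lambda i, n: n * nx + i         # unknown u(x_i, t_n)

for n in range(nt):
    b = 100.0 * np.sin(2.0 * np.pi * t[n])
    for i in range(nx):
        k = idx(i, n)
        # u_t: backward difference (u(x,0) = 0 enters by omission at n = 0)
        A[k, k] += 1.0 / ht
        if n > 0:
            A[k, idx(i, n - 1)] -= 1.0 / ht
        # -u_xx: central difference (u(0,t) = u(1,t) = 0 by omission)
        A[k, k] += 2.0 / hx**2
        if i > 0:
            A[k, idx(i - 1, n)] -= 1.0 / hx**2
        if i < nx - 1:
            A[k, idx(i + 1, n)] -= 1.0 / hx**2
        # b u_x: first-order upwind in the direction of b
        A[k, k] += abs(b) / hx
        if b >= 0 and i > 0:
            A[k, idx(i - 1, n)] -= b / hx
        elif b < 0 and i < nx - 1:
            A[k, idx(i + 1, n)] += b / hx

U = np.linalg.solve(A, f).reshape(nt, nx)   # entire space-time solution at once
```

Because the whole space-time system is one matrix, adaptivity and parallel partitioning can act on x and t together, which is the point of the talk.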


SLIDE 7

Example II

    u_t + u u_x = 0   in Ω
    u(0, t) = 1   for 0 ≤ t ≤ 2
    u(x, 0) = { 1,          0 ≤ x ≤ 0.25
              { 1.5 − 2x,   0.25 ≤ x ≤ 0.75
              { 0,          0.75 ≤ x ≤ 2

where Ω = {(x, y) | x > 0, y > 0, x² + y² < 4}.

Weak form: find u_h ∈ S_{h,D} such that B(u_h, v) = 0 for all v ∈ S_{h,0}, where

    B(u, v) = ∫_Ω ε(u_x v_x + u_t v_t) + u_t v + u u_x v   and   ε = 10⁻³.
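A quick encoding of the initial data (the value on the last interval is taken as 0, the continuous continuation of 1.5 − 2x past x = 0.75):

```python
import numpy as np

# Initial data for Example II.  The value 0 on [0.75, 2] is an assumption:
# it is the continuous continuation of 1.5 - 2x past x = 0.75.
def u0(x):
    return np.piecewise(x, [x <= 0.25, (x > 0.25) & (x <= 0.75), x > 0.75],
                        [1.0, lambda x: 1.5 - 2.0 * x, 0.0])

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 2.0])
vals = u0(x)     # values 1, 1, 0.5, 0, 0, 0 -- continuous at 0.25 and 0.75
```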


SLIDE 8

Some Remarks on Analysis

Much of the analysis for static problems applies, possibly with minor technical challenges, e.g.

    A = ( a  0 )
        ( 0  ε )

• Rescale time if needed to avoid thin domains: space length scale (0, L), time scale (0, T); set t̂ = (L/T)t = κt, so that ∂u/∂t = κ ∂u/∂t̂ (see Bank, Vassilevski, Zikatanov, 2015).
• Possibly take VERY big time steps.
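The rescaling and the chain-rule factor κ can be checked numerically; a small sketch (illustration only):

```python
import numpy as np

# Rescale a long time interval (0, T) to the space scale (0, L):
# t_hat = (L/T) t = kappa t, so u_t = kappa * u_{t_hat} by the chain rule.
L, T = 1.0, 100.0
kappa = L / T                          # the slide's scaling factor
t = np.linspace(0.0, T, 2001)
u = np.sin(0.3 * t)                    # any smooth u(t)
du_dt    = np.gradient(u, t)           # derivative w.r.t. original time
du_dthat = np.gradient(u, kappa * t)   # derivative w.r.t. rescaled time
assert np.allclose(du_dt, kappa * du_dthat)
```

After rescaling, the space-time domain (0, L) × (0, L) has aspect ratio one, so a quasi-uniform space-time mesh is reasonable.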


SLIDE 9

Motivation for Parallel Adaptive Paradigm

1. Make existing sequential adaptive meshing codes parallel with minimal recoding.
2. Allow adaptive meshing with low load balancing and communication costs.


SLIDE 10

Parallel Adaptive Mesh Paradigm

joint with Michael Holst

Step I: On the coarse mesh, solve the entire problem. Compute a posteriori error estimates. Partition the coarse mesh to achieve equal error.

Step II: Each processor gets the complete coarse mesh. Each processor independently solves the entire problem, but adaptively refines mainly its own subregion.

Step III: Glue together the meshes provided by each processor. Compute the global solution using the initial guess provided by the local solutions.
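The equal-error partition of Step I can be sketched with a greedy largest-first heuristic (an illustration only, not PLTMG's actual partitioner, which must also keep each subregion connected):

```python
import numpy as np

# Distribute coarse-mesh elements so every processor receives roughly
# equal total a posteriori error.
def partition_by_error(errors, nproc):
    order = np.argsort(errors)[::-1]        # place largest errors first
    totals = np.zeros(nproc)
    part = np.empty(len(errors), dtype=int)
    for e in order:
        p = int(np.argmin(totals))          # currently lightest processor
        part[e] = p
        totals[p] += errors[e]
    return part, totals

rng = np.random.default_rng(0)
err = rng.exponential(size=4000)            # stand-in per-element error estimates
part, totals = partition_by_error(err, 16)
imbalance = totals.max() / totals.min()     # close to 1 means well balanced
```

Balancing error rather than element count is what lets each processor do a comparable amount of refinement work in Step II.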


SLIDE 11

Load Balance - 16 Processors


SLIDE 12

Dual Problem Weights in Step II

thesis of Jeff Ovall

We weight error estimates outside Ωi to discourage refinement there. The weights are based on dual problems: find ψi ∈ Sh(Ω − Ωi) such that

    B*(ψi, v) ≡ B(v, ψi) = 0   for all v ∈ Sh(Ω − Ωi),

where ψi ≡ 1 on Ω̄i. This provides some extra refinement outside the inflow (upwind) part of ∂Ωi.

The goal of Step II is to create a good adaptive mesh (the accurate solution is computed in Step III).


SLIDE 13

Motivation for DD Solver

An Embarrassment of Riches

We follow the same philosophy as the adaptive meshing paradigm.

1. Want low communication.
2. Use the existing partition generated by Steps I-II.
3. Use the existing sequential multigraph solver on each processor.
4. Initial guess provided by the fine grid part of the solution on all processors.
5. Use meshes generated by adaptive refinement – built-in coarse grid (Maximum Overlap).


SLIDE 14

Global Saddle Point System – 2 Subdomains

thesis of Shaoying Lu

    [ A11  A1γ    0    0    0 ] [ δU1 ]   [ R1      ]
    [ Aγ1  Aγγ    0    0    I ] [ δUγ ]   [ Rγ      ]
    [  0    0   Aνν  Aν2  −I  ] [ δUν ] = [ Rν      ]
    [  0    0   A2ν  A22   0  ] [ δU2 ]   [ R2      ]
    [  0    I   −I    0    0  ] [ Λ   ]   [ Uν − Uγ ]

The I blocks appear because the global mesh is conforming. A11 and A22 correspond to interior mesh points; Aγγ and Aνν correspond to the interface. Λ is a Lagrange multiplier (not computed or updated).


SLIDE 15

Local Saddle Point System – 2 Subdomains

    [ A11  A1γ     0     0    0 ] [ δU1  ]   [ R1      ]
    [ Aγ1  Aγγ     0     0    I ] [ δUγ  ]   [ Rγ      ]
    [  0    0    Āνν   Āν2  −I  ] [ δŪν ] = [ Rν      ]
    [  0    0    Ā2ν   Ā22   0  ] [ δŪ2 ]   [ 0       ]
    [  0    I    −I     0    0  ] [ Λ    ]   [ Uν − Uγ ]

Reordering the unknowns as (Λ, δŪν, δU1, δUγ, δŪ2) and the equations accordingly gives

    [  0   −I    0    I    0   ] [ Λ    ]   [ Uν − Uγ ]
    [ −I   Āνν   0    0   Āν2  ] [ δŪν ]   [ Rν      ]
    [  0    0   A11  A1γ   0   ] [ δU1 ] = [ R1      ]
    [  I    0   Aγ1  Aγγ   0   ] [ δUγ ]   [ Rγ      ]
    [  0   Ā2ν   0    0   Ā22  ] [ δŪ2 ]   [ 0       ]


SLIDE 16

Local Schur Complement System – 2 Subdomains

    [ A11      A1γ        0   ] [ δU1  ]   [ R1                      ]
    [ Aγ1  Aγγ + Āνν    Āγ2   ] [ δUγ  ] = [ Rγ + Rν + Āνν(Uν − Uγ) ]
    [  0       Ā2ν       Ā22  ] [ δŪ2 ]   [ 0 + Ā2ν(Uν − Uγ)       ]

The matrix is the stiffness matrix for the conforming mesh on processor 1. We expect R1 ≈ 0 and R2 ≈ 0 at all steps; this approximation substantially cuts communication and calculation costs. Processor 1 sends Rγ and Uγ, and receives Rν and Uν. We use δU1 and δUγ to update U1 and Uγ; we discard δŪ2.
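The elimination behind the Schur complement system can be verified numerically. The sketch below (an illustration with random symmetric positive definite blocks, not mesh data) takes exact local matrices (Ā = A) and R2 = 0, solves the global saddle point system of Slide 14, and checks that the Schur complement system reproduces δU1, δUγ, and δU2:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, ng, n2 = 3, 2, 3                  # sizes of U1, interface, U2

def spd(n):
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

K1 = spd(n1 + ng)                     # subdomain 1 stiffness over (U1, Ugamma)
K2 = spd(ng + n2)                     # subdomain 2 stiffness over (Unu, U2)
A11, A1g = K1[:n1, :n1], K1[:n1, n1:]
Ag1, Agg = K1[n1:, :n1], K1[n1:, n1:]
Ann, An2 = K2[:ng, :ng], K2[:ng, ng:]
A2n, A22 = K2[ng:, :ng], K2[ng:, ng:]

R1, Rg, Rn = (rng.standard_normal(k) for k in (n1, ng, ng))
g = rng.standard_normal(ng)           # g = Unu - Ugamma, interface mismatch
I = np.eye(ng)
Z = lambda r, c: np.zeros((r, c))

# Global saddle point system, unknowns (dU1, dUg, dUn, dU2, Lambda), R2 = 0
M = np.block([
    [A11,       A1g,       Z(n1, ng), Z(n1, n2), Z(n1, ng)],
    [Ag1,       Agg,       Z(ng, ng), Z(ng, n2), I        ],
    [Z(ng, n1), Z(ng, ng), Ann,       An2,       -I       ],
    [Z(n2, n1), Z(n2, ng), A2n,       A22,       Z(n2, ng)],
    [Z(ng, n1), I,         -I,        Z(ng, n2), Z(ng, ng)],
])
rhs = np.concatenate([R1, Rg, Rn, np.zeros(n2), g])
sol = np.linalg.solve(M, rhs)
dU1, dUg = sol[:n1], sol[n1:n1 + ng]
dU2 = sol[n1 + 2 * ng:n1 + 2 * ng + n2]

# Schur complement system from this slide (with Abar = A)
S = np.block([
    [A11,       A1g,       Z(n1, n2)],
    [Ag1,       Agg + Ann, An2      ],
    [Z(n2, n1), A2n,       A22      ],
])
rS = np.concatenate([R1, Rg + Rn + Ann @ g, A2n @ g])
s = np.linalg.solve(S, rS)
assert np.allclose(s, np.concatenate([dU1, dUg, dU2]))
```

In the actual solver the Ā blocks are only approximations on processor 1's mesh, so the Schur solve is one step of an iteration rather than an exact solve.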


SLIDE 17

Summary of Calculation on Processor 1

1. Locally compute R1 and Rγ.
2. Exchange boundary data (send Rγ and Uγ; receive Rν and Uν).
3. Locally compute the right-hand side of the Schur complement system.
4. Locally solve the Schur complement system via the multigraph iteration.
5. Update U1 and Uγ using δU1 and δUγ.

The update could be local (U1 ← U1 + δU1; Uγ ← Uγ + δUγ) or could require communication. Here we do a Newton line search.
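The compute-residual / local-solve / local-update cycle above has the shape of a classical domain decomposition iteration. As a minimal serial stand-in (a multiplicative Schwarz sweep on a 1D Poisson model, not the maximum-overlap multigraph solver of the talk), the loop looks like:

```python
import numpy as np

# 1D Poisson: -u'' = 1 on (0,1), u(0) = u(1) = 0, n interior points.
n = 99
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)

# Two overlapping subdomains (index sets), playing processors 1 and 2.
s1 = np.arange(0, 60)
s2 = np.arange(40, n)

u = np.zeros(n)
for sweep in range(30):
    for s in (s1, s2):
        r = f - A @ u                                 # steps 1-3: residual data
        du = np.linalg.solve(A[np.ix_(s, s)], r[s])   # step 4: local solve
        u[s] += du                                    # step 5: local update
res = np.linalg.norm(f - A @ u) / np.linalg.norm(f)   # relative residual
```

With generous overlap the iteration contracts quickly; the talk's point is that the adaptively refined meshes provide this overlap (and a coarse grid) for free.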


SLIDE 18

The Rate of Convergence

joint with Panayot Vassilevski

Theorem: Under suitable hypotheses, the rate of convergence of the DD algorithm is bounded by

    γ ≤ C (H/d)²

where C is independent of N, p, h, H, and d. In practice, H ∼ d and the observed rate of convergence is constant (at least for p ≤ 256 and N ≤ 25m).

The proof makes heavy use of interior estimates.


SLIDE 19

Global Solution N = 14095115
