MULTISCALE BASIS OPTIMIZATION
FOR DARCY FLOW
James M. Rath work with Todd Arbogast in the Center for Subsurface Modeling / ICES at the University of Texas at Austin
Problem Steady single-phase flow can be described by: −∇ · a∇p = f This PDE can be approximated in a number of ways. In this talk, we will focus on applying 2D piecewise linear finite elements on triangles.
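As a concrete (toy) illustration of the P1 discretization, here is a minimal stiffness-matrix assembly. The two-triangle mesh and the constant coefficient a are assumptions for the sketch, not taken from the talk:

```python
# A minimal sketch of assembling the P1 (piecewise linear, triangles)
# stiffness matrix for -div(a grad p) = f on a toy two-triangle mesh of
# the unit square; mesh and constant coefficient a are illustrative only.
import numpy as np

verts = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
a = 1.0  # constant stand-in for the permeability-dependent coefficient

A = np.zeros((len(verts), len(verts)))
for t in tris:
    x = verts[t]                          # 3x2 vertex coordinates
    # Hat function i on this triangle is c_i + g_i . (x, y); solve for
    # the coefficients from phi_i(vertex_j) = delta_ij.
    B = np.hstack([np.ones((3, 1)), x])   # rows: [1, x_j, y_j]
    grads = np.linalg.inv(B)[1:, :].T     # row i = gradient of hat i
    area = 0.5 * abs(np.linalg.det(x[1:] - x[0]))
    A[np.ix_(t, t)] += a * area * (grads @ grads.T)
```

A quick sanity check on such an assembly: before boundary conditions, constant functions lie in the kernel, so each row of A sums to zero.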
Challenges Steady single-phase flow can be described by: −∇ · a∇p = f The coefficient a depends on the permeability. The permeability is often geostatistically generated at high resolution. It can be very heterogeneous. Together these conditions make for an ill-conditioned and computationally expensive problem.
Goal Calculate the approximation at the full resolution of the problem capturing all the details of the flow.
Method
Calculate the approximation at the full resolution of the problem, capturing all the details of the flow. We propose a new iterative method for solving the problem. The principal per-iteration costs are only a fine-scale residual computation, injection of the residual DOFs into the coarse problem, and a coarse solve.

Teaser
The method has a number of unusual features.

(Figure: the coefficient a, i.e., the permeability.)
Fine-scale degrees of freedom Let V be piecewise linear functions on a fine mesh. Degrees of freedom (DOFs) are shown above.
Goal Solve Ap = f with pressure p ∈ V , data f ∈ V ′, and matrix A : V → V ′.
Multiscale degrees of freedom From V , take out coarse edge DOFs to get VH.
Multiscale degrees of freedom: corner shape From V , take out coarse edge DOFs to get VH. Fix shapes for corner DOFs using the usual multiscale basis shapes.
Multiscale problem
Solve A_H p_H = f_H for p_H ∈ V_H, where
A_H = I_H^T A I_H is the coarsened matrix,
f_H = I_H^T f is the coarsened data,
and I_H : V_H → V is the natural injection. Note that these follow from the Galerkin procedure applied to V_H ⊂ V.

Multiscale solution quality
The multiscale solution p_H is a pretty good approximation for p: we use almost all the same DOFs and just take out a few. (And multiscale problems are just as easy to solve as coarse ones.)

Oops
But almost always p ≠ p_H, and, even worse, p ∉ V_H. That is, we couldn't possibly get p as the result of a multiscale problem no matter how hard we try; we're missing some degrees of freedom.
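The Galerkin coarsening above can be sketched in a few lines. The injection I_H below is a toy aggregation of fine DOFs, and A is a random SPD stand-in; the talk's actual I_H is built from multiscale basis shapes:

```python
# Sketch of Galerkin coarsening: A_H = I_H^T A I_H and f_H = I_H^T f.
# I_H here is a toy aggregation injection; A and f are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4                          # fine and coarse DOF counts
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD stand-in for the fine matrix
f = rng.standard_normal(n)

I_H = np.zeros((n, m))
for i in range(n):
    I_H[i, i // 2] = 1.0             # each coarse DOF covers 2 fine DOFs

A_H = I_H.T @ A @ I_H                # coarsened matrix
f_H = I_H.T @ f                      # coarsened data
p_H = np.linalg.solve(A_H, f_H)      # coarse (multiscale) solve
```

By construction the coarse residual I_H^T (f − A I_H p_H) vanishes: the Galerkin procedure makes the error invisible at the coarse scale, so only a fine-scale residual is left over.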
Supplemented multiscale degrees of freedom
From V_H, add back in some edge shapes to form V_β. Fix shapes along each coarse edge. Pick any shape you like... but just pick one (for each coarse edge) for any given computation. Record the heights (of the shapes along coarse edges) in a list β.
Supplemented multiscale problem
As before, solve A_β p_β = f_β for p_β ∈ V_β, with
A_β = I_β^T A I_β as the coarsened matrix,
f_β = I_β^T f as the coarsened data.

A light at the end of the tunnel?
Again, usually p ≠ p_β and p ∉ V_β. That is, for any particular β we're still missing some degrees of freedom. But at least now V = ⋃_β V_β. That is, by adjusting β we can find a V_β that accommodates p.

Motivation
We have traded solving a linear problem for solving a non-linear one. But this is sensible because we trade a large linear system for a small non-linear one.
An algorithm
Given a current guess β:
1. Solve the supplemented multiscale problem A_β p_β = f_β.
2. Compute the fine-scale residual r_β = f − A I_β p_β.
3. Compute the Newton step δβ = −(r′_β)† r_β.
4. Update β ← β + δβ.
Here r′_β denotes the Jacobian of r_β with respect to β, and † a pseudo-inverse; the goal is to drive r_β to zero.

Jacobian computation is too much
Explicitly computing r′_β is expensive! (So approximate it instead.)
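The steps above can be sketched numerically. Everything below is a stand-in: a random SPD fine matrix and a made-up injection inj(beta) that is linear in β; the Jacobian r′_β is approximated by finite differences, which is exactly the expense the chain-rule fix later avoids:

```python
# Toy sketch of one Newton step of the algorithm; A, f, and the
# parameterized injection inj(beta) are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 12, 4, 3                       # fine DOFs, coarse DOFs, len(beta)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)              # SPD fine-scale matrix
f = rng.standard_normal(n)
I0 = rng.standard_normal((n, m))         # base injection V_beta -> V
E = rng.standard_normal((k, n, m))       # sensitivity to each beta entry

def inj(beta):                           # I_beta, linear in beta (a toy choice)
    return I0 + np.tensordot(beta, E, axes=1)

def residual(beta):                      # r_beta = f - A I_beta p_beta
    I = inj(beta)
    p = np.linalg.solve(I.T @ A @ I, I.T @ f)   # supplemented coarse solve
    return f - A @ (I @ p)

beta = np.zeros(k)
r = residual(beta)

# Finite-difference Jacobian r'_beta: k extra coarse solves (the costly part)
eps = 1e-6
J = np.column_stack([(residual(beta + eps * np.eye(k)[j]) - r) / eps
                     for j in range(k)])
beta = beta - np.linalg.pinv(J) @ r      # delta_beta = -(r'_beta)^dagger r_beta
```

Note that I_β^T r_β = 0 at every iterate (Galerkin orthogonality), so the step is driven purely by the fine-scale part of the residual.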
Quick and easy example
We apply this method to solving the flow problem on the unit square with constant permeability. There are sources at the bottom-left and top-right corners. The method is used with a 6 × 6 subgrid (corresponding to a 12 × 12 fine grid).
(Figure: log10 of the residual versus iteration number.)
Note the axis scales: we get quadratic convergence, on a linear problem!
Problem size insensitivity: Fix H and let h → 0
(Figure: number of Newton iterations versus 1/h.)
If we solve the flow problem with the same coarse grid (fixing H), but let the subgrid get finer and finer (letting h → 0), then we only need a constant number of Newton iterations regardless of h.
Problem size insensitivity: Fix H/h and let h → 0
(Figure: number of Newton iterations versus 1/h.)
If we solve the flow problem with the same subgrid (fixing H/h), but let the overall grid get finer and finer (let H → 0 and h → 0), then we only need a constant number of Newton iterations regardless of h.
Kick it up a notch: a more heterogeneous permeability
The above graphic plots the variation from a statistically generated permeability field. Darker areas indicate low permeability; lighter areas indicate high permeability. The permeabilities span about five orders of magnitude (10^5).
Problem size insensitivity: Fix H and let h → 0
(Figure: maximum, median, and minimum number of Newton iterations versus 1/h.)
Again, fix H but let h → 0. At each h, grab several statistical subsamples of the permeability field; only a constant number of Newton iterations is needed regardless of h.
Heterogeneity insensitivity?: κmax/κmin → +∞
(Figure: number of Newton iterations versus log10(κmax/κmin), for 1/h = 8, 12, 16, and 20.)
Take a subsample of the heterogeneous permeability field and rescale it so that κmax/κmin gets large.
A channel/barrier permeability field In the above diagram, gray represents a permeability of 1 and red represents either a high or low permeability. When high, we have a channel; when low, we have a barrier.
Heterogeneity insensitivity?: κmax/κmin → +∞
(Figure: number of Newton iterations versus log10 of the permeability jump, for 1/h = 8 and 12.)
The horizontal axis shows the base-10 log of the permeability of the barrier/channel. Points on the left are for a barrier; points in the center are for constant permeability everywhere; points on the right are for a channel.
An algorithm (recap)
Solve A_β p_β = f_β; compute r_β = f − A I_β p_β; take the step δβ = −(r′_β)† r_β; update β ← β + δβ.
A different fix via the chain rule
Instead of computing the step δβ directly, we can compute the effect of a step on the residual, δr_β, and infer the step in β indirectly. That is, using the chain rule,
δr_β = r′_β δβ and δβ = −(r′_β)† r_β, so δr_β = −r′_β (r′_β)† r_β.
But r′_β (r′_β)† is a projection that's easy to compute. (So we can compute the Newton step without derivatives!)
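A minimal numerical check of that last claim, with a random tall matrix J standing in for the Jacobian r′_β:

```python
# r' (r')^dagger is an orthogonal projection onto the column space of r',
# so the step's predicted effect on the residual never increases its norm.
# J is a random stand-in for the Jacobian r'_beta.
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((10, 3))     # tall, full column rank (generically)
r = rng.standard_normal(10)

P = J @ np.linalg.pinv(J)            # the projection r'_beta (r'_beta)^dagger
delta_r = -P @ r                     # predicted change in the residual
```

Here P satisfies P @ P == P and P == P.T, and r + delta_r is the component of r orthogonal to range(J); once the Jacobian's range captures the residual, the step removes it entirely.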