

  1. Multiscale Basis Optimization for Darcy Flow. James M. Rath, work with Todd Arbogast in the Center for Subsurface Modeling / ICES at the University of Texas at Austin

  2. Using Newton’s method to solve linear systems or: How I learned to stop worrying and love nonlinear problems. Joint work with Todd Arbogast @ CSM / ICES / UT-Austin

  3. Problem: solving linear systems. Solving large, sparse linear systems often requires the use of iterative solvers. Storage and speed are the bogeymen.

  4. That’s the facts, Jack. Solving large, sparse linear systems often requires the use of iterative solvers. Storage and speed are the bogeymen. However, generally speaking: • Solvers achieve only linear convergence rates toward the solution from the initial guess. • Large condition numbers mean solvers behave poorly: they require more and more iterations.

  5. What we’re after. Solving large, sparse linear systems often requires the use of iterative solvers. Storage and speed are the bogeymen. However, generally speaking: • Solvers achieve only linear convergence rates toward the solution from the initial guess. • Large condition numbers mean solvers behave poorly: they require more and more iterations. We want to do better on both these counts.
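A small sketch (not from the slides; the matrices, sizes, and tolerance are made up for illustration) of the second point: a basic iterative method such as steepest descent needs many more iterations as the condition number of the system grows.

```python
import numpy as np

def steepest_descent_iters(A, f, tol=1e-6, max_iter=100000):
    """Iterations of steepest descent needed to reduce the residual of A u = f (A SPD)."""
    u = np.zeros_like(f)
    r = f - A @ u
    for k in range(max_iter):
        if np.linalg.norm(r) < tol * np.linalg.norm(f):
            return k
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)   # optimal step length along the residual direction
        u = u + alpha * r
        r = r - alpha * Ar
    return max_iter

rng = np.random.default_rng(0)
n = 100
f = rng.standard_normal(n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
for cond in (10.0, 100.0, 1000.0):
    # SPD test matrix with the prescribed condition number
    A = Q @ np.diag(np.linspace(1.0, cond, n)) @ Q.T
    print(f"condition number ~ {cond:6.0f}: {steepest_descent_iters(A, f)} iterations")
```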

  6. Dr. Obvious strikes again. Solving linear systems requires nonlinear operations, namely, division. 10x = 17 ⇒ x = 17/10

  7. Newton’s method is da bomb. Solving linear systems requires nonlinear operations, namely, division. 10x = 17 ⇒ x = 17/10. Nonlinear solvers (Newton’s method and its ilk): • Get fast quadratic convergence to a solution, and • As applied to discretizations of nonlinear elliptic PDEs, are insensitive to mesh size.

  8. Hope beyond hope. Solving linear systems requires nonlinear operations, namely, division. 10x = 17 ⇒ x = 17/10. Nonlinear solvers (Newton’s method and its ilk): • Get fast quadratic convergence to a solution, and • As applied to discretizations of nonlinear elliptic PDEs, are insensitive to mesh size. We want to carry over these properties to solving linear systems.
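A concrete illustration (mine, not from the slides) of both points at once: the division in x = 17/10 can itself be done by Newton’s method applied to the nonlinear equation g(y) = 1/y − 10 = 0, whose iteration is division-free and converges quadratically.

```python
# Newton's method for g(y) = 1/y - a has the update y <- y * (2 - a*y):
# no division appears, and the error roughly squares at every step.
a = 10.0
y = 0.05                        # rough initial guess for 1/10
for i in range(6):
    y = y * (2.0 - a * y)
    print(i, y, abs(y - 0.1))   # quadratic convergence to the reciprocal 1/a
x = 17.0 * y                    # hence x = 17/10 without ever dividing
print("x =", x)
```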

  9. Oopsie! A naive application of Newton’s method to solving a linear system results in a one-step procedure.

  10. Darn! A naive application of Newton’s method to solving a linear system results in a one-step procedure. Solving: Au = f. Objective function: F(u) = f − Au. Jacobian: F′(u) = −A. Newton step: u_{i+1} = u_i − (−A)^{−1}(f − Au_i) = u_i + u − u_i = u
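A minimal numerical check of this collapse (illustrative; it borrows the 3 × 3 matrix that appears later on slide 14): from any initial guess, one Newton step with Jacobian −A lands exactly on the solution.

```python
import numpy as np

A = np.array([[10., -6., 4.],
              [-6., 17., 0.],
              [ 4.,  0., 9.]])
f = np.array([10., 5., -1.])

u_i = np.array([3., -2., 7.])                        # arbitrary initial guess
F = f - A @ u_i                                      # objective F(u) = f - A u
u_next = u_i - np.linalg.solve(-A, F)                # u_{i+1} = u_i - (-A)^{-1} (f - A u_i)
print(np.allclose(u_next, np.linalg.solve(A, f)))    # True: one step recovers u exactly
```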

  11. But even worse. To solve your linear system ... Solving: Au = f. Newton step: u_{i+1} = u_i − (−A)^{−1}(f − Au_i) ... you must solve your linear system.

  12. But even worse. To solve your linear system ... Solving: Au = f. Newton step: u_{i+1} = u_i − (−A)^{−1}(f − Au_i) ... you must solve your linear system. And that’s no fun! Especially if it’s a 10^6 × 10^6 sparse, ill-conditioned system you want to solve.

  13. If at first you don’t succeed ... To solve your linear system ... Solving: Au = f. Newton step: u_{i+1} = u_i − (−A)^{−1}(f − Au_i) ... you must solve your linear system. We have to try harder to find a nonlinear piece to attack, but it’s not obvious where to begin or what will be successful.

  14. Try, try again. Let’s examine a 3 × 3 linear system just to keep things simple. A u = f:

      [ 10  −6   4 ] [ x ]   [ 10 ]
      [ −6  17   0 ] [ y ] = [  5 ]
      [  4   0   9 ] [ z ]   [ −1 ]

  15. Hmmmm ... Let’s examine a 3 × 3 linear system just to keep things simple. A u = f:

      [ 10  −6   4 ] [ ρ cos θ sin φ ]   [ 10 ]
      [ −6  17   0 ] [ ρ sin θ sin φ ] = [  5 ]
      [  4   0   9 ] [ ρ cos φ       ]   [ −1 ]

  But let’s use polar coordinates to represent the unknown.

  16. Rearranging ... Let’s examine a 3 × 3 linear system just to keep things simple. A U_σ ρ = f:

      [ 10  −6   4 ] [ cos θ sin φ ]       [ 10 ]
      [ −6  17   0 ] [ sin θ sin φ ] ρ  =  [  5 ]
      [  4   0   9 ] [ cos φ       ]       [ −1 ]

  But let’s use polar coordinates to represent the unknown. And separate direction (or shape) from magnitude.

  17. Rearranging ... Let’s examine a 3 × 3 linear system just to keep things simple. A U_σ ρ = f:

      [ 10  −6   4 ] [ cos θ sin φ ]       [ 10 ]
      [ −6  17   0 ] [ sin θ sin φ ] ρ  =  [  5 ]
      [  4   0   9 ] [ cos φ       ]       [ −1 ]

  But let’s use polar coordinates to represent the unknown. And separate direction (or shape) from magnitude. σ = (θ, φ)
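A small sketch of this reparametrization (illustrative; the function name U_sigma is mine): the shape σ = (θ, φ) picks a unit direction, and the magnitude ρ scales it, so u = U_σ ρ recovers the Cartesian unknown.

```python
import numpy as np

def U_sigma(sigma):
    """Unit direction (the 'shape') given by the spherical angles sigma = (theta, phi)."""
    theta, phi = sigma
    return np.array([np.cos(theta) * np.sin(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(phi)])

sigma = (0.7, 1.2)              # an arbitrary shape
rho = 3.5                       # an arbitrary magnitude
u = U_sigma(sigma) * rho        # u = U_sigma * rho in Cartesian coordinates
print(u, np.linalg.norm(u))     # the norm equals rho, since the shape is a unit vector
```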

  18. A nonlinear problem. Objective function: r(σ, ρ) = f − A U_σ ρ

  19. Split: some linear, some nonlinear. Objective function: r(σ) = f − A U_σ ρ(σ). Determine ρ as the “best” magnitude for a fixed σ: A U_σ ρ = f

  20. Split: some linear, some nonlinear. Objective function: r(σ) = f − A U_σ ρ(σ). Determine ρ as the “best” magnitude for a fixed σ: (U_σ^T A U_σ) ρ = U_σ^T f. Where: • “Best” = best in least-squares sense (in the energy or A-norm).

  21. Split: some linear, some nonlinear. Objective function: r(σ) = f − A U_σ ρ(σ). Determine ρ as the “best” magnitude for a fixed σ: (U_σ^T A U_σ) ρ = U_σ^T f. Where: • “Best” = best in least-squares sense (in the energy or A-norm). • The system U_σ^T A U_σ is a smaller/coarser linear system to solve.
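A sketch of the magnitude solve (illustrative; it reuses U_sigma from the previous sketch and the 3 × 3 data from slide 14): for a fixed shape the coarse system here is just 1 × 1, and the residual r(σ) then measures how good the chosen shape is.

```python
import numpy as np

A = np.array([[10., -6., 4.],
              [-6., 17., 0.],
              [ 4.,  0., 9.]])
f = np.array([10., 5., -1.])

def U_sigma(sigma):
    theta, phi = sigma
    return np.array([np.cos(theta) * np.sin(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(phi)])

def rho_of_sigma(sigma):
    """Best magnitude for a fixed shape: (U_sigma^T A U_sigma) rho = U_sigma^T f."""
    U = U_sigma(sigma)
    return (U @ f) / (U @ A @ U)       # the coarse system is 1x1 for a single shape vector

def residual(sigma):
    """Objective r(sigma) = f - A U_sigma rho(sigma)."""
    U = U_sigma(sigma)
    return f - A @ U * rho_of_sigma(sigma)

print(residual((0.7, 1.2)))            # nonzero unless the shape already points at the solution
```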

  22. Algorithm à la Newton. 1. Choose a shape σ. (Fix for now.)

  23. Algorithm à la Newton. 1. Choose a shape σ. 2. Solve for ρ: (U_σ^T A U_σ) ρ = U_σ^T f. This is an “easy” coarsened problem.

  24. Algorithm à la Newton. 1. Choose a shape σ. 2. Solve for ρ: (U_σ^T A U_σ) ρ = U_σ^T f. 3. Calculate objective/residual: r(σ) = f − A U_σ ρ

  25. Algorithm à la Newton. 1. Choose a shape σ. 2. Solve for ρ: (U_σ^T A U_σ) ρ = U_σ^T f. 3. Calculate objective/residual: r(σ) = f − A U_σ ρ. 4. Calculate Jacobian r′(σ).

  26. Algorithm à la Newton. 1. Choose a shape σ. 2. Solve for ρ: (U_σ^T A U_σ) ρ = U_σ^T f. 3. Calculate objective/residual: r(σ) = f − A U_σ ρ. 4. Calculate Jacobian r′(σ). Oops, oh yeah ...

  27. Algorithm à la Newton. 1. Choose a shape σ. 2. Solve for ρ: (U_σ^T A U_σ) ρ = U_σ^T f. 3. Calculate objective/residual: r(σ) = f − A U_σ ρ. 4. Calculate Jacobian r′(σ). 5. Calculate Newton step: δσ = −(r′)† r

  28. Algorithm à la Newton. 1. Choose a shape σ. 2. Solve for ρ: (U_σ^T A U_σ) ρ = U_σ^T f. 3. Calculate objective/residual: r(σ) = f − A U_σ ρ. 4. Calculate Jacobian r′(σ). 5. Calculate Newton step: δσ = −(r′)† r. 6. Update shape σ: σ ← σ + δσ

  29. Algorithm à la Newton. 1. Choose a shape σ. 2. Solve for ρ: (U_σ^T A U_σ) ρ = U_σ^T f. 3. Calculate objective/residual: r(σ) = f − A U_σ ρ. 4. Calculate Jacobian r′(σ). 5. Calculate Newton step: δσ = −(r′)† r. 6. Update shape σ: σ ← σ + δσ. 7. Repeat as necessary.
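Putting steps 1-7 together, a minimal sketch (illustrative; it repeats the helpers above and, since the slides give no formula for r′(σ), approximates the Jacobian by finite differences):

```python
import numpy as np

A = np.array([[10., -6., 4.], [-6., 17., 0.], [4., 0., 9.]])
f = np.array([10., 5., -1.])

def U_sigma(sigma):
    theta, phi = sigma
    return np.array([np.cos(theta) * np.sin(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(phi)])

def residual(sigma):
    """r(sigma) = f - A U_sigma rho(sigma), with rho from the small coarse solve."""
    U = U_sigma(sigma)
    rho = (U @ f) / (U @ A @ U)
    return f - A @ U * rho

def jacobian(sigma, h=1e-6):
    """Finite-difference stand-in for the Jacobian r'(sigma)."""
    r0 = residual(sigma)
    return np.column_stack([(residual(sigma + h * e) - r0) / h for e in np.eye(len(sigma))])

sigma = np.array([0.7, 1.2])                  # 1. choose a shape
for it in range(20):                          # 7. repeat as necessary
    r = residual(sigma)                       # 2.-3. coarse solve and residual
    if np.linalg.norm(r) < 1e-10:
        break
    J = jacobian(sigma)                       # 4. Jacobian r'(sigma)
    sigma = sigma - np.linalg.pinv(J) @ r     # 5.-6. step delta_sigma = -(r')† r, then update
print(sigma, np.linalg.norm(residual(sigma)))
```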

  30. Bummer. • Calculating the Jacobian r′(σ) • Solving the linear system for the step δσ = −(r′)† r

  31. Bummer. • Calculating the Jacobian r′(σ) • Solving the linear system for the step δσ = −(r′)† r

  32. Calculus ... yuck! • Calculating the Jacobian r′(σ) • Solving the linear system for the step δσ = −(r′)† r. Jacobians require calculus, and who wants to do calculus?

  33. Linear algebra is my bag, baby. • Calculating the Jacobian r′(σ) • Solving the linear system for the step δσ = −(r′)† r. Jacobians require calculus, and who wants to do calculus? Blech! I wanna do linear algebra ...

  34. Jacobians are expensive. • Calculating the Jacobian r′(σ) • Solving the linear system for the step δσ = −(r′)† r. Jacobians require calculus, and who wants to do calculus? Blech! I wanna do linear algebra ... (Jacobians are expensive to compute, anyway.)

  35. To be lazy, one must do work ... • Calculating the Jacobian r′(σ) • Solving the linear system for the step δσ = −(r′)† r. Jacobians require calculus, and who wants to do calculus? Blech! I wanna do linear algebra ... We’ll use calculus to avoid calculus. (And save the day!)

  36. Chain rule to the rescue! S’pose instead of computing the Newton step: δσ = −(r′)† r

  37. Chain rule to the rescue! S’pose instead of computing the Newton step: δσ = −(r′)† r. We compute the effect that the Newton step would have on the residual: δr = (r′) δσ

  38. Chain rule to the rescue! S’pose instead of computing the Newton step: δσ = −(r′)† r. We compute the effect that the Newton step would have on the residual: δr = (r′) δσ = −(r′)(r′)† r

  39. A-ha! S’pose instead of computing the Newton step: δσ = −(r′)† r. We compute the effect that the Newton step would have on the residual: δr = (r′) δσ = −(r′)(r′)† r. The operation (r′)(r′)† is something familiar: the projection onto the range of r′!
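A quick numerical check of that identification (illustrative; a random full-column-rank matrix stands in for r′): J J† equals the orthogonal projector onto the range of J.

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((5, 2))      # stand-in for the Jacobian r'(sigma)
P = J @ np.linalg.pinv(J)            # the operator (r')(r')† from the slide

Q, _ = np.linalg.qr(J)               # orthonormal basis for range(J)
P_range = Q @ Q.T                    # orthogonal projector onto range(J), built independently

print(np.allclose(P, P_range))       # True: (r')(r')† projects onto the range of r'
```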
