
Optimization-based computational physics and high-order methods (PowerPoint presentation)

Optimization-based computational physics and high-order methods: from optimized analysis to design and data assimilation. Matthew J. Zahr and Per-Olof Persson. CRD Postdoc Seminar Series, Lawrence Berkeley National Laboratory, Berkeley, CA.


  1. Proposed approach: managed inexactness. Replace the expensive PDE with an inexpensive approximation model:
  • Anisotropic sparse grids for inexact integration of risk measures
  • Reduced-order models for inexact PDE evaluations
  $\min_{\mu \in \mathbb{R}^{n_\mu}} F(\mu) \;\longrightarrow\; \min_{\mu \in \mathbb{R}^{n_\mu}} m(\mu)$
  Manage the inexactness with a trust region method:
  • Embedded in a globally convergent trust region framework
  • Error indicators (which must be computable and apply to general, nonlinear PDEs) account for all sources of inexactness
  • The approximation model is refined using greedy algorithms
  At each iteration, the trust region subproblem $\min_{\mu \in \mathbb{R}^{n_\mu}} m_k(\mu)$ subject to $\|\mu - \mu_k\| \le \Delta_k$ drives the solution of $\min_{\mu \in \mathbb{R}^{n_\mu}} F(\mu)$.
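Since the deck contains no code, a Python sketch (assumed language) of the trust-region mechanism on this slide may help. Here `build_model` stands in for the ROM/sparse-grid surrogate $m_k$, and the step is a simple Cauchy-point step rather than a full subproblem solve; this is an illustrative skeleton, not the authors' implementation.

```python
import numpy as np

def trust_region_inexact(F, build_model, mu0, delta0=1.0, eta1=0.1, eta2=0.75,
                         delta_max=10.0, tol=1e-8, max_iter=500):
    """Trust-region loop driven by an approximation model m_k of F.

    build_model(mu) returns (m, grad_m): a cheap surrogate of F near mu.
    Illustrative only -- the talk builds m_k from ROMs and sparse grids.
    """
    mu, delta = np.asarray(mu0, dtype=float), delta0
    for _ in range(max_iter):
        m, grad_m = build_model(mu)
        g = grad_m(mu)
        if np.linalg.norm(g) < tol:
            break
        # Cauchy-point step: steepest descent on m_k, clipped to the region
        mu_hat = mu - delta * g / np.linalg.norm(g)
        pred = m(mu) - m(mu_hat)                 # predicted reduction
        rho = (F(mu) - F(mu_hat)) / pred if pred > 0 else -1.0
        if rho >= eta1:
            mu = mu_hat                          # accept the step
        if rho < eta1:
            delta *= 0.5                         # shrink trust region
        elif rho >= eta2:
            delta = min(2.0 * delta, delta_max)  # expand trust region
    return mu
```

With the exact model ($m = F$) this reduces to a plain trust-region steepest-descent method; the point of the talk is that convergence is retained when $m_k$ only satisfies a gradient-error condition.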

  2. First source of inexactness: anisotropic sparse grids. Stochastic collocation on anisotropic sparse grid nodes approximates the integral with a summation:
  $$\min_{u \in \mathbb{R}^{n_u},\, \mu \in \mathbb{R}^{n_\mu}} \mathbb{E}[J(u, \mu, \cdot)] \quad \text{subject to } r(u, \mu, \xi) = 0 \;\; \forall \xi \in \Xi$$
  becomes
  $$\min_{u \in \mathbb{R}^{n_u},\, \mu \in \mathbb{R}^{n_\mu}} \mathbb{E}_{\mathcal{I}}[J(u, \mu, \cdot)] \quad \text{subject to } r(u, \mu, \xi) = 0 \;\; \forall \xi \in \Xi_{\mathcal{I}}$$
  [Kouri et al., 2013, Kouri et al., 2014]

  3. Source of inexactness: anisotropic sparse grids. Figure: the tensor-product quadrature rule $\xi_1 \otimes \xi_2$ (left) induced by a multi-index set (right); the index set ($\mathcal{I}$) and its forward neighbors ($\mathcal{N}(\mathcal{I})$) are marked.
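To make the index-set bookkeeping on this slide concrete, here is a small Python sketch of the forward-neighbor set $\mathcal{N}(\mathcal{I})$ used by dimension-adaptive sparse grids. It assumes multi-indices start at 1 and enforces the usual admissibility rule (the enlarged set stays downward closed); the talk does not spell out these conventions, so they are assumptions.

```python
def forward_neighbors(index_set):
    """Forward neighbors N(I) of a downward-closed multi-index set I:
    indices j = i + e_k (for some i in I and unit vector e_k) not already
    in I such that I ∪ {j} stays downward closed (admissible)."""
    I = set(index_set)
    dim = len(next(iter(I)))
    nbrs = set()
    for i in I:
        for k in range(dim):
            j = tuple(i[m] + (1 if m == k else 0) for m in range(dim))
            if j in I:
                continue
            # admissible iff every backward neighbor of j is already in I
            backward = [tuple(j[m] - (1 if m == kk else 0) for m in range(dim))
                        for kk in range(dim) if j[kk] > 1]
            if all(b in I for b in backward):
                nbrs.add(j)
    return nbrs
```

For example, for $\mathcal{I} = \{(1,1), (2,1)\}$ the admissible forward neighbors are $(1,2)$ and $(3,1)$; the index $(2,2)$ is excluded because its backward neighbor $(1,2)$ is missing.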


  5. Second source of inexactness: reduced-order models. Stochastic collocation of the reduced-order model over the anisotropic sparse grid nodes approximates the integral with a cheap summation:
  $$\min_{u \in \mathbb{R}^{n_u},\, \mu \in \mathbb{R}^{n_\mu}} \mathbb{E}[J(u, \mu, \cdot)] \;\;\text{s.t. } r(u, \mu, \xi) = 0 \;\forall \xi \in \Xi$$
  $$\Downarrow$$
  $$\min_{u \in \mathbb{R}^{n_u},\, \mu \in \mathbb{R}^{n_\mu}} \mathbb{E}_{\mathcal{I}}[J(u, \mu, \cdot)] \;\;\text{s.t. } r(u, \mu, \xi) = 0 \;\forall \xi \in \Xi_{\mathcal{I}}$$
  $$\Downarrow$$
  $$\min_{u_r \in \mathbb{R}^{k_u},\, \mu \in \mathbb{R}^{n_\mu}} \mathbb{E}_{\mathcal{I}}[J(\Phi u_r, \mu, \cdot)] \;\;\text{s.t. } \Phi^T r(\Phi u_r, \mu, \xi) = 0 \;\forall \xi \in \Xi_{\mathcal{I}}$$

  6. Source of inexactness: projection-based model reduction.
  • Model reduction ansatz: the state vector lies in a low-dimensional subspace, $u \approx \Phi u_r$
  • $\Phi = [\phi_1 \,\cdots\, \phi_{k_u}] \in \mathbb{R}^{n_u \times k_u}$ is the reduced (trial) basis ($n_u \gg k_u$)
  • $u_r \in \mathbb{R}^{k_u}$ are the reduced coordinates of $u$
  • Substitute into $r(u, \mu) = 0$ and perform a Galerkin projection: $\Phi^T r(\Phi u_r, \mu) = 0$
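The ansatz and projection above can be made concrete with a minimal sketch for a *linear* residual $r(u) = Au - b$ (the talk treats general nonlinear $r$; linear is enough to show the mechanics). All data here are synthetic assumptions: a basis built from sine modes and a diagonal SPD operator.

```python
import numpy as np

rng = np.random.default_rng(0)
n_u, k_u = 200, 5

# Synthetic snapshot matrix whose columns live in a 5-dimensional subspace
grid = np.linspace(0.0, 1.0, n_u)
modes = np.column_stack([np.sin((j + 1) * np.pi * grid) for j in range(k_u)])
snapshots = modes @ rng.standard_normal((k_u, 20))

# Reduced (trial) basis Phi: leading left singular vectors of the snapshots
Phi = np.linalg.svd(snapshots, full_matrices=False)[0][:, :k_u]

# Full-order "residual" r(u) = A u - b, with the true solution in span(Phi)
A = np.diag(2.0 + np.arange(n_u))
u_true = modes @ rng.standard_normal(k_u)
b = A @ u_true

# Galerkin projection: Phi^T r(Phi u_r) = 0  =>  (Phi^T A Phi) u_r = Phi^T b
u_r = np.linalg.solve(Phi.T @ A @ Phi, Phi.T @ b)
u_rom = Phi @ u_r
err = np.linalg.norm(u_rom - u_true) / np.linalg.norm(u_true)
```

Because the true solution lies in the span of the basis, the $k_u \times k_u$ reduced system recovers it (to round-off) at a fraction of the cost of the $n_u \times n_u$ solve.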

  7. Few global, data-driven basis functions vs. many local ones.
  • Instead of traditional local shape functions (e.g., FEM), use global shape functions
  • Instead of a priori, analytical shape functions, leverage a data-rich computing environment by using data-driven modes

  8. Proposed approach: managed inexactness (recap). The expensive PDE is replaced with the inexpensive approximation model (anisotropic sparse grids for inexact integration of risk measures, reduced-order models for inexact PDE evaluations), and the inexactness is managed with a globally convergent trust region method using computable error indicators, valid for general nonlinear PDEs, and greedy refinement of the approximation model.

  9. Trust region ingredients for global convergence.
  Approximation models: $m_k(\mu)$
  Error indicators: $\|\nabla F(\mu) - \nabla m_k(\mu)\| \le \xi \varphi_k(\mu)$, $\xi > 0$
  Adaptivity: $\varphi_k(\mu_k) \le \kappa_\varphi \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}$
  Global convergence: $\liminf_{k \to \infty} \|\nabla F(\mu_k)\| = 0$

 10. Trust region method: ROM/SG approximation model. Approximation models built on the two sources of inexactness:
  $$m_k(\mu) = \mathbb{E}_{\mathcal{I}_k}[J(\Phi_k u_r(\mu, \cdot), \mu, \cdot)]$$
  Error indicators that account for both sources of error:
  $$\varphi_k(\mu) = \alpha_1 E_1(\mu; \mathcal{I}_k, \Phi_k) + \alpha_2 E_2(\mu; \mathcal{I}_k, \Phi_k) + \alpha_3 E_4(\mu; \mathcal{I}_k, \Phi_k)$$
  Reduced-order model errors:
  $$E_1(\mu; \mathcal{I}, \Phi) = \mathbb{E}_{\mathcal{I} \cup \mathcal{N}(\mathcal{I})}\big[\|r(\Phi u_r(\mu, \cdot), \mu, \cdot)\|\big], \qquad E_2(\mu; \mathcal{I}, \Phi) = \mathbb{E}_{\mathcal{I} \cup \mathcal{N}(\mathcal{I})}\big[\|r^\lambda(\Phi u_r(\mu, \cdot), \Phi \lambda_r(\mu, \cdot), \mu, \cdot)\|\big]$$
  Sparse grid truncation error:
  $$E_4(\mu; \mathcal{I}, \Phi) = \mathbb{E}_{\mathcal{N}(\mathcal{I})}\big[\|\nabla J(\Phi u_r(\mu, \cdot), \mu, \cdot)\|\big]$$



 13. Adaptivity: dimension-adaptive greedy method.
  while $E_4(\Phi, \mathcal{I}, \mu_k) > \frac{\kappa_\varphi}{3\alpha_3} \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}$ do
    Refine index set (dimension-adaptive sparse grids): $\mathcal{I}_k \leftarrow \mathcal{I}_k \cup \{j^*\}$, where $j^* = \arg\max_{j \in \mathcal{N}(\mathcal{I}_k)} \mathbb{E}_j[\|\nabla J(\Phi u_r(\mu, \cdot), \mu, \cdot)\|]$
    Refine reduced-order basis (greedy sampling):
    while $E_1(\Phi, \mathcal{I}, \mu_k) > \frac{\kappa_\varphi}{3\alpha_1} \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}$ do
      $\Phi_k \leftarrow [\Phi_k \;\; u(\mu_k, \xi^*) \;\; \lambda(\mu_k, \xi^*)]$, where $\xi^* = \arg\max_{\xi \in \Xi_{j^*}} \rho(\xi)\,\|r(\Phi_k u_r(\mu_k, \xi), \mu_k, \xi)\|$
    end while
    while $E_2(\Phi, \mathcal{I}, \mu_k) > \frac{\kappa_\varphi}{3\alpha_2} \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}$ do
      $\Phi_k \leftarrow [\Phi_k \;\; u(\mu_k, \xi^*) \;\; \lambda(\mu_k, \xi^*)]$, where $\xi^* = \arg\max_{\xi \in \Xi_{j^*}} \rho(\xi)\,\|r^\lambda(\Phi_k u_r(\mu_k, \xi), \Phi_k \lambda_r(\mu_k, \xi), \mu_k, \xi)\|$
    end while
  end while

 14. Optimal control of steady Burgers' equation.
  • Optimization problem:
  $$\min_{\mu \in \mathbb{R}^{n_\mu}} \int_\Xi \rho(\xi) \left[ \frac{1}{2}\int_0^1 \big(u(\mu, \xi, x) - \bar{u}(x)\big)^2 \, dx + \frac{\alpha}{2} \int_0^1 z(\mu, x)^2 \, dx \right] d\xi$$
  where $u(\mu, \xi, x)$ solves
  $$-\nu(\xi)\, \partial_{xx} u + u\, \partial_x u = z(\mu, x), \quad x \in (0, 1),\; \xi \in \Xi, \qquad u(\mu, \xi, 0) = d_0(\xi), \quad u(\mu, \xi, 1) = d_1(\xi)$$
  • Target state: $\bar{u}(x) \equiv 1$
  • Stochastic space: $\Xi = [-1, 1]^3$, $\rho(\xi)\, d\xi = 2^{-3}\, d\xi$, with $\nu(\xi) = 10^{\xi_1 - 2}$, $d_0(\xi) = 1 + \xi_2/1000$, $d_1(\xi) = \xi_3/1000$
  • Parametrization: $z(\mu, x)$ given by cubic splines with 51 knots, $n_\mu = 53$
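A minimal finite-difference/Newton solver for the deterministic inner PDE gives a feel for the forward problem in this example. The discretization, solver, and simplified boundary values ($u(0) = u(1) = 1$) are illustrative assumptions, not the setup used in the talk.

```python
import numpy as np

def solve_steady_burgers(z, nu=0.1, n=100, maxit=50):
    """Solve -nu*u'' + u*u' = z(x) on (0,1) with u(0) = u(1) = 1 by central
    finite differences and Newton's method. Illustrative sketch only."""
    x = np.linspace(0.0, 1.0, n + 2)
    h = x[1] - x[0]
    u = np.ones(n + 2)                       # initial guess; also sets the BCs
    for _ in range(maxit):
        ui, um, up = u[1:-1], u[:-2], u[2:]
        res = (-nu * (up - 2.0 * ui + um) / h**2
               + ui * (up - um) / (2.0 * h) - z(x[1:-1]))
        if np.linalg.norm(res) < 1e-10:
            break
        # Tridiagonal Jacobian of the residual w.r.t. the interior unknowns
        J = np.zeros((n, n))
        J[np.arange(n), np.arange(n)] = 2.0 * nu / h**2 + (up - um) / (2.0 * h)
        J[np.arange(1, n), np.arange(n - 1)] = -nu / h**2 - ui[1:] / (2.0 * h)
        J[np.arange(n - 1), np.arange(1, n)] = -nu / h**2 + ui[:-1] / (2.0 * h)
        u[1:-1] -= np.linalg.solve(J, res)   # Newton update on interior nodes
    return x, u
```

For zero control ($z \equiv 0$) with these boundary values the solver returns the exact constant state $u \equiv 1$, which matches the target state of the slide.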

 15. Optimal control and statistics. Figure: the optimal control $z(\mu, x)$ (top) and the corresponding mean state $\mathbb{E}[u(\mu, \cdot, x)]$ with bands of one and two standard deviations (bottom).

 16. Global convergence without pointwise agreement. Convergence history of the trust region method built on the two-level approximation (the accompanying plot shows $|F(\mu_k) - F(\mu^*)|$, $|F(\hat{\mu}_k) - F(\mu^*)|$, $|m_k(\mu_k) - F(\mu^*)|$, and $|m_k(\hat{\mu}_k) - F(\mu^*)|$ versus major iteration):

  F(µ_k)      m_k(µ_k)    F(µ̂_k)      m_k(µ̂_k)    ||∇F(µ_k)||  ρ_k         Success?
  6.6506e-02  7.2694e-02  5.3655e-02  5.9922e-02  2.2959e-02   1.0257e+00  yes
  5.3655e-02  5.9593e-02  5.0783e-02  5.7152e-02  2.3424e-03   9.7512e-01  yes
  5.0783e-02  5.0670e-02  5.0412e-02  5.0292e-02  1.9724e-03   9.8351e-01  yes
  5.0412e-02  5.0292e-02  5.0405e-02  5.0284e-02  9.2654e-05   8.7479e-01  yes
  5.0405e-02  5.0404e-02  5.0403e-02  5.0401e-02  8.3139e-05   9.9946e-01  yes
  5.0403e-02  5.0401e-02  -           -           2.2846e-06   -           -

 17. Significant reduction in cost if the ROM is only 10× faster than the HDM.
  $$\text{Cost} = n_{\text{HdmPrim}} + 0.5\, n_{\text{HdmAdj}} + \tau^{-1}\big(n_{\text{RomPrim}} + 0.5\, n_{\text{RomAdj}}\big)$$
  Figure: $|F(\mu) - F(\mu^*)|$ versus cost for a 5-level isotropic sparse grid, the dimension-adaptive sparse grid of [Kouri et al., 2014], and the proposed ROM/SG method for $\tau = 1, 10, 100, \infty$.



 31. Optimization beyond design/control: high-order shock resolution.
  Fundamental issue: interpolating a discontinuity with a polynomial basis.
  Existing solutions (limiting, adaptive refinement, artificial viscosity) usually result in order reduction or very fine discretizations.
  Proposed solution: align features of the solution basis with features of the solution, using an optimization formulation and solver.


 34. Shock tracking optimization formulation.
  • Consider the spatial discretization of the conservation law: $\nabla_X \cdot F(U; X) = 0 \;\rightarrow\; r(u; x) = 0$
  • $U$, $X$ are the continuous state vector and coordinates
  • $x$ contains the coordinates of the mesh nodes
  • $u$ contains the discrete state vector corresponding to $U$ at the mesh nodes
  • Shock tracking formulation:
  $$\min_{u,\, x} f(u, x) \quad \text{subject to } r(u; x) = 0$$
  Key assumption: the solution basis supports discontinuities along element edges, i.e., discontinuous Galerkin or finite volume methods.

 35. Shock tracking objective function.
  $$f(u; x) = f_{\text{shk}}(u; x) + \alpha f_{\text{msh}}(x)$$
  Requirements on the objective function: it obtains its minimum when a mesh edge is aligned with the shock and decreases monotonically to that minimum in a (large) neighborhood.
  $$f_{\text{shk}}(u, x) = \sum_{e=1}^{n_e} \int_{\Omega_e(x)} |u - \bar{u}|^2 \, dV, \qquad f_{\text{msh}}(x) = \sum_{e=1}^{n_e} \sum_{k} \left( \frac{\operatorname{tr}(G^T G)}{|\det G|} \right)^q$$
  Figure: the objective as an element edge is smoothly swept across the shock location, for $f_{\text{shk}}(u, x)$, a residual-based objective, and a Rankine-Hugoniot-based objective.
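The mesh-quality term can be illustrated per element: for an affine simplex map with Jacobian $G$, the ratio $\operatorname{tr}(G^T G)/|\det G|$ penalizes distortion. A minimal sketch, under the assumption of a unit right reference triangle (the talk does not specify its reference element):

```python
import numpy as np

def distortion(tri, q=2.0):
    """Per-element mesh-distortion term (tr(G^T G) / |det G|)^q, where G is
    the Jacobian of the affine map from an assumed unit right reference
    triangle to the physical triangle `tri` (three 2D vertices)."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in tri)
    G = np.column_stack([v1 - v0, v2 - v0])   # 2x2 affine Jacobian
    return (np.trace(G.T @ G) / abs(np.linalg.det(G))) ** q
```

An undistorted element maps with $G = I$ and gives the minimal value $2^q$; sliver elements drive $|\det G| \to 0$ and blow the term up, which is what keeps the optimizer from collapsing the mesh while it moves edges onto the shock.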

 36. Full space optimization solver for shock tracking. A nested approach to PDE optimization cannot be used here because it requires solving $r(u; x) = 0$ for $x \ne x^*$, which causes the solver to crash.
  • Full space approach: drive $u \to u^*$ and $x \to x^*$ simultaneously
  • Define the Lagrangian $\mathcal{L}(u, x, \lambda) = f(u; x) - \lambda^T r(u; x)$
  • First-order optimality (KKT) conditions for the full space optimization problem:
  $$\nabla_u \mathcal{L}(u^*, x^*, \lambda^*) = 0, \quad \nabla_x \mathcal{L}(u^*, x^*, \lambda^*) = 0, \quad \nabla_\lambda \mathcal{L}(u^*, x^*, \lambda^*) = 0$$
  • Apply a (quasi-)Newton method, usually with globalization such as a linesearch or trust region, to solve the nonlinear KKT system for $u^*, x^*, \lambda^*$
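The full-space (one-shot) idea can be shown on a hypothetical toy problem small enough to verify by hand: minimize $f(u, x) = (u-2)^2 + x^2$ subject to $r(u, x) = u - x - 1 = 0$, applying Newton's method to the KKT system of the Lagrangian.

```python
import numpy as np

# Full-space approach on a toy problem:
#   minimize (u - 2)^2 + x^2   subject to  u - x - 1 = 0
# Lagrangian L = f - lam * r; Newton on grad L = 0 in all of (u, x, lam).
def kkt_newton(u, x, lam, iters=20):
    for _ in range(iters):
        g = np.array([2.0 * (u - 2.0) - lam,   # dL/du
                      2.0 * x + lam,           # dL/dx
                      -(u - x - 1.0)])         # dL/dlam = -r
        # KKT matrix (constant here because the toy problem is quadratic)
        K = np.array([[ 2.0, 0.0, -1.0],
                      [ 0.0, 2.0,  1.0],
                      [-1.0, 1.0,  0.0]])
        step = np.linalg.solve(K, -g)
        u, x, lam = u + step[0], x + step[1], lam + step[2]
    return u, x, lam
```

Because the toy problem is quadratic with a linear constraint, one Newton step lands on the exact solution $(u^*, x^*, \lambda^*) = (1.5, 0.5, -1)$; in the shock-tracking setting the residual is nonlinear, hence the globalization mentioned on the slide.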

 37. Nozzle flow: quasi-1d Euler equations. Figure: nozzle geometry $A(x)$ and boundary conditions (inviscid wall, inflow, outflow).

 38. Resolution of 1d transonic flow with only 4 quartic elements. Figure: the solution $\rho(x)$ of the quasi-1d Euler equations using 300 linear elements and 4 quartic elements, on a mesh not aligned (left) and aligned (right) with the shock.

 39. Supersonic flow around a cylinder: 2d Euler equations. Figure: geometry and boundary conditions (inviscid wall/symmetry condition and farfield).

 40. Resolution of 2d supersonic flow with only 67 quadratic elements. Figure: the solution of the 2d Euler equations using 67 quadratic elements on a mesh not aligned with the shock (left), 67 linear elements on a mesh aligned with the shock (middle), and 67 quadratic elements on a mesh aligned with the shock (right).

 41. Convergence to the optimal solution and mesh.

 42. PDE-constrained optimization for design/control and beyond.
  • Globally high-order numerical method and adjoint-based gradient computations for efficient design and data assimilation: energetically optimal flapping, energy harvesting mechanisms, super-resolution MRI
  • Globally convergent multifidelity framework for PDE-constrained optimization under uncertainty: risk-averse flow control
  • Optimization-based shock tracking framework for highly resolved supersonic flows on extremely coarse meshes

 43. References I
  • Kouri, D. P., Heinkenschloss, M., Ridzal, D., and van Bloemen Waanders, B. G. (2013). A trust-region algorithm with adaptive stochastic collocation for PDE optimization under uncertainty. SIAM Journal on Scientific Computing, 35(4):A1847-A1879.
  • Kouri, D. P., Heinkenschloss, M., Ridzal, D., and van Bloemen Waanders, B. G. (2014). Inexact objective function evaluations in a trust-region algorithm for PDE-constrained optimization under uncertainty. SIAM Journal on Scientific Computing, 36(6):A3011-A3029.
  • Wang, J., Zahr, M. J., and Persson, P.-O. (2017). Energetically optimal flapping flight based on a fully discrete adjoint method with explicit treatment of flapping frequency. In Proc. of the 23rd AIAA Computational Fluid Dynamics Conference, Denver, Colorado. American Institute of Aeronautics and Astronautics.

 44. References II
  • Zahr, M. J., Huang, Z., and Persson, P.-O. (2018). Adjoint-based optimization of time-dependent fluid-structure systems using a high-order discontinuous Galerkin discretization. In AIAA Science and Technology Forum and Exposition (SciTech2018), Kissimmee, Florida. American Institute of Aeronautics and Astronautics.
  • Zahr, M. J. and Persson, P.-O. (2016). An adjoint method for a high-order discretization of deforming domain conservation laws for optimization of flow problems. Journal of Computational Physics.
  • Zahr, M. J., Persson, P.-O., and Wilkening, J. (2016). A fully discrete adjoint method for optimization of flow problems on deforming domains with time-periodicity constraints. Computers & Fluids.

 45. PDE optimization is ubiquitous in science and engineering. Control: drive a system to a desired state. Examples: boundary flow control; metamaterial cloaking (electromagnetic invisibility).

 46. High-order discretization of PDE-constrained optimization.
  • Continuous PDE-constrained optimization problem:
  $$\min_{U,\, \mu} \mathcal{J}(U, \mu) \quad \text{subject to } C(U, \mu) \le 0, \qquad \frac{\partial U}{\partial t} + \nabla \cdot F(U, \nabla U) = 0 \text{ in } v(\mu, t)$$
  • Fully discrete PDE-constrained optimization problem:
  $$\min_{\substack{u_0, \ldots, u_{N_t} \in \mathbb{R}^{N_u},\; k_{1,1}, \ldots, k_{N_t,s} \in \mathbb{R}^{N_u},\; \mu \in \mathbb{R}^{n_\mu}}} \mathcal{J}(u_0, \ldots, u_{N_t}, k_{1,1}, \ldots, k_{N_t,s}, \mu)$$
  subject to
  $$C(u_0, \ldots, u_{N_t}, k_{1,1}, \ldots, k_{N_t,s}, \mu) \le 0, \quad u_0 - g(\mu) = 0, \quad u_n - u_{n-1} - \sum_{i=1}^{s} b_i k_{n,i} = 0, \quad M k_{n,i} - \Delta t_n\, r(u_{n,i}, \mu, t_{n,i}) = 0$$

 47. Adjoint equation derivation: outline.
  • Define the auxiliary PDE-constrained optimization problem:
  $$\min_{u_0, \ldots, u_{N_t},\; k_{1,1}, \ldots, k_{N_t,s}} F(u_0, \ldots, u_{N_t}, k_{1,1}, \ldots, k_{N_t,s}, \mu)$$
  subject to
  $$R_0 = u_0 - g(\mu) = 0, \quad R_n = u_n - u_{n-1} - \sum_{i=1}^{s} b_i k_{n,i} = 0, \quad R_{n,i} = M k_{n,i} - \Delta t_n\, r(u_{n,i}, \mu, t_{n,i}) = 0$$
  • Define the Lagrangian:
  $$\mathcal{L}(u_n, k_{n,i}, \lambda_n, \kappa_{n,i}) = F - \lambda_0^T R_0 - \sum_{n=1}^{N_t} \lambda_n^T R_n - \sum_{n=1}^{N_t} \sum_{i=1}^{s} \kappa_{n,i}^T R_{n,i}$$
  • The solution of the optimization problem is given by the Karush-Kuhn-Tucker (KKT) system:
  $$\frac{\partial \mathcal{L}}{\partial u_n} = 0, \quad \frac{\partial \mathcal{L}}{\partial k_{n,i}} = 0, \quad \frac{\partial \mathcal{L}}{\partial \lambda_n} = 0, \quad \frac{\partial \mathcal{L}}{\partial \kappa_{n,i}} = 0$$
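The stationarity conditions above generate a backward-in-time adjoint recursion. A minimal, verifiable sketch of the same idea with forward Euler in place of the talk's Runge-Kutta stages (so the recursion has a single term per step), on the scalar model problem $\dot{u} = -\mu u$ with objective $F = \frac{1}{2} u_N^2$:

```python
import numpy as np

# Discrete-adjoint sketch (forward Euler instead of the talk's RK stages):
#   u_n = u_{n-1} + dt * r(u_{n-1}, mu),  with r(u, mu) = -mu * u
#   F   = 0.5 * u_N^2;  gradient dF/dmu via the adjoint recursion.
def objective_and_gradient(mu, u0=1.0, dt=0.01, N=100):
    u = np.empty(N + 1)
    u[0] = u0
    for n in range(1, N + 1):                  # forward sweep
        u[n] = u[n - 1] + dt * (-mu * u[n - 1])
    F = 0.5 * u[N] ** 2
    lam = u[N]                                 # lam_N = dF/du_N
    grad = 0.0
    for n in range(N, 0, -1):                  # reverse (adjoint) sweep
        grad += lam * dt * (-u[n - 1])         # dR_n/dmu contribution
        lam = lam * (1.0 - dt * mu)            # lam_{n-1} = (du_n/du_{n-1})^T lam_n
    return F, grad
```

One forward sweep plus one reverse sweep yields the exact gradient of the fully discrete objective, independent of the number of parameters; this is the property the talk exploits for design and data assimilation.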

 48. Extension: constraint requiring time-periodicity [Zahr et al., 2016].
  • Optimization of cyclic problems requires finding a time-periodic solution of the PDE
  • Necessary for physical relevance and to avoid transients that may cause the solver to crash
  $$\min_{U,\, \mu} F(U, \mu) \quad \text{subject to } U(x, 0) = U(x, T), \qquad \frac{\partial U}{\partial t} + \nabla \cdot F(U, \nabla U) = 0$$
  Adjoint equations, which inherit the periodicity structure:
  $$\lambda_{N_t} = \lambda_0 + \frac{\partial F}{\partial u_{N_t}}^T, \qquad \lambda_{n-1} = \lambda_n + \frac{\partial F}{\partial u_{n-1}}^T + \sum_{i=1}^{s} \Delta t_n \frac{\partial r_{n,i}}{\partial u}^T \kappa_{n,i}, \qquad M^T \kappa_{n,i} = \frac{\partial F}{\partial k_{n,i}}^T + b_i \lambda_n + \sum_{j=i}^{s} a_{ji}\, \Delta t_n \frac{\partial r_{n,j}}{\partial u}^T \kappa_{n,j}$$
  Figure: time history of the power on an airfoil for a flow initialized from steady state and from a time-periodic solution.
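The periodicity constraint $U(x, 0) = U(x, T)$ amounts to finding an initial state that the time integrator maps back to itself. A scalar shooting sketch (forward Euler on a hypothetical forced ODE; the talk uses implicit RK with adjoints, so this only illustrates the constraint, not the method):

```python
import numpy as np

# Find u0 with u(T; u0) = u0 for the forced ODE u' = -u + cos(t),
# advanced by forward Euler over one period T = 2*pi.
def propagate(u0, T=2.0 * np.pi, N=2000):
    dt = T / N
    u = u0
    for n in range(N):
        u = u + dt * (-u + np.cos(n * dt))
    return u

# The map u0 -> propagate(u0) is affine for this linear ODE, so two
# evaluations recover propagate(u0) = a*u0 + b and the fixed point exactly.
g0, g1 = propagate(0.0), propagate(1.0)
a, b = g1 - g0, g0
u0_periodic = b / (1.0 - a)          # solves a*u0 + b = u0
```

Starting the optimization from this periodic state, rather than an arbitrary one, is what removes the transients shown in the slide's power history.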

 49. Energetically optimal flapping vs. required thrust: quantities of interest. Figure: the optimal flapping energy ($W^*$), frequency ($f^*$), maximum heaving amplitude ($y^*_{\max}$), and maximum pitching amplitude ($\theta^*_{\max}$) as functions of the thrust constraint $\bar{T}_x$.

 50. Extension: multiphysics problems [Zahr et al., 2018].
  • For problems involving the interaction of multiple types of physical phenomena, no changes are required if the monolithic system is considered:
  $$M_0 \dot{u}_0 = r_0(u_0, c_0(u_0, u_1)), \qquad M_1 \dot{u}_1 = r_1(u_1, c_1(u_0, u_1))$$
  • However, to solve in a partitioned manner while retaining high order, split as follows and apply an implicit-explicit Runge-Kutta scheme:
  $$M_0 \dot{u}_0 = r_0(u_0, c_0(u_0, u_1)), \qquad M_1 \dot{u}_1 = r_1(u_1, \tilde{c}_1) + \big( r_1(u_1, c_1(u_0, u_1)) - r_1(u_1, \tilde{c}_1) \big)$$
  • The adjoint equations inherit the implicit-explicit structure

 51. Optimal energy harvesting from a foil-damper system. Goal: maximize the energy harvested from the foil-damper system:
  $$\max_{\mu} \frac{1}{T} \int_0^T \left( c\, \dot{h}(u_s)^2 - M_z(u_f)\, \dot{\theta}(\mu, t) \right) dt$$
  • Fluid: isentropic Navier-Stokes on a deforming domain (ALE)
  • Structure: force balance in the $y$-direction between foil and damper
  • Motion driven by the imposed pitch $\theta(\mu, t) = \mu_1 \cos(2\pi f t)$; the optimum is $\mu_1^* \approx 45^\circ$

 52. MRI data assimilation formulation.
  • $d^*_{i,n}$: MRI measurement taken in voxel $i$ at the $n$-th time sample
  • $d_{i,n}(U, \mu)$: computational representation of $d^*_{i,n}$:
  $$d_{i,n}(U, \mu) = \int_0^T \int_V w_{i,n}(x, t) \cdot U(x, t)\, dV\, dt, \qquad w_{i,n}(x, t) = \chi_s(x; x_i, \Delta x)\, \chi_t(t; t_n, \Delta t)$$
  $$\chi_t(s; c, w) = \frac{1}{1 + e^{-(s - (c - 0.5w))/\sigma}} - \frac{1}{1 + e^{-(s - (c + 0.5w))/\sigma}}, \qquad \chi_s(x; c, w) = \chi_t(x_1; c_1, w_1)\, \chi_t(x_2; c_2, w_2)\, \chi_t(x_3; c_3, w_3)$$
  • $x_i$: center of the $i$-th MRI voxel; $t_n$: time instance of the $n$-th MRI sample; $\Delta x$: size of the MRI voxel in each dimension; $\Delta t$: sampling interval in time
  • Data assimilation problem:
  $$\min_{U,\, \mu} \sum_{i=1}^{n_{xyz}} \sum_{n=1}^{n_t} \frac{\alpha_{i,n}}{2} \big( d_{i,n}(U, \mu) - d^*_{i,n} \big)^2$$
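The window $\chi_t$ above is just a difference of two logistic sigmoids, which smoothly approximates the indicator of the interval $[c - w/2,\, c + w/2]$. A direct transcription (the smoothing width $\sigma$ is not given on the slide, so its value here is an assumed placeholder):

```python
import numpy as np

def chi_t(s, c, w, sigma=0.05):
    """Smoothed indicator of the window [c - w/2, c + w/2], built as the
    difference of two logistic sigmoids; sigma controls the edge smoothing
    (its value is an assumption, not taken from the talk)."""
    left = 1.0 / (1.0 + np.exp(-(s - (c - 0.5 * w)) / sigma))
    right = 1.0 / (1.0 + np.exp(-(s - (c + 0.5 * w)) / sigma))
    return left - right
```

For $w \gg \sigma$ the window is close to 1 at its center and close to 0 well outside it, yet remains differentiable everywhere, which is what makes the voxel/time averaging usable inside a gradient-based assimilation loop.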

 53. Coarse MRI grid (24 × 36), 20 time samples, 3% noise. Figure: reconstructed flow; synthetic MRI data $d^*_{i,n}$ (top) and the computational representation of the MRI data $d_{i,n}$ (bottom).

 54. Fine MRI grid (40 × 60), 20 time samples, 3% noise. Figure: reconstructed flow; synthetic MRI data $d^*_{i,n}$ (top) and the computational representation of the MRI data $d_{i,n}$ (bottom).

 55. Trust region framework for optimization with ROMs. Figure: illustration of the trust region iterations in µ-space.


 64. Trust region ingredients for global convergence.
  $$\min_{\mu \in \mathbb{R}^{n_\mu}} m_k(\mu) \;\text{ subject to } \|\mu - \mu_k\| \le \Delta_k \quad \longrightarrow \quad \min_{\mu \in \mathbb{R}^{n_\mu}} F(\mu)$$
  Approximation models: $m_k(\mu)$, $\psi_k(\mu)$
  Error indicators:
  $$\|\nabla F(\mu) - \nabla m_k(\mu)\| \le \xi \varphi_k(\mu), \;\; \xi > 0; \qquad |F(\mu_k) - F(\mu) + \psi_k(\mu) - \psi_k(\mu_k)| \le \sigma \theta_k(\mu), \;\; \sigma > 0$$
  Adaptivity:
  $$\varphi_k(\mu_k) \le \kappa_\varphi \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}, \qquad \theta_k(\hat{\mu}_k)^\omega \le \eta \min\{m_k(\mu_k) - m_k(\hat{\mu}_k),\, r_k\}$$

 65. Trust region method with inexact gradients and objective.
  1: Model update: choose a model $m_k$ and error indicator $\varphi_k$ such that $\varphi_k(\mu_k) \le \kappa_\varphi \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}$.
  2: Step computation: approximately solve the trust region subproblem $\hat{\mu}_k = \arg\min_{\mu \in \mathbb{R}^{n_\mu}} m_k(\mu)$ subject to $\|\mu - \mu_k\| \le \Delta_k$.
  3: Step acceptance: compute the approximation of the actual-to-predicted reduction $\rho_k = \dfrac{\psi_k(\mu_k) - \psi_k(\hat{\mu}_k)}{m_k(\mu_k) - m_k(\hat{\mu}_k)}$; if $\rho_k \ge \eta_1$ then $\mu_{k+1} = \hat{\mu}_k$, else $\mu_{k+1} = \mu_k$.
  4: Trust region update: if $\rho_k \le \eta_1$ then $\Delta_{k+1} \in (0, \gamma \|\hat{\mu}_k - \mu_k\|]$; if $\rho_k \in (\eta_1, \eta_2)$ then $\Delta_{k+1} \in [\gamma \|\hat{\mu}_k - \mu_k\|, \Delta_k]$; if $\rho_k \ge \eta_2$ then $\Delta_{k+1} \in [\Delta_k, \Delta_{\max}]$.

 66. Final requirement for convergence: adaptivity. With the approximation model $m_k(\mu)$ and gradient error indicator $\varphi_k(\mu)$,
  $$m_k(\mu) = \mathbb{E}_{\mathcal{I}_k}[J(\Phi_k u_r(\mu, \cdot), \mu, \cdot)], \qquad \varphi_k(\mu) = \alpha_1 E_1(\mu; \mathcal{I}_k, \Phi_k) + \alpha_2 E_2(\mu; \mathcal{I}_k, \Phi_k) + \alpha_3 E_4(\mu; \mathcal{I}_k, \Phi_k),$$
  the sparse grid $\mathcal{I}_k$ and reduced-order basis $\Phi_k$ must be constructed such that the gradient condition holds:
  $$\varphi_k(\mu_k) \le \kappa_\varphi \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}$$
  Define a dimension-adaptive greedy method targeting each source of error so that the stronger conditions hold:
  $$E_1(\mu_k; \mathcal{I}, \Phi) \le \frac{\kappa_\varphi}{3\alpha_1} \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}, \quad E_2(\mu_k; \mathcal{I}, \Phi) \le \frac{\kappa_\varphi}{3\alpha_2} \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}, \quad E_4(\mu_k; \mathcal{I}, \Phi) \le \frac{\kappa_\varphi}{3\alpha_3} \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}$$

 67. Backward facing step: minimize recirculation. Figure: geometry and boundary conditions for the backward facing step (viscous wall, parametrized inflow(µ), stochastic inflow(ξ), outflow); the vorticity magnitude is minimized in the shaded region.

 68. Figure: mean vorticity corresponding to no inflow (left) and the optimal inflow (right) along the parametrized boundary.

 69. Significant reduction in cost if the ROM is only 10× faster than the HDM.
  $$\text{Cost} = n_{\text{HdmPrim}} + 0.5\, n_{\text{HdmAdj}} + \tau^{-1}\big(n_{\text{RomPrim}} + 0.5\, n_{\text{RomAdj}}\big)$$
  Figure: $|F(\mu) - F(\mu^*)|$ versus cost for a 5-level isotropic sparse grid, the dimension-adaptive sparse grid of [Kouri et al., 2014], and the proposed ROM/SG method for $\tau = 1, 10, 100, \infty$.

