Accelerating PDE-Constrained Optimization using Progressively-Constructed Reduced-Order Models
Matthew J. Zahr and Charbel Farhat

Outline: Motivation, PDE-Constrained Optimization, Reduced-Order Models, ROM-Constrained Optimization, Numerical Experiments, Conclusion, References


  1. Reduced-Order Models (ROMs): ROMs and Exascale
  - Very similar goals: enable computational analysis, design, UQ, and control of highly complex systems not feasible with existing tools/technology; use computational tools to solve relevant scientific and engineering problems
  - Pursued with opposite approaches:
    - ROMs: systematic dimensionality reduction, while preserving fidelity, to drastically reduce the cost of simulation
    - Exascale: leverage O(10^18) FLOPS to enable direct simulation of high-fidelity systems
  - Not mutually exclusive!

  2. Reduced-Order Models (ROMs): ROMs as Enabling Technology
  - Many-query analyses:
    - Optimization: design, control (single-objective, single-point; multiobjective, multi-point)
    - Uncertainty quantification
    - Optimization under uncertainty
  - Real-time analysis: Model Predictive Control (MPC)
  - Figure: Flapping wing (Persson et al., 2012)

  3. Application I: Compressible, Turbulent Flow over a Vehicle
  - Benchmark in the automotive industry: the Ahmed body
  - Mesh: 2,890,434 vertices; 17,017,090 tetrahedra; 17,342,604 DOF
  - CFD: compressible Navier-Stokes, DES + wall functions
  - Single forward simulation ≈ 0.5 day on 512 cores
  - Desired: shape optimization with unsteady effects; minimize average drag
  - Figures: (a) Ahmed body geometry (Ahmed et al., 1984); (b) Ahmed body mesh (Carlberg et al., 2011)

  4. Application II: Turbulent Flow over a Flapping Wing
  - Biologically-inspired flight; micro aerial vehicles
  - CFD: compressible Navier-Stokes, discontinuous Galerkin
  - Mesh: 43,000 vertices; 231,000 tetrahedra (p = 3); 2,310,000 DOF
  - Desired: shape + control optimization with unsteady effects; maximize thrust
  - Figure: Flapping wing (Persson et al., 2012)

  5. Outline
  1 Motivation
  2 PDE-Constrained Optimization
  3 Reduced-Order Models: Construction of Bases; Speedup Potential
  4 ROM-Constrained Optimization: Reduced Sensitivities; Training
  5 Numerical Experiments: Rocket Nozzle Design; Airfoil Design
  6 Conclusion: Overview; Outlook; Future Work

  6-11. Hierarchy of PDE-Constrained Optimization (progressive build across six slides)
  - Static PDE, static parameter: w(µ)
  - Dynamic PDE, static parameter: w(µ, t)
  - Dynamic PDE, dynamic parameter: w(µ(t), t)
  - Complexity, difficulty, and CPU hours increase along the hierarchy

  12. Problem Formulation
  Goal: rapidly solve PDE-constrained optimization problems of the form (discretize-then-optimize)

    minimize_{w ∈ R^N, µ ∈ R^p}  f(w, µ)
    subject to  R(w, µ) = 0

  where R : R^N × R^p → R^N is the discretized (steady, nonlinear) PDE, w is the PDE state vector, µ is the vector of parameters, and N is assumed to be very large.

  13. Two Approaches (Gunzburger, 2003), (Hinze et al., 2009)
  Simultaneous Analysis and Design (SAND):

    minimize_{w ∈ R^N, µ ∈ R^p} f(w, µ)  subject to  R(w, µ) = 0

  - Treat state and parameters as optimization variables
  Nested Analysis and Design (NAND):

    minimize_{µ ∈ R^p} f(w(µ), µ),  with w = w(µ) defined through R(w, µ) = 0

  - Treat the parameters as the only optimization variables
  - Enforce the nonlinear equality constraint at every iteration

  14-16. Sensitivity Derivation (progressive build)
  Consider some functional F(w(µ), µ) to be differentiated (e.g., an objective function or a constraint):

    dF/dµ = ∂F/∂µ + (∂F/∂w)(∂w/∂µ)

  Since R(w(µ), µ) = 0 for all µ,

    dR/dµ = ∂R/∂µ + (∂R/∂w)(∂w/∂µ) = 0   ⇒   ∂w/∂µ = -(∂R/∂w)^{-1} ∂R/∂µ

  Gradient of the functional:

    dF/dµ = ∂F/∂µ - (∂F/∂w)(∂R/∂w)^{-1}(∂R/∂µ) = ∂F/∂µ - ((∂R/∂w)^{-T}(∂F/∂w)^T)^T ∂R/∂µ

  (The second form groups the factors around the adjoint solve (∂R/∂w)^{-T}(∂F/∂w)^T.)
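  To make the two routes to the gradient concrete, here is a minimal numerical sketch in Python. The residual R, objective f, and all problem data are hypothetical stand-ins, not from the talk; the point is only that the direct (sensitivity) and adjoint evaluations of dF/dµ agree.

```python
import numpy as np

# Toy "discretized PDE": R(w, mu) = A w + mu[0] w^3 - mu[1] 1  (hypothetical).
N, p = 4, 2
rng = np.random.default_rng(0)
A = np.eye(N) * 2.0 + 0.1 * rng.standard_normal((N, N))

def R(w, mu):                       # residual of the toy nonlinear PDE
    return A @ w + mu[0] * w**3 - np.full(N, mu[1])

def dR_dw(w, mu):                   # Jacobian dR/dw (N x N)
    return A + 3.0 * mu[0] * np.diag(w**2)

def dR_dmu(w, mu):                  # dR/dmu (N x p)
    return np.column_stack([w**3, -np.ones(N)])

def solve_state(mu, tol=1e-12):     # Newton solve of R(w, mu) = 0
    w = np.zeros(N)
    for _ in range(50):
        r = R(w, mu)
        if np.linalg.norm(r) < tol:
            break
        w -= np.linalg.solve(dR_dw(w, mu), r)
    return w

mu = np.array([0.5, 1.0])
w = solve_state(mu)                 # objective: f(w, mu) = 0.5 ||w||^2, so df/dw = w^T

# Direct (sensitivity) approach: dw/dmu = -(dR/dw)^{-1} dR/dmu, then the chain rule.
dw_dmu = -np.linalg.solve(dR_dw(w, mu), dR_dmu(w, mu))
grad_direct = w @ dw_dmu

# Adjoint approach: solve (dR/dw)^T lam = (df/dw)^T, then df/dmu = -lam^T dR/dmu.
lam = np.linalg.solve(dR_dw(w, mu).T, w)
grad_adjoint = -lam @ dR_dmu(w, mu)

print(np.allclose(grad_direct, grad_adjoint))   # the two formulas agree
```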

  17. Summary: NAND Formulation, Sensitivity Approach
  Nested Analysis and Design (NAND):

    minimize_{µ ∈ R^p} f(w(µ), µ),  with w = w(µ) through R(w, µ) = 0

  Gradient of the objective function (sensitivity approach):

    df/dµ(w(µ), µ) = ∂f/∂µ + (∂f/∂w)(∂w/∂µ),  where ∂w/∂µ = -(∂R/∂w)^{-1} ∂R/∂µ follows from dR/dµ = ∂R/∂µ + (∂R/∂w)(∂w/∂µ) = 0

  18. Outline (section transition to Reduced-Order Models: Construction of Bases; Speedup Potential)

  19-21. Reduced-Order Model (progressive build)
  Model Order Reduction (MOR) assumption: the state vector lies in a low-dimensional affine subspace,

    w ≈ w_r = w̄ + Φy   ⇒   ∂w/∂µ ≈ ∂w_r/∂µ = Φ ∂y/∂µ

  where y ∈ R^n are the reduced coordinates of w_r in the basis Φ ∈ R^{N×n}, and n ≪ N.
  Substitute the assumption into the High-Dimensional Model (HDM), R(w, µ) = 0:

    R(w̄ + Φy, µ) ≈ 0

  Require the projection of the residual onto a low-dimensional left subspace, with basis Ψ ∈ R^{N×n}, to be zero:

    R_r(y, µ) = Ψ^T R(w̄ + Φy, µ) = 0
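  A minimal sketch of how this reduced system could be solved, assuming a generic Petrov-Galerkin pair (Φ, Ψ) and reusing HDM callbacks such as the toy R and dR_dw from the sensitivity sketch above. The Newton iteration runs in the n-dimensional reduced coordinates y, although it still evaluates the full residual (the bottleneck discussed later).

```python
import numpy as np

def rom_solve(R, dR_dw, Phi, Psi, w_bar, mu, iters=20, tol=1e-10):
    """Sketch of a Petrov-Galerkin ROM solve: Newton on the n-dimensional
    system R_r(y, mu) = Psi^T R(w_bar + Phi y, mu) = 0."""
    y = np.zeros(Phi.shape[1])
    for _ in range(iters):
        w = w_bar + Phi @ y
        Rr = Psi.T @ R(w, mu)                    # reduced residual (n,)
        if np.linalg.norm(Rr) < tol:
            break
        Jr = Psi.T @ dR_dw(w, mu) @ Phi          # reduced Jacobian (n, n)
        y -= np.linalg.solve(Jr, Rr)
    return y
```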

  22. Reduced Optimization Problem: Reduce-then-Optimize [1]
  ROM-Constrained Optimization, NAND formulation:

    minimize_{µ ∈ R^p} f(w̄ + Φy(µ), µ),  with y = y(µ) through Ψ^T R(w̄ + Φy, µ) = 0

  Issues that must be considered: construction of bases; speedup potential; reduced sensitivity derivation; training
  [1] (Manzoni, 2012)

  23. Definition of Φ: Proper Orthogonal Decomposition [3]
  Recall the MOR assumption: w - w̄ ≈ Φy ⇒ ∂w/∂µ ≈ Φ ∂y/∂µ
  Implication: we desire {w(µ) - w̄} ∪ {∂w/∂µ(µ)} ⊆ range(Φ), so include translated state vectors and sensitivities as snapshots. Previous work has considered sensitivity snapshots [2].
  [2] (Carlberg and Farhat, 2008), (Hay et al., 2009), (Carlberg and Farhat, 2011)
  [3] (Sirovich, 1987)

  24-26. Definition of Φ: State-Sensitivity POD [4] (progressive build)
  Recall the MOR assumption: w - w̄ ≈ Φy ⇒ ∂w/∂µ ≈ Φ ∂y/∂µ
  Collect state and sensitivity snapshots by sampling the HDM:

    X = [w(µ_1) - w̄, w(µ_2) - w̄, ..., w(µ_n) - w̄]
    Y = [∂w/∂µ(µ_1), ∂w/∂µ(µ_2), ..., ∂w/∂µ(µ_n)]

  Use Proper Orthogonal Decomposition to generate a reduced basis from each individually:

    Φ_X = POD(X),   Φ_Y = POD(Y)

  Concatenate to get the reduced-order basis (ROB):

    Φ = [Φ_X  Φ_Y]

  [4] (Washabaugh and Farhat, 2013), (Zahr and Farhat, 2014)
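  A possible realization of state-sensitivity POD via the thin SVD. The snapshot matrices here are random placeholders standing in for HDM data, and the tolerance-based truncation rule is an assumption (the talk does not specify one).

```python
import numpy as np

def pod(S, tol=1e-8):
    """POD basis of snapshot matrix S via thin SVD; keep modes whose
    singular values exceed a relative tolerance."""
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k]

# State-sensitivity POD: separate bases from state and sensitivity
# snapshots, then concatenate into one reduced-order basis.
# X: columns w(mu_i) - w_bar; Y: columns dw/dmu(mu_i) (placeholder data).
N, n_samples = 100, 5
rng = np.random.default_rng(1)
X = rng.standard_normal((N, n_samples))
Y = rng.standard_normal((N, n_samples))
Phi = np.hstack([pod(X), pod(Y)])
Phi, _ = np.linalg.qr(Phi)   # re-orthogonalize the concatenated basis
```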

  27-30. Definition of Ψ: Minimum-Residual ROM (progressive build)
  ROM governing equation: R_r(y, µ) ≡ Ψ^T R(w̄ + Φy, µ) = 0
  Standard options for the choice of left basis Ψ:
  - Ψ = Φ ⇒ Galerkin
  - Ψ = (∂R/∂w)Φ ⇒ Least-Squares Petrov-Galerkin (LSPG) [5], [6]
  Minimum-residual property: a ROM possesses the minimum-residual property if R_r(y, µ) = 0 is equivalent to the optimality condition of (Θ ≻ 0)

    minimize_{y ∈ R^n} ||R(w̄ + Φy, µ)||_Θ

  LSPG possesses the minimum-residual property [6]. Implications:
  - Recovers the exact solution when the basis is not truncated (consistency [6])
  - Monotonic improvement of the solution as the basis size increases
  - Ensures sensitivity information in Φ cannot degrade the state approximation [7]
  [5] (Bui-Thanh et al., 2008)  [6] (Carlberg et al., 2011)  [7] (Fahl, 2001)
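  Since LSPG is characterized by the minimum-residual property, one way to implement it is as a Gauss-Newton iteration on min_y ||R(w̄ + Φy, µ)||_2 (i.e., Θ = I). This sketch assumes the same toy R and dR_dw callbacks as in the sensitivity example above.

```python
import numpy as np

def lspg_solve(R, dR_dw, Phi, w_bar, mu, y0, iters=20):
    """Sketch of an LSPG ROM solve: minimize ||R(w_bar + Phi y, mu)||_2
    by Gauss-Newton, equivalent to the Psi = (dR/dw) Phi Petrov-Galerkin
    projection at convergence."""
    y = y0.copy()
    for _ in range(iters):
        w = w_bar + Phi @ y
        J = dR_dw(w, mu) @ Phi                        # N x n Jacobian wrt y
        dy, *_ = np.linalg.lstsq(J, -R(w, mu), rcond=None)
        y += dy
        if np.linalg.norm(dy) < 1e-10:
            break
    return y
```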

  31-38. Nonlinear ROM Bottleneck (figure build: matrix-dimension cartoons)
  Although the reduced system and its Jacobian,

    R_r(y, µ) = Ψ^T R(w̄ + Φy, µ) = 0
    ∂R_r/∂y (y, µ) = Ψ^T (∂R/∂w)(w̄ + Φy, µ) Φ

  are n-dimensional, evaluating them still requires forming the full N-dimensional residual R and Jacobian ∂R/∂w and projecting them at every iteration, so the cost of a nonlinear ROM solve still scales with N.

  39. Hyperreduction
  Several different forms of hyperreduction exist to alleviate the bottleneck caused by the nonlinear terms:
  - If the nonlinearity is polynomial, precompute tensorial coefficients
  - Linearize (or "polynomialize") about specific points in state space [8]
  - Gappy POD to reconstruct the nonlinear residual from a few entries [9]
  - Empirical Interpolation Method (EIM) [10]
  - Discrete Empirical Interpolation Method (DEIM) [11]
  - Gauss-Newton with Approximated Tensors (GNAT) [12]
  [8] (Rewienski, 2003)  [9] (Everson and Sirovich, 1995)  [10] (Barrault et al., 2004)  [11] (Chaturantabut and Sorensen, 2010)  [12] (Carlberg et al., 2011), (Carlberg et al., 2013)

  40-43. Hyperreduction: Gappy POD [13] (progressive build)
  Assume the nonlinear terms (residual/Jacobian) lie in a low-dimensional subspace:

    R(w, µ) ≈ Φ_R r(w, µ)

  where Φ_R ∈ R^{N×n_R} and r : R^N × R^p → R^{n_R} are the reduced coordinates; n_R ≪ N.
  Determine r by solving the gappy least-squares problem

    r(w, µ) = argmin_{a ∈ R^{n_R}} ||Z^T Φ_R a - Z^T R(w, µ)||

  where Z is a restriction operator (it selects a few sampled entries). Analytical solution:

    r(w, µ) = (Z^T Φ_R)† Z^T R(w, µ)

  Hyperreduced model:

    R_g(y, µ) = Ψ^T Φ_R (Z^T Φ_R)† Z^T R(w̄ + Φy, µ) = 0

  [13] (Everson and Sirovich, 1995), (Chaturantabut and Sorensen, 2010), (Carlberg et al., 2011)
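  A small self-contained sketch of the gappy reconstruction r = (Z^T Φ_R)† Z^T R. The residual basis Phi_R and the sample indices idx (the rows kept by Z^T) are synthetic assumptions, and the test residual is constructed to lie in range(Φ_R), so the reconstruction is exact here.

```python
import numpy as np

N, n_R, n_s = 200, 8, 20
rng = np.random.default_rng(2)
Phi_R = np.linalg.qr(rng.standard_normal((N, n_R)))[0]   # residual basis
idx = rng.choice(N, size=n_s, replace=False)             # rows kept by Z^T

R_full = Phi_R @ rng.standard_normal(n_R)                # residual in range(Phi_R)
# r = (Z^T Phi_R)^dagger Z^T R : least squares on the sampled rows only
r, *_ = np.linalg.lstsq(Phi_R[idx], R_full[idx], rcond=None)
R_approx = Phi_R @ r
print(np.allclose(R_approx, R_full))                     # exact, since R in range(Phi_R)
```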

  44-45. Gappy POD in Practice: Euler Vortex (figures only; no text content on these slides)

  46. Gappy POD in Practice: Ahmed Body
  Figures: (a) 253 sample nodes; (b) 378 sample nodes; (c) 505 sample nodes

  47. Bottleneck Alleviation
  Using the gappy POD approximation, the hyperreduced governing equations are

    R_h(y, µ) = Ψ^T Φ_R (Z^T Φ_R)† Z^T R(w̄ + Φy, µ) = 0

  where E = Ψ^T Φ_R (Z^T Φ_R)† can be precomputed offline, so that online R_h = E Z^T R.
  - The size of the hyperreduced system scales independently of the large dimension N
  - Amenable to online or deployed computations
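  Continuing the gappy POD sketch above, the offline/online split might look as follows; Psi here is a hypothetical left basis of matching size.

```python
# Offline (once): E = Psi^T Phi_R (Z^T Phi_R)^dagger, a small dense matrix.
Psi = np.linalg.qr(rng.standard_normal((N, n_R)))[0]     # hypothetical left basis
E = Psi.T @ Phi_R @ np.linalg.pinv(Phi_R[idx])

# Online (every iteration): evaluate only the sampled residual entries.
R_sampled = R_full[idx]          # in practice, assemble just these rows
R_h = E @ R_sampled              # cost independent of the full dimension N
```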

  48. Outline (section transition to ROM-Constrained Optimization: Reduced Sensitivities; Training)

  49. Reduced Optimization Problem
  ROM-Constrained Optimization, NAND formulation:

    minimize_{µ ∈ R^p} f(w̄ + Φy(µ), µ),  with y = y(µ) through r(y, µ) = 0

  - ROM only: r(y, µ) = Ψ^T R(w̄ + Φy, µ)
  - ROM + hyperreduction: r(y, µ) = Ψ^T Φ_R (Z^T Φ_R)† Z^T R(w̄ + Φy, µ)
  Issues that must be considered: construction of bases; speedup potential; reduced sensitivity derivation; training

  50. Gradient of the Reduced Objective Function
  Recall the MOR assumption: w_r = w̄ + Φy ⇒ ∂w_r/∂µ = Φ ∂y/∂µ
  For gradient-based optimization, the gradient of the reduced objective function is required:

    df/dµ(w̄ + Φy(µ), µ) = ∂f/∂µ + ∂f/∂(w̄ + Φy) · ∂(w̄ + Φy)/∂y · ∂y/∂µ
                         = ∂f/∂µ + (∂f/∂w_r) Φ ∂y/∂µ

  Compare the HDM gradient: df/dµ(w(µ), µ) = ∂f/∂µ + (∂f/∂w)(∂w/∂µ)

  51-54. Sensitivities (progressive build)
  HDM sensitivities:

    R(w(µ), µ) = 0 ⇒ ∂R/∂µ + (∂R/∂w)(∂w/∂µ) = 0 ⇒ ∂w/∂µ = -(∂R/∂w)^{-1} ∂R/∂µ

  ROM sensitivities. Recall R_r(y(µ), µ) = Ψ^T R(w̄ + Φy(µ), µ) and w_r = w̄ + Φy:

    R_r(y(µ), µ) = 0 ⇒ ∂R_r/∂µ + (∂R_r/∂y)(∂y/∂µ) = 0 ⇒ ∂w_r/∂µ = Φ ∂y/∂µ = Φ A^{-1} B

  where, accounting for the dependence of Ψ on w and µ,

    A = Σ_{j=1}^N R_j ∂(Ψ^T e_j)/∂w Φ + Ψ^T (∂R/∂w) Φ
    B = -( Σ_{j=1}^N R_j ∂(Ψ^T e_j)/∂µ + Ψ^T ∂R/∂µ )
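  For a parameter-independent left basis (e.g., Galerkin, Ψ = Φ), the basis-derivative terms in A and B vanish and the ROM sensitivities reduce to a small linear solve. A sketch under that assumption, with the Jacobians passed in as dense arrays:

```python
import numpy as np

def rom_sensitivity(dRdw, dRdmu, Phi, Psi):
    """ROM sensitivities for constant Psi:
    dy/dmu = -(Psi^T dR/dw Phi)^{-1} Psi^T dR/dmu,  dw_r/dmu = Phi dy/dmu."""
    A = Psi.T @ dRdw @ Phi          # n x n reduced Jacobian
    B = -(Psi.T @ dRdmu)            # n x p reduced parameter Jacobian
    dy_dmu = np.linalg.solve(A, B)
    return Phi @ dy_dmu             # reduced approximation of dw/dmu (N x p)
```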

  55-58. Minimum-Error Reduced Sensitivities (progressive build)
  ROM sensitivities:
  - May not represent the HDM sensitivities well
  - May be difficult to compute if Ψ = Ψ(µ); for LSPG, Ψ = (∂R/∂w)Φ, so ∂(Ψ^T e_j)/∂w and ∂(Ψ^T e_j)/∂µ involve the second derivatives ∂²R/∂w∂w and ∂²R/∂w∂µ
  Instead, define the quantity that minimizes the sensitivity error in some norm Θ ≻ 0:

    ∂ŷ/∂µ = argmin_a ||∂w/∂µ - Φa||_Θ   ⇒   ∂ŷ/∂µ = -(Θ^{1/2} Φ)† Θ^{1/2} (∂R/∂w)^{-1} ∂R/∂µ

  59-60. Minimum-Error Reduced Sensitivities (continued)
  Similar in spirit to the derivation of LSPG, select Θ^{1/2} = ∂R/∂w:

    ∂ŷ/∂µ = -((∂R/∂w) Φ)† ∂R/∂µ

  Instead of the true objective gradient

    df_r/dµ(w_r(µ), µ) = ∂f_r/∂µ + (∂f_r/∂w) Φ ∂y/∂µ

  use ∂ŷ/∂µ as a surrogate for ∂y/∂µ:

    d̂f_r/dµ(w_r, µ) = ∂f_r/∂µ + (∂f_r/∂w) Φ ∂ŷ/∂µ
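  With the choice Θ^{1/2} = ∂R/∂w, the minimum-error sensitivity is an ordinary least-squares problem and requires no second derivatives of R. A minimal sketch, with dRdw, dRdmu, and Phi assumed given as dense arrays:

```python
import numpy as np

def min_error_sensitivity(dRdw, dRdmu, Phi):
    """Minimum-error reduced sensitivity with Theta^{1/2} = dR/dw:
    dy/dmu_hat = -pinv(dR/dw @ Phi) @ dR/dmu (computed via lstsq)."""
    dy_dmu_hat, *_ = np.linalg.lstsq(dRdw @ Phi, -dRdmu, rcond=None)
    return dy_dmu_hat               # n x p; dw_r/dmu_hat = Phi @ dy_dmu_hat
```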

  61-62. Minimum-Error Reduced Sensitivities: Properties

    ∂ŷ/∂µ = -((∂R/∂w) Φ)† ∂R/∂µ,   ∂ŵ_r/∂µ = Φ ∂ŷ/∂µ

  Advantages:
  - The error between HDM and ROM sensitivities decreases monotonically as vectors are added to Φ
  - If ∂w/∂µ ⊂ range(Φ), the exact sensitivities are recovered: ∂ŵ_r/∂µ = ∂w/∂µ
  - If the sensitivity basis is not truncated, exact derivatives are recovered at the training points
  Disadvantages:
  - In general, ∂ŷ/∂µ ≠ ∂y/∂µ ⇒ d̂f_r/dµ ≠ df_r/dµ
  - Convergence issues for the reduced optimization problem

  63. Minimum-Error Reduced Sensitivities and LSPG
  ROM sensitivities: ∂w_r/∂µ = Φ ∂y/∂µ = Φ A^{-1} B, with A and B as on slide 54.
  For the LSPG ROM:
  - ∂ŷ/∂µ equals ∂y/∂µ with the second-derivative terms dropped
  - ||R|| → 0 ⇒ ∂ŷ/∂µ → ∂y/∂µ

  64. Offline-Online (Database) Approach
  Offline-online approach to ROM-constrained optimization:
  - Identify samples in the offline phase to be used for training: space-filling sampling (e.g., Latin hypercube) or greedy sampling
  - Collect snapshots from the HDM
  - Build the ROB Φ
  - Solve the optimization problem

    minimize_{y ∈ R^n, µ ∈ R^p} f(w̄ + Φy, µ)  subject to  Ψ^T R(w̄ + Φy, µ) = 0

  (LeGresley and Alonso, 2000), (Lassila and Rozza, 2010), (Rozza and Manzoni, 2010), (Manzoni et al., 2012)

  65. Offline-Online Approach
  Figure: schematic of the algorithm. Offline, several HDM solves are compressed into the ROB Φ, Ψ; the optimizer then queries only the ROM.

  66. Offline-Online Approach
  Figures: (a) idealized optimization trajectory in parameter space; (b) breakdown of computational effort: many ROM evaluations, a few HDM solves

  67. Progressive/Adaptive Approach
  Progressive approach to ROM-constrained optimization:
  - Collect snapshots from the HDM at a sparse sampling of the parameter space (the initial condition for the optimization problem)
  - Build the ROB Φ from the sparse training
  - Solve the optimization problem

    minimize_{y ∈ R^n, µ ∈ R^p} f(w̄ + Φy, µ)
    subject to  Ψ^T R(w̄ + Φy, µ) = 0,   (1/2)||R(w̄ + Φy, µ)||²₂ ≤ ε

  - Use the solution of the above problem to enrich the training, and repeat until convergence
  (Arian et al., 2000), (Fahl, 2001), (Afanasiev and Hinze, 2001), (Kunisch and Volkwein, 2008), (Hinze and Matthes, 2013), (Yue and Meerbergen, 2013), (Zahr and Farhat, 2014)

  68. Progressive Approach
  Figure: schematic of the algorithm. The optimizer alternates between HDM sampling, compression into the ROB Φ, Ψ, and ROM-constrained suboptimization.

  69. Progressive Approach
  Figures: (a) idealized optimization trajectory in parameter space; (b) breakdown of computational effort: many ROM evaluations interleaved with a few HDM solves

  70. Progressive Approach: Ingredients of the Proposed Approach (Zahr and Farhat, 2014)
  - Minimum-residual ROM (LSPG) and minimum-error sensitivities: d̂f_r/dµ(µ) = df/dµ(µ) for training parameters µ
  - Reduced optimization (sub)problem:

    minimize_{y ∈ R^n, µ ∈ R^p} f(w̄ + Φy, µ)
    subject to  Ψ^T R(w̄ + Φy, µ) = 0,   (1/2)||R(w̄ + Φy, µ)||²₂ ≤ ε

  - Reference vector w̄ and initial guess for each reduced optimization problem: f_r(µ) = f(µ) for training parameters µ
  - Efficient ROB updates with additional snapshots or a new translation vector, without recomputing the SVD of the entire snapshot matrix
  - Adaptive selection of ε → trust-region approach
  (A loop sketch follows.)
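  A high-level sketch of the progressive loop described by these ingredients. Every callable here (hdm, build_rob, solve_reduced, update_radius) is a hypothetical stand-in, named only to show the control flow, not the talk's implementation.

```python
def progressive_optimize(mu0, eps0, hdm, build_rob, solve_reduced, update_radius,
                         max_outer=20, tol=1e-6):
    """Control-flow sketch of progressive ROM-constrained optimization.
    Assumed interfaces: hdm(mu) -> (f, w, dw/dmu); build_rob(snapshots) ->
    (Phi, w_bar); solve_reduced -> (mu_new, predicted ROM objective decrease);
    update_radius adapts the residual bound eps from the ratio rho."""
    mu, eps, snapshots = mu0, eps0, []
    f_old, w, dw = hdm(mu)                        # expensive HDM query
    for _ in range(max_outer):
        snapshots.append((w, dw))                 # enrich the training set
        Phi, w_bar = build_rob(snapshots)         # state-sensitivity POD
        mu_new, rom_decrease = solve_reduced(Phi, w_bar, mu, eps)
        f_new, w, dw = hdm(mu_new)                # HDM query at the ROM optimum
        rho = (f_new - f_old) / rom_decrease      # actual vs predicted decrease
        eps = update_radius(eps, rho)
        if abs(f_new - f_old) < tol:
            return mu_new
        mu, f_old = mu_new, f_new
    return mu
```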

  71. Initial Guess for the Reduced Optimization
  Let
    µ*_{-1} = µ_0^{(0)} = initial condition for the PDE-constrained optimization
    µ_j^{(k)} = k-th iterate of the j-th reduced optimization problem
    µ*_j = solution of the j-th reduced optimization problem
  Define
    S^µ_j = {µ*_{-1}, µ*_0, ..., µ*_j},   S^w_j = {w(µ*_{-1}), w(µ*_0), ..., w(µ*_j)}
    ρ_j = [f(w(µ*_j), µ*_j) - f(w(µ*_{j-1}), µ*_{j-1})] / [f(w_r(µ*_j), µ*_j) - f(w_r(µ*_{j-1}), µ*_{j-1})]
  Initial guess for the reduced optimization, in parameter space:
    µ_{j+1}^{(0)} = argmin_{µ ∈ S^µ_j} f(w(µ), µ)
  Provides robustness to a poor selection of ε.

  72. Affine Offset and Initial Guess for the ROM Solve
  With the same definitions as on slide 71, choose the affine offset and initial guess for the ROM solve, in state space:
    w̄ = w^{(0)} = argmin_{w ∈ S^w_j} ||R(w, µ)||
  The ROM is exact at the training points ⇒ ROM and HDM objectives are identical there.

  73. Adaptive Selection of the Trust-Region Radius
  With the same definitions as on slide 71 (ρ measures actual vs. predicted objective decrease), update the radius:

    ε' = (1/τ) ε   if ρ_k ∈ [0.5, 2]
    ε' = ε         if ρ_k ∈ [0.25, 0.5) ∪ (2, 4]
    ε' = τ ε       otherwise
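  The radius rule translates directly into code. τ is a user-chosen factor that the talk does not specify; 0 < τ < 1 is assumed here, so ε/τ grows the radius and τε shrinks it.

```python
def update_radius(eps, rho, tau=0.5):
    """Trust-region radius update from the slide, acting on the residual
    bound eps; tau (assumed in (0, 1)) is a hypothetical default."""
    if 0.5 <= rho <= 2.0:
        return eps / tau        # good model agreement: enlarge the radius
    if 0.25 <= rho < 0.5 or 2.0 < rho <= 4.0:
        return eps              # moderate agreement: keep the radius
    return tau * eps            # poor agreement: shrink the radius
```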

  74. Fast Updates to the Reduced-Order Basis
  Two situations where the snapshot matrix is modified (Zahr and Farhat, 2014):
  - Additional snapshots to be incorporated: given Φ = POD(X), compute Φ' = POD([X Y])
  - Offset vector modified: given Φ = POD(X - w̄1^T), compute Φ' = POD(X - w̃1^T)
  Both are handled without recomputing the SVD of the entire snapshot matrix.
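  One standard way to realize the snapshot-append update (an assumption here; the talk cites Zahr and Farhat, 2014 for its specific procedure) is the incremental SVD: orthogonalize the new snapshots against the current basis and take the SVD of a small core matrix instead of the full snapshot matrix.

```python
import numpy as np

def update_pod(U, s, Y):
    """Append snapshots Y to a POD basis U with singular values s,
    without recomputing the SVD of the full snapshot matrix.
    Uses [X, Y] = [U, Q] K blkdiag(V, I)^T with K the small core matrix."""
    B = U.T @ Y                                   # coefficients of Y in range(U)
    Q, Rr = np.linalg.qr(Y - U @ B)               # new directions orthogonal to U
    K = np.block([[np.diag(s), B],
                  [np.zeros((Q.shape[1], s.size)), Rr]])
    Uk, sk, _ = np.linalg.svd(K, full_matrices=False)
    return np.hstack([U, Q]) @ Uk, sk             # updated basis and values
```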
