
Interpolation in high dimensions: Non-intrusive reduced order modeling - PowerPoint PPT Presentation



  1. Interpolation in high dimensions: Non-intrusive reduced order modeling. Akil Narayan, Department of Mathematics, University of Massachusetts Dartmouth. June 7, 2013. GR-ROM @ Caltech, Pasadena, CA.

  2. Parameterized functions. Problems of interest are often functions that depend both on space $x$ and a parametric variable $\mu$. Let $x \in D \subseteq \mathbb{R}^p$ be a space-like variable ($p = 1, 2, 3$) and $\mu \in \Omega \subseteq \mathbb{R}^d$ a parameter ($d \geq 1$). If $u = u(x;\mu)$, an approximation $u_N \simeq u$ is usually formed via some combined spatial ($x$) discretization and parametric ($\mu$) discretization. The whole game: compute $u_N$. For each $\mu$, evaluating $u(x;\mu)$ is expensive. The main goal: approximate $u(x;\mu)$ with as few parametric degrees of freedom as possible. In particular, use only $\mu$-point-evaluation information. [Figure: the parameter domain $\Omega$ in the $(\mu_1, \mu_2)$-plane.]

  3. The main ideas. Why is this important? We need $u(x;\mu)$ for numerous values of $\mu$; for a given $\mu$, we need fast queries of $u(x;\mu)$; and we want $\mu$-moment information about $u(x;\mu)$. The major points and discussions in this talk: interpolatory ("non-intrusive") methods can perform on par with projective ("intrusive") methods; non-adaptive interpolatory methods: single-dimension fundamentals and high-dimensional techniques; adaptive interpolatory methods: optimal approximation spaces and reconstructions. Themes throughout: greedy schemes and pivoted linear algebra routines.

  4. General setup. We are concerned with standard linear approximation techniques: $u_N(x;\mu) = \sum_{m,n=1}^{N} c_{m,n}\, b_m(\mu)\, v_n(x) = \sum_{n=1}^{N} C_n(\mu)\, v_n(x)$. The coefficients $C_n(\mu)$ and the basis $v_n$ determine the approximation. Some notation throughout: $V$ approximates in $x$, with $V_N$ a subspace of $V$ spanned by the $v_n$; $B$ approximates in $\mu$, with $B_N$ a subspace of $B$ spanned by the $b_m$. Generally, simulation tools are developed to evaluate the following map: $\mu \mapsto u(x;\mu)$, $\mu$ fixed. (1) This limited information about $u(x;\mu)$ constrains our knowledge.
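A minimal sketch of evaluating this separable form, assuming the spatial modes and coefficient functions are already available (the grid, modes, and coefficient functions below are hypothetical placeholders, not from the talk):

```python
import numpy as np

# Minimal sketch of evaluating the separable approximation
#   u_N(x; mu) = sum_n C_n(mu) v_n(x).
# Assumptions (for illustration only): the spatial modes v_n are stored
# column-wise in V on a fixed x-grid, and coeff(mu) returns (C_1(mu), ..., C_N(mu)).

def evaluate_u_N(V, coeff, mu):
    """Return u_N(., mu) on the spatial grid underlying the columns of V."""
    C = coeff(mu)            # shape (N,)
    return V @ C             # shape (n_x,): sum_n C_n(mu) * v_n(x_i)

# Hypothetical modes and coefficient functions:
x = np.linspace(0.0, 1.0, 201)
V = np.column_stack([np.sin((n + 1) * np.pi * x) for n in range(5)])
coeff = lambda mu: np.array([mu**n / (n + 1) for n in range(5)])
u_vals = evaluate_u_N(V, coeff, mu=0.3)
```

The question the talk addresses is how to obtain good $v_n$ and $C_n$ from limited $\mu$-point-evaluation information.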

  5. Intrusive methods. One approach: with some preconceived basis $v_n(x)$, $b_m(\mu)$ in a Hilbert space $V \times B$, construct $u_N(x;\mu) = \sum_{m,n=1}^{N} c_{m,n}\, b_m(\mu)\, v_n(x)$, and ask that $u_N = \mathrm{proj}_{V_N \times B_N}\, u$ (or an appropriate residual formulation for a differential equation). Determining the approximation coefficients $c_{m,n}$ requires the information $\langle u(x;\mu), v_n(x)\, b_m(\mu) \rangle_{V \times B}$, but we can only evaluate the map $u(x;\mu)$ for a fixed $\mu$. Thus we require data beyond what (1) can provide, so a rewrite of existing simulation tools is necessary: intrusive.

  6. Non-intrusive methods. A second approach: with some preconceived basis $v_n(x)$, $b_m(\mu)$, construct $u_N(x;\mu) = \sum_{m,n=1}^{N} c_{m,n}\, b_m(\mu)\, v_n(x)$, and ask that $u_N(\cdot;\mu_n) = \mathrm{proj}_{V_N}\, u(\cdot;\mu_n)$ for some chosen nodes $\mu_n$. Note: in principle, no Hilbertian structure on $V$ is necessary. This is an interpolatory approach; the only data we need is $u(x;\mu_n)$ at the sites $\mu_n$. Thus we can use the existing simulation tools: non-intrusive.
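A minimal end-to-end sketch of this workflow on a toy problem: the stand-in solver `expensive_solve`, the Chebyshev node choice, and the polynomial fit in $\mu$ are all assumptions for illustration, not the talk's setting. The solver is treated as a black box that only provides the map $\mu \mapsto u(x;\mu)$.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Non-intrusive sketch with a hypothetical stand-in for the expensive
# simulation tool: expensive_solve(mu) returns u(., mu) on a fixed x-grid.
x_grid = np.linspace(0.0, 1.0, 101)

def expensive_solve(mu):
    return np.sin(np.pi * x_grid) / (1.1 + mu)      # toy model, not the talk's PDE

# 1. Query the solver only at chosen parameter nodes mu_n (Chebyshev points here).
mu_nodes = np.cos(np.pi * (2 * np.arange(9) + 1) / 18.0)
snapshots = np.array([expensive_solve(m) for m in mu_nodes])       # shape (N, n_x)

# 2. Offline: fit one interpolating polynomial in mu per spatial grid point.
coeffs = P.polyfit(mu_nodes, snapshots, deg=len(mu_nodes) - 1)     # shape (N, n_x)

# 3. Online: evaluate u_N(., mu) at any new mu without calling the solver.
def u_N(mu):
    return P.polyval(mu, coeffs)                                   # shape (n_x,)

# At the nodes, u_N reproduces the snapshots (interpolation):
print(np.max(np.abs(u_N(mu_nodes[0]) - snapshots[0])))             # close to zero
```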

  7. A short, sweet example. For concreteness, consider an elliptic problem $-\frac{d}{dx}\!\left( \kappa(x;\mu)\, \frac{du}{dx}(x;\mu) \right) = f(x;\mu)$, with $x \in D \subset \mathbb{R}$ and $\mu \in [-1,1]^8 \subset \mathbb{R}^8$. The diffusion coefficient is given by $\kappa(x;\mu) = 1 + \sum_{j=1}^{8} \frac{1}{\pi j^2} \cos(2\pi j x)\, \mu_j$. We seek to approximate $u(x;\mu)$. In this case, a non-intrusive method can perform comparably to an intrusive method.
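A sketch of this test problem: the diffusion coefficient follows the slide, while the domain $[0,1]$, homogeneous Dirichlet boundary conditions, the forcing $f \equiv 1$, and the centered finite-difference discretization are assumptions, since the slide does not specify them.

```python
import numpy as np

# The diffusion coefficient from the slide; the domain [0, 1], homogeneous
# Dirichlet conditions, forcing f = 1, and the centered finite-difference
# scheme below are assumptions (not specified on the slide).
def kappa(x, mu):
    j = np.arange(1, 9)
    return 1.0 + np.sum(np.cos(2.0 * np.pi * np.outer(x, j)) * mu / (np.pi * j**2), axis=1)

def solve(mu, n=64, f=lambda x: np.ones_like(x)):
    """Approximate -(kappa(x; mu) u'(x; mu))' = f(x) with u(0) = u(1) = 0."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    k_half = kappa(0.5 * (x[:-1] + x[1:]), mu)     # kappa at cell midpoints
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        A[i, i] = (k_half[i] + k_half[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -k_half[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -k_half[i + 1] / h**2
    u_inner = np.linalg.solve(A, f(x[1:-1]))
    return x, np.concatenate([[0.0], u_inner, [0.0]])

x, u = solve(mu=np.linspace(-1.0, 1.0, 8))         # one parameter value in [-1, 1]^8
```

Each call to `solve` is one "$\mu$-point evaluation"; the non-intrusive approach only ever uses such calls.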

  8. A classical ("intrusive") approach. Consider a single variable $(x, \mu)$ and perform a Galerkin (FEM-like) approximation: find $u_N \in V_N \times B_N$ such that $\left\langle -\left( \kappa(x;\mu)\, u_N'(x;\mu) \right)',\, v(x;\mu) \right\rangle = \langle f(x;\mu),\, v(x;\mu) \rangle$ for all $v \in V_N \times B_N$. This is intrusive: we require projective, non-interpolatory information about $u(\mu)$. The non-intrusive interpolatory approach: select some parametric locations $\mu_n$, $n = 1, \ldots, N$, and find $u_N \in V_N \times B_N$ such that $u_N(x;\mu_n) = u(x;\mu_n)$ for all $\mu_n$.

  9. Convergence. Intrusive methods are generally more accurate, but are more expensive. [Figure: $L^2$ error versus polynomial order $k$, comparing intrusive Galerkin and non-intrusive interpolation.] The Galerkin-FEM intrusive solution at $k = 6$ requires a linear solve of size $\sim 10^5$. The non-intrusive interpolatory approach at $k = 6$ requires $\sim 3000$ linear solves of size $\sim 30$.

  10. Non-adaptive interpolation topics. Interpolation: $u(\cdot;\mu_n) = u_N(\cdot;\mu_n)$, $n = 1, \ldots, N$. The choice of $\mu_n$ is the next subject under discussion: Lagrange interpolation and Lebesgue constants; polynomials on one-dimensional grids; higher dimensions via tensorizations and sparse grids; greedy, 'unstructured' methods such as Fekete and Leja points.

  11. Non-adaptive interpolation. For non-adaptive interpolation, both the basis and the coefficients are chosen independently of the function $u$: $u(x;\mu) \simeq u_N = \sum_{n=1}^{N} C_n(\mu)\, u(x;\mu_n)$. Both the parametric locations $\mu_n$ and the parametric dependence $C_n(\mu)$ are free to be chosen. In order to intelligently choose the $\mu_n$, we specify a basis for $C_n$: $C_n(\mu) = \sum_{m=1}^{N} c_{n,m}\, b_m(\mu)$, $n = 1, \ldots, N$. Realistically, the $b_m$ are selected from a standard $\mu$-approximation set, e.g. polynomials, trigonometric functions, wavelets, splines, etc.
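As a small sketch of how the $b_m$ enter in practice, assume Legendre polynomials on $[-1,1]$ as the parametric basis and Chebyshev points as example nodes; the generalized Vandermonde matrix $B_{k,m} = b_m(\mu_k)$ then connects nodal values to basis coefficients:

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch (assumptions: Legendre polynomials b_m on [-1, 1] as the parametric
# basis, Chebyshev points as example nodes). B[k, m] = b_m(mu_k) is the
# generalized Vandermonde matrix linking nodal values to basis coefficients.
N = 7
mu_nodes = np.cos(np.pi * (2 * np.arange(N) + 1) / (2 * N))
B = legendre.legvander(mu_nodes, N - 1)                  # shape (N, N)

# Illustration on a scalar quantity g(mu): solve for coefficients c_m with
# sum_m c_m b_m(mu_k) = g(mu_k), then evaluate the interpolant at a new mu.
g = lambda mu: np.exp(mu)
c = np.linalg.solve(B, g(mu_nodes))
value_at_new_mu = legendre.legval(0.37, c)
```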

  12. Lagrange interpolation. A clearer way to see what is happening: solve for $c_{n,m}$ so that $u_N$ interpolates $u$ at the $\mu_n$. Then $C_n(\mu) = \sum_{m=1}^{N} c_{n,m}\, b_m(\mu) = \ell_n(\mu)$, with $\ell_n$ the cardinal Lagrange interpolant of the $b_m$ at the sites $\mu_n$, i.e. $\ell_n(\mu_m) = \delta_{n,m}$, $n = 1, \ldots, N$. $\ell_n(\mu)$ determines "how much" information from $u(\cdot;\mu_n)$ contributes to the reconstruction at $\mu$. These functions can be constructed without any data from $u$; the process is entirely independent of $u$.
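A minimal sketch of the cardinal construction under the same assumed Legendre basis and example nodes: solving $B\,C = I$ gives the coefficients of every $\ell_n$ at once, and the property $\ell_n(\mu_m) = \delta_{n,m}$ can be checked directly, with no data from $u$ involved.

```python
import numpy as np
from numpy.polynomial import legendre

# Cardinal Lagrange functions for an assumed Legendre basis b_m and Chebyshev
# example nodes mu_n. Column n of C holds the b_m-coefficients of ell_n,
# because B @ C = I enforces ell_n(mu_k) = delta_{n,k}.
N = 7
mu_nodes = np.cos(np.pi * (2 * np.arange(N) + 1) / (2 * N))
B = legendre.legvander(mu_nodes, N - 1)          # B[k, m] = b_m(mu_k)
C = np.linalg.solve(B, np.eye(N))                # coefficients of all ell_n at once

def ell(n, mu):
    """Evaluate the n-th cardinal function; note that no data from u is needed."""
    return legendre.legval(mu, C[:, n])

# Sanity check of the cardinality property ell_n(mu_m) = delta_{n,m}:
vals = np.array([[ell(n, m) for m in mu_nodes] for n in range(N)])
assert np.allclose(vals, np.eye(N))
```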

  13. Lagrange interpolation. [Figure: cardinal Lagrange functions plotted over $\mu \in [-1,1]$ and over $(\mu_1, \mu_2) \in [-1,1]^2$.]

  14. Interpolation error. Error estimates from classical spatial interpolation may be augmented here: $\sup_{\mu \in \Omega} \| u(\cdot;\mu) - u_N(\cdot;\mu) \|_V \leq \sup_{\mu \in \Omega} \left[\, \| u(\cdot;\mu) - \mathrm{proj}_{V_N} u(\cdot;\mu) \|_V + (1 + \Lambda)\, d\!\left( \mathrm{proj}_{V_N} u(\cdot;\mu),\, V_N \times B_N \right) \right]$. Some of the terms are optimal, so we cannot do better; they depend only on the approximation spaces $V_N$ and $B_N$. But there is a penalty for interpolation: the "Lebesgue constant" $\Lambda$. With the approximation space fixed, it depends only on the choice of interpolation nodes. One central question in the formulation of non-adaptive interpolation methods: how do we choose the $\mu_n$ to minimize $\Lambda$?
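Numerically, $\Lambda$ is the maximum over $\Omega$ of the Lebesgue function $\sum_n |\ell_n(\mu)|$; a sketch of estimating it on a dense grid, assuming a one-dimensional Legendre basis (the basis and grid resolution are arbitrary choices for illustration):

```python
import numpy as np
from numpy.polynomial import legendre

def lebesgue_constant(mu_nodes, n_dense=5001):
    """Estimate Lambda = max_mu sum_n |ell_n(mu)| on a dense grid in [-1, 1].

    Sketch only: assumes a one-dimensional Legendre basis; the grid
    resolution is an arbitrary choice.
    """
    N = len(mu_nodes)
    B = legendre.legvander(mu_nodes, N - 1)        # B[k, m] = b_m(mu_k)
    C = np.linalg.solve(B, np.eye(N))              # cardinal-function coefficients
    mu_dense = np.linspace(-1.0, 1.0, n_dense)
    L = legendre.legval(mu_dense, C)               # L[n, i] = ell_n(mu_dense[i])
    return np.abs(L).sum(axis=0).max()             # Lebesgue function, then its max
```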

  15. Polynomial interpolation. For concreteness, consider polynomials. Some one-dimensional intuition: the Lebesgue constant for any nodal array is unbounded in $N$; equispaced nodes are bad (exponentially growing $\Lambda$); arcsine-distributed nodes are good (logarithmically growing $\Lambda$). [Figure: an equispaced nodal array (bad) versus an arcsine-distributed nodal array (good).]
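Using such an estimator, the contrast above can be observed directly; a self-contained sketch under the same assumptions, with Chebyshev points standing in for an arcsine-distributed family:

```python
import numpy as np
from numpy.polynomial import legendre

def lebesgue_constant(mu_nodes, n_dense=5001):
    """Estimate Lambda for the given 1D nodes (assumed Legendre basis)."""
    N = len(mu_nodes)
    B = legendre.legvander(mu_nodes, N - 1)
    C = np.linalg.solve(B, np.eye(N))
    mu_dense = np.linspace(-1.0, 1.0, n_dense)
    return np.abs(legendre.legval(mu_dense, C)).sum(axis=0).max()

for N in (5, 10, 15, 20):
    equispaced = np.linspace(-1.0, 1.0, N)
    arcsine = np.cos(np.pi * (2 * np.arange(N) + 1) / (2 * N))   # Chebyshev points
    print(N, lebesgue_constant(equispaced), lebesgue_constant(arcsine))
```

For the equispaced nodes the printed estimates grow rapidly with $N$, while for the arcsine-distributed nodes they grow only slowly.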
