Function interpolation and compressed sensing




Function interpolation and compressed sensing
  Ben Adcock, Department of Mathematics, Simon Fraser University

Outline
  • Introduction
  • Infinite-dimensional framework
  • New recovery guarantees for weighted ℓ1 minimization
  • References


High-dimensional approximation
  Let
  • $D \subseteq \mathbb{R}^d$ be a domain, $d \gg 1$,
  • $f : D \to \mathbb{C}$ be a (smooth) function,
  • $\{t_i\}_{i=1}^m$ be a set of sample points.
  Goal: approximate $f$ from the samples $\{f(t_i)\}_{i=1}^m$.
  Applications: uncertainty quantification (UQ), scattered data approximation, numerical PDEs, ....
  Main issue: the curse of dimensionality (exponential blow-up with $d$).

Quantifying uncertainty via polynomial chaos expansions
  Uncertainty Quantification: understand how the output $f$ (the quantity of interest) of a physical system behaves as a function of the inputs $t$ (the parameters).
  Polynomial Chaos Expansions (Xiu & Karniadakis, 2002): expand $f(t)$ using multivariate orthogonal polynomials,
  $$f(t) \approx \sum_{i=1}^{M} x_i \phi_i(t).$$
  Non-intrusive methods: recover $\{x_i\}_{i=1}^{M}$ from the samples $\{f(t_i)\}_{i=1}^{m}$.
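To make the basis $\{\phi_i\}$ concrete, here is a minimal sketch (not from the talk) of one common choice: tensorized Legendre polynomials, orthonormal with respect to the uniform probability measure on $[-1,1]^d$, evaluated at a set of sample points. The helper names and the total-degree index set are illustrative assumptions.

```python
import itertools
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_1d(k, t):
    # Degree-k Legendre polynomial, normalised in L^2 of the uniform measure on [-1, 1]
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.sqrt(2 * k + 1) * legval(t, c)

def tensor_phi(index, t):
    # phi_index(t) = prod_k P_{index_k}(t_k) for sample points t of shape (n, d)
    t = np.atleast_2d(t)
    out = np.ones(t.shape[0])
    for k, deg in enumerate(index):
        out *= legendre_1d(deg, t[:, k])
    return out

# Illustrative index set: total degree <= 2 in d = 3 dimensions
d, order = 3, 2
indices = [i for i in itertools.product(range(order + 1), repeat=d) if sum(i) <= order]

# Matrix of basis evaluations A_{ij} = phi_j(t_i) at m random sample points
m = 20
t = np.random.uniform(-1.0, 1.0, size=(m, d))
A = np.column_stack([tensor_phi(i, t) for i in indices])
```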

Stochastic collocation
  Two widely used approaches:
  Structured meshes and interpolation ($M = m$), e.g. sparse grids:
  • Efficient interpolation schemes in moderate dimensions.
  • But may be too structured for very high dimensions, or may miss certain features (e.g. anisotropic behaviour).
  Unstructured meshes and regression ($m > M$): random sampling combined with least-squares fitting (see the sketch below):
  • For the right distributions, one can obtain a stable approximation with $d$-independent scaling of $m$ and $M$.
  • But still inefficient, especially in high dimensions.
  Question: can compressed sensing techniques be useful here?
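A self-contained one-dimensional illustration of the unstructured approach (random samples plus least-squares fitting); the test function and parameter values are arbitrary choices, not taken from the talk.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)
f = lambda t: 1.0 / (1.0 + 25.0 * t**2)       # stand-in quantity of interest

M, m = 20, 60                                 # M basis polynomials, m > M samples
t = rng.uniform(-1.0, 1.0, size=m)            # unstructured (random) sample points
A = legvander(t, M - 1)                       # A_{ij} = P_j(t_i), Legendre Vandermonde matrix
x, *_ = np.linalg.lstsq(A, f(t), rcond=None)  # least-squares fit of the coefficients

t_test = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(legvander(t_test, M - 1) @ x - f(t_test)))
print(f"max pointwise error of the least-squares fit: {err:.2e}")
```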


Compressed sensing in UQ
  Theoretical work:
  • Rauhut & Ward (2011): 1D Legendre polynomials
  • Yan, Guo & Xiu (2012): dD Legendre polynomials
  • Tang & Iaccarino (2014): randomized quadratures
  • Hampton & Doostan (2014): coherence-optimized sampling
  • Xu & Zhou (2014): deterministic sampling
  • Rauhut & Ward (2014): weighted ℓ1 minimization
  • Chkifa, Dexter, Tran & Webster (2015): weighted ℓ1 minimization
  Applications to UQ:
  • Doostan & Owhadi (2011), Mathelin & Gallivan (2012), Lei, Yang, Zheng, Lin & Baker (2014), Rauhut & Schwab (2015), Yang, Lei, Baker & Lin (2015), Jakeman, Eldred & Sargsyan (2015), Karagiannis, Konomi & Lin (2015), Guo, Narayan, Xiu & Zhou (2015), and others.

Are polynomial coefficients sparse?
  Low dimensions: polynomial coefficients exhibit decay, not sparsity.
  [Figure: polynomial coefficients (decay) vs. wavelet coefficients (sparsity)]
  Nonlinear approximation error ≈ linear approximation error, so we may as well use interpolation or least squares.
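A toy numerical illustration (synthetic coefficients, not the data behind the plots) of why decay without sparsity makes the best $s$-term (nonlinear) error essentially equal to the first-$s$-terms (linear) error:

```python
import numpy as np

x = 1.0 / np.arange(1, 129) ** 2           # smoothly decaying, non-sparse coefficients
s = 16

linear_err = np.linalg.norm(x[s:])                       # error of keeping the first s terms
nonlinear_err = np.linalg.norm(np.sort(np.abs(x))[:-s])  # error of keeping the s largest terms
print(linear_err, nonlinear_err)           # equal here, since the decay is monotone
```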

Are polynomial coefficients sparse?
  Higher dimensions: polynomial coefficients are increasingly sparse (Doostan et al., Schwab et al., Webster et al., ...).
  [Figure: polynomial coefficients, d = 10]
  Nonlinear approximation error ≪ linear approximation error.

Sparsity and lower sets
  In high dimensions, polynomial coefficients concentrate on lower sets.
  Definition (lower set). A set $\Delta \subseteq \mathbb{N}^d$ is lower if, for any $i = (i_1,\ldots,i_d)$ and $j = (j_1,\ldots,j_d)$ with $j_k \le i_k$ for all $k$, we have $i \in \Delta \Rightarrow j \in \Delta$.
  Note: the number of lower sets of size $s$ is $O\big(s \log(s)^{d-1}\big)$.
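A small sketch (not from the talk; names are illustrative) that checks the downward-closed property directly from the definition:

```python
import itertools

def is_lower(Delta):
    """Return True if the set Delta of multi-indices (tuples of nonnegative ints) is lower,
    i.e. i in Delta and j <= i coordinatewise imply j in Delta."""
    S = set(Delta)
    for i in S:
        for j in itertools.product(*(range(ik + 1) for ik in i)):
            if j not in S:
                return False
    return True

print(is_lower({(0, 0), (1, 0), (0, 1), (1, 1)}))  # True
print(is_lower({(0, 0), (2, 0)}))                  # False: (1, 0) is missing
```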

Outline
  • Introduction
  • Infinite-dimensional framework
  • New recovery guarantees for weighted ℓ1 minimization
  • References

Setup
  Let
  • $\nu$ be a measure on $D$ with $\int_D \mathrm{d}\nu = 1$,
  • $T = \{t_i\}_{i=1}^m \subseteq D$, $m \in \mathbb{N}$, be drawn independently from $\nu$,
  • $\{\phi_j\}_{j \in \mathbb{N}}$ be an orthonormal system in $L^2_\nu(D) \cap L^\infty(D)$ (typically, tensor algebraic polynomials).
  Suppose that
  $$f = \sum_{j \in \mathbb{N}} x_j \phi_j, \qquad x_j = \langle f, \phi_j \rangle_{L^2_\nu},$$
  where $\{x_j\}_{j \in \mathbb{N}}$ are the coefficients of $f$ in the system $\{\phi_j\}_{j \in \mathbb{N}}$.

Current approaches – discretize first
  Most existing approaches follow a 'discretize first' strategy. Choose $M \ge m$ and solve the finite-dimensional problem
  $$\min_{z \in \mathbb{C}^M} \|z\|_{1,w} \quad \text{subject to} \quad \|Az - y\|_2 \le \delta, \qquad (\star)$$
  for some $\delta \ge 0$, where $\|z\|_{1,w} = \sum_{i=1}^{M} w_i |z_i|$, $\{w_i\}_{i=1}^{M}$ are weights, and
  $$A = \{\phi_j(t_i)\}_{i=1,\,j=1}^{m,\,M}, \qquad y = \{f(t_i)\}_{i=1}^{m}.$$
  If $\hat{x} \in \mathbb{C}^M$ is a minimizer, set $f \approx \tilde{f} = \sum_{i=1}^{M} \hat{x}_i \phi_i$.
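A minimal sketch of solving $(\star)$ with a generic convex optimization package (CVXPY here, a choice not made in the talk); the matrix $A$, data $y$, positive weights $w$, and tolerance $\delta$ are assumed to be given.

```python
import cvxpy as cp

def weighted_l1_qcbp(A, y, w, delta):
    # Solve  min ||z||_{1,w}  subject to  ||A z - y||_2 <= delta   (problem (*))
    M = A.shape[1]
    z = cp.Variable(M, complex=True)
    objective = cp.Minimize(cp.sum(cp.multiply(w, cp.abs(z))))  # weighted l1 norm, w > 0
    constraints = [cp.norm(A @ z - y, 2) <= delta]
    cp.Problem(objective, constraints).solve()
    return z.value  # the minimizer; f is then approximated by sum_i z_i phi_i
```

Any solver for quadratically constrained basis pursuit could be substituted for the generic modelling package.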


The choice of δ
  The parameter $\delta$ is chosen so that the best approximation $\sum_{i=1}^{M} x_i \phi_i$ to $f$ from $\mathrm{span}\{\phi_i\}_{i=1}^{M}$ is feasible for $(\star)$. In other words, we require
  $$\delta \ge \Big\| f - \sum_{i=1}^{M} x_i \phi_i \Big\|_{L^\infty} = \Big\| \sum_{i > M} x_i \phi_i \Big\|_{L^\infty}.$$
  Equivalently, we treat the expansion tail as noise in the data.
  Problems:
  • This tail error is unknown in general.
  • A good estimate is necessary in order to get good accuracy.
  • Empirical estimation via cross validation is tricky and wasteful (see the sketch below).
  • Solutions of $(\star)$ do not interpolate the data.
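For concreteness, one common cross-validation heuristic from the compressed-sensing UQ literature, sketched here under assumptions (the talk only notes that it is tricky and wasteful): hold out some samples, solve $(\star)$ on the rest for each candidate $\delta$, and pick the $\delta$ with the smallest held-out residual.

```python
import numpy as np

def choose_delta_cv(A, y, w, deltas, solver, n_holdout):
    # solver(A, y, w, delta) is any routine for (*), e.g. weighted_l1_qcbp above
    m = len(y)
    perm = np.random.permutation(m)
    hold, rec = perm[:n_holdout], perm[n_holdout:]
    errors = []
    for delta in deltas:
        z = solver(A[rec], y[rec], w, delta)                  # reconstruct on retained samples
        errors.append(np.linalg.norm(A[hold] @ z - y[hold]))  # score on held-out samples
    return deltas[int(np.argmin(errors))]                     # held-out samples are 'wasted'
```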


New approach
  We propose the infinite-dimensional ℓ1 minimization
  $$\inf_{z \in \ell^1_w(\mathbb{N})} \|z\|_{1,w} \quad \text{subject to} \quad Uz = y,$$
  where $y = \{f(t_i)\}_{i=1}^{m}$, $\{w_i\}_{i \in \mathbb{N}}$ are weights, and
  $$U = \{\phi_j(t_i)\}_{i=1,\,j=1}^{m,\,\infty} \in \mathbb{C}^{m \times \infty}$$
  is an infinitely fat matrix.
  Advantages:
  • Solutions are interpolatory.
  • No need to know the expansion tail.
  • Agnostic to the ordering of the functions $\{\phi_i\}_{i \in \mathbb{N}}$.
  Note: a similar setup can also handle noisy data.
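In computations the infinite matrix must be truncated somehow; the sketch below is an assumption about one simple finite-section implementation (keep $K \gg m$ columns of $U$ and enforce exact interpolation), not the talk's actual algorithm.

```python
import cvxpy as cp

def interpolatory_weighted_l1(U_K, y, w):
    # Finite section of the infinite-dimensional problem:
    #   min ||z||_{1,w}  subject to  U_K z = y   (solutions interpolate the data)
    K = U_K.shape[1]
    z = cp.Variable(K, complex=True)
    objective = cp.Minimize(cp.sum(cp.multiply(w, cp.abs(z))))  # weighted l1 norm, w > 0
    constraints = [U_K @ z == y]                                # exact interpolation of the samples
    cp.Problem(objective, constraints).solve()
    return z.value
```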
