

  1. A new quasi-Monte Carlo technique based on nonnegative least squares and approximate Fekete points. Stefano De Marchi, Department of Mathematics, University of Padova. Antwerp, December 5, 2014. Joint work with Claudia Bittante and Alvise Sommariva.

  4. Motivation The computation of integrals in high dimensions and on general domains, when no explicit cubature rules are known; the use of as few (good) points as possible to approximate integrals on general domains; nonnegative least squares and approximate Fekete points for computing the cubature weights.

  5. Outline 1 The problem; 2 The Monte Carlo and quasi-Monte Carlo approach; 3 Nonnegative Least Squares and cubature, Approximate Fekete Points and cubature; 4 The QR algorithm for AFP, 1-2-3 dimensional WAMs: examples, Cones (truncated); 5 Numerical examples: tests on 2d domains, test on a 3d domain; 6 Conclusions.

  6. Cubature Notation: Ω = [0, 1]^d, i.e. the d-dimensional unit cube; X = { x_1, ..., x_N } ⊂ Ω, the set of samples. Both MC and QMC compute

  ∫_Ω f(x) dx ≈ (λ_d(Ω) / N) ∑_{i=1}^N f(x_i)   (1)

  where λ_d(Ω) is the Lebesgue measure of the domain Ω (for the unit cube it is simply 1). In QMC the samples are taken as a low-discrepancy sequence (quasi-random points), e.g. Halton or Sobol. Using low-discrepancy sequences yields an O(N^{-1}) convergence rate (up to logarithmic factors) instead of O(N^{-1/2}).
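The estimate (1) on the unit cube is just a sample mean, since λ_d(Ω) = 1. A minimal Python sketch (the talk's own code is Matlab; the integrand f(x, y) = e^x y and the sample size are arbitrary illustrative choices):

```python
import numpy as np
from scipy.stats import qmc

# Illustrative integrand on [0,1]^2: f(x, y) = exp(x) * y,
# with exact integral (e - 1) / 2
f = lambda P: np.exp(P[:, 0]) * P[:, 1]
exact = (np.e - 1) / 2

N, d = 4096, 2

# Plain Monte Carlo: i.i.d. uniform samples, error O(N^{-1/2})
mc = f(np.random.default_rng(0).random((N, d))).mean()

# Quasi-Monte Carlo as in (1): Halton low-discrepancy samples,
# error O(N^{-1} (log N)^d)
qmc_est = f(qmc.Halton(d=d, seed=0).random(N)).mean()

print(abs(mc - exact), abs(qmc_est - exact))
```

With this many samples the QMC error is typically one or two orders of magnitude below the MC error.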

  7. Discrepancy Definition. Given a sequence X = { x_1, ..., x_N }, its discrepancy is

  D_N(X) := sup_{B ∈ J} | #(X, B)/N − λ_d(B) |   (2)

  where J := { ∏_{i=1}^d [a_i, b_i) : 0 ≤ a_i < b_i ≤ 1 } (the d-dimensional subintervals of the unit cube), #(X, B) is the number of points of X in B, and λ_d is the Lebesgue measure. Note: when D_N(X) is asymptotically small, of order (log N)^d / N, the sequence X is called a low-discrepancy sequence.
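Discrepancies can be compared numerically. Note that scipy's `qmc.discrepancy` computes an L2-type (centered) discrepancy rather than the sup-norm star discrepancy of (2); it is used here only as a computable proxy to compare point sets:

```python
import numpy as np
from scipy.stats import qmc

N, d = 256, 2
halton = qmc.Halton(d=d, seed=0).random(N)
random_pts = np.random.default_rng(0).random((N, d))

# An L2-type (centered) discrepancy, a computable proxy for the
# sup-norm star discrepancy D_N(X) of definition (2)
d_halton = qmc.discrepancy(halton)
d_random = qmc.discrepancy(random_pts)
print(d_halton, d_random)  # the low-discrepancy set scores much lower
```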

  9. Discrepancy Why is the discrepancy important? Theorem (Koksma-Hlawka inequality). Let f : Ω → R be of bounded variation with variation V(f) (in the sense of Hardy-Krause), and let X ⊂ Ω be a sequence of N points of Ω. If E_N(f) is the cubature error, then

  |E_N(f)| ≤ V(f) D_N(X).   (3)

  10. Low-discrepancy examples Halton and Sobol: these sequences have much lower discrepancy than random points! Figure: 200 Halton (left) and Sobol (right) points on the square. Notice: they form a nested sequence. For Halton points in dimension d it is known that D_N(H_{N,d}) ≤ C (log N)^d / N. Note: Matlab has the built-in functions haltonset and sobolset.

  11. Low-discrepancy examples Halton and Sobol Figure: 1024 Halton points (left) and Sobol points (right).

  12. Nonnegative Least Squares Definition. Let A be an m × n matrix and b ∈ R^m a column vector, G an r × n matrix and h ∈ R^r. A Linear System of Inequalities (LSI) problem is a least squares problem with linear constraints, that is, the optimization problem

  min_{x ∈ R^n} ||Ax − b||_2  subject to  Gx ≥ h.   (4)
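The special case G = I, h = 0 of (4) is the nonnegative least squares (NNLS) problem. A tiny sketch with made-up data, using scipy's implementation of the Lawson-Hanson active-set algorithm:

```python
import numpy as np
from scipy.optimize import nnls

# A tiny NNLS instance, i.e. problem (4) with G = I and h = 0;
# the data A, b are made up for illustration
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
b = np.array([2.0, 1.0, -1.0])

# The unconstrained least squares solution is [2, -1]; the
# nonnegativity constraint clamps the negative entry to zero
x, rnorm = nnls(A, b)
print(x)  # -> [1.5  0. ]
```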

  15. Nonnegative Least Squares How to use them for cubature: take A = V^T, with V the Vandermonde matrix of the sequence X = { x_i, i = 1, ..., n } in the polynomial basis { p_j, j = 1, ..., m }; b the column vector of size m of the moments, that is,

  b_j = ∫_Ω p_j(x) dµ(x)

  for some measure µ on Ω; and G = I, h = (0, ..., 0)^T, both of order n. The problem then reads

  argmin_x ||V^T x − b||_2  subject to  x ≥ 0.

  Algorithm: Lawson and Hanson, Solving Least Squares Problems, Prentice-Hall 1974, p. 161, gives the nonnegative weights x for the cubature at the point set X. Matlab has the built-in function lsqnonneg.
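The moment-matching NNLS weight computation can be sketched in Python (the talk uses Matlab's lsqnonneg); the monomial basis, the degree, and the Halton candidate nodes are illustrative choices:

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import qmc

# Illustrative choices: monomial basis of total degree <= 3 on [0,1]^2
# and 200 Halton points as candidate nodes
deg = 3
exps = [(a, b) for a in range(deg + 1) for b in range(deg + 1 - a)]  # 10 terms
X = qmc.Halton(d=2, seed=0).random(200)

# Vandermonde V (rows = points, columns = basis functions)
V = np.column_stack([X[:, 0]**a * X[:, 1]**b for a, b in exps])

# Moments of the monomials against the Lebesgue measure on the square:
# integral of x^a y^b equals 1 / ((a + 1)(b + 1))
mom = np.array([1.0 / ((a + 1) * (b + 1)) for a, b in exps])

# Nonnegative cubature weights: argmin ||V^T w - mom||_2 s.t. w >= 0
w, rnorm = nnls(V.T, mom)

# The resulting rule integrates polynomials of degree <= 3 up to the
# residual; e.g. the integral of x^2 y over the square is 1/6
approx = w @ (X[:, 0]**2 * X[:, 1])
print(approx, rnorm)
```

Most of the 200 weights come out exactly zero: the active-set algorithm effectively selects a small subset of nodes, which is the point of the technique.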

  17. Fekete Points Definition. Let K ⊂ C^d be a compact set and S_n = span{ q_j, j = 1, ..., ν_n } a finite-dimensional space of linearly independent functions. The points { ξ_1, ..., ξ_{ν_n} } of K are Fekete points if they are unisolvent for interpolation on S_n and maximize the absolute value of the Vandermonde determinant. In polynomial interpolation ν_n = dim(P_n^d) = C(n + d, d). For Fekete points |ℓ_i(x)| ≤ 1 for all i; as a consequence, the Lebesgue constant satisfies Λ_n = max_{x ∈ K} ∑_{i=1}^{ν_n} |ℓ_i(x)| ≤ ν_n. This bound is known to be an overestimate of the Lebesgue constant (see [Bos et al. NMTMA11]), but it conveys the importance (and quasi-optimality) of Fekete points for interpolation/cubature. Computing Fekete points requires solving an NP-hard (discrete) optimization problem (cf. [Civril & Magdon-Ismail TCS09]). Fekete points are known analytically only on the interval, the complex circle, and the square and cube for tensor-product interpolation.

  19. Approximate Fekete Points Definition. Given a polynomial determining compact set K ⊂ R^d or K ⊂ C^d (i.e., polynomials vanishing there are identically zero), [Calvi and Levenberg, JAT08] introduced the idea of Weakly Admissible Meshes (WAMs) as sequences of discrete subsets A_n ⊂ K that satisfy the polynomial inequality

  ||p||_K ≤ C(A_n) ||p||_{A_n},  ∀ p ∈ P_n^d(K)   (5)

  where both card(A_n) ≥ N := dim(P_n^d(K)) and C(A_n) grow at most polynomially with n. When C(A_n) is bounded we speak of an Admissible Mesh (AM). For our purposes it suffices to consider WAMs.
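Inequality (5) can be checked numerically on a concrete WAM. A standard example is the tensor Chebyshev-Lobatto grid on the square, a known WAM whose constant C(A_n) grows like (log n)^2; the random polynomial below is an illustrative probe, and the fine grid is only a proxy for the sup-norm over K:

```python
import numpy as np

# A known WAM on K = [-1,1]^2: the (n+1) x (n+1) tensor grid of
# Chebyshev-Lobatto points, with C(A_n) growing like (log n)^2
n = 8
cheb = np.cos(np.pi * np.arange(n + 1) / n)
gx, gy = np.meshgrid(cheb, cheb)
mesh = np.column_stack([gx.ravel(), gy.ravel()])

# A random bivariate polynomial of total degree <= n (arbitrary coefficients)
exps = [(a, b) for a in range(n + 1) for b in range(n + 1 - a)]
coef = np.random.default_rng(1).standard_normal(len(exps))
p = lambda P: sum(c * P[:, 0]**a * P[:, 1]**b for c, (a, b) in zip(coef, exps))

# ||p||_K approximated on a fine grid vs ||p||_{A_n} on the mesh:
# by (5) the ratio stays below the modest WAM constant C(A_n)
t = np.linspace(-1.0, 1.0, 400)
fx, fy = np.meshgrid(t, t)
fine = np.column_stack([fx.ravel(), fy.ravel()])
ratio = np.abs(p(fine)).max() / np.abs(p(mesh)).max()
print(ratio)
```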

  24. Discrete Extremal Sets Idea: extract a maximum-determinant N × N submatrix from the M × N Vandermonde matrix V = V(a, p) = [p_j(a_i)]. This is an NP-hard problem, so we look for approximate solutions, which can be obtained by basic numerical linear algebra. Key asymptotic result (cf. [Bos/De Marchi et al. NMTMA11]): discrete extremal sets extracted from a WAM by the greedy algorithms below have the same asymptotic behavior as the true Fekete points,

  µ_n := (1/N) ∑_{j=1}^N δ_{ξ_j} → dµ_K,  N → ∞,

  where µ_K is the pluripotential equilibrium measure of K.

  25. Approximate Fekete Points: algorithm Idea: greedy maximization of submatrix volumes [Sommariva/Vianello ETNA10]. Core: select the row of largest norm, row_{i_k}(V), and remove from each remaining row of V its orthogonal projection onto it (this preserves volumes, as with parallelograms). Implementation: QR factorization with column pivoting [Businger/Golub 1965] applied to V^t. Matlab script:

  w = V'\ones(N,1); ind = find(w ~= 0); xi = a(ind,:)

  where a is the array of the WAM.
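A Python analogue of the Matlab one-liner, making the column-pivoted QR explicit (scipy's `qr` with `pivoting=True` plays the role of Matlab's backslash here); the Chebyshev-Lobatto WAM and the monomial basis are illustrative choices:

```python
import numpy as np
from scipy.linalg import qr

n = 5
N = (n + 1) * (n + 2) // 2          # dim P_n^2 = 21 for n = 5

# WAM candidates: a (2n+1) x (2n+1) Chebyshev-Lobatto tensor grid on [-1,1]^2
pts = np.cos(np.pi * np.arange(2 * n + 1) / (2 * n))
gx, gy = np.meshgrid(pts, pts)
a = np.column_stack([gx.ravel(), gy.ravel()])

# M x N Vandermonde matrix (rows = mesh points, columns = basis functions)
exps = [(i, j) for i in range(n + 1) for j in range(n + 1 - i)]
V = np.column_stack([a[:, 0]**i * a[:, 1]**j for i, j in exps])

# Greedy volume maximization = column-pivoted QR of V^T; the first N
# pivot indices select the approximate Fekete points
_, _, piv = qr(V.T, pivoting=True)
afp = a[piv[:N]]

# Unisolvence check: the N x N Vandermonde at the AFP is well conditioned
Vf = np.column_stack([afp[:, 0]**i * afp[:, 1]**j for i, j in exps])
print(afp.shape, np.linalg.cond(Vf))
```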
