Low-Rank Tensor Approximation of Multivariate Functions

Low-Rank Tensor Approximation of Multivariate Functions for Quantum Chemistry Applications - PowerPoint PPT Presentation

Low-Rank Tensor Approximation of Multivariate Functions for Quantum Chemistry Applications. Prashant Rai, Sandia National Laboratories, Livermore, CA. Joint work with: M. Hermes, S. Hirata (UIUC); K. Sargsyan, H. Najm (Sandia). Dec 3, 2016.


1. Low-Rank Tensor Approximation of Multivariate Functions for Quantum Chemistry Applications. Prashant Rai, Sandia National Laboratories, Livermore, CA. Joint work with: M. Hermes, S. Hirata (UIUC); K. Sargsyan, H. Najm (Sandia). Dec 3, 2016, BASCD, Stanford.

2. High Dimensional Integrals are Difficult!
   What we want: $I(u) = \int_\Omega u(x)\, dx$, where $x = (x_1, \ldots, x_m)$ and $m = 10, 100, 1000, \ldots$ Find $I(u)$ by sampling $u(x)$ at random or well-chosen points.
   Why is it difficult? The amount of information (or samples) needed to integrate a high-dimensional function increases exponentially with dimension.
   Is there a way? Monte Carlo: sample the function at a large number of (quasi-)random points and compute the average as $I(u)$. The convergence rate is independent of dimension, but the error depends on the variance of the function.
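
A minimal Monte Carlo sketch (not from the slides) of estimating $I(u)$ over the unit hypercube by averaging random samples; the test function, seed, and sample count are illustrative assumptions:

```python
import numpy as np

def monte_carlo_integral(u, m, n_samples=100_000, seed=0):
    """Estimate the integral of u over the unit hypercube [0, 1]^m
    by averaging u at uniformly random points (volume of the cube is 1)."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_samples, m))                       # random points in [0, 1]^m
    values = u(x)                                        # u evaluated row-wise
    std_error = values.std(ddof=1) / np.sqrt(n_samples)  # error ~ sigma / sqrt(N)
    return values.mean(), std_error

# Assumed separable test function; its exact integral is (1 - 1/e)^m.
u = lambda x: np.prod(np.exp(-x), axis=1)
print(monte_carlo_integral(u, m=10))
```

The reported standard error shrinks like $\sigma/\sqrt{N}$ regardless of $m$, which is exactly the dimension-independence (and variance dependence) noted on the slide.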

5. Key Idea: Separated Integration
   Integration problem: $I(u) = \int_\Omega u(x)\, \rho(x)\, dx$, with $u(x) = u(x_1, \ldots, x_m)$ and $\rho(x) = \rho(x_1) \cdots \rho(x_m)$.
   Low-rank approximation of $u(x)$: $u(x) \approx v(x_1, \ldots, x_m) = \sum_{\mu=1}^{r} v^{(1)}_\mu(x_1) \cdots v^{(m)}_\mu(x_m)$.
   Separated integration: $I(u) \approx I(v) = \sum_{\mu=1}^{r} \left( \int_{\Omega_1} v^{(1)}_\mu(x_1)\, \rho(x_1)\, dx_1 \right) \cdots \left( \int_{\Omega_m} v^{(m)}_\mu(x_m)\, \rho(x_m)\, dx_m \right)$.
   How to construct a low-rank approximation of $u(x)$?
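
A hedged sketch of separated integration for a rank-$r$ approximation: each rank-one term is integrated as a product of one-dimensional quadratures. The domain $[-1, 1]^m$, the weight $\rho \equiv 1$, and the illustrative factor functions are assumptions:

```python
import numpy as np

def integrate_separated(factors, a=-1.0, b=1.0, n_quad=20):
    """Integrate v(x) = sum_mu prod_i v_mu^{(i)}(x_i) over [a, b]^m (rho = 1 assumed)
    as a sum over rank-one terms of products of 1-D Gauss-Legendre quadratures.
    factors[mu][i] is the callable factor v_mu^{(i)}."""
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map nodes from [-1, 1] to [a, b]
    w = 0.5 * (b - a) * weights
    total = 0.0
    for term in factors:                        # each rank-one term ...
        prod = 1.0
        for v_i in term:                        # ... is a product of m 1-D integrals
            prod *= w @ v_i(x)
        total += prod
    return total

# Assumed rank-2 separated approximation of a 3-variate function.
factors = [[np.cos, np.cos, np.cos],
           [np.sin, np.sin, np.sin]]
print(integrate_separated(factors))
```

The cost grows like $r \cdot m$ one-dimensional integrals instead of one $m$-dimensional integral.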

6. Functional Representation
   Linear approximation: $u(x) \approx \sum_{j=1}^{n} u_j \phi_j(x)$, where $u_j \in \mathbb{R}$ are coefficients and $\phi_j(x)$ are basis functions.
   How should we choose the basis set?
   - Simplicity: polynomial or trigonometric functions.
   - Low cardinality: small $n$.
   Problem: in high dimensions, these are competing objectives!
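
As a small illustration of such a linear expansion in one variable, a sketch with an assumed Legendre basis and assumed coefficients:

```python
import numpy as np

# u(x) ≈ sum_j u_j * phi_j(x), with phi_j taken as Legendre polynomials (assumption)
coeffs = np.array([1.0, 0.5, -0.25, 0.1])            # n = 4 assumed coefficients u_j
x = np.linspace(-1.0, 1.0, 5)
print(np.polynomial.legendre.legval(x, coeffs))      # evaluates the expansion at x
```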

7. Curse of dimensionality in approximation
   Approximation of a bivariate function $u(x, y)$: $u(x, y) \approx \sum_{j_1=1}^{n} \sum_{j_2=1}^{n} u_{j_1 j_2}\, \phi^{(1)}_{j_1}(x)\, \phi^{(2)}_{j_2}(y)$.
   (Figure: the tensor-product grid of monomials $1, x, x^2, x^3$ against $1, y, y^2, y^3$.)
   Approximation of an $m$-variate function $u(x_1, \ldots, x_m)$: $u(x_1, \ldots, x_m) \approx \sum_{j_1=1}^{n} \cdots \sum_{j_m=1}^{n} u_{j_1 \cdots j_m}\, \phi^{(1)}_{j_1}(x_1) \cdots \phi^{(m)}_{j_m}(x_m)$, i.e. $n^m$ coefficients.
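
The full tensor-product basis therefore carries $n^m$ unknown coefficients; a quick check of that growth under assumed values of $n$ and $m$:

```python
# Size of the full tensor-product coefficient set: n^m
n = 10                                  # basis functions per dimension (assumed)
for m in (2, 5, 10, 100):               # dimensions (assumed)
    print(f"m = {m:3d}: n^m = {float(n) ** m:.3e} coefficients")
```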

8. In Search of Low Rank Structures
   Low-rank structure from the Singular Value Decomposition. (Figure: a matrix $U$ approximated by a sum of outer products $v^{(1)}_1 \otimes v^{(2)}_1 + v^{(1)}_2 \otimes v^{(2)}_2$.)
   Separated representation of a function ($m = 2$): $u(x, y) \approx v^{(1)}_1(x)\, v^{(2)}_1(y) + v^{(1)}_2(x)\, v^{(2)}_2(y) + \cdots$
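
A sketch of how the truncated SVD exposes this separated structure for a function sampled on a grid; the grid, the test function, and the rank cutoff are assumptions:

```python
import numpy as np

# Sample an assumed bivariate test function on a grid and truncate its SVD.
x = np.linspace(-1.0, 1.0, 200)
y = np.linspace(-1.0, 1.0, 200)
U_grid = (np.exp(-(x[:, None] ** 2 + y[None, :] ** 2))                 # separable term
          + np.sin(np.pi * x)[:, None] * np.cos(np.pi * y)[None, :])   # second rank-one term
Uf, s, Vt = np.linalg.svd(U_grid, full_matrices=False)
r = 2                                                 # assumed truncation rank
U_r = (Uf[:, :r] * s[:r]) @ Vt[:r, :]                 # sum of r outer products
print("relative error at rank 2:",
      np.linalg.norm(U_grid - U_r) / np.linalg.norm(U_grid))
```

Because the assumed test function is exactly a sum of two separable terms, the rank-2 truncation reproduces it to machine precision.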

9. Example
   (Figure: a bivariate function $u(x, y)$ decomposed into the rank-one terms $v^{(1)}_1(x)\, v^{(2)}_1(y)$ and $v^{(1)}_2(x)\, v^{(2)}_2(y)$.)

10. Canonical Tensor Format
   Generalization of the SVD for $m = 3$. (Figure: a third-order tensor $U$ approximated by a sum of rank-one terms $v^{(1)}_\mu \otimes v^{(2)}_\mu \otimes v^{(3)}_\mu$, $\mu = 1, \ldots, r$.)
   Representation of a function in canonical format: $u(x) \approx v(x_1, \ldots, x_m) = \sum_{\mu=1}^{r} v^{(1)}_\mu(x_1) \cdots v^{(m)}_\mu(x_m)$, with $v^{(i)}_\mu(x_i) = \sum_{j=1}^{n} v^{(i)}_{\mu, j}\, \phi^{(i)}_j(x_i)$.
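
A sketch of assembling a rank-$r$ canonical (CP) tensor from its factor matrices for $m = 3$; the mode sizes, rank, and random factors are assumptions:

```python
import numpy as np

def cp_to_full(factors):
    """Assemble U[i1,...,im] = sum_mu prod_k factors[k][i_k, mu]
    from canonical (CP) factor matrices of shape (n_k, r)."""
    r = factors[0].shape[1]
    shape = tuple(f.shape[0] for f in factors)
    U = np.zeros(shape)
    for mu in range(r):                       # accumulate rank-one terms
        term = factors[0][:, mu]
        for f in factors[1:]:
            term = np.multiply.outer(term, f[:, mu])
        U += term
    return U

rng = np.random.default_rng(0)
r, sizes = 3, (4, 5, 6)                       # assumed rank and mode sizes
factors = [rng.standard_normal((n, r)) for n in sizes]
print(cp_to_full(factors).shape)              # (4, 5, 6)
```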

11. Least-Squares Approximation
   Functional representation: $u(x) = \sum_{i=1}^{n} v_i \phi_i(x)$, i.e. $u = \Phi v$, where
   - $\Phi \in \mathbb{R}^{S \times n}$, $\Phi_{si} = \phi_i(x_s)$: measurement matrix,
   - $v \in \mathbb{R}^n$: coefficient vector,
   - $u \in \mathbb{R}^S$, $u_s = u(x_s)$: vector of function evaluations.
   Optimization problem: $\hat{v} = \arg\min_{v \in \mathbb{R}^n} \| u - \Phi v \|_2^2$, solved via the normal equations $(\Phi^\top \Phi)\, \hat{v} = \Phi^\top u$.
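
A sketch of the least-squares coefficient fit from $S$ sampled evaluations; the basis (monomials), sample points, and target function are assumptions, and `np.linalg.lstsq` is used rather than forming the normal equations explicitly:

```python
import numpy as np

rng = np.random.default_rng(1)
S, n = 200, 6                                      # samples and basis size (assumed)
x_s = rng.uniform(-1.0, 1.0, S)                    # sample points
u_s = np.exp(x_s)                                  # assumed target evaluations u_s = u(x_s)

# Measurement matrix Phi[s, i] = phi_i(x_s), monomial basis assumed
Phi = np.vander(x_s, n, increasing=True)

# Solve min_v ||u - Phi v||_2^2 (equivalent to (Phi^T Phi) v = Phi^T u)
v_hat, *_ = np.linalg.lstsq(Phi, u_s, rcond=None)
print("residual norm:", np.linalg.norm(u_s - Phi @ v_hat))
```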

12. ALS in rank one format
   Consider a rank-one approximation of a 3-d function: $u(x) \approx v^{(1)}(x_1)\, v^{(2)}(x_2)\, v^{(3)}(x_3)$.
   ALS (alternating least squares):
   - ALS in dimension 1: $u(x) \approx \left( \sum_{j=1}^{n} v^{(1)}_j \phi^{(1)}_j(x_1) \right) v^{(2)}(x_2)\, v^{(3)}(x_3)$
   - ALS in dimension 2: $u(x) \approx v^{(1)}(x_1) \left( \sum_{j=1}^{n} v^{(2)}_j \phi^{(2)}_j(x_2) \right) v^{(3)}(x_3)$
   - ALS in dimension 3: $u(x) \approx v^{(1)}(x_1)\, v^{(2)}(x_2) \left( \sum_{j=1}^{n} v^{(3)}_j \phi^{(3)}_j(x_3) \right)$
   (Figure: a rank-one tensor $v^{(1)}_1 \otimes v^{(2)}_1 \otimes v^{(3)}_1$ approximating $U$.)
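
A minimal sketch of the rank-one ALS loop for $m = 3$, working directly on grid values of $u$ rather than through a basis expansion (a simplification assumed here for brevity): with two factors frozen, the remaining factor has a closed-form least-squares update.

```python
import numpy as np

def als_rank_one(U, n_sweeps=20, seed=0):
    """Fit U[i,j,k] ≈ a[i] * b[j] * c[k] by alternating least squares:
    each step updates one factor with the other two held fixed."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(U.shape[0])
    b = rng.standard_normal(U.shape[1])
    c = rng.standard_normal(U.shape[2])
    for _ in range(n_sweeps):
        a = np.einsum('ijk,j,k->i', U, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', U, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', U, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# Exactly rank-one test tensor (assumed), so the fit should be near-exact.
x = np.linspace(0.0, 1.0, 10)
U = np.einsum('i,j,k->ijk', np.exp(-x), np.cos(x), x + 1.0)
a, b, c = als_rank_one(U)
err = np.linalg.norm(U - np.einsum('i,j,k->ijk', a, b, c)) / np.linalg.norm(U)
print("relative error:", err)
```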

15. ALS in rank one format (contd.)
   Same rank-one alternating updates as in slide 12. (Figure: a rank-two approximation $U \approx v^{(1)}_1 \otimes v^{(2)}_1 \otimes v^{(3)}_1 + v^{(1)}_2 \otimes v^{(2)}_2 \otimes v^{(3)}_2$.)

16. Quantum Chemistry Application
   - Accurate prediction of vibrational spectra of molecules using perturbation theory requires an anharmonic approximation of the potential energy surface, first- and second-order corrections to the energy, and first- and second-order corrections to the frequencies per degree of freedom.
   - Energy and frequency corrections are formulated as non-singular integrals, which can be high order and multi-centered.
   Objective: improve integration efficiency and scalability.

17. First Order Corrections
   $I_1 = \int \Phi_0(x)\, \Delta V(x)\, \Phi_0(x)\, \lambda_1(x)\, dx$, with $\Delta V(x) = V(x) - V_{\mathrm{ref}} - \tfrac{1}{2} \sum_{i=1}^{m} \omega_i^2 x_i^2$.
   - $V(x)$: potential energy (PE) containing up to $n$-th order force constants; as a default scenario, $n = 4$.
   - $V_{\mathrm{ref}}$: the minimum/reference PE, i.e. at the equilibrium geometry.
   - $\omega_i$: the $i$-th mode frequency of a reference mean-field theory, found by solving the Dyson equation.
   $\Phi_0(x) = \prod_{i=1}^{m} \eta_0(x_i)$, where $\eta_{n_i}(x_i) = C(n_i, \omega_i)\, e^{-\omega_i x_i^2 / 2}\, h_{n_i}(\sqrt{\omega_i}\, x_i)$.
   - $m$: vibrational degrees of freedom; for H2O (water) $m = 3$, and for H2CO (formaldehyde) $m = 6$.
   - $\eta_{n_i}(x_i)$: the harmonic-oscillator wave function with quantum number $n_i$ along the $i$-th normal mode $x_i$.
   - $h_{n_i}$: Hermite polynomial of degree $n_i$.
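
A hedged sketch of evaluating one harmonic-oscillator factor $\eta_n$ with physicists' Hermite polynomials; the explicit normalization constant $C(n, \omega)$ written below is my assumption of the standard dimensionless convention ($\hbar = 1$, unit mass), since the slide leaves $C$ unspecified:

```python
import numpy as np
from math import factorial, pi, sqrt

def eta(n, omega, x):
    """Harmonic-oscillator wave function eta_n(x) = C(n, omega) * exp(-omega x^2 / 2) * H_n(sqrt(omega) x).
    C(n, omega) = (omega/pi)^(1/4) / sqrt(2^n n!) is the standard dimensionless
    normalization (hbar = mass = 1), assumed here."""
    C = (omega / pi) ** 0.25 / sqrt(2.0 ** n * factorial(n))
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                                   # select the degree-n Hermite polynomial
    Hn = np.polynomial.hermite.hermval(np.sqrt(omega) * x, coeffs)
    return C * np.exp(-omega * x ** 2 / 2.0) * Hn

# Numerical check: eta_0 should be normalized to 1 (omega value assumed).
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
print(np.sum(eta(0, 1.3, x) ** 2) * dx)
```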

18. First Order Corrections (contd.)
   $I_1 = \int \Phi_0(x)\, \Delta V(x)\, \Phi_0(x)\, \lambda_1(x)\, dx$, with $\lambda_1(x) = \prod_{i=1}^{m} \lambda^{(i)}_1(x_i)$.
   - Energy ($E_1$): $\lambda^{(i)}_1(x_i) = 1$ for $1 \le i \le m$.
   - Frequencies ($\Sigma^{(i)}_1$): $\lambda^{(i)}_1(x_i) = 2^{1/2}\, \eta_2(x_i) / \eta_0(x_i)$, and $\lambda^{(j)}_1 = 1$ for $j \ne i$.
   First-order correction integrands can be reformulated as a Gaussian times polynomials:
   $I_1 = \int_x e(x)\, P_1(x)\, dx$, where $e(x) = \prod_{i=1}^{m} e^{-\omega_i x_i^2}$ and $P_1(x) = \Delta V(x) \prod_{i=1}^{m} \lambda^{(i)}_1(x_i)$.
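
Because each one-dimensional factor integral then has the form $\int e^{-\omega x^2} p(x)\, dx$ with polynomial $p$, it can be evaluated with Gauss–Hermite quadrature after the substitution $t = \sqrt{\omega}\, x$ (exactly, once the quadrature degree exceeds the polynomial degree); a sketch with an assumed monomial factor:

```python
import numpy as np

def gaussian_poly_integral_1d(p, omega, n_quad=10):
    """Integral of exp(-omega x^2) * p(x) over the real line by Gauss-Hermite
    quadrature; exact when p is a polynomial of degree < 2 * n_quad."""
    t, w = np.polynomial.hermite.hermgauss(n_quad)   # nodes/weights for weight exp(-t^2)
    x = t / np.sqrt(omega)                           # substitution t = sqrt(omega) * x
    return (w @ p(x)) / np.sqrt(omega)

# Check with p(x) = x^4 (assumed): exact value is (3/4) sqrt(pi) omega^(-5/2).
omega = 2.0
print(gaussian_poly_integral_1d(lambda x: x ** 4, omega),
      0.75 * np.sqrt(np.pi) * omega ** -2.5)
```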

19. Second Order Corrections
   $I_2 = \int_x \int_{x'} \Phi_0(x)\, \Delta V(x)\, G_2(x, x')\, \Delta V(x')\, \Phi_0(x')\, \lambda_2(x, x')\, dx\, dx'$, using the real-space Green's function $G_2(x, x') = e(x, x')\, H(x, x')$, a Gaussian times a high-rank polynomial on a Hermite basis:
   $e(x, x') = \prod_{i=1}^{m} e^{(i)}(x_i, x'_i)$, with $e^{(i)}(x_i, x'_i) = e^{-\frac{\omega_i}{2}(x_i^2 + x'^2_i)}$;
   $H(x, x') = \sum_{n_1=1}^{n_{\max}} \cdots \sum_{n_m=1}^{n_{\max}} \frac{\prod_{i=1}^{m} C_2(n_i, \omega_i)\, h_{n_i}(\sqrt{\omega_i}\, x_i)\, h_{n_i}(\sqrt{\omega_i}\, x'_i)}{- \sum_{i=1}^{m} n_i \omega_i}$.
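
A deliberately rough sketch of the one-dimensional ($m = 1$) ingredients of $G_2$: the Gaussian factor and the truncated Hermite sum. The coefficient $C_2(n, \omega)$ is not defined on the slide; it is assumed below to be the squared oscillator normalization constant, so this illustrates only the structure, not the talk's exact definition:

```python
import numpy as np
from math import factorial, pi, sqrt

def hermite_n(n, t):
    """Physicists' Hermite polynomial H_n(t)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.polynomial.hermite.hermval(t, c)

def greens_factor_1d(x, xp, omega, n_max=10):
    """m = 1 sketch of G_2 = e(x, x') * H(x, x'): Gaussian factor times a truncated
    Hermite sum over n = 1..n_max (n = 0 is excluded: it would zero the denominator).
    C_2(n, omega) is ASSUMED to be the squared oscillator normalization constant."""
    gauss = np.exp(-0.5 * omega * (x ** 2 + xp ** 2))
    total = 0.0
    for n in range(1, n_max + 1):
        C2 = sqrt(omega / pi) / (2.0 ** n * factorial(n))        # assumed C_2(n, omega)
        total += (C2 * hermite_n(n, sqrt(omega) * x)
                     * hermite_n(n, sqrt(omega) * xp) / (-n * omega))
    return gauss * total

print(greens_factor_1d(0.3, -0.2, 1.0))
```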
