Introduction to quantum Monte Carlo methods: Lectures I and II


  1. Introduction to quantum Monte Carlo methods: Lectures I and II Claudia Filippi Instituut-Lorentz, Universiteit Leiden, The Netherlands Summer School: QMC from Minerals and Materials to Molecules July 9-19, 2007, University of Illinois at Urbana-Champaign

  2. A quick reminder: what is electronic structure theory? A quantum mechanical, first-principles approach: a collection of ions + electrons, with only input Z_α and N_α. Work in the Born-Oppenheimer approximation and solve the Schrödinger equation for the electrons in the ionic field:

H = -\frac{1}{2}\sum_i \nabla_i^2 + \sum_i v_{\text{ext}}(r_i) + \frac{1}{2}\sum_{i \neq j} \frac{1}{|r_i - r_j|}

  3. Solving the many-electron Schrödinger equation

H = -\frac{1}{2}\sum_i \nabla_i^2 + \sum_i v_{\text{ext}}(r_i) + \frac{1}{2}\sum_{i \neq j} \frac{1}{|r_i - r_j|}

What do we want to compute? The fermionic ground state and low-lying excited states. Evaluate expectation values

\frac{\langle \Psi_n | \mathcal{O} | \Psi_n \rangle}{\langle \Psi_n | \Psi_n \rangle}

Where is the difficulty? The electron-electron interaction → non-separable.

  4. Is there an optimal theoretical approach?
• Density functional theory methods: large systems but approximate exchange/correlation
• Quantum chemistry post-Hartree-Fock methods (CI, MCSCF, CC, ...): very accurate on small systems
• Quantum Monte Carlo techniques: fully-correlated calculations, stochastic solution of the Schrödinger equation, most accurate benchmarks for medium-large systems

  5. An analogy [figure: side-by-side cartoons comparing density functional theory, quantum chemistry, and quantum Monte Carlo]

  6. If you can, use density functional theory! [schematic: human time vs. computational cost. Density functional theory scales as N³; quantum Monte Carlo as N⁴; wave function methods of quantum chemistry as > N⁶]

  7. All is relative . . . We think of density functional theory as cheap and painless!

  8. . . . but density functional theory does not always work. A "classical" example: adsorption/desorption of H₂ on Si(001). For a small model cluster:

            E_ads     E_des     E_rxn
  DFT       0.69      2.86      2.17      (eV)
  QMC       1.01(6)   3.65(6)   2.64(6)

DFT error persists for larger models!

  9. Favorable scaling of QMC with system size. QMC is possible for realistic clusters with 2, 3, 4 . . . surface dimers: accurate QMC calculations are doable from small to large scales. The error of DFT is large → 0.8 eV on the desorption barrier! Healy, Filippi et al., PRL (2001); Filippi et al., PRL (2002)

  10. What about DFT and excited states? Two candidates: the restricted open-shell Kohn-Sham method (DFT-ROKS) and time-dependent density functional theory (TDDFT). [plot: "S0-S1 adiabatic excitation: ROKS geometries"; excitation energy (eV) vs. torsional angle (deg), ROKS and TDDFT curves for the minimal model of rhodopsin, C₅H₆NH₂⁺ + hν] Comparison with QMC → neither approach is reliable.

  11. When DFT has problems → wave function based methods. Wave function Ψ(x₁, ..., x_N), where x = (r, σ) and σ = ±1. How do we compute expectation values? Many-body wave functions in traditional quantum chemistry: the interacting Ψ(x₁, ..., x_N) ↔ a one-particle basis ψ(x). Ψ is expanded in determinants of single-particle orbitals ψ(x), and the single-particle orbitals are expanded on a Gaussian basis ⇒ all integrals can be computed analytically.

  12. Many-body wave functions in traditional quantum chemistry. A jungle of acronyms: CI, CASSCF, MRCI, CASPT2 . . . Expansion in a linear combination of determinants:

\Psi(x_1, \ldots, x_N) = c_0 D_{\text{HF}} + c_1 D_1 + c_2 D_2 + \ldots \quad \text{(millions of determinants)}

where the Hartree-Fock determinant is

D_{\text{HF}} = \begin{vmatrix} \psi_1(x_1) & \cdots & \psi_1(x_N) \\ \vdots & & \vdots \\ \psi_N(x_1) & \cdots & \psi_N(x_N) \end{vmatrix}

and excited determinants such as D₁ replace occupied orbitals with virtual ones (e.g. ψ_N → ψ_{N+1}). Integrals are computed analytically, but the expansion is slowly converging. (A small numerical illustration follows below.)
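As a small numerical illustration (not from the lecture) of how a Slater determinant is evaluated, here is a minimal Python sketch; the one-dimensional Gaussian "orbitals" are hypothetical placeholders for real molecular orbitals:

```python
import numpy as np

def slater_determinant(orbitals, positions):
    """Evaluate D = det[psi_j(x_i)] for N orbitals at N electron positions."""
    n = len(positions)
    # Slater matrix: rows = electrons, columns = orbitals
    mat = np.array([[orbitals[j](positions[i]) for j in range(n)]
                    for i in range(n)])
    return np.linalg.det(mat)

# Toy one-dimensional "orbitals" (made up for illustration only)
orbs = [lambda x, k=k: x**k * np.exp(-x**2 / 2.0) for k in range(3)]

D = slater_determinant(orbs, np.array([-1.0, 0.2, 1.3]))
print(D)  # antisymmetric: swapping two positions flips the sign
```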

  13. Can we use a more compact Ψ? We want to construct an accurate and more compact Ψ with explicit dependence on the inter-electronic distances r_ij. How do we compute expectation values if there is no single-electron basis?

  14. A different way of writing the expectation values. Consider the expectation value of the Hamiltonian on Ψ:

E_V = \frac{\langle \Psi | H | \Psi \rangle}{\langle \Psi | \Psi \rangle} = \frac{\int dR\, \Psi^*(R)\, H \Psi(R)}{\int dR\, \Psi^*(R)\, \Psi(R)} \geq E_0

E_V = \int dR\, \frac{|\Psi(R)|^2}{\int dR'\, |\Psi(R')|^2}\, \frac{H\Psi(R)}{\Psi(R)} = \int dR\, E_L(R)\, \rho(R) = \langle E_L(R) \rangle_\rho

where ρ is a distribution function and E_L(R) = HΨ(R)/Ψ(R) is the local energy.
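A worked example, not from the slides: for the hydrogen atom with the standard textbook trial function Ψ_α(r) = e^{-αr}, the local energy can be written in closed form,

E_L(r) = \frac{H\Psi_\alpha(r)}{\Psi_\alpha(r)} = -\frac{1}{2}\left(\alpha^2 - \frac{2\alpha}{r}\right) - \frac{1}{r} = -\frac{\alpha^2}{2} + \frac{\alpha - 1}{r}

using ∇²e^{-αr} = (α² - 2α/r) e^{-αr}. For α = 1 (the exact ground state), E_L(r) = -1/2 everywhere, anticipating the zero-variance principle of slide 17.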

  15. Variational Monte Carlo: a random walk of the electrons. Use Monte Carlo integration to compute expectation values:
⊲ Sample R from ρ(R) using the Metropolis algorithm
⊲ Average the local energy E_L(R) = HΨ(R)/Ψ(R) to obtain E_V as

E_V = \langle E_L(R) \rangle_\rho \approx \frac{1}{M} \sum_{i=1}^{M} E_L(R_i)

A random walk in 3N dimensions, R = (r₁, ..., r_N). Just a trick to evaluate integrals in many dimensions. (A code sketch of the averaging step follows below.)
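A minimal sketch of the averaging step in Python, not from the lecture; it assumes the samples R_i have already been produced by a Metropolis walk (as sketched after slide 24), and `local_energy` is a user-supplied placeholder function:

```python
import numpy as np

def vmc_energy(samples, local_energy):
    """Average the local energy over configurations sampled from rho = |Psi|^2."""
    e_loc = np.array([local_energy(R) for R in samples])
    return e_loc.mean()

# Example with the hydrogen-atom trial function Psi = exp(-alpha r):
# E_L(r) = -alpha^2/2 + (alpha - 1)/r  (worked out after slide 14)
alpha = 0.8
local_energy = lambda R: -0.5 * alpha**2 + (alpha - 1.0) / np.linalg.norm(R)

rng = np.random.default_rng(0)
# placeholder samples; a real run would use Metropolis-sampled configurations
samples = rng.normal(size=(1000, 3))
print(vmc_energy(samples, local_energy))
```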

  16. Is it really "just" a trick? Si₂₁H₂₂: number of electrons 4 × 21 + 22 = 106; number of dimensions 3 × 106 = 318. An integral on a grid with 10 points per dimension → 10³¹⁸ points! MC is a powerful trick ⇒ freedom in the form of the wave function Ψ.

  17. Are there any conditions on the many-body Ψ to be used in VMC? Within VMC, we can use any "computable" wave function if it is:
⊲ Continuous, normalizable, of proper symmetry
⊲ Of finite variance,

\sigma^2 = \frac{\langle \Psi | (H - E_V)^2 | \Psi \rangle}{\langle \Psi | \Psi \rangle} = \langle (E_L(R) - E_V)^2 \rangle_\rho

since the Monte Carlo error goes as err(E_V) ∼ σ/√M. Zero-variance principle: if Ψ → Ψ₀, E_L(R) does not fluctuate. (A numerical sketch follows below.)
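A short sketch, not from the slides, of how σ and the statistical error would be estimated from samples, illustrating the zero-variance principle with the hydrogen local energy from the earlier worked example (the radial samples here are placeholders, not a real VMC walk):

```python
import numpy as np

def energy_and_error(e_loc):
    """Sample mean, variance, and naive statistical error sigma/sqrt(M)."""
    M = len(e_loc)
    return e_loc.mean(), e_loc.var(), np.sqrt(e_loc.var() / M)

rng = np.random.default_rng(1)
r = rng.exponential(scale=0.5, size=10000)  # placeholder radial samples

for alpha in (0.8, 1.0):  # alpha = 1 is the exact hydrogen ground state
    e_loc = -0.5 * alpha**2 + (alpha - 1.0) / r
    print(alpha, energy_and_error(e_loc))
# at alpha = 1.0 the local energy is exactly -0.5: zero variance, zero error
```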

  18. Variational Monte Carlo and the generalized Metropolis algorithm. How do we sample the distribution function

\rho(R) = \frac{|\Psi(R)|^2}{\int dR\, |\Psi(R)|^2} \; ?

Aim → obtain a set {R₁, R₂, ..., R_M} distributed as ρ(R). Let us generate a Markov chain:
⊲ Start from an arbitrary initial state R_i
⊲ Use a stochastic transition matrix M(R_f|R_i), with M(R_f|R_i) ≥ 0 and Σ_{R_f} M(R_f|R_i) = 1, as the probability of making the transition R_i → R_f
⊲ Evolve the system by repeated application of M

  19. Stationarity condition. To sample ρ, use an M which satisfies the stationarity condition:

\sum_{R_i} M(R_f | R_i)\, \rho(R_i) = \rho(R_f) \quad \forall\, R_f

⊲ Stationarity condition ⇒ if we start with ρ, we continue to sample ρ
⊲ Stationarity condition + stochastic property of M + ergodicity ⇒ any initial distribution will evolve to ρ

  20. More stringent condition. In practice, we impose the detailed balance condition

M(R_f | R_i)\, \rho(R_i) = M(R_i | R_f)\, \rho(R_f)

The stationarity condition can be obtained by summing over R_i:

\sum_{R_i} M(R_f | R_i)\, \rho(R_i) = \sum_{R_i} M(R_i | R_f)\, \rho(R_f) = \rho(R_f)

Detailed balance is a sufficient but not necessary condition. (A small numerical check follows below.)
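A tiny numerical check, not in the slides: on a discrete state space one can build a Metropolis chain for an arbitrary target ρ and verify both detailed balance and stationarity directly (the three-state ρ below is made up for illustration):

```python
import numpy as np

rho = np.array([0.5, 0.3, 0.2])          # made-up target distribution
n = len(rho)
T = np.full((n, n), 1.0 / n)             # symmetric uniform proposal

# Metropolis transition matrix M(f|i) = T(f|i) * min(1, rho_f/rho_i), f != i
M = np.zeros((n, n))
for i in range(n):
    for f in range(n):
        if f != i:
            M[f, i] = T[f, i] * min(1.0, rho[f] / rho[i])
    M[i, i] = 1.0 - M[:, i].sum()        # rejected moves stay at i

# detailed balance: M(f|i) rho_i == M(i|f) rho_f
assert np.allclose(M * rho[None, :], (M * rho[None, :]).T)
# stationarity: sum_i M(f|i) rho_i = rho_f
assert np.allclose(M @ rho, rho)
print("detailed balance and stationarity hold")
```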

  21. How do we construct the transition matrix M in practice? Write the transition matrix M as proposal T × acceptance A:

M(R_f | R_i) = A(R_f | R_i)\, T(R_f | R_i)

M and T are stochastic matrices but A is not. Rewriting the detailed balance condition M(R_f|R_i)ρ(R_i) = M(R_i|R_f)ρ(R_f):

A(R_f | R_i)\, T(R_f | R_i)\, \rho(R_i) = A(R_i | R_f)\, T(R_i | R_f)\, \rho(R_f)

or

\frac{A(R_f | R_i)}{A(R_i | R_f)} = \frac{T(R_i | R_f)\, \rho(R_f)}{T(R_f | R_i)\, \rho(R_i)}

  22. Choice of acceptance matrix A (1). The detailed balance condition is

A(R_f | R_i)\, T(R_f | R_i)\, \rho(R_i) = A(R_i | R_f)\, T(R_i | R_f)\, \rho(R_f)

For a given choice of T, infinite choices of A satisfy this equation:

A(R_f | R_i) = F\left(\frac{T(R_i | R_f)\, \rho(R_f)}{T(R_f | R_i)\, \rho(R_i)}\right)

Any function with F(x)/F(1/x) = x will do the job!
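As a quick check, not from the slides: both the Metropolis choice F(x) = min(1, x) of the next slide and the heat-bath-like choice F(x) = x/(1+x) satisfy F(x)/F(1/x) = x, as a few random x values confirm:

```python
import numpy as np

choices = {
    "metropolis": lambda x: np.minimum(1.0, x),
    "heat-bath": lambda x: x / (1.0 + x),
}

x = np.random.default_rng(2).uniform(0.1, 10.0, size=1000)
for name, F in choices.items():
    assert np.allclose(F(x) / F(1.0 / x), x)
    print(name, "satisfies F(x)/F(1/x) = x")
```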

  23. Choice of acceptance matrix A (2). The original choice by Metropolis et al. maximizes the acceptance:

A(R_f | R_i) = \min\left\{1,\; \frac{T(R_i | R_f)\, \rho(R_f)}{T(R_f | R_i)\, \rho(R_i)}\right\}

Note: ρ(R) does not have to be normalized. Original Metropolis method: symmetric T(R_f|R_i) = 1/Δ^{3N} ⇒

A(R_f | R_i) = \min\left\{1,\; \frac{\rho(R_f)}{\rho(R_i)}\right\}

  24. Original Metropolis method. Aim → obtain a set {R₁, R₂, ..., R_M} distributed as ρ(R). Operationally, a simple algorithm (a code sketch follows below):
1. Pick a starting R and evaluate ρ(R)
2. Choose R′ at random in a box centered at R
3. If ρ(R′) ≥ ρ(R), the move is accepted → put R′ in the set
4. If ρ(R′) < ρ(R), the move is accepted with probability p = ρ(R′)/ρ(R). To do this, pick a random number χ ∈ [0, 1]:
   a) If χ < p, the move is accepted → put R′ in the set
   b) If χ > p, the move is rejected → put another copy of R in the set
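A minimal, self-contained Python sketch of this algorithm, not taken from the lecture: it samples ρ = |Ψ|² for the hydrogen-atom trial function Ψ = e^{-αr} used in the earlier examples and accumulates the local energy:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, delta, M = 0.8, 1.0, 50000   # trial parameter, box size, number of steps

psi2 = lambda R: np.exp(-2.0 * alpha * np.linalg.norm(R))  # rho up to a constant
e_loc = lambda R: -0.5 * alpha**2 + (alpha - 1.0) / np.linalg.norm(R)

R = np.array([1.0, 0.0, 0.0])       # arbitrary starting configuration
samples = []
for _ in range(M):
    Rp = R + delta * rng.uniform(-0.5, 0.5, size=3)   # uniform box move
    if rng.uniform() < psi2(Rp) / psi2(R):            # Metropolis acceptance
        R = Rp                                        # accept the move
    samples.append(e_loc(R))                          # rejected moves recount R

e = np.array(samples[5000:])        # discard equilibration steps
print("E_V ~", e.mean(), "+/-", e.std() / np.sqrt(len(e)))
# exact variational energy for alpha = 0.8 is alpha^2/2 - alpha = -0.48 hartree
```

The naive error bar printed here ignores the serial correlation of the walk, which is exactly the issue taken up on the next slide.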

  25. Choice of proposal matrix T (1). Is the original choice of T by Metropolis the best possible choice? The walk is sequentially correlated ⇒ M_eff < M independent observations:

M_{\text{eff}} = \frac{M}{T_{\text{corr}}}

with T_corr the autocorrelation time of the desired observable. The aim is to achieve fast evolution of the system and reduce T_corr. Use the freedom in the choice of T to have high acceptance,

\frac{T(R_i | R_f)\, \rho(R_f)}{T(R_f | R_i)\, \rho(R_i)} \approx 1 \;\Rightarrow\; A(R_f | R_i) \approx 1

and a small T_corr of the desired observable. Limitation: we need to be able to sample T directly! (A sketch of a blocking analysis follows below.)
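One common way to account for T_corr in practice is a blocking (binning) analysis; a sketch, not from the slides, that could be applied to the local-energy series from the previous example (the AR(1) series below is a made-up stand-in for correlated data):

```python
import numpy as np

def blocked_error(series, block_size):
    """Error bar from block averages; grows with block size until the
    blocks are longer than the autocorrelation time, then plateaus."""
    nblocks = len(series) // block_size
    blocks = series[: nblocks * block_size].reshape(nblocks, block_size)
    return blocks.mean(axis=1).std(ddof=1) / np.sqrt(nblocks)

# correlated toy series (AR(1) process standing in for E_L along the walk)
rng = np.random.default_rng(4)
x, series = 0.0, []
for _ in range(100000):
    x = 0.9 * x + rng.normal()
    series.append(x)
series = np.array(series)

for b in (1, 10, 100, 1000):
    print(b, blocked_error(series, b))   # plateau value is the honest error
```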

  26. Choice of proposal matrix T (2). If Δ is the linear dimension of the domain around R_i:

\frac{A(R_f | R_i)}{A(R_i | R_f)} = \frac{T(R_i | R_f)\, \rho(R_f)}{T(R_f | R_i)\, \rho(R_i)} \approx 1 - O(\Delta^m)

⊲ T symmetric as in the original Metropolis algorithm gives m = 1
⊲ A choice motivated by diffusion Monte Carlo with m = 2 is

T(R_f | R_i) = \mathcal{N} \exp\left[-\frac{(R_f - R_i - V(R_i)\,\tau)^2}{2\tau}\right] \quad \text{with} \quad V(R_i) = \frac{\nabla \Psi(R_i)}{\Psi(R_i)}

⊲ Other (better) choices of T are possible (a code sketch of the drift-diffusion proposal follows below)
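A sketch of this drift-diffusion proposal for the same hydrogen trial function, again not from the lecture; note that with an asymmetric T the acceptance must include the T ratio:

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, tau = 0.8, 0.1

ln_psi2 = lambda R: -2.0 * alpha * np.linalg.norm(R)
drift = lambda R: -alpha * R / np.linalg.norm(R)  # grad(Psi)/Psi for exp(-alpha r)

def ln_T(Rf, Ri):
    """Log of the Gaussian drift-diffusion proposal density (up to a constant)."""
    d = Rf - Ri - drift(Ri) * tau
    return -np.dot(d, d) / (2.0 * tau)

R = np.array([1.0, 0.0, 0.0])
acc = 0
for step in range(20000):
    Rp = R + drift(R) * tau + np.sqrt(tau) * rng.normal(size=3)
    # Metropolis-Hastings ratio: T(R|R') rho(R') / (T(R'|R) rho(R))
    ln_ratio = ln_T(R, Rp) + ln_psi2(Rp) - ln_T(Rp, R) - ln_psi2(R)
    if rng.uniform() < np.exp(min(0.0, ln_ratio)):
        R, acc = Rp, acc + 1
print("acceptance:", acc / 20000)   # close to 1 for small tau
```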
