Problem reduction, memory and renormalization - Panos Stinis (PowerPoint presentation)


1. Problem reduction, memory and renormalization. Panos Stinis, Northwest Institute for Advanced Computing, Pacific Northwest National Laboratory. Stanford, June 2016.

2. A simple example. The linear differential system for x(t) and y(t) given by

\frac{dx}{dt} = x + y, \quad x(0) = x_0, \qquad \frac{dy}{dt} = -y + x, \quad y(0) = y_0,

can be reduced to an equation for x(t) alone:

\frac{dx}{dt} = x + \int_0^t e^{-(t-s)} x(s)\, ds + y_0 e^{-t}.

Reduction leads to memory effects. We want a formalism which allows us to generalize this observation to nonlinear systems of arbitrary (but finite) dimension.
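The following is a minimal numerical sketch of this reduction (my illustration, not part of the talk): the full two-variable system is integrated directly, and the reduced memory equation for x(t) is integrated with forward Euler plus a trapezoid rule for the history integral. All numerical parameters are arbitrary illustrative choices.

import numpy as np

x0, y0 = 1.0, 0.5
T, n = 2.0, 4000
dt = T / n
t = np.linspace(0.0, T, n + 1)

def rhs(z):
    # right-hand side of the full system: dx/dt = x + y, dy/dt = -y + x
    x, y = z
    return np.array([x + y, -y + x])

# Reference: the full 2x2 system integrated with classical RK4.
z = np.array([x0, y0])
x_full = np.empty(n + 1)
x_full[0] = x0
for i in range(n):
    k1 = rhs(z)
    k2 = rhs(z + 0.5 * dt * k1)
    k3 = rhs(z + 0.5 * dt * k2)
    k4 = rhs(z + dt * k3)
    z = z + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    x_full[i + 1] = z[0]

# Reduced equation with memory: dx/dt = x + int_0^t e^{-(t-s)} x(s) ds + y0 e^{-t},
# advanced with forward Euler; the history term uses the trapezoid rule on the stored x values.
x_red = np.empty(n + 1)
x_red[0] = x0
for i in range(n):
    w = np.exp(-(t[i] - t[: i + 1])) * x_red[: i + 1]   # integrand e^{-(t_i - s)} x(s)
    memory = dt * (np.sum(w) - 0.5 * (w[0] + w[-1]))    # trapezoid rule (zero when i = 0)
    x_red[i + 1] = x_red[i] + dt * (x_red[i] + memory + y0 * np.exp(-t[i]))

print("max |x_full - x_reduced| =", np.max(np.abs(x_full - x_red)))  # small, first order in dt

The two trajectories agree up to the first-order time-stepping error, which is the point of the exercise: the memory integral carries exactly the information that the eliminated variable y(t) would have supplied.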

3. The Mori-Zwanzig formalism. Zwanzig (1961), Mori (1965), Chorin, Hald, Kupferman (2000). Suppose we are given an M-dimensional system of ordinary differential equations

\frac{d\phi(u_0,t)}{dt} = R(\phi(u_0,t))   (1)

with initial condition \phi(u_0,0) = u_0. Transform into a system of linear partial differential equations

\frac{\partial}{\partial t} e^{tL} u_{0k} = L e^{tL} u_{0k}, \quad k = 1, \dots, M,

where the Liouvillian operator L = \sum_{i=1}^{M} R_i(u_0)\, \frac{\partial}{\partial u_{0i}}. Note that L u_{0j} = R_j(u_0).
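As a small worked instance (my addition, using the linear example from slide 2 with u_0 = (x_0, y_0) and R(u_0) = (x_0 + y_0, x_0 - y_0)):

L = (x_0 + y_0)\frac{\partial}{\partial x_0} + (x_0 - y_0)\frac{\partial}{\partial y_0},
\qquad L x_0 = x_0 + y_0 = R_1(u_0), \quad L y_0 = x_0 - y_0 = R_2(u_0).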

4. Derivation of the Liouville equation. Let g(u_0) be any (smooth) function of u_0 and define u(u_0,t) = g(\phi(u_0,t)). We now proceed to derive a PDE satisfied by u(u_0,t). By the chain rule,

\frac{\partial}{\partial t} u(u_0,t) = \sum_i \frac{\partial \phi_i(u_0,t)}{\partial t} \Big(\frac{\partial g}{\partial u_{0i}}\Big)(\phi(u_0,t)) = \sum_i R_i(\phi(u_0,t)) \Big(\frac{\partial g}{\partial u_{0i}}\Big)(\phi(u_0,t)).   (2)

We now want to prove that

\sum_i R_i(\phi(u_0,t)) \Big(\frac{\partial g}{\partial u_{0i}}\Big)(\phi(u_0,t)) = \sum_i R_i(u_0) \frac{\partial}{\partial u_{0i}} \big(g(\phi(u_0,t))\big).   (3)

5. First we prove the following useful identity:

R(\phi(u_0,t)) = D_{u_0}\phi(u_0,t)\, R(u_0).   (4)

In this formula D_{u_0}\phi(u_0,t) is the Jacobian of \phi(u_0,t) and multiplication on the right-hand side is a matrix-vector multiplication. Define F(u_0,t) to be the difference of the left-hand side and the right-hand side of (4):

F(u_0,t) = R(\phi(u_0,t)) - D_{u_0}\phi(u_0,t)\, R(u_0).   (5)

Then at t = 0 we have

F(u_0,0) = R(\phi(u_0,0)) - D_{u_0}\phi(u_0,0)\, R(u_0)   (6)
= R(u_0) - D_{u_0}(u_0)\, R(u_0) = R(u_0) - I\, R(u_0) \equiv 0.   (7)
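Identity (4) is easy to test numerically; here is a sketch (my illustration, not from the slides) for a toy two-dimensional nonlinear system, with the Jacobian of the flow map approximated by central differences in the initial condition. The system, time horizon and step sizes are illustrative choices.

import numpy as np

def R(u):
    # toy nonlinear right-hand side (illustrative choice)
    return np.array([-u[0] + u[0] * u[1], -2.0 * u[1] - u[0] ** 2])

def flow(u0, t, steps=2000):
    # approximate the flow map phi(u0, t) with classical RK4
    u = np.array(u0, dtype=float)
    h = t / steps
    for _ in range(steps):
        k1 = R(u)
        k2 = R(u + 0.5 * h * k1)
        k3 = R(u + 0.5 * h * k2)
        k4 = R(u + h * k3)
        u = u + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

u0 = np.array([0.7, -0.3])
t_final = 1.5
eps = 1e-6

# Jacobian D_{u0} phi(u0, t) by central differences in the initial condition
D = np.zeros((2, 2))
for j in range(2):
    e = np.zeros(2)
    e[j] = eps
    D[:, j] = (flow(u0 + e, t_final) - flow(u0 - e, t_final)) / (2.0 * eps)

lhs = R(flow(u0, t_final))   # R(phi(u0, t))
rhs = D @ R(u0)              # D_{u0} phi(u0, t) R(u0)
print("R(phi):", lhs)
print("D phi R(u0):", rhs)
print("max difference:", np.max(np.abs(lhs - rhs)))   # limited only by the finite-difference error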

6. Differentiating F with respect to t we get

\frac{\partial}{\partial t} F(u_0,t) = \frac{\partial}{\partial t} R(\phi(u_0,t)) - \frac{\partial}{\partial t}\big(D_{u_0}\phi(u_0,t)\, R(u_0)\big)
= \frac{\partial}{\partial t} R(\phi(u_0,t)) - \Big(\frac{\partial}{\partial t} D_{u_0}\phi(u_0,t)\Big) R(u_0)
= (D_{u_0}R)(\phi(u_0,t)) \cdot \frac{\partial}{\partial t}\phi(u_0,t) - \Big(D_{u_0}\frac{\partial}{\partial t}\phi(u_0,t)\Big) R(u_0)
= (D_{u_0}R)(\phi(u_0,t)) \cdot \frac{\partial}{\partial t}\phi(u_0,t) - \big(D_{u_0} R(\phi(u_0,t))\big)\, R(u_0)
= (D_{u_0}R)(\phi(u_0,t)) \cdot R(\phi(u_0,t)) - (D_{u_0}R)(\phi(u_0,t)) \cdot D_{u_0}\phi(u_0,t)\, R(u_0)
= (D_{u_0}R)(\phi(u_0,t)) \cdot \big[R(\phi(u_0,t)) - D_{u_0}\phi(u_0,t)\, R(u_0)\big]
= (D_{u_0}R)(\phi(u_0,t)) \cdot F(u_0,t).   (8)

From (7) and (8) we conclude that F(u_0,t) \equiv 0: for each fixed u_0, F satisfies a linear ODE in t with zero initial condition, so by uniqueness it vanishes for all t. But F(u_0,t) \equiv 0 implies (4).

7. We now use (4) to establish (3). Indeed,

\sum_i R_i(\phi(u_0,t)) \Big(\frac{\partial g}{\partial u_{0i}}\Big)(\phi(u_0,t))
= \sum_i \Big(\sum_j \frac{\partial \phi_i(u_0,t)}{\partial u_{0j}} R_j(u_0)\Big) \Big(\frac{\partial g}{\partial u_{0i}}\Big)(\phi(u_0,t))
= \sum_j R_j(u_0) \sum_i \Big(\frac{\partial g}{\partial u_{0i}}\Big)(\phi(u_0,t))\, \frac{\partial \phi_i}{\partial u_{0j}}(u_0,t)
= \sum_j R_j(u_0) \frac{\partial}{\partial u_{0j}} \big(g(\phi(u_0,t))\big).   (9)

The first equality above follows from (4).

8. From (2) and (3) we conclude that u(u_0,t) solves

\frac{\partial}{\partial t} u(u_0,t) = \sum_j R_j(u_0) \frac{\partial}{\partial u_{0j}} u(u_0,t) = L u(u_0,t), \qquad u(u_0,0) = g(u_0),   (10)

where L is the linear differential operator L = \sum_i R_i(u_0) \frac{\partial}{\partial u_{0i}}. Define the evolution operator e^{tL} as follows:

e^{tL} g(u_0) = g(\phi(u_0,t)).

For g(u_0) = u_0, equation (10) becomes \frac{\partial}{\partial t} e^{tL} u_0 = L e^{tL} u_0.

Remark: For stochastic systems this is called the backward Kolmogorov equation. The equation for the density is the Liouville equation (forward Kolmogorov equation).
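A one-line illustration (my addition, not on the slide): for the scalar equation du/dt = u we have \phi(u_0,t) = u_0 e^t and L = u_0 \frac{d}{du_0}, so

e^{tL} g(u_0) = g(u_0 e^t), \qquad L\big[g(u_0 e^t)\big] = u_0 e^t\, g'(u_0 e^t) = \frac{\partial}{\partial t}\, g(u_0 e^t),

which is (10) for this example.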

9. Let u_0 = (\hat{u}_0, \tilde{u}_0), where \hat{u}_0 is N-dimensional and \tilde{u}_0 is (M - N)-dimensional. Define a projection operator P: F(u_0) \to \hat{F}(\hat{u}_0). Also, define the operator Q = I - P. Then

\frac{\partial}{\partial t} e^{tL} u_{0k} = e^{tL} P L u_{0k} + e^{tL} Q L u_{0k}
= e^{tL} P L u_{0k} + e^{tQL} Q L u_{0k} + \int_0^t e^{(t-s)L} P L e^{sQL} Q L u_{0k}\, ds   (11)

for k = 1, \dots, N. We have used Dyson's formula (Duhamel's principle)

e^{tL} = e^{tQL} + \int_0^t e^{(t-s)L} P L e^{sQL}\, ds.   (12)
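A short verification of (12), filling in a step the slides do not show: set

D(t) = e^{tL} - e^{tQL} - \int_0^t e^{(t-s)L} P L e^{sQL}\, ds.

Then D(0) = I - I - 0 = 0, and differentiating in t (using \frac{d}{dt} e^{tL} = L e^{tL}, \frac{d}{dt} e^{tQL} = QL\, e^{tQL} and PL + QL = L),

\frac{d}{dt} D(t) = L e^{tL} - QL\, e^{tQL} - PL\, e^{tQL} - L \int_0^t e^{(t-s)L} P L e^{sQL}\, ds = L\, D(t).

A linear equation with zero initial data gives D(t) \equiv 0, which is (12).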

10. If we write e^{tQL} Q L u_{0k} = w_k, then w_k(u_0,t) satisfies the equation

\frac{\partial}{\partial t} w_k(u_0,t) = QL\, w_k(u_0,t), \qquad w_k(u_0,0) = Q L u_{0k} = R_k(u_0) - (P R_k)(\hat{u}_0).   (13)

The solution of (13) is at all times orthogonal to the range of P. We call it the orthogonal dynamics equation.

Remark: The difficulty with the orthogonal dynamics equation is that, in general, it cannot be written as a closed equation for w_k(u_0,t). This means that its numerical solution is usually prohibitively expensive ("law of conservation of trouble").

11. Since the solutions of the orthogonal dynamics equation remain orthogonal to the range of P, we can project the Mori-Zwanzig equation (11) and find

\frac{\partial}{\partial t} P e^{tL} u_{0k} = P e^{tL} P L u_{0k} + P \int_0^t e^{(t-s)L} P L e^{sQL} Q L u_{0k}\, ds.   (14)

Use (14) as the starting point of approximations for the evolution of the quantity P e^{tL} u_{0k} for k = 1, \dots, N (note that equation (14) involves the orthogonal dynamics operator e^{tQL}). Construct reduced models based on mathematical, physical and numerical observations. These models come directly from the original equations and the terms appearing in them are not introduced by hand.

12. Fluctuation-dissipation theorems. Assume that one has access to the p.d.f. of the initial conditions, say \rho(u_0).

1) Conditional expectation: For a function f(u_0) we have

E[f(u_0) \mid \hat{u}_0] = \frac{\int f(u_0)\, \rho(u_0)\, d\tilde{u}_0}{\int \rho(u_0)\, d\tilde{u}_0}.

The conditional expectation is the best approximation in an L^2 sense, meaning E[\,|f - E[f \mid \hat{u}_0]|^2\,] \le E[\,|f - h(\hat{u}_0)|^2\,] for all functions h.

2) Finite-rank projection: Denote the space of square-integrable functions of \hat{u}_0 as \hat{L}^2. Let h_1(\hat{u}_0), h_2(\hat{u}_0), \dots be an orthonormal set of basis functions of \hat{L}^2, i.e. E[h_i h_j] = \delta_{ij} (w.r.t. the p.d.f. \rho(u_0)). Then

(Pf)(\hat{u}_0) = \sum_{j=1}^{l} a_j h_j(\hat{u}_0), \qquad \text{where } a_j = E[f h_j] \text{ for } j = 1, \dots, l.
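Both constructions are easy to approximate by Monte Carlo. The following sketch (my illustration; the density, test function and basis are arbitrary choices) takes \rho(u_0) to be a product of two standard normals, resolves \hat{u}_0 = u_{01}, and compares a binned estimate of the conditional expectation with a finite-rank projection onto normalized Hermite polynomials of u_{01}.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
u01 = rng.standard_normal(n)          # resolved variable
u02 = rng.standard_normal(n)          # unresolved variable
f = u01 + u01 * u02 ** 2              # test observable; exactly, E[f | u01] = 2 * u01

# Finite-rank projection: orthonormal basis in u01 (normalized Hermite polynomials),
# with coefficients a_j = E[f h_j] estimated by sample averages.
h1 = u01
h2 = (u01 ** 2 - 1.0) / np.sqrt(2.0)
a1 = np.mean(f * h1)                  # close to 2
a2 = np.mean(f * h2)                  # close to 0
print("a1 =", a1, " a2 =", a2)        # so (Pf)(u01) is approximately 2 * u01

# Conditional expectation estimated by binning on the resolved variable.
bins = np.linspace(-2.0, 2.0, 21)
idx = np.digitize(u01, bins)
centers = 0.5 * (bins[:-1] + bins[1:])
cond = np.array([f[idx == k].mean() for k in range(1, len(bins))])
print("max |binned E[f|u01] - 2*u01| =", np.max(np.abs(cond - 2.0 * centers)))

Here the conditional expectation happens to be linear in u_{01}, so the linear term alone reproduces it; in general the finite-rank projection is only the best L^2 approximation within the span of the chosen basis.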

13. Remark: If we keep only the linear terms in the expansion, we get the so-called "linear" projection, which is the most popular (implicit assumption of being near equilibrium).

Fluctuation-dissipation theorem of the first kind: Consider the case of only one resolved variable, say u_{01}, and keep only the linear term in the projection, P f(u_0) = (f, u_{01})\, u_{01}, where we assume (u_{01}, u_{01}) = 1. The MZ equation becomes

\frac{\partial}{\partial t} e^{tL} u_{01} = e^{tL} P L u_{01} + e^{tQL} Q L u_{01} + \int_0^t e^{(t-s)L} P L e^{sQL} Q L u_{01}\, ds,

or

\frac{\partial}{\partial t} e^{tL} u_{01} = (L u_{01}, u_{01})\, e^{tL} u_{01} + e^{tQL} Q L u_{01} + \int_0^t (L e^{sQL} Q L u_{01}, u_{01})\, e^{(t-s)L} u_{01}\, ds.   (15)
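To spell out the step between the two displays (my annotation): since (L u_{01}, u_{01}) and (L e^{sQL} Q L u_{01}, u_{01}) are scalars and e^{tL} is linear,

e^{tL} P L u_{01} = e^{tL}\big[(L u_{01}, u_{01})\, u_{01}\big] = (L u_{01}, u_{01})\, e^{tL} u_{01},
\qquad
e^{(t-s)L} P L e^{sQL} Q L u_{01} = (L e^{sQL} Q L u_{01}, u_{01})\, e^{(t-s)L} u_{01},

which is how (15) follows from the line above it.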

14. We take the inner product of (15) with u_{01} and find

\frac{\partial}{\partial t} (e^{tL} u_{01}, u_{01}) = (L u_{01}, u_{01})\, (e^{tL} u_{01}, u_{01}) + (e^{tQL} Q L u_{01}, u_{01}) + \int_0^t (L e^{sQL} Q L u_{01}, u_{01})\, (e^{(t-s)L} u_{01}, u_{01})\, ds
= (L u_{01}, u_{01})\, (e^{tL} u_{01}, u_{01}) + \int_0^t (L e^{sQL} Q L u_{01}, u_{01})\, (e^{(t-s)L} u_{01}, u_{01})\, ds,   (16)

because P e^{tQL} Q L u_{01} = (e^{tQL} Q L u_{01}, u_{01})\, u_{01} = 0 and hence (e^{tQL} Q L u_{01}, u_{01}) = 0.

Remark: Equation (16) describes the evolution of the autocorrelation (e^{tL} u_{01}, u_{01}). Multiply equation (16) with u_{01} and recall that P e^{tL} u_{01} = (e^{tL} u_{01}, u_{01})\, u_{01}.
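Carrying out the multiplication the remark points to (my sketch of the conclusion): write C(t) = (e^{tL} u_{01}, u_{01}) and K(s) = (L e^{sQL} Q L u_{01}, u_{01}), so that (16) reads C'(t) = (L u_{01}, u_{01})\, C(t) + \int_0^t K(s)\, C(t-s)\, ds. Multiplying by u_{01} and using P e^{tL} u_{01} = C(t)\, u_{01} gives

\frac{\partial}{\partial t} P e^{tL} u_{01} = (L u_{01}, u_{01})\, P e^{tL} u_{01} + \int_0^t K(s)\, P e^{(t-s)L} u_{01}\, ds,

i.e. the projected resolved variable satisfies the same linear Volterra equation as the autocorrelation, which is the sense in which this is a fluctuation-dissipation statement of the first kind.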
