Dimension Reduction: Analysis and Algorithms - Raz Kupferman - PowerPoint PPT Presentation



SLIDE 1

Dimension Reduction: Analysis and Algorithms

Raz Kupferman

Institute of Mathematics, The Hebrew University (based on a review with A. Stuart)

SLIDE 2

Instructions for Audience

I’ve nagged speakers for 2.5 days. Now you have the opportunity to take revenge. Please do!

SLIDES 3-5

Modeling

Confining complex real-world situations into succinct systems of equations (e.g., Newton’s laws of mechanics). The central theme of the exact sciences.

In many cases, even though we believe we have reliable models, they are useless because we cannot solve them, even numerically. They are too complex (e.g., weather prediction, cell biology).

A “second level” of modeling is required to obtain useful models. This second step is often called dimension reduction, coarse-graining, homogenization, or simply, modeling.

SLIDES 6-11

Mathematical Setting

Full (“microscopic”) dynamics:

dz/dt = h(z) + γ(z) dW/dt

  • Dynamics z(t) take place in a high-dimensional (possibly infinite-dimensional) space Z.
  • May be deterministic or stochastic.
  • Too complex to be solved.
  • Dynamics of interest take place in a (low-dimensional) subspace X of Z.
  • Objective: find a self-contained description of the dynamics in X without resolving the dynamics in Y = Z∕X.

SLIDES 12-13

Using coordinates x ∈ X and y ∈ Y, the microscopic dynamics read:

dx/dt = f(x, y) + α(x, y) dU/dt
dy/dt = g(x, y) + β(x, y) dV/dt

The goal is to obtain a reduced, macroscopic, closure equation in X,

dX/dt = F(X) + A(X) dU/dt

such that X(t) approximates well the component x(t) of the full dynamics.

SLIDES 14-16

Running Themes

  • Controlled versus uncontrolled approximation
Dimension reduction is sometimes backed by analysis, along with error estimates (e.g., systems with scale separation). In other cases, it is based on heuristic reasoning, and its scope of validity is unknown (e.g., K-ε models of turbulence).

  • Deterministic versus stochastic
The dynamics in Z and in X may each be either deterministic or stochastic. We classify systems accordingly as DD, DS, SD, and SS.

SLIDES 17-20

Running Themes (cont.)

  • Reduction principle
Two steps: identification of the subspace X (often not known) and derivation of the dynamics in X.

  • Memory
Generally, variable elimination introduces memory (non-Markovian dynamics; e.g., HMM). Need to choose X such that memory is negligible.

  • Path-wise versus distributions
Often the dynamical system is “lifted” to an evolution of probability distributions. Higher dimensional but linear (hence, functional-analytic techniques).

SLIDES 21-23

Mori-Zwanzig Formalism

A general technique to (formally) reduce the dimensionality of systems of ODEs. Developed in the context of irreversible statistical mechanics. In essence, a rewriting of the equations in a suggestive form:

dx/dt = f(x, y)
dy/dt = g(x, y)

The basic idea: (1) treat x(t) as a given function and integrate (formally) the y-equation. The solution y(t) depends on the entire history of x(t) (“variation of constants”). (2) Substitute y(t) in the x-equation, resulting in a closed reduced system:

dx(t)/dt = f̃(x(t)) + ∫_0^t K(x(t − s), s) ds + N(x(0), y(0), t)

The three terms are, respectively, the “Markovian” part, the “memory” term, and the “noise” term. This is the Generalized Langevin Equation.
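The decomposition can be made concrete on a toy linear system (my own two-variable example, not from the talk): for dx/dt = −x + y, dy/dt = −y + x, variation of constants gives y(t) = e^{−t} y(0) + ∫_0^t e^{−(t−s)} x(s) ds, so the x-equation becomes a GLE with Markovian term −x(t), memory kernel K(s) = e^{−s}, and noise term e^{−t} y(0). A minimal numerical sketch:

```python
# Mori-Zwanzig on a 2x2 linear system (illustrative example, not from the talk):
# integrate the GLE for x alone and compare with the exact slow component.
import numpy as np

dt, T = 1e-3, 2.0
n = int(T / dt)
t = np.linspace(0.0, T, n + 1)
x0, y0 = 1.0, 0.0

# Forward Euler on the GLE; the memory integral is a trapezoid quadrature
# over the entire history of x, as the formalism prescribes.
x = np.empty(n + 1)
x[0] = x0
for k in range(n):
    kern = np.exp(-(t[k] - t[:k + 1]))              # K(t_k - s) = e^{-(t_k - s)}
    vals = kern * x[:k + 1]
    memory = dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    noise = np.exp(-t[k]) * y0                      # depends only on initial data
    x[k + 1] = x[k] + dt * (-x[k] + memory + noise)

# Exact x(t) of the full linear system (eigenvalues 0 and -2).
x_exact = 0.5 * ((x0 + y0) + (x0 - y0) * np.exp(-2 * t))
print(abs(x[-1] - x_exact[-1]))   # small discretization error
```

Note that here the GLE is exactly equivalent to the full system; no approximation has been made yet, which is the point of the formalism.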

SLIDES 24-28

Scale Separation

An important class of systems in which complexity can often be reduced in a controlled way is systems in which there exists a disparity of scales (spatial or temporal).

(Spatial) Homogenization: dynamics depend on small-scale features, but the phenomena of interest are on large scales (e.g., porous media, climate prediction).

(Temporal) Averaging: dynamics include short-timescale features, but the phenomena of interest occur over long timescales (e.g., molecular conformations, climate prediction).

SLIDES 29-31

Scale Separation (cont.)

In our context, x(t) are the “slow” variables, whereas y(t) are the “fast” variables:

dx/dt = f(x, y) + α(x, y) dU/dt
dy/dt = (1/ε) g(x, y) + (1/√ε) β(x, y) dV/dt

Here ε is the fast-to-slow timescale ratio.

Goal: “integrate” over the fast dynamics to obtain a reduced equation for x(t), and prove that it is exact in the limit ε→0. There are several scenarios, depending on the functions f, g, α, β.

SLIDES 32-35

Attracting Manifolds

Goes back to Tikhonov (East) and Levinson (West).

Starting point: ODEs with scale separation:

dx/dt = f(x, y)
dy/dt = (1/ε) g(x, y)

Assumption: for every fixed x, the y-dynamics has a unique attracting fixed point, y = η(x).

Up to mild regularity assumptions on f, g, it can be shown that y(t) = η(x(t)) + O(ε). To O(ε) corrections, x(t) is approximated by the solution X(t) of the reduced equation:

dX/dt = f(X, η(X))

Systems of class DD.

SLIDE 36

[Figure: the attracting manifold y = η(x) in the (x, y) plane, together with an actual trajectory relaxing rapidly onto it; the y-dynamics are “slaved” to the x-dynamics.]

SLIDES 37-41

Example:

dx1/dt = −x2 − x3
dx2/dt = x1 + (1/5) x2
dx3/dt = 1/5 + y − 5 x3
dy/dt = −y/ε + x1 x3/ε

For fixed x, the fast variable relaxes: y → x1 x3. Reduced dynamics:

dX1/dt = −X2 − X3
dX2/dt = X1 + (1/5) X2
dX3/dt = 1/5 + (X1 − 5) X3

This is the Rössler system.

[Figures: the attractor of the full system in the (x1, x2) plane for ε = 0.01, and the attractor of the reduced system in the (X1, X2) plane; the two are visually indistinguishable.]
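The slaving relation y ≈ x1 x3 can be checked numerically. Below is a minimal sketch (initial conditions, step size, and time horizon are my own choices; the step size must resolve the fast timescale ε):

```python
# Full 4D stiff system vs. reduced Rossler system, with eps = 0.01 as on
# the slide.  Plain forward Euler; dt << eps for stability of the fast mode.
import numpy as np

eps = 0.01
dt, T = 1e-4, 2.0

x = np.array([1.0, 1.0, 0.0])   # (x1, x2, x3) of the full system
y = 0.0                          # fast variable
X = x.copy()                     # reduced (Rossler) state, same initial data

for _ in range(int(T / dt)):
    x1, x2, x3 = x
    x = x + dt * np.array([-x2 - x3, x1 + 0.2 * x2, 0.2 + y - 5.0 * x3])
    y = y + dt * (-y + x1 * x3) / eps

    X1, X2, X3 = X
    X = X + dt * np.array([-X2 - X3, X1 + 0.2 * X2, 0.2 + (X1 - 5.0) * X3])

print(abs(y - x[0] * x[2]))   # slaving: y = x1*x3 + O(eps)
print(np.abs(x - X).max())    # reduced trajectory shadows the slow variables
```

Since the system is chaotic, pathwise agreement only holds over moderate time horizons; over long times one should compare attractors, as in the figures.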

SLIDES 42-44

Averaging

First used in 3-body celestial mechanics (Lagrange 1788).

dx/dt = f(x, y)
dy/dt = (1/ε) g(x, y)

Assumption: for fixed x, the y-dynamics are ergodic. Let φ_x^t(y) denote the solution operator of the y-dynamics:

(d/dt) φ_x^t(y) = g(x, φ_x^t(y)),  φ_x^0(y) = y

Ergodic dynamics induce a Young measure on Y:

μ_x(A) = lim_{T→∞} (1/T) ∫_0^T I_A(φ_x^t(y)) dt

where I_A is the indicator function of the set A. The measure depends on x but, by ergodicity, is independent of y.
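The defining property can be illustrated with the simplest ergodic fast flow, uniform rotation on the circle (an illustrative choice of mine, not the talk’s example): time averages of an observable converge to its average under the invariant (uniform) measure, for every initial point:

```python
# Ergodic time averages: phi^t(y) = y + t (mod 2*pi) is ergodic with respect
# to the uniform measure on the circle, so the time average of f(y) = sin(y)^2
# approaches its space average 1/2 regardless of the initial condition y(0).
import numpy as np

T, dt = 2000.0, 1e-2
t = np.arange(0.0, T, dt)
time_avgs = [np.mean(np.sin(y0 + t) ** 2) for y0 in (0.0, 1.3, 4.0)]
print(time_avgs)   # each close to 0.5
```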

SLIDES 45-51

Averaging (cont.)

Anosov’s theorem states that x(t) converges uniformly on any bounded time interval to the solution X(t) of the averaged equation:

dX/dt = ∫_Y f(X, y) μ_X(dy)

(class DD)

Comments:

  • Extensive literature (mostly Russian).
  • Extension to cases where ergodicity fails on sufficiently small sets (Arnold, Neistadt).
  • Extension to non-autonomous systems (Artstein).
  • Extension to non-unique invariant measures (differential inclusions, Artstein).
  • The invariant measure may depend on y(0).

SLIDE 52

Application: Stiff Hamiltonian Systems

Ubiquitous in molecular systems: Hamiltonian systems with strong potential forces resulting in fast oscillatory motion around a sub-manifold, along with weaker forces responsible for conformational changes over longer timescales (Rubin and Ungar 1957, Neistadt 1984, Bornemann and Schuette 1997).

H(z, p) = Σ_i p_i²/(2 m_i) + V(z) + (1/ε²) U(z)

where V is the “soft” potential and U the “stiff” potential. The stiff potential is minimal on a smooth submanifold M. Goal: approximate the solution by a flow on M.

SLIDES 53-57

Example: a two-particle system

H(x, p, y, v) = (1/2)(p² + v²) + V(x) + (ω(x)/(2ε²)) y²

The stiff potential constrains the motion to the manifold y = 0.

Equations of motion:

dx/dt = p
dp/dt = −V′(x) − (ω′(x)/(2ε²)) y²
dy/dt = v
dv/dt = −(ω(x)/ε²) y

Assumption: the energy E does not depend on ε; hence y→0 as ε→0.

Naive solution: set y = 0. Wrong!

This system is still not in the desired form of scale separation, because the “slow” (x, p) equations depend on ε. Change variables to the ratio of small parameters: η = y/ε.

SLIDES 58-60

Transformed system of equations:

slow:
dx/dt = p
dp/dt = −V′(x) − (ω′(x)/2) η²

fast:
dη/dt = v/ε
dv/dt = −(ω(x)/ε) η

The fast dynamics are ergodic (a harmonic oscillator with x-dependent frequency). The invariant measure depends on the total energy (i.e., on the initial data of the full system).

Applying the averaging principle (here, a variation of Anosov’s theorem):

dX/dt = P
dP/dt = −V′(X) − J (ω^{1/2}(X))′

The last term is the effective (Fixman) potential force; J is a (non-trivial) constant depending on the initial data.

SLIDES 61-62

Stochastic Averaging

A (relatively) straightforward generalization of the averaging method to stochastic systems:

dx/dt = f(x, y)
dy/dt = (1/ε) g(x, y) + (1/√ε) β(x, y) dV/dt

If for fixed x the y-dynamics is ergodic with invariant measure μ_x(dy), then as ε→0, x(t) converges uniformly to X(t):

dX/dt = ∫_Y f(X, y) μ_X(dy)

(class SD)
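A sketch of this statement for a hand-picked example (my own choices throughout): take f(x, y) = −x + y² with a fast Ornstein-Uhlenbeck process y, whose invariant measure is N(0, 1/2); the averaged equation is then dX/dt = −X + 1/2:

```python
# Stochastic averaging demo: slow ODE driven by a fast OU process,
# compared against the solution of the averaged equation.
import numpy as np

rng = np.random.default_rng(0)
eps, dt, T = 0.005, 5e-5, 2.0

x, y = 1.0, 0.0
for _ in range(int(T / dt)):
    x += dt * (-x + y * y)                                   # slow variable
    y += -dt * y / eps + np.sqrt(dt / eps) * rng.normal()    # fast OU process

# Averaged equation dX/dt = -X + 1/2 (since E[y^2] = 1/2), X(0) = 1:
X = 0.5 + 0.5 * np.exp(-T)
print(x, X)   # close for small eps
```

The residual discrepancy is O(√ε) on any fixed time interval, consistent with the convergence statement above.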

SLIDES 63-65

Sketch of proof (the asymptotic analysis can be backed up by a limit theorem):

dx/dt = f(x, y)
dy/dt = (1/ε) g(x, y) + (1/√ε) β(x, y) dV/dt

Step 1: write the corresponding Kolmogorov (Fokker-Planck) equation for φ(x, y, t):

∂φ/∂t = L1 φ + (1/ε) L0 φ

where

L1 φ = −(∂/∂x)(f φ)
L0 φ = −(∂/∂y)(g φ) + (1/2)(∂²/∂y²)(β² φ)

Step 2: power series expansion:

φ(x, y, t) = φ0(x, y, t) + ε φ1(x, y, t) + ...

SLIDES 66-68

Step 3: equate terms of the same order.

O(1/ε) terms:

L0 φ0 = 0

where L0 φ = −(∂/∂y)(g φ) + (1/2)(∂²/∂y²)(β² φ) is the generator of the y-dynamics. Solution:

φ0(x, y, t) = π(x, t) φ_eq^x(y)

where φ_eq^x(y) is the invariant distribution of the y-dynamics.

O(1) terms:

L0 φ1 = ∂φ0/∂t − L1 φ0

where L1 φ = −(∂/∂x)(f φ) is the generator of the x-dynamics. Solvability condition: the right-hand side must be orthogonal to the kernel of L0* (the constant functions). Integrating over y:

∂π/∂t = −(∂/∂x) [ π(x, t) ∫ f(x, y) φ_eq^x(y) dy ]
  • f y-dynamics
SLIDES 69-70

Step 4: we identify this equation as the Liouville equation of the deterministic system

dX/dt = ∫ f(X, y) φ_eq^X(y) dy ≡ F(X)

This asymptotic expansion can be made into a rigorous convergence proof (e.g., through limit theorems for semigroups).

SLIDES 71-72

Stochastic Limits

A more subtle case of scale-separated systems occurs for the following scaling:

dx/dt = (1/√ε) f0(x, y) + f1(x, y)
dy/dt = (1/ε) g(y) + (1/√ε) β(y) dV/dt

The x-equation contains a “fast” term; the y-equation is independent of x (skew-symmetric, not essential).

The setting is such that f0(x, y) averages to zero under the invariant measure of the y-dynamics. A large term that averages to zero becomes, as ε→0, white noise.

SLIDES 73-75

Asymptotic expansion. Step 1: switch to the Kolmogorov equation:

∂φ/∂t = (1/ε) L0 φ + (1/√ε) L1 φ + L2 φ

where

L0 φ = −(∂/∂y)(g φ) + (1/2)(∂²/∂y²)(β² φ)
L1 φ = −(∂/∂x)(f0 φ)
L2 φ = −(∂/∂x)(f1 φ)

Step 2: asymptotic series expansion:

φ(x, y, t) = φ0(x, y, t) + √ε φ1(x, y, t) + ε φ2(x, y, t) + ...

Step 3: equate terms of the same order. O(1/ε) terms:

L0 φ0 = 0, with solution φ0(x, y, t) = π(x, t) φ_eq(y)

SLIDES 76-77

O(1/√ε) terms:

L0 φ1 = −L1 φ0

The solvability condition requires that the integral of the right-hand side be zero; this follows from the properties of f0, since

RHS = (∂/∂x) [ f0(x, y) φ_eq(y) π(x, t) ]

Solution:

φ1 = −L0⁻¹ L1 φ0

O(1) terms:

L0 φ2 = ∂φ0/∂t − L1 φ1 − L2 φ0

Again apply the same solvability condition, and obtain an equation for the (leading-order) marginal π(x, t):

∂π/∂t = −∫ L1 L0⁻¹ L1 φ_eq(y) π(x, t) dy + ∫ L2 φ_eq(y) π(x, t) dy
SLIDES 78-79

Step 4: identification of the reduced problem:

∂π/∂t = −∫ L1 L0⁻¹ L1 φ_eq(y) π(x, t) dy + ∫ L2 φ_eq(y) π(x, t) dy

The L2 term is a first-order operator in x and contributes a drift; the L1 L0⁻¹ L1 term, which involves the inverse of an operator in y, is second-order in x and contributes both drift and diffusion.

We identify this equation for the marginal π(x, t) as the Kolmogorov equation of a diffusion process X(t). The drift and diffusion may be difficult to evaluate analytically due to the need to invert L0.

SLIDES 80-84

Example:

dx/dt = (1/√ε) yx − λx     (f0(x, y) = yx, f1(x, y) = −λx)
dy/dt = −(1/ε) y + (1/√ε) dV/dt     (an Ornstein-Uhlenbeck process)

Here

L0 φ = (∂/∂y)(y φ) + (1/2) ∂²φ/∂y²,  with φ_eq(y) = (1/√π) e^{−y²}

L1 φ = −y (∂/∂x)(x φ),  L2 φ = λ (∂/∂x)(x φ)

Everything can be calculated analytically. Reduced Kolmogorov equation:

∂π/∂t = −(∂/∂x) [ (1/2 − λ) x π ] + (1/2)(∂²/∂x²)(x² π)

Reduced SDE:

dX/dt = (1/2 − λ) X + X dU/dt

SLIDES 85-87

Original system:

dx/dt = (1/√ε) yx − λx
dy/dt = −(1/ε) y + (1/√ε) dV/dt

Reduced system (class SS):

dX/dt = (1/2 − λ) X + X dU/dt

Solution of the reduced system:

X(t) = X(0) exp [−λt + U(t)]

Properties (a.s.):

λ > 0 → lim_{t→∞} X(t) = 0
λ = 0 → lim sup_{t→∞} X(t) = ∞
λ < 0 → lim_{t→∞} X(t) = ∞

[Figures: numerical solutions of log x(t) of the full system for ε = 0.1, with λ = −1 (growth), λ = 0 (unbounded fluctuations), and λ = 1 (decay).]
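The reduced solution X(t) = X(0) exp[−λt + U(t)] predicts that log x(t) is approximately Gaussian with mean −λt and variance t. A Monte-Carlo check on the full system (ensemble size, step size, and seed are my own choices):

```python
# Full system with a fast OU process: d(log x)/dt = y/sqrt(eps) - lam.
# For small eps, log x(T) should be ~ N(-lam*T, T), matching the reduced SDE.
import numpy as np

rng = np.random.default_rng(0)
eps, lam = 0.01, 1.0
dt, T, M = 1e-4, 1.0, 2000

y = rng.normal(0.0, np.sqrt(0.5), size=M)   # start y in its invariant measure
logx = np.zeros(M)                           # x(0) = 1 for every path
for _ in range(int(T / dt)):
    logx += dt * (y / np.sqrt(eps) - lam)
    y += -dt * y / eps + np.sqrt(dt / eps) * rng.normal(size=M)

print(logx.mean())   # ~ -lam * T
print(logx.var())    # ~ T
```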

SLIDES 88-89

Example of Class DS

We dealt with systems of the following form, which yield dimension reduction of class SS:

dx/dt = (1/√ε) f0(x, y) + f1(x, y)
dy/dt = (1/ε) g(y) + (1/√ε) β(y) dV/dt

(x-equation contains a “fast” term; y-equation independent of x.) The same type of argument remains valid if the y-dynamics are deterministic, but sufficiently well mixing.

SLIDES 90-92

Example:

dx/dt = x − x³ + (4/(90√ε)) y2
dy1/dt = (10/ε)(y2 − y1)
dy2/dt = (1/ε)(28 y1 − y2 − y1 y3)
dy3/dt = (1/ε)(y1 y2 − (8/3) y3)

Here y(t) satisfies the (accelerated) chaotic Lorenz equations, and the y2-term in the x-equation is a fast term that averages to zero.

For ε→0, the slow component x(t) converges (in law) to the solution X(t) of the SDE (class DS):

dX/dt = X − X³ + σ dU/dt,  with σ = 0.126.

The reduced equation describes a noisy particle in a quartic double-well potential; the equilibrium distribution is bimodal.

[Figure: empirical distribution of x for ε = 0.001; the histogram is bimodal with modes near x = ±1.]
slide-93
SLIDE 93

Large Systems

slide-94
SLIDE 94

Large Systems

Another class of systems for which the reduced system can be derived rigorously as a limit of the full dynamics, is systems which many DOFs. The reduced system is obtained in the limit where the number of DOFs tends to infinity (the “thermodynamics limit”). An instance of such systems are mechanical systems of heat

  • baths. (Will be addressed in detail in tomorrow’s lecture).

Also systems of class DS.

SLIDE 95

Birth-Death Systems

Chemical reactions are commonly modeled by stochastic birth-death systems. The model: there are m species with populations x = (x1, x2, ..., xm), and n reactions with rates hj(x) and stoichiometric coefficients νij. Such systems can easily be simulated by the Gillespie algorithm (generative simulation of continuous-time Markov chains).

SLIDES 96-103

The Gillespie algorithm:

  1. Initialize xi(0) for i = 1, ..., m.
  2. Compute the reaction rates rj = hj(x) for j = 1, ..., n.
  3. Set the total rate r = r1 + ... + rn.
  4. Select the (random) transition time t by picking a random variable p ~ U[0,1] and setting t = −(log p)/r.
  5. Select the j-th reaction with probability rj/r.
  6. Update x(t) accordingly and return to step 2.
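The steps above can be turned into a short simulator. As a concrete instance I use the three-species cyclic network introduced on the following slides (rates x1²/N, x1 x2/N, x3; the choice of system, N, and horizon are assumptions imported from that example):

```python
# Gillespie simulation of the 3-species cyclic network:
#   1 -> 2 at rate x1^2/N,  2 -> 3 at rate x1*x2/N,  3 -> 1 at rate x3.
import math
import random

random.seed(0)
N = 1000
x1, x2, x3 = 0, 0, N        # initial populations, as on slide 105
t, T = 0.0, 20.0

while t < T:
    r1 = x1 * x1 / N        # reaction 1 -> 2
    r2 = x1 * x2 / N        # reaction 2 -> 3
    r3 = x3                 # reaction 3 -> 1
    r = r1 + r2 + r3
    t += -math.log(random.random()) / r      # exponential waiting time
    u = random.random() * r                  # pick reaction j with prob rj/r
    if u < r1:
        x1, x2 = x1 - 1, x2 + 1
    elif u < r1 + r2:
        x2, x3 = x2 - 1, x3 + 1
    else:
        x3, x1 = x3 - 1, x1 + 1

print(x1 / N, x2 / N, x3 / N)   # near the rate-equation fixed point
```

For large N the densities xi/N fluctuate by O(1/√N) around the deterministic rate equations derived below.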
slide-104
SLIDE 104

The Gillespie algorithm provides a “pathwise” description. Alternatively, one can consider the master equation (the discrete analog of the Fokker-Planck equation).

slide-105
SLIDE 105

The Gillespie algorithm provides a “pathwise” description. Alternatively, one can consider the master equation (the discrete analog of the Fokker-Planck equation). Example: a 3-species system

2 1 3

x3

x1 x1 N x2 x1 N

At time t=0:

x1 = 0 x2 = 0 x3 = N

N is the total number

  • f particles
SLIDES 106-108

The master equation for p(x1, x2, x3, t):

dp/dt = ((x1+1)²/N) p(x1+1, x2−1, x3) − (x1²/N) p(x1, x2, x3)
      + (x1(x2+1)/N) p(x1, x2+1, x3−1) − (x1 x2/N) p(x1, x2, x3)
      + (x3+1) p(x1−1, x2, x3+1) − x3 p(x1, x2, x3)

Because occupancies are O(N), each reaction changes the “densities” Xi = xi/N only slightly.

Van Kampen’s Ω-expansion: a change of variables; treat the Xi as continuous variables:

ρ(X1, X2, X3) = p(N X1, N X2, N X3)

slide-109
SLIDE 109

The master equation in terms of ρ(X1,X2,X3):

ε dρ/dt = (X1+ε)² ρ(X1+ε, X2−ε, X3) − X1² ρ(X1, X2, X3)
        + X1(X2+ε) ρ(X1, X2+ε, X3−ε) − X1X2 ρ(X1, X2, X3)
        + (X3+ε) ρ(X1−ε, X2, X3+ε) − X3 ρ(X1, X2, X3)

where ε = 1/N.

slide-110
SLIDE 110

The master equation in terms of ρ(X1,X2,X3):

ε dρ/dt = (X1+ε)² ρ(X1+ε, X2−ε, X3) − X1² ρ(X1, X2, X3)
        + X1(X2+ε) ρ(X1, X2+ε, X3−ε) − X1X2 ρ(X1, X2, X3)
        + (X3+ε) ρ(X1−ε, X2, X3+ε) − X3 ρ(X1, X2, X3)

where ε = 1/N. Taylor expand and take ε→0:

∂ρ/∂t = ∂/∂X1 [(X1² − X3)ρ] + ∂/∂X2 [(X1X2 − X1²)ρ] + ∂/∂X3 [(X3 − X1X2)ρ]

slide-111
SLIDE 111

The master equation in terms of ρ(X1,X2,X3):

ε dρ/dt = (X1+ε)² ρ(X1+ε, X2−ε, X3) − X1² ρ(X1, X2, X3)
        + X1(X2+ε) ρ(X1, X2+ε, X3−ε) − X1X2 ρ(X1, X2, X3)
        + (X3+ε) ρ(X1−ε, X2, X3+ε) − X3 ρ(X1, X2, X3)

where ε = 1/N. Taylor expand and take ε→0:

∂ρ/∂t = ∂/∂X1 [(X1² − X3)ρ] + ∂/∂X2 [(X1X2 − X1²)ρ] + ∂/∂X3 [(X3 − X1X2)ρ]

This is the Liouville equation of the (deterministic) rate equations:

dX1/dt = −X1² + X3
dX2/dt = −X1X2 + X1²
dX3/dt = −X3 + X1X2
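The rate equations can also be integrated directly; a minimal forward-Euler sketch in Python (step size and time horizon are illustrative choices):

```python
def rate_equations(X0, t_max=10.0, dt=1e-3):
    """Forward-Euler integration of the reduced deterministic system:
    dX1/dt = -X1^2 + X3,  dX2/dt = -X1*X2 + X1^2,  dX3/dt = -X3 + X1*X2."""
    X1, X2, X3 = X0
    for _ in range(int(t_max / dt)):
        dX1 = -X1 ** 2 + X3
        dX2 = -X1 * X2 + X1 ** 2
        dX3 = -X3 + X1 * X2
        X1, X2, X3 = X1 + dt * dX1, X2 + dt * dX2, X3 + dt * dX3
    return X1, X2, X3

# initial condition from the slides: all particles in species 3, so
# (X1, X2, X3) = (0, 0, 1); since the reactions only convert species,
# the increments sum to zero and X1 + X2 + X3 is conserved
X = rate_equations((0.0, 0.0, 1.0))
```

The conservation of the total density is a useful sanity check on any such integration.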

slide-112
SLIDE 112

The master equation in terms of ρ(X1,X2,X3):

ε dρ/dt = (X1+ε)² ρ(X1+ε, X2−ε, X3) − X1² ρ(X1, X2, X3)
        + X1(X2+ε) ρ(X1, X2+ε, X3−ε) − X1X2 ρ(X1, X2, X3)
        + (X3+ε) ρ(X1−ε, X2, X3+ε) − X3 ρ(X1, X2, X3)

where ε = 1/N. Taylor expand and take ε→0:

∂ρ/∂t = ∂/∂X1 [(X1² − X3)ρ] + ∂/∂X2 [(X1X2 − X1²)ρ] + ∂/∂X3 [(X3 − X1X2)ρ]

This is the Liouville equation of the (deterministic) rate equations:

dX1/dt = −X1² + X3
dX2/dt = −X1X2 + X1²
dX3/dt = −X3 + X1X2

If O(ε) terms are retained, one gets a second-order Fokker-Planck equation of a stochastic system.

slide-113
SLIDE 113

Numerical simulation: comparison of a single realization of the Gillespie algorithm with the solution of the deterministic reduced system.

[Plot: y(t) = x(t)/N vs. time, N = 100.]

slide-114
SLIDE 114

Numerical simulation: comparison of a single realization of the Gillespie algorithm with the solution of the deterministic reduced system.

[Plots: y(t) = x(t)/N vs. time, N = 100 and N = 500.]

slide-115
SLIDE 115

Numerical simulation: comparison of a single realization of the Gillespie algorithm with the solution of the deterministic reduced system.

[Plots: y(t) = x(t)/N vs. time, N = 100, N = 500, and N = 1000.]

slide-116
SLIDE 116

Algorithms

slide-117
SLIDE 117

We have seen two classes of systems in which dimension reduction can be rigorously obtained as a limit (scale separation and large systems). Most cases of real interest do not belong to either of these classes (at least not in a strict sense), yet “something needs to be done”. In the remainder of this lecture we will review uncontrolled approximations, as well as computational algorithms of dimension reduction.

[I apologize in advance: I will only refer to a small part of the recent developments.]

slide-118
SLIDE 118

Projective Integration

Kevrekidis and co-workers 2003 and later

Suppose we have a deterministic system with scale separation, but we are unable to derive the reduced model. Goal: approximate x(t).

dx/dt = f(x, y)
dy/dt = (1/ε) g(x, y)

slide-119
SLIDE 119

Projective Integration

Kevrekidis and co-workers 2003 and later

Suppose we have a deterministic system with scale separation, but we are unable to derive the reduced model. Goal: approximate x(t).

dx/dt = f(x, y)
dy/dt = (1/ε) g(x, y)

Case I: for fixed x the y-dynamics are attracted to an invariant manifold. Algorithm: perform a number of short time steps to let y reach the invariant manifold. Evaluate the time-derivative of x, and then advance x by a “large” step.

[Diagram: short relaxation steps, then a long projective step.]

slide-120
SLIDE 120

[Diagram: short relaxation steps, then a long projective step.]

Projective integration scheme: Integrate the y-equation with “small” time steps δt < ε, for a time long enough to reach the manifold: given (xn, yn), set yn,0 = yn and

yn,m+1 = yn,m + (δt/ε) g(xn, yn,m),   m = 0, ..., M−1

slide-121
SLIDE 121

[Diagram: short relaxation steps, then a long projective step.]

Projective integration scheme: Integrate the y-equation with “small” time steps δt < ε, for a time long enough to reach the manifold: given (xn, yn), set yn,0 = yn and

yn,m+1 = yn,m + (δt/ε) g(xn, yn,m),   m = 0, ..., M−1

Evolve x in time “macroscopically”:

xn+1 = xn + Δt f(xn, yn,M)

Can be generalized to higher-order schemes.

slide-122
SLIDE 122

Example:

dx/dt = y
dy/dt = (1/ε)(−x + y − y³)
slide-123
SLIDE 123

Example:

dx/dt = y
dy/dt = (1/ε)(−x + y − y³)

[Plot: the cubic y − y³ vs. y.]

For |x| < 2/(3√3) the y-equation is bi-stable. Suppose x(0) < 2/(3√3) and y is near the positive fixed point. x grows until it exceeds 2/(3√3), then y jumps to the negative branch and x decreases until it drops below −2/(3√3), and so on.

slide-124
SLIDE 124

Example:

dx/dt = y
dy/dt = (1/ε)(−x + y − y³)

[Plot: the cubic y − y³ vs. y.]

For |x| < 2/(3√3) the y-equation is bi-stable. Suppose x(0) < 2/(3√3) and y is near the positive fixed point. x grows until it exceeds 2/(3√3), then y jumps to the negative branch and x decreases until it drops below −2/(3√3), and so on. Projective integration:

yn,m+1 = yn,m + (δt/ε)(−xn + yn,m − yn,m³)
xn+1 = xn + Δt yn,M
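A minimal Python sketch of this projective integrator, using the parameter values reported with the numerical results (ε = 10⁻⁴, δt = 0.1ε, Mδt = 10ε, Δt = 0.005); the number of outer steps is chosen to cover t ∈ [0, 5]:

```python
def projective_step(x, y, eps, dt_small, M, dt_large):
    """One projective-integration cycle for dx/dt = y,
    dy/dt = (1/eps)(-x + y - y^3)."""
    for _ in range(M):          # inner relaxation at frozen x:
        y += (dt_small / eps) * (-x + y - y ** 3)   # y_{n,m+1} = y_{n,m} + (dt/eps) g
    x += dt_large * y           # outer step: x_{n+1} = x_n + Dt * f(x_n, y_{n,M})
    return x, y

eps = 1e-4
x, y = 0.0, 1.0                 # start with y on the positive branch
traj = [x]
for _ in range(1000):           # 1000 * 0.005 = 5 time units
    x, y = projective_step(x, y, eps, dt_small=0.1 * eps, M=100, dt_large=0.005)
    traj.append(x)
# x ramps up past the fold, y jumps to the negative branch, and x turns
# around: the relaxation oscillation described above
```

Note that only M = 100 inner steps of size 0.1ε are taken per outer step of size Δt = 0.005 = 50ε, which is where the speed-up comes from.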

slide-125
SLIDE 125

Numerical results:

[Plot: x and y vs. t.]

Parameters: ε = 10⁻⁴, δt = 0.1ε, Mδt = 10ε, Δt = 0.005.

slide-126
SLIDE 126

This idea can even be used in cases where we do not know the partition into slow and fast variables (but we know that such a partition exists). In many cases there is no harm in evolving x as well in the first (relaxation) phase, and then projecting forward both x and y (y will tend back to the invariant manifold in the following relaxation phase). This idea was proposed by Gear and Kevrekidis as a means to accelerate existing “legacy codes” by wrapping them with a projective integrator.

slide-127
SLIDE 127

Equation-Free Closures

Kevrekidis and co-workers developed numerous algorithms on the premise that one does not even have equations (e.g., the legacy code), or, equivalently, that the equations are known but useless. The assumption is that we have control of a numerical solver that we can use at will (e.g., initialize with various initial data), but only for short time intervals (“bursts”). The idea is to make clever use of these short calculations to predict “coarse” properties of the system.

slide-128
SLIDE 128

Example [Siettos et al. 2003]: Liquid crystalline polymers are characterized by an orientation unit vector u. The distribution ψ(u,t) satisfies the (nonlinear) Smoluchowski equation:

∂ψ/∂t = ∂/∂u · [ (1/2) ∂ψ/∂u + ψ(u) ∂V(ψ, u)/∂u ]

slide-129
SLIDE 129

Example [Siettos et al. 2003]: Liquid crystalline polymers are characterized by an orientation unit vector u. The distribution ψ(u,t) satisfies the (nonlinear) Smoluchowski equation:

∂ψ/∂t = ∂/∂u · [ (1/2) ∂ψ/∂u + ψ(u) ∂V(ψ, u)/∂u ]

V(ψ,u) is the potential, given by the mean-field expression:

V(ψ, u) = −α uu : S,   S = ⟨uu⟩ − I/3

where α is the potential intensity and ⟨·⟩ denotes averaging with respect to ψ.

slide-130
SLIDE 130

Example [Siettos et al. 2003]: Liquid crystalline polymers are characterized by an orientation unit vector u. The distribution ψ(u,t) satisfies the (nonlinear) Smoluchowski equation:

∂ψ/∂t = ∂/∂u · [ (1/2) ∂ψ/∂u + ψ(u) ∂V(ψ, u)/∂u ]

V(ψ,u) is the potential, given by the mean-field expression:

V(ψ, u) = −α uu : S,   S = ⟨uu⟩ − I/3

where α is the potential intensity and ⟨·⟩ denotes averaging with respect to ψ.

Goal: find the equilibrium value of a “coarse order-parameter” as a function of the potential intensity α. (A high-dimensional problem.)

slide-131
SLIDE 131

∂ψ/∂t = ∂/∂u · [ (1/2) ∂ψ/∂u + ψ(u) ∂V(ψ, u)/∂u ]

V(ψ, u) = −α uu : S,   S = ⟨uu⟩ − I/3

slide-132
SLIDE 132

∂ψ/∂t = ∂/∂u · [ (1/2) ∂ψ/∂u + ψ(u) ∂V(ψ, u)/∂u ]

V(ψ, u) = −α uu : S,   S = ⟨uu⟩ − I/3

Algorithm:

  • Step 1: select the “coarse” variable: X = ⟨uz uz⟩ − 1/3.
  • Step 2: given X, “lift” it to many “microscopic” states u that are consistent with the value of X.
  • Step 3: evolve each microscopic state for a short duration T (here, evolve the corresponding SDE, and approximate ensemble averages by empirical averages).
  • Step 4: project the ensemble u(T) onto the coarse variable X.

slide-133
SLIDE 133

∂ψ/∂t = ∂/∂u · [ (1/2) ∂ψ/∂u + ψ(u) ∂V(ψ, u)/∂u ]

V(ψ, u) = −α uu : S,   S = ⟨uu⟩ − I/3

Algorithm:

  • Step 1: select the “coarse” variable: X = ⟨uz uz⟩ − 1/3.
  • Step 2: given X, “lift” it to many “microscopic” states u that are consistent with the value of X.
  • Step 3: evolve each microscopic state for a short duration T (here, evolve the corresponding SDE, and approximate ensemble averages by empirical averages).
  • Step 4: project the ensemble u(T) onto the coarse variable X.

The lift-evolve-project cycle provides a “projective integrator” for the coarse variables (separation of scales implicitly assumed). Use it to do “numerical analysis” (e.g., find fixed points, bifurcation analysis, etc.).
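The lift-evolve-project cycle can be illustrated on a much simpler problem. The sketch below uses a toy Ornstein-Uhlenbeck SDE (not the Smoluchowski model of the slides), with the coarse variable taken to be the ensemble mean; all parameter values are illustrative choices:

```python
import random

def coarse_timestepper(X, T=0.1, replicas=2000, dt=1e-3, sigma=0.5, seed=1):
    """One lift-evolve-project cycle for the toy SDE dY = -Y dt + sigma dW,
    with coarse variable X = ensemble mean."""
    rng = random.Random(seed)
    # lift: draw microscopic replicas consistent with the coarse value X
    ys = [X + 0.1 * rng.gauss(0.0, 1.0) for _ in range(replicas)]
    # evolve: a short burst of Euler-Maruyama steps for each replica
    for _ in range(int(T / dt)):
        ys = [y - y * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0) for y in ys]
    # project: restrict the ensemble back to the coarse variable
    return sum(ys) / len(ys)

X_new = coarse_timestepper(1.0)
# for this linear toy problem the exact coarse map is known,
# X(T) = X * exp(-T), so X_new should be near 0.905 up to sampling error
```

Wrapping this coarse timestepper in the projective integrator of the previous slides yields a coarse integrator that never needs an explicit equation for X.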

slide-134
SLIDE 134

Results: [Bifurcation diagram: order parameter vs. potential intensity α.]
slide-135
SLIDE 135

System Identification

slide-136
SLIDE 136

System Identification

Idea: observe trajectories of the system of interest (experiments or simulations) and fit its behavior to an optimally selected low-dimensional model.
slide-137
SLIDE 137

System Identification

Idea: observe trajectories of the system of interest (experiments or simulations) and fit its behavior to an optimally selected low-dimensional model.

Motivating example:

slide-138
SLIDE 138

System Identification

Idea: observe trajectories of the system of interest (experiments or simulations) and fit its behavior to an optimally selected low-dimensional model.

Motivating example:

  • We observe the time evolution of the conformation of a complex molecule.
slide-139
SLIDE 139

System Identification

Idea: observe trajectories of the system of interest (experiments or simulations) and fit its behavior to an optimally selected low-dimensional model.

Motivating example:

  • We observe the time evolution of the conformation of a complex molecule.
  • We assume that the effective dynamics consist of rare transitions between metastable states.

slide-140
SLIDE 140

System Identification

Idea: observe trajectories of the system of interest (experiments or simulations) and fit its behavior to an optimally selected low-dimensional model.

Motivating example:

  • We observe the time evolution of the conformation of a complex molecule.
  • We assume that the effective dynamics consist of rare transitions between metastable states.
  • We model the transitions between essential conformations by a continuous-time Markov process.

slide-141
SLIDE 141

System Identification

Idea: observe trajectories of the system of interest (experiments or simulations) and fit its behavior to an optimally selected low-dimensional model.

Motivating example:

  • We observe the time evolution of the conformation of a complex molecule.
  • We assume that the effective dynamics consist of rare transitions between metastable states.
  • We model the transitions between essential conformations by a continuous-time Markov process.
  • It is unclear what the conformations are (we observe points in a high-dimensional space) and how many there are.

slide-142
SLIDE 142

Example [Huisinga et al. 2003]: Assume skew-product dynamics (the y-dynamics are autonomous):

dx/dt = f(x, y)
dy/dt = g(y)

slide-143
SLIDE 143

Example [Huisinga et al. 2003]: Assume skew-product dynamics (the y-dynamics are autonomous):

dx/dt = f(x, y)
dy/dt = g(y)

A trajectory of the system is sampled at time intervals τ. This induces a propagation operator (“embedded Markov chain”):

(xn+1, yn+1) = Φ(xn, yn)

slide-144
SLIDE 144

Example [Huisinga et al. 2003]: Assume skew-product dynamics (the y-dynamics are autonomous):

dx/dt = f(x, y)
dy/dt = g(y)

A trajectory of the system is sampled at time intervals τ. This induces a propagation operator (“embedded Markov chain”):

(xn+1, yn+1) = Φ(xn, yn)

Denoting by Π the projection (x,y) ↦ x, we are interested in the (non-Markovian) dynamics

xn+1 = ΠΦ(xn, yn)

slide-145
SLIDE 145

Example [Huisinga et al. 2003]: Assume skew-product dynamics (the y-dynamics are autonomous):

dx/dt = f(x, y)
dy/dt = g(y)

A trajectory of the system is sampled at time intervals τ. This induces a propagation operator (“embedded Markov chain”):

(xn+1, yn+1) = Φ(xn, yn)

Denoting by Π the projection (x,y) ↦ x, we are interested in the (non-Markovian) dynamics

xn+1 = ΠΦ(xn, yn)

Goal: identify the metastable states in X and the corresponding Markov transition matrix.

slide-146
SLIDE 146

xn+1 = ΠΦ(xn, yn)

slide-147
SLIDE 147

xn+1 = ΠΦ(xn, yn)

The underlying assumption is that some kind of scale separation exists, so that the process xn can be approximated by a Markov chain on X.

slide-148
SLIDE 148

xn+1 = ΠΦ(xn, yn)

The underlying assumption is that some kind of scale separation exists, so that the process xn can be approximated by a Markov chain on X. Key idea: metastability is associated with eigenfunctions whose eigenvalues are close to 1. These eigenfunctions are approximately piecewise constant, and the metastable states are the intervals on which they are constant.

slide-149
SLIDE 149

xn+1 = ΠΦ(xn, yn)

The underlying assumption is that some kind of scale separation exists, so that the process xn can be approximated by a Markov chain on X. Key idea: metastability is associated with eigenfunctions whose eigenvalues are close to 1. These eigenfunctions are approximately piecewise constant, and the metastable states are the intervals on which they are constant.

Algorithm: (1) partition X into a finite (large) number of intervals; (2) construct the Markov transition matrix Pij by empirical counting; (3) identify the cluster of eigenvalues close to 1; (4) identify the metastable states; (5) project Pij onto the coarser partition.
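Steps (1)–(3) can be sketched as follows. The double-well trajectory is an illustrative stand-in for the sampled x(t); the bin count and dynamics parameters are arbitrary choices:

```python
import numpy as np

def transition_matrix(samples, n_bins, lo, hi):
    """Steps (1)-(2): partition [lo, hi] into n_bins intervals and build
    the row-stochastic transition matrix P_ij by empirical counting."""
    idx = np.clip(((np.asarray(samples) - lo) / (hi - lo) * n_bins).astype(int),
                  0, n_bins - 1)
    P = np.zeros((n_bins, n_bins))
    for i, j in zip(idx[:-1], idx[1:]):
        P[i, j] += 1.0
    rows = P.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0            # unvisited bins keep zero rows
    return P / rows

# illustrative trajectory: noisy overdamped motion in a double well,
# with rare hops between the wells near x = -1 and x = +1
rng = np.random.default_rng(0)
x, traj = -1.0, []
for _ in range(20000):
    x += 0.1 * (x - x ** 3) + 0.25 * rng.standard_normal()
    traj.append(x)

P = transition_matrix(traj, n_bins=20, lo=-2.0, hi=2.0)
ev = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
# step (3): a cluster of eigenvalues close to 1 signals metastable states
```

Grouping the bins on which the corresponding eigenvectors are nearly constant then gives the metastable sets, and the coarse transition matrix of step (5).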

slide-150
SLIDE 150

Model example: a particle interacting with a “heat bath”

dx/dt = −V′(x) + Σ_{j=1}^{N} uj
duj/dt = −j uj,   j = 1, ..., N

[Plot: the potential V(x).]

slide-151
SLIDE 151

Model example: a particle interacting with a “heat bath”

dx/dt = −V′(x) + Σ_{j=1}^{N} uj
duj/dt = −j uj,   j = 1, ..., N

[Plot: the potential V(x).]

  • Evolve the system and sample x(t) at time intervals 1.
slide-152
SLIDE 152

Model example: a particle interacting with a “heat bath”

dx/dt = −V′(x) + Σ_{j=1}^{N} uj
duj/dt = −j uj,   j = 1, ..., N

[Plot: the potential V(x).]

  • Evolve the system and sample x(t) at time intervals 1.
  • Divide the axis into many small intervals.
slide-153
SLIDE 153

Model example: a particle interacting with a “heat bath”

dx/dt = −V′(x) + Σ_{j=1}^{N} uj
duj/dt = −j uj,   j = 1, ..., N

[Plot: the potential V(x).]

  • Evolve the system and sample x(t) at time intervals 1.
  • Divide the axis into many small intervals.
  • Construct a transition matrix Pij by counting.
slide-154
SLIDE 154

Model example: a particle interacting with a “heat bath”

dx/dt = −V′(x) + Σ_{j=1}^{N} uj
duj/dt = −j uj,   j = 1, ..., N

[Plot: the potential V(x).]

  • Evolve the system and sample x(t) at time intervals 1.
  • Divide the axis into many small intervals.
  • Construct a transition matrix Pij by counting.
  • Calculate the spectrum of this matrix.
slide-155
SLIDE 155

Model example: a particle interacting with a “heat bath”

dx/dt = −V′(x) + Σ_{j=1}^{N} uj
duj/dt = −j uj,   j = 1, ..., N

[Plot: the potential V(x).]

  • Evolve the system and sample x(t) at time intervals 1.
  • Divide the axis into many small intervals.
  • Construct a transition matrix Pij by counting.
  • Calculate the spectrum of this matrix.
slide-156
SLIDE 156

Concluding Remarks

slide-157
SLIDE 157

Concluding Remarks

  • Active field of research. Systems of increasing complexity are being studied, raising the demand for dimension-reduction techniques.

slide-158
SLIDE 158

Concluding Remarks

  • Active field of research. Systems of increasing complexity are being studied, raising the demand for dimension-reduction techniques.
  • Analyses are very instructive, but usually restricted to systems of “academic” interest.

slide-159
SLIDE 159

Concluding Remarks

  • Active field of research. Systems of increasing complexity are being studied, raising the demand for dimension-reduction techniques.
  • Analyses are very instructive, but usually restricted to systems of “academic” interest.
  • In “real life” one has to be content with uncontrolled approximations, “tailored” to the problem at hand. No technique is able to solve all problems.
slide-160
SLIDE 160

Concluding Remarks

  • Active field of research. Systems of increasing complexity are being studied, raising the demand for dimension-reduction techniques.
  • Analyses are very instructive, but usually restricted to systems of “academic” interest.
  • In “real life” one has to be content with uncontrolled approximations, “tailored” to the problem at hand. No technique is able to solve all problems.
  • Many open questions remain (e.g., what is the mathematical framework for equation-free closures?).