Coarse-graining Markov state models with PCCA

SLIDE 1

Coarse-graining Markov state models with PCCA

SLIDE 2

Coarse-graining Markov state models

  • Coarse-graining Markov state models here means finding a smaller transition matrix that does a similar job as the large original transition matrix.
  • We have already seen one way of reducing the dimension of a transition matrix. Let’s take this as our starting point…
SLIDE 3

The truncated eigendecomposition

  • The eigendecomposition of T(τ) reads

T(τ) = R Λ(τ) L

  • We have seen that for sufficiently large lag times τ, the majority of eigenvalues become almost zero.
  • We can therefore truncate the matrix Λ(τ):

Λ(τ) = diag(1, 0.99, …, ≈0, …, ≈0)

  • Delete the near-zero part and call the reduced matrix Λ̂. We can also drop the corresponding eigenvectors in R, L and call the reduced matrices R̂, L̂.
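The truncation step can be sketched with numpy. The 4-state transition matrix below is illustrative data (not from the slides), built with two metastable blocks {0, 1} and {2, 3}:

```python
import numpy as np

# Hypothetical 4-state transition matrix with two metastable blocks
# {0, 1} and {2, 3}; illustrative data, not from the slides.
T = np.array([[0.68, 0.30, 0.02, 0.00],
              [0.30, 0.66, 0.02, 0.02],
              [0.02, 0.02, 0.66, 0.30],
              [0.00, 0.02, 0.30, 0.68]])

# Eigendecomposition T = R @ Lam @ L, sorted by descending eigenvalue.
w, R = np.linalg.eig(T)
order = np.argsort(w.real)[::-1]
w, R = w.real[order], R.real[:, order]
L = np.linalg.inv(R)                     # left eigenvectors, L @ R = Id

# Truncate: keep only the m = 2 dominant eigenpairs.
m = 2
R_hat, L_hat = R[:, :m], L[:m, :]
Lam_hat = np.diag(w[:m])

# For a large lag (high matrix power) the truncated decomposition
# reproduces the full propagator almost exactly, because the
# discarded eigenvalues raised to the 10th power are negligible.
exact = np.linalg.matrix_power(T, 10)
approx = R_hat @ np.linalg.matrix_power(Lam_hat, 10) @ L_hat
print(np.abs(exact - approx).max())
```

The error is on the order of the largest discarded eigenvalue raised to the power of the lag, so it shrinks rapidly as the lag grows.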

SLIDE 4

The truncated eigendecomposition

  • We now have T(τ) ≈ R̂ Λ̂(τ) L̂.
  • And also T(τ)^k ≈ R̂ Λ̂^k(τ) L̂, since L̂ R̂ = Id.
  • So did we find what we wanted?
  • Λ̂(τ) replaces T for large τ ✓
  • Λ̂(τ) is a small matrix ✓
  • But Λ̂(τ) is not a transition matrix, e.g. Λ̂ 𝟙 ≠ 𝟙.
  • Can we correct the last point?
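The failing point is easy to see numerically. Reusing the same illustrative 4-state matrix as above (hypothetical data), the rows of Λ̂ sum to the eigenvalues themselves, not to one:

```python
import numpy as np

# Same illustrative 4-state model as before (hypothetical data).
T = np.array([[0.68, 0.30, 0.02, 0.00],
              [0.30, 0.66, 0.02, 0.02],
              [0.02, 0.02, 0.66, 0.30],
              [0.00, 0.02, 0.30, 0.68]])

w = np.sort(np.linalg.eigvals(T).real)[::-1]
Lam_hat = np.diag(w[:2])                 # truncated eigenvalue matrix

# T is a transition matrix: every row sums to one ...
print(T.sum(axis=1))                     # -> [1. 1. 1. 1.]
# ... but Lam_hat is not: its rows sum to the eigenvalues themselves.
print(Lam_hat.sum(axis=1))               # ≈ [1.0, 0.94], not all ones
```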
SLIDE 5

A closer look at the eigenvectors

SLIDE 6

A closer look at the eigenvectors

χ = R̂ A

  • The dominant eigenvectors can be linearly transformed into indicator vectors for the metastable states.
  • These indicators are called memberships.
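Why such a linear transform exists can be seen from the shape of the dominant eigenvectors. In the illustrative 4-state model used above (hypothetical data), the second right eigenvector is nearly constant within each metastable block and changes sign between them:

```python
import numpy as np

# Hypothetical 4-state model with metastable blocks {0, 1} and {2, 3}.
T = np.array([[0.68, 0.30, 0.02, 0.00],
              [0.30, 0.66, 0.02, 0.02],
              [0.02, 0.02, 0.66, 0.30],
              [0.00, 0.02, 0.30, 0.68]])

w, R = np.linalg.eig(T)
order = np.argsort(w.real)[::-1]
psi2 = R.real[:, order][:, 1]            # second right eigenvector
psi2 = psi2 / np.abs(psi2).max()         # fix the arbitrary scale

# psi2 forms two plateaus of opposite sign -- exactly the structure
# that a linear transform A can turn into 0/1-like membership vectors.
print(np.round(psi2, 2))
```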
SLIDE 7

Coarse-graining with PCCA

  • Use the eigendecomposition and insert A A⁻¹:

T(τ) = R̂ Λ̂(τ) L̂ = R̂ A A⁻¹ Λ̂(τ) A A⁻¹ L̂

  • With χ = R̂ A and T̃(τ) = A⁻¹ Λ̂(τ) A we have

T(τ)^k ≈ R̂ A T̃^k(τ) A⁻¹ L̂ = χ T̃^k(τ) A⁻¹ L̂

  • Are we done now?
  • T̃ replaces T for large τ ✓ Same eigenvalues as T ✓
  • T̃ is a small matrix ✓
  • T̃ 𝟙 = 𝟙 (without proof) ✓
  • T̃ can be interpreted as the transition matrix between the metastable states. ✓
  • T̃ is a Koopman matrix. (without proof) ✓
  • T̃ ≱ 0 ✗ (some entries can be negative)
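The whole construction can be sketched end to end for m = 2. This uses the same illustrative 4-state matrix as above and the simple two-state special case of PCCA, where the transform A turning (𝟙, ψ₂) into memberships can be written down in closed form (the general case requires the PCCA+ optimization):

```python
import numpy as np

# Hypothetical 4-state model (not from the slides); m = 2 metastable sets.
T = np.array([[0.68, 0.30, 0.02, 0.00],
              [0.30, 0.66, 0.02, 0.02],
              [0.02, 0.02, 0.66, 0.30],
              [0.00, 0.02, 0.30, 0.68]])

w, R = np.linalg.eig(T)
order = np.argsort(w.real)[::-1]
w, R = w.real[order], R.real[:, order]
R_hat, Lam_hat = R[:, :2], np.diag(w[:2])
R_hat[:, 0] /= R_hat[0, 0]               # scale first eigenvector to constant 1

# Two-state special case: A maps (1, psi2) linearly onto memberships.
psi2 = R_hat[:, 1]
lo, hi = psi2.min(), psi2.max()
d = hi - lo
A = np.array([[-lo / d, hi / d],
              [1.0 / d, -1.0 / d]])

chi = R_hat @ A                          # memberships: chi >= 0, rows sum to 1
T_cg = np.linalg.inv(A) @ Lam_hat @ A    # coarse-grained matrix A^-1 Lam_hat A

print(np.round(T_cg, 3))                 # 2x2, rows sum to one
```

By construction T_cg has the same two dominant eigenvalues as T and its rows sum to one, matching the checklist above.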

SLIDE 8

PCCA in PyEmma

  • χ … metastable memberships
  • χ̃ … metastable distributions
  • argmax_i χ_ji … metastable assignments
  • S_i = {j ∣ argmax_i χ_ji = i} … metastable sets, i = 1, …, m
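The last two bullets amount to an argmax over the membership matrix. A minimal numpy sketch with made-up membership values (the made-up chi stands in for what PyEMMA's PCCA returns as metastable memberships):

```python
import numpy as np

# Made-up membership matrix chi for n = 4 microstates and m = 2
# metastable sets (each row sums to one).
chi = np.array([[0.98, 0.02],
                [0.95, 0.05],
                [0.04, 0.96],
                [0.01, 0.99]])

# Metastable assignment of each microstate: the set with largest membership.
assignments = chi.argmax(axis=1)         # -> [0 0 1 1]

# Metastable sets S_i = {j | argmax_i chi[j, i] == i}.
sets = [np.flatnonzero(assignments == i) for i in range(chi.shape[1])]
print(sets)                              # [array([0, 1]), array([2, 3])]
```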
SLIDE 9

Further reading

  • Susanna Röblitz, Marcus Weber, “Fuzzy spectral clustering by PCCA+: application to Markov state models and data classification”, Advances in Data Analysis and Classification, 7, 147 (2013)
  • Marcus Weber, Konstantin Fackeldey, “G-PCCA: Spectral Clustering for Non-reversible Markov Chains”, Konrad-Zuse-Zentrum für Informationstechnik Berlin, ZIB-Report 15-35 (2015)

SLIDE 10

Appendix: Proof that T̃ 𝟙 = 𝟙

  • Memberships must sum to one: χ 𝟙_{m×1} = 𝟙_{n×1}.
  • The first right eigenvector is constant: R̂ e_1 = 𝟙_{n×1}.
  • ⇒ χ 𝟙_{m×1} = R̂ e_1
  • Use the definition of χ: χ 𝟙_{m×1} = R̂ A 𝟙_{m×1}.
  • Therefore R̂ e_1 = R̂ A 𝟙_{m×1}, which is satisfied by A 𝟙_{m×1} = e_1.
  • ⇒ T̃ 𝟙 = A⁻¹ Λ̂(τ) A 𝟙 = A⁻¹ Λ̂(τ) e_1 = A⁻¹ e_1 = 𝟙
SLIDE 11

Appendix: Computing A

  • Cov(χ, χ) = Aᵀ R̂ᵀ Π R̂ A … overlap matrix of the metastable states, weighted by the stationary distribution (Π = diag(π)).
  • D = diag(Aᵀ R̂ᵀ π) … stationary weights of the metastable states, inserted into the diagonal of a matrix.
  • tr(D⁻¹ Aᵀ R̂ᵀ Π R̂ A) → max, i.e. A is chosen to maximize the crispness of the memberships.

SLIDE 15
  • R̂ ∈ ℝ^{n×m} … matrix of dominant eigenvectors
  • χ ∈ ℝ^{n×m} … matrix of memberships
  • χ ≥ 0 … non-negativity
  • ∑_{i=1}^{m} χ_ji = 1 … partition of 1
  • χ ≈ R̂ A … spectral clustering