Bayesian Dynamic Mode Decomposition

  1. Bayesian Dynamic Mode Decomposition. Naoya Takeishi §, Yoshinobu Kawahara †‡, Yasuo Tabei ‡, Takehisa Yairi §. § Dept. of Aeronautics & Astronautics, The University of Tokyo; † The Institute of Scientific and Industrial Research, Osaka University; ‡ RIKEN Center for Advanced Intelligence Project (AIP). 22 August 2017, IJCAI @ Melbourne.

  2. Motivation: Analysis of dynamical systems. ◮ Various types of complex phenomena can be described in terms of (nonlinear) dynamical systems: x_{t+1} = f(x_t), x ∈ M (state space). ◮ When f is nonlinear, analysis based on trajectories of x is difficult.

  3. Operator-theoretic view of dynamical systems. Definition (Koopman operator [Koopman '31, Mezić '05]): the Koopman operator (composition operator) K represents the time evolution of observables (i.e., observation functions) g : M → R or C, via K g(x) = g(f(x)), g ∈ F (function space). ◮ K describes the temporal evolution of functions (infinite-dimensional vectors) instead of the finite-dimensional state vector. ◮ By defining K, we can lift the analysis of nonlinear dynamical systems into a linear (but infinite-dimensional) regime. ◮ Since K is linear, we can analyze the dynamics using the spectra of K.
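A minimal worked illustration (not from the slides): for the scalar linear map f(x) = a x and the observable g(x) = x², we have K g(x) = g(f(x)) = a² x², so g is an eigenfunction of K with eigenvalue a²; more generally, x^k is an eigenfunction with eigenvalue a^k, which shows how nonlinear observables of even a simple system populate the Koopman spectrum.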

  4. Koopman mode decomposition (KMD). ◮ Eigenvalues and eigenfunctions of K: K ϕ_i(x) = λ_i ϕ_i(x) for i = 1, 2, .... ◮ Projection of g(x) onto span{ϕ_1(x), ϕ_2(x), ...} (i.e., transformation to a canonical form): g(x) = Σ_{i=1}^{∞} ϕ_i(x) v_i, where the coefficients v_i are called Koopman modes. ◮ Since the ϕ_i are eigenfunctions, g(x_t) = Σ_{i=1}^{∞} λ_i^t ϕ_i(x_0) v_i (KMD), with w_i := ϕ_i(x_0) v_i, where |λ_i| is the decay rate of w_i and ∠λ_i is the frequency of w_i. ◮ A numerical realization of KMD is dynamic mode decomposition (DMD) [Rowley+ '09, Schmid '10, Tu+ '14].

  5. Dynamic mode decomposition (DMD). Assumption (K-invariant subspace [Budišić+ '12]): the dataset is generated with a set of observables g(x) = [g_1(x) g_2(x) ··· g_n(x)]^T that spans an (approximately) K-invariant subspace. ⇒ Then KMD can be (approximately) realized by DMD. Algorithm (DMD [Tu+ '14]): Input: time series (y_0, ..., y_m) s.t. y_t = g(x_t). Output: eigenvalues {λ_i}, eigenfunctions {ϕ_i}, and modes {w_i}. 1. Estimate a linear model y_{t+1} ≈ A y_t. 2. From A, compute the eigenvalues λ_i and the right-/left-eigenvectors w_i, z_i^H. 3. Compute ϕ_{i,t} = z_i^H y_t.
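The following Python sketch illustrates the three-step algorithm above under simplifying assumptions (a full-rank least-squares estimate of A and no rank truncation); the function name dmd and all values are illustrative, and this is not the authors' implementation.

    import numpy as np

    def dmd(Y):
        """DMD sketch. Y: (n, m+1) array whose columns are the snapshots y_0, ..., y_m."""
        Y0, Y1 = Y[:, :-1], Y[:, 1:]
        # 1. Estimate the linear model y_{t+1} ≈ A y_t by least squares (pseudoinverse).
        A = Y1 @ np.linalg.pinv(Y0)
        # 2. Eigenvalues and right eigenvectors of A; the rows of inv(W) give the left eigenvectors z_i^H.
        lam, W = np.linalg.eig(A)
        Z = np.linalg.inv(W)
        # 3. Eigenfunction values ϕ_{i,t} = z_i^H y_t at every snapshot.
        Phi = Z @ Y0
        return lam, W, Phi   # eigenvalues, dynamic modes (columns of W), eigenfunction values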

  6. Quasi-periodic mode extraction by KMD/DMD. ◮ Review: KMD/DMD decomposes a time series into modes w_i that evolve with frequency ∠λ_i and decay rate |λ_i|: g(x_t) ≈ Σ_{i=1}^{∞} λ_i^t w_i. ◮ The w_i are termed dynamic modes. Example (2D fluid flow past a cylinder): flow past a cylinder is universal in many natural/engineering situations. [Figure: DMD decomposes the flow field into a superposition of dynamic modes w_1, w_2, w_3, w_4, w_5, ....]

  7. Other applications of KMD/DMD. ◮ Lots of applications in a wide range of domains: ◮ fluid mechanics [Rowley+ '09, Schmid '10, & many more], ◮ neuroscience [Brunton+ '16], ◮ image processing [Kutz+ '16, Takeishi+ '17], ◮ analysis of power systems [Raak+ '16, Susuki+ '16], ◮ epidemiology [Proctor&Eckhoff '15], ◮ optimal control [Mauroy&Goncalves '16], ◮ finance [Mann&Kutz '16], ◮ medical care [Bourantas+ '14], ◮ robotics [Berger+ '15], etc.

  8. Issue. ◮ DMD relies on the linear modeling g(x_{t+1}) ≈ A g(x_t) and the eigendecomposition of A. ◮ It therefore lacks an associated probabilistic/Bayesian framework, with which we could ◮ consider observation noise explicitly, ◮ perform posterior inference, ◮ consider DMD extensions in a unified manner, etc. ◮ Let's do it, analogously to PCA's formulation as probabilistic/Bayesian PCA [Tipping&Bishop '99, Bishop '99].

  9. Proposed method (1/2): Probabilistic DMD. ◮ Dataset: snapshot pairs with observation noise, D = {(y_{0,1}, y_{1,1}), ..., (y_{0,t}, y_{1,t}), ..., (y_{0,m}, y_{1,m})}, where y_{0,t} = g(x_t) + e_{0,t} and y_{1,t} = g(x_{t+Δt}) + e_{1,t}. Definition (Generative model of probabilistic DMD): y_{0,t} ∼ CN(Σ_{i=1}^{k} ϕ_{t,i} w_i, σ² I), y_{1,t} ∼ CN(Σ_{i=1}^{k} λ_i ϕ_{t,i} w_i, σ² I), ϕ_{t,i} ∼ CN(0, 1). ◮ If k = n and σ² → 0, the MLE of (λ, w) coincides with DMD's solution.
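A minimal Python sketch of drawing data from this generative model; CN denotes the circularly-symmetric complex normal, and the dimensions, noise level, and random parameter values are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def cnormal(size, scale=1.0):
        """Draw from CN(0, scale^2): real and imaginary parts each have variance scale^2 / 2."""
        return (rng.normal(size=size) + 1j * rng.normal(size=size)) * (scale / np.sqrt(2))

    n, k, m = 8, 3, 100          # observable dimension, number of modes, number of snapshot pairs
    sigma = 0.1                  # observation-noise scale (assumed value)
    W = cnormal((n, k))          # dynamic modes w_i as columns (random placeholders)
    lam = cnormal(k)             # eigenvalues λ_i (random placeholders)
    Phi = cnormal((m, k))        # latent ϕ_{t,i} ∼ CN(0, 1)

    Y0 = Phi @ W.T + cnormal((m, n), sigma)             # y_{0,t} ∼ CN(Σ_i ϕ_{t,i} w_i, σ² I)
    Y1 = (Phi * lam) @ W.T + cnormal((m, n), sigma)     # y_{1,t} ∼ CN(Σ_i λ_i ϕ_{t,i} w_i, σ² I)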

  10. Proposed method (2/2): Bayesian DMD. Definition (Priors on parameters for Bayesian DMD): w_i | v²_{i,1:n} ∼ CN(0, diag(v²_{i,1}, ..., v²_{i,n})), v²_{i,d} ∼ InvGamma(α_v, β_v), λ_i ∼ CN(0, 1), σ² ∼ InvGamma(α_σ, β_σ). [Graphical model: the latent ϕ_t and the parameters w_i, v²_{i,1:n}, λ_i, σ² generate the observations y_{0,t}, y_{1,t}, with plates over the k modes and the m snapshot pairs.] ◮ For posterior inference, a Gibbs sampler can be constructed easily.
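A minimal Python sketch of drawing once from these priors; the hyperparameter values α_v, β_v, α_σ, β_σ and the dimensions are illustrative assumptions, and the Gibbs sampler itself, which the slide notes is straightforward to construct, is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 8, 3
    alpha_v, beta_v, alpha_s, beta_s = 2.0, 1.0, 2.0, 1.0

    def cnormal(size, var=1.0):
        # CN(0, var): circularly-symmetric complex normal (elementwise variances allowed)
        return (rng.normal(size=size) + 1j * rng.normal(size=size)) * np.sqrt(np.asarray(var) / 2)

    def inv_gamma(alpha, beta, size=None):
        # InvGamma(alpha, beta) drawn as the reciprocal of a Gamma(alpha, scale=1/beta) draw
        return 1.0 / rng.gamma(alpha, 1.0 / beta, size=size)

    v2 = inv_gamma(alpha_v, beta_v, size=(k, n))                  # v²_{i,d} ∼ InvGamma(α_v, β_v)
    W = np.stack([cnormal(n, v2[i]) for i in range(k)])           # w_i | v² ∼ CN(0, diag(v²_{i,1:n}))
    lam = cnormal(k)                                              # λ_i ∼ CN(0, 1)
    sigma2 = inv_gamma(alpha_s, beta_s)                           # σ² ∼ InvGamma(α_σ, β_σ)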

  11. Extension example: Sparse Bayesian DMD. Definition (Priors on parameters for sparse Bayesian DMD): w_i | v²_{i,1:n} ∼ CN(0, σ² diag(v²_{i,1}, ..., v²_{i,n})), v²_{i,d} ∼ Exponential(γ_i²/2), λ_i ∼ CN(0, 1), σ² ∼ InvGamma(α_σ, β_σ). [Graphical model as on the previous slide.] ◮ We can extend the model in a unified Bayesian manner.
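Continuing the previous prior-sampling sketch (reusing its rng, dimensions, cnormal, and inv_gamma), only the prior on v²_{i,d} and the scaling of w_i change under the sparse model; the γ_i values are illustrative assumptions.

    gamma_i = np.full(k, 2.0)                                            # sparsity hyperparameters γ_i
    v2 = rng.exponential(scale=2.0 / gamma_i[:, None] ** 2, size=(k, n))  # v²_{i,d} ∼ Exponential(γ_i²/2)
    sigma2 = inv_gamma(alpha_s, beta_s)                                  # σ² ∼ InvGamma(α_σ, β_σ)
    W = np.stack([cnormal(n, sigma2 * v2[i]) for i in range(k)])         # w_i ∼ CN(0, σ² diag(v²_{i,1:n}))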

  12. Numerical example (1). Example (fixed-point attractor): generate data by y_t = λ_1^t [2, 2]^T + λ_2^t [2, −2]^T + e_t, where e_t is Gaussian observation noise; the true eigenvalues are λ_1 = 0.9 and λ_2 = 0.8. [Figure: estimated Re(λ_1) and Re(λ_2) as functions of the noise level; legend: truth, DMD, TLS-DMD.]
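A minimal Python sketch of generating this dataset; the series length and noise level are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    lam1, lam2 = 0.9, 0.8
    m, noise = 100, 0.05
    t = np.arange(m + 1)
    # Columns of Y are y_0, ..., y_m: two decaying directions plus Gaussian observation noise.
    Y = (np.outer([2.0, 2.0], lam1 ** t)
         + np.outer([2.0, -2.0], lam2 ** t)
         + noise * rng.standard_normal((2, m + 1)))
    # The DMD sketch shown earlier could then be applied as: lam, W, Phi = dmd(Y)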

  13. Numerical example (2). Example (limit-cycle attractor): generate data from the Stuart–Landau equation, r_{t+1} = r_t + Δt (μ r_t − r_t³), θ_{t+1} = θ_t + Δt (γ − β r_t²), with Gaussian observation noise. The true (continuous-time) eigenvalues lie on the imaginary axis. [Figure: estimated continuous-time eigenvalues, Re(log(λ)/Δt) versus Im(log(λ)/Δt), for DMD, TLS-DMD, and BDMD (average).]
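A minimal Python sketch of simulating this system; the parameter values μ, γ, β, Δt, the Cartesian observable, the initial condition, and the noise level are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    mu, gam, beta, dt = 1.0, 1.0, 0.5, 0.01
    m, noise = 2000, 0.01
    r, th = 0.5, 0.0
    Y = np.empty((2, m + 1))
    for t in range(m + 1):
        Y[:, t] = [r * np.cos(th), r * np.sin(th)]        # observe the state in Cartesian coordinates
        # Discretized Stuart–Landau update in polar coordinates (both updates use the old r).
        r, th = r + dt * (mu * r - r ** 3), th + dt * (gam - beta * r ** 2)
    Y += noise * rng.standard_normal(Y.shape)             # add Gaussian observation noise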

  14. Summary: K g(x) = g(f(x)). Analysis of dynamical systems based on the Koopman operator is a useful tool, and dynamic mode decomposition (DMD) is a numerical method for Koopman analysis. In this work, we developed probabilistic & Bayesian DMDs to ◮ consider observation noise, ◮ infer posterior distributions, ◮ extend DMD in a unified manner, etc. Implementation available at https://github.com/thetak11/bayesiandmd
