Matrix-correlated random variables: A statistical physics and signal processing duet (presentation transcript)



1. Matrix-correlated random variables: A statistical physics and signal processing duet
Florian Angeletti, in collaboration with Hugo Touchette, Patrice Abry and Eric Bertin. 4 December 2015.
Outline: Introduction, Duality, Statistical properties, Random vector synthesis, Limit laws for the sum.

2. Random vectors
Random vectors in signal processing: joint probability density P(x_1, ..., x_n).
i.i.d. case: P(x_1, ..., x_n) = f(x_1) ... f(x_n).
How to generalize to the non-i.i.d. case?

3. Random vectors: out-of-equilibrium physics
Asymmetric Simple Exclusion Process model [Derrida et al., J. Phys. A, 1993]:
p(x_1, ..., x_n) ∝ R(x_1) ... R(x_n), where the scalar f is replaced by a matrix R.
The product structure is preserved. Signal processing application?

4. Matrix-correlated random variables
p(x_1, ..., x_n) = L(R(x_1) ... R(x_n)) / L(E^n), where:
L is a linear form, L(M) = tr(A^T M), with A a d x d positive matrix;
R(x) is a d x d positive matrix function;
E = ∫ R(x) dx is the structure matrix;
R is a probability density function matrix: R_{i,j}(x) = E_{i,j} P_{i,j}(x).
For d > 1, non-commutativity ⇒ correlation.
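
As a point of reference, here is a minimal Python sketch of this definition for a toy d = 2 model. The matrices A and E, the Gaussian entry densities P_{i,j} and all numerical values are illustrative assumptions, not values from the talk.

```python
import numpy as np

# Toy d = 2 example of p(x_1, ..., x_n) = L(R(x_1)...R(x_n)) / L(E^n).
d = 2
A = np.ones((d, d))                       # positive structure of the linear form L(M) = tr(A^T M)
E = np.array([[0.9, 0.1],
              [0.2, 0.8]])                # positive matrix, E = integral of R(x) dx

def gauss_pdf(x, mu, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# P[i][j] is a probability density; here Gaussians with entry-dependent means (illustrative).
means = np.array([[-1.0, 0.0],
                  [0.0, 1.0]])

def R(x):
    """Matrix density R_ij(x) = E_ij * P_ij(x); it integrates entrywise to E."""
    return E * gauss_pdf(x, means)

def L(M):
    return np.trace(A.T @ M)

def joint_density(xs):
    """p(x_1, ..., x_n) = L(R(x_1)...R(x_n)) / L(E^n)."""
    prod = np.eye(d)
    for x in xs:
        prod = prod @ R(x)
    return L(prod) / L(np.linalg.matrix_power(E, len(xs)))

print(joint_density([0.3, -0.2, 1.1]))
```

Because each R(x) integrates entrywise to E, integrating the numerator over all x_k reproduces L(E^n), so the density is correctly normalized.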

5. Objectives
Mathematical model: p(x_1, ..., x_n) ≈ R(x_1) ... R(x_n).
Study the statistical properties of these models: large deviation functions, hidden Markov model representation, topology-induced correlation, limit distributions for the sums and the extremes.
Signal processing applications?

6. Correlation
Product structure: p(x_1, ..., x_n) = L(R(x_1) ... R(x_n)) / L(E^n).
Moment matrix: Q(q) = ∫ x^q R(x) dx.
E[X_k^p] = L(E^{k-1} Q(p) E^{n-k}) / L(E^n)
E[X_k X_l] = L(E^{k-1} Q(1) E^{l-k-1} Q(1) E^{n-l}) / L(E^n)
E[X_k X_l X_m] = L(E^{k-1} Q(1) E^{l-k-1} Q(1) E^{m-l-1} Q(1) E^{n-m}) / L(E^n)
...
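
A sketch of how these moment formulas can be evaluated numerically, reusing the same illustrative toy model (d = 2, Gaussian entry densities); Q(q) is approximated here by a Riemann sum, which is a shortcut of the sketch rather than part of the method.

```python
import numpy as np

# Illustrative toy model (same assumptions as the previous sketch).
d = 2
A = np.ones((d, d))
E = np.array([[0.9, 0.1],
              [0.2, 0.8]])
means = np.array([[-1.0, 0.0],
                  [0.0, 1.0]])

def gauss_pdf(x, mu, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def R(x):
    return E * gauss_pdf(x, means)

def L(M):
    return np.trace(A.T @ M)

def Epow(k):
    return np.linalg.matrix_power(E, k)

# Moment matrix Q(q) = \int x^q R(x) dx, approximated by a Riemann sum on a grid.
grid = np.linspace(-10.0, 10.0, 4001)
dx = grid[1] - grid[0]

def Q(q):
    return sum((x ** q) * R(x) for x in grid) * dx

def mean_Xk(k, n):
    """E[X_k] = L(E^{k-1} Q(1) E^{n-k}) / L(E^n)."""
    return L(Epow(k - 1) @ Q(1) @ Epow(n - k)) / L(Epow(n))

def second_moment(k, l, n):
    """E[X_k X_l] = L(E^{k-1} Q(1) E^{l-k-1} Q(1) E^{n-l}) / L(E^n), for k < l."""
    return L(Epow(k - 1) @ Q(1) @ Epow(l - k - 1) @ Q(1) @ Epow(n - l)) / L(Epow(n))

n = 50
print("Cov(X_5, X_10) =", second_moment(5, 10, n) - mean_Xk(5, n) * mean_Xk(10, n))
```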

7. Stationarity
Translation invariance: p(X_{k_1} = x_1, ..., X_{k_l} = x_l) = p(X_{c+k_1} = x_1, ..., X_{c+k_l} = x_l).
Sufficient condition: [A^T, E] = A^T E - E A^T = 0, i.e. for all M, L(M E) = L(E M). Then:
p(X_k = x) = L(R(x) E^{n-1}) / L(E^n)
p(X_k = x, X_l = y) = L(R(x) E^{l-k-1} R(y) E^{n-|l-k|-1}) / L(E^n)
p(X_k = x, X_l = y, X_m = z) = L(R(x) E^{l-k-1} R(y) E^{m-l-1} R(z) E^{n-|m-k|-1}) / L(E^n)
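
A quick numerical check of the sufficient condition is straightforward. The matrices below are illustrative assumptions: a circulant E commutes with the all-ones A, while a generic positive E does not.

```python
import numpy as np

def is_stationary(A, E, tol=1e-12):
    """True if the commutator [A^T, E] = A^T E - E A^T vanishes (sufficient condition)."""
    return np.allclose(A.T @ E - E @ A.T, 0.0, atol=tol)

d = 3
A = np.ones((d, d))

E_circulant = np.array([[0.5, 0.3, 0.2],
                        [0.2, 0.5, 0.3],
                        [0.3, 0.2, 0.5]])   # equal row and column sums
E_generic = np.array([[0.9, 0.1, 0.0],
                      [0.2, 0.7, 0.1],
                      [0.0, 0.3, 0.7]])

print(is_stationary(A, E_circulant))   # True: translation-invariant marginals
print(is_stationary(A, E_generic))     # False: the sufficient condition fails
```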

8. Numerical generation
p(x_1, ..., x_n) = L(R(x_1) ... R(x_n)) / L(E^n). How do we generate a random vector X for a given triple (A, E, P)?
Expand the matrix product:
L(E^n) p(x_1, ..., x_n) = Σ_{Γ ∈ {1,...,d}^{n+1}} A_{Γ_1, Γ_{n+1}} E_{Γ_1, Γ_2} P_{Γ_1, Γ_2}(x_1) ... E_{Γ_n, Γ_{n+1}} P_{Γ_n, Γ_{n+1}}(x_n)
so that p(x_1, ..., x_n) = Σ_Γ P(Γ) P(X | Γ), with Γ a hidden Markov chain.

9. Hidden Markov model representation
Hidden Markov chain: p(Γ) = A_{Γ_1, Γ_{n+1}} Π_k E_{Γ_k, Γ_{k+1}} / L(E^n).
Conditional pdf: p(X_k = x | Γ) = P_{Γ_k, Γ_{k+1}}(x).
E non-stochastic ⇒ non-homogeneous Markov chain.
Specific non-homogeneous hidden Markov model: Hidden Markov Model ⟺ Matrix representation.
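
This representation suggests a direct synthesis sketch: sample the hidden chain Γ, then draw each X_k from P_{Γ_k, Γ_{k+1}}. The sketch below uses the same illustrative toy model (d = 2, unit-variance Gaussian entry densities). Sampling Γ exactly as a Markov bridge conditioned on the pair (Γ_1, Γ_{n+1}) is one possible procedure assumed here, not one spelled out in the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy model (same assumptions as the earlier sketches).
d = 2
A = np.ones((d, d))
E = np.array([[0.9, 0.1],
              [0.2, 0.8]])
means = np.array([[-1.0, 0.0],
                  [0.0, 1.0]])

def Epow(k):
    return np.linalg.matrix_power(E, k)

def sample_vector(n):
    # 1) Sample the pair (Gamma_1, Gamma_{n+1}) with probability A_ab (E^n)_ab / L(E^n).
    w = A * Epow(n)
    flat = rng.choice(d * d, p=(w / w.sum()).ravel())
    a, b = divmod(flat, d)

    # 2) Sample the bridge: P(Gamma_{k+1} = j | Gamma_k = i, Gamma_{n+1} = b)
    #    is proportional to E_ij (E^{n-k})_jb.
    gamma = [a]
    for k in range(1, n + 1):
        i = gamma[-1]
        p = E[i, :] * Epow(n - k)[:, b]
        gamma.append(rng.choice(d, p=p / p.sum()))
    gamma = np.array(gamma)

    # 3) Draw X_k from the entry density P_{Gamma_k, Gamma_{k+1}} (here a unit Gaussian).
    return rng.normal(means[gamma[:-1], gamma[1:]], 1.0)

print(sample_vector(10))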

10. Dual representation
Matrix representation: algebraic properties, statistical properties.
Hidden Markov Model: 2-layer model (correlated layer + independent layer), efficient synthesis computation.

11. Correlation and Jordan decomposition
E[X_k X_l] = L(E^{k-1} Q(1) E^{l-k-1} Q(1) E^{n-l}) / L(E^n)
The dependency structure of X depends on the behavior of E^n.
Let λ_m denote the eigenvalues of E ordered by their real parts, ℜ(λ_1) > ℜ(λ_2) > ... > ℜ(λ_r), and J_{m,s} the Jordan block associated with eigenvalue λ_m. The Jordan decomposition reads
E = B^{-1} diag(J_{1,1}, ..., J_{m,p}) B,
where each block J_{m,s} carries λ_m on its diagonal, 1 on its superdiagonal and 0 elsewhere.

12. Dependency structure
Case 1: Short-range correlation. λ_2 exists:
E[X_k X_l] - E[X_k] E[X_l] ≈ Σ_m α_m (λ_m / λ_1)^{|k-l|}.
[Figure: Corr(1, k) decaying to zero as k grows.]
Case 2: Constant correlation. More than one block J_{1,s}: constant correlation term.
Case 3: Long-range correlation. A block J_{1,s} of size p > 1:
E[X_k X_l] - E[X_k] E[X_l] ≈ P(k/n, (k-l)/n, l/n), with P ∈ R[X, Y, Z] a polynomial.
[Figure: correlation map over the (k, l) plane.]
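
A small numerical illustration of Case 1, still under the illustrative toy model: with unit-variance Gaussian entries, Q(1) = ∫ x R(x) dx reduces to the entrywise product of E and the mean matrix, and the covariance computed from the moment formulas of slide 6 decays geometrically at the rate |λ_2 / λ_1|.

```python
import numpy as np

# Illustrative toy model (same assumptions as before).
d = 2
A = np.ones((d, d))
E = np.array([[0.9, 0.1],
              [0.2, 0.8]])
means = np.array([[-1.0, 0.0],
                  [0.0, 1.0]])

def L(M):
    return np.trace(A.T @ M)

def Epow(k):
    return np.linalg.matrix_power(E, k)

# Q(1) = \int x R(x) dx = E * means entrywise for unit-variance Gaussian entries.
Q1 = E * means

def cov(k, l, n):
    m_k = L(Epow(k - 1) @ Q1 @ Epow(n - k)) / L(Epow(n))
    m_l = L(Epow(l - 1) @ Q1 @ Epow(n - l)) / L(Epow(n))
    m_kl = L(Epow(k - 1) @ Q1 @ Epow(l - k - 1) @ Q1 @ Epow(n - l)) / L(Epow(n))
    return m_kl - m_k * m_l

n = 200
vals = {lag: cov(100, 100 + lag, n) for lag in range(1, 8)}
lam = np.sort(np.abs(np.linalg.eigvals(E)))[::-1]
print("successive covariance ratios:", [round(vals[t + 1] / vals[t], 3) for t in range(1, 7)])
print("lambda_2 / lambda_1 =", round(lam[1] / lam[0], 3))
```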

13. Short-range correlation: ergodic chain
E irreducible ⟺ Γ ergodic.
E irreducible ⟺ G(E) is strongly connected.
⇒ Short-range correlation.
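
The graph criterion can be checked directly: G(E) is the directed graph with an edge i → j whenever E_ij > 0, and E is irreducible exactly when that graph has a single strongly connected component. A sketch using scipy; the example matrices are illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def is_irreducible(E):
    """E irreducible iff G(E) (edge i -> j when E_ij > 0) is strongly connected."""
    n_comp, _ = connected_components(csr_matrix(E > 0), directed=True, connection='strong')
    return n_comp == 1

E_ergodic = np.array([[0.9, 0.1],
                      [0.2, 0.8]])            # strongly connected: short-range correlation
E_linear = np.eye(3) + 0.05 * np.eye(3, k=1)  # forward-only transitions: not irreducible

print(is_irreducible(E_ergodic))  # True
print(is_irreducible(E_linear))   # False
```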

14. Constant correlation: identity E
Disconnected components: E = I (the d x d identity matrix).
The chain Γ is trapped inside its starting state.
Constant correlation: E[X_k X_l] - E[X_k] E[X_l] = (L(Q(1)^2) - L(Q(1))^2) / L(E).

15. Long-range correlation: linear irreducible E
Irreversible transitions along a linear chain of states: E is upper-bidiagonal, with 1 on the diagonal and ε on the superdiagonal.
The chain Γ can only stay in its current state or jump to the next one.
All chains with a non-zero probability and the same starting and ending points are equiprobable.
Polynomial correlation:
E[X_k X_l] ≈ Σ_{r+s+t = d-1} c_{r,s,t} (k/n)^r ((l-k)/n)^s (1 - l/n)^t.
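
A sketch of this bidiagonal E; d and ε below are illustrative values. Its single eigenvalue 1 comes with a Jordan block of size d, which is the regime that produces the polynomial (long-range) correlation of the previous slides.

```python
import numpy as np

# Upper-bidiagonal E: 1 on the diagonal, a small epsilon on the superdiagonal.
d, eps = 5, 0.05
E = np.eye(d) + eps * np.eye(d, k=1)
print(E)

# From each state the hidden chain can only stay put or jump to the next state.
# E has the single eigenvalue 1 with a Jordan block of size d.
print(np.linalg.eigvals(E))   # all eigenvalues equal to 1 (up to numerical error)
```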

16. General shape of E
[Figure: directed graph of the hidden chain, made of several strongly connected clusters linked by one-way transitions.]
E is block upper-triangular, with irreducible diagonal blocks I_1, ..., I_r and irreversible transition blocks T_{k,l} above the diagonal.
Correlation: a mixture of short-range, constant and long-range correlations.

17. Summary
Short-range correlation ⇒ a strongly connected component of size s > 1.
More than one weakly connected component ⇒ constant correlation.
Polynomial correlation ⇒ more than one strongly connected component.
These conditions are necessary but not sufficient.

18. Synthesis
p(x_1, ..., x_n) = L(R(x_1) ... R(x_n)) / L(E^n)
How do we choose d, E, P and A?

19. Constraints
Classical constraints:
marginal distribution P_S;
autocovariance c_{1,1}(t) ≡ E[X_0 X_t] - E[X_0] E[X_t];
higher-order dependencies c_{q_1,q_2}(t) ≡ E[X_0^{q_1} X_t^{q_2}] - E[X_0^{q_1}] E[X_t^{q_2}].
Limitation: the covariance is a sum of r exponential time scales θ_k with amplitudes β_k^{(q_1,q_2)}:
c_{q_1,q_2}(t) = Re( Σ_{k=1}^{r} β_k^{(q_1,q_2)} θ_k^t ).
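
A tiny sketch of the reachable covariance shapes: a mixture of r (possibly complex) exponential time scales. The θ_k and β_k below are illustrative values, not taken from the talk.

```python
import numpy as np

theta = np.array([0.95, 0.6 * np.exp(1j * 0.3)])   # time scales, |theta_k| <= 1
beta = np.array([0.7, 0.3])                        # amplitudes

def target_cov(t):
    """c(t) = Re( sum_k beta_k theta_k^t ): exponential decay plus a damped oscillation."""
    return np.real(np.sum(beta * theta ** t))

print([round(target_cov(t), 4) for t in range(10)])
```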

20. Choice of d, A, E, P
Take A the all-ones d x d matrix, J_d the cyclic shift matrix (1 on the superdiagonal and in the bottom-left corner), and E = Σ_k α_k J_d^k.
Stationarity: [A^T, E] = 0.
Dependencies: α = F(θ̃).
Objectives ⇒ free parameters: r ⇒ d, θ ⇒ α, β ⇒ M(q), P_S ⇒ P.
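
A construction sketch of this recipe, reading J_d as the cyclic shift matrix as reconstructed above; d and the weights α_k are illustrative values.

```python
import numpy as np

d = 4
alpha = np.array([0.5, 0.3, 0.15, 0.05])      # one weight per power of J_d (illustrative)

A = np.ones((d, d))
J = np.roll(np.eye(d), 1, axis=1)             # cyclic shift: ones on the superdiagonal + corner

# E = sum_k alpha_k J_d^k is a circulant matrix.
E = sum(a * np.linalg.matrix_power(J, k) for k, a in enumerate(alpha))

# Stationarity [A^T, E] = 0: a circulant E has equal row and column sums,
# so it commutes with the all-ones matrix A.
print(np.allclose(A.T @ E - E @ A.T, 0))      # True

# The eigenvalues of the circulant E set the time scales theta_k of the covariance.
print(np.linalg.eigvals(E))
```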

21. Stationary time series examples
[Figure: a synthesized realisation together with its marginal distribution, correlation and squared correlation, compared row by row with the target marginal, correlation and squared correlation.]
