
Probabilistic & Unsupervised Learning: Expectation Propagation
Maneesh Sahani (maneesh@gatsby.ucl.ac.uk)
Gatsby Computational Neuroscience Unit, and MSc ML/CSML, Dept. of Computer Science, University College London
Term 1, Autumn 2018


  1. Nonlinear state-space model (NLSSM)

[Figure: state-space graphical model with inputs u_1 ... u_T, latent states, and observations x_1 ... x_T, annotated with the time-varying linearised parameters \tilde{A}_t, \tilde{B}_t, \tilde{C}_t, \tilde{D}_t.]

z_{t+1} = f(z_t, u_t) + w_t
x_t = g(z_t, u_t) + v_t

with w_t, v_t usually still Gaussian.

Extended Kalman Filter (EKF): linearise the nonlinear functions about the current estimate \hat{z}_t:

z_{t+1} \approx f(\hat{z}_t, u_t) + \left.\frac{\partial f}{\partial z_t}\right|_{\hat{z}_t} (z_t - \hat{z}_t) + w_t
x_t \approx g(\hat{z}_t^{t-1}, u_t) + \left.\frac{\partial g}{\partial z_t}\right|_{\hat{z}_t^{t-1}} (z_t - \hat{z}_t^{t-1}) + v_t

where the constant terms play the role of \tilde{B}_t u_t and \tilde{D}_t u_t, and the Jacobians give \tilde{A}_t and \tilde{C}_t. Run the Kalman filter (smoother) on the non-stationary linearised system (\tilde{A}_t, \tilde{B}_t, \tilde{C}_t, \tilde{D}_t):

- Adaptively approximates non-Gaussian messages by Gaussians.
- Local linearisation depends on the central point of the distribution ⇒ the approximation degrades with increased state uncertainty.

May work acceptably for close-to-linear systems. Can base an EM-like algorithm on the EKF/EKS (or alternatives).
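
To make the linearisation step concrete, here is a minimal sketch of one EKF predict/update cycle for a scalar-state model. The dynamics f, observation map g, noise variances, data and the finite-difference Jacobian are all illustrative assumptions, not part of the slides.

```python
import numpy as np

# Minimal EKF sketch for a 1-D nonlinear state-space model (illustrative functions).
def f(z):            # assumed nonlinear dynamics
    return 0.9 * z + 0.2 * np.sin(z)

def g(z):            # assumed nonlinear observation map
    return z ** 2 / 5.0

def jac(h, z, eps=1e-6):     # numerical Jacobian (scalar case: a derivative)
    return (h(z + eps) - h(z - eps)) / (2 * eps)

def ekf_step(z_hat, V, x_obs, Q=0.1, R=0.05):
    # Predict: propagate the mean through f, the covariance through the local linearisation A_t.
    A = jac(f, z_hat)
    z_pred = f(z_hat)
    V_pred = A * V * A + Q
    # Update: linearise g about the predicted mean and apply the Kalman gain.
    C = jac(g, z_pred)
    S = C * V_pred * C + R               # innovation variance
    K = V_pred * C / S                   # Kalman gain
    z_new = z_pred + K * (x_obs - g(z_pred))
    V_new = (1 - K * C) * V_pred
    return z_new, V_new

z_hat, V = 0.0, 1.0
for x_obs in [0.3, 0.5, 0.4]:            # made-up observations
    z_hat, V = ekf_step(z_hat, V, x_obs)
    print(z_hat, V)
```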

  2. Other message approximations

Consider the forward messages on a latent chain:

P(z_t | x_{1:t}) = \frac{1}{Z} P(x_t | z_t) \int dz_{t-1}\, P(z_t | z_{t-1})\, P(z_{t-1} | x_{1:t-1})

We want to approximate the messages to retain a tractable form (i.e. Gaussian):

P(z_t | x_{1:t}) \approx \frac{1}{Z} P(x_t | z_t) \int dz_{t-1}\, \tilde{P}(z_t | z_{t-1})\, \tilde{P}(z_{t-1} | x_{1:t-1})

with \tilde{P}(z_t | z_{t-1}) = N(f(z_{t-1}), Q) and \tilde{P}(z_{t-1} | x_{1:t-1}) = N(\hat{z}_{t-1}, V_{t-1}).

- Linearisation at the peak (EKF) is only one approach.
- Laplace filter: use the mode and curvature of the integrand.
- Sigma-point ("unscented") filter (see the sketch after this list):
  - Evaluate f(\hat{z}_{t-1}) and f(\hat{z}_{t-1} \pm \sqrt{\lambda}\, v) for the eigenvalue/eigenvector pairs \hat{V}_{t-1} v = \lambda v.
  - "Fit" a Gaussian to these 2K + 1 points.
  - Equivalent to numerical evaluation of the mean and covariance by Gaussian quadrature.
  - One form of "Assumed Density Filtering", and of EP.
- Parametric variational: argmin KL[N(\hat{z}_t, \hat{V}_t) \| \int dz_{t-1} \dots]. Requires Gaussian expectations of the log of an integral ⇒ may be challenging.
- The other KL: argmin KL[\int dz_{t-1} \dots \| N(\hat{z}_t, \hat{V}_t)] needs only the first and second moments of the nonlinear message ⇒ EP.
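
A minimal sketch of the sigma-point step described above: push 2K + 1 points through an assumed nonlinearity f and refit a Gaussian from their weighted mean and covariance. The specific f, the process noise Q, and the (standard symmetric) weighting with parameter kappa are illustrative choices, not prescribed by the slides.

```python
import numpy as np

# Sigma-point (unscented) propagation of a Gaussian N(m, V) through a nonlinearity f.
def f(z):                                             # assumed nonlinear dynamics
    return np.array([0.9 * z[0] + 0.3 * np.sin(z[1]),
                     0.8 * z[1] + 0.1 * z[0] ** 2])

def unscented_propagate(m, V, Q, kappa=1.0):
    K = len(m)
    lam, vecs = np.linalg.eigh(V)                     # V v = lambda v
    scale = np.sqrt((K + kappa) * lam)
    # 2K + 1 sigma points: the mean, plus/minus scaled eigenvectors.
    pts = [m] + [m + s * v for s, v in zip(scale, vecs.T)] \
              + [m - s * v for s, v in zip(scale, vecs.T)]
    w = np.array([kappa / (K + kappa)] + [0.5 / (K + kappa)] * (2 * K))
    Y = np.array([f(p) for p in pts])                 # push points through f
    mean = w @ Y                                      # "fit" a Gaussian to the points
    cov = (w[:, None] * (Y - mean)).T @ (Y - mean) + Q
    return mean, cov

m, V = np.zeros(2), np.eye(2)
Q = 0.05 * np.eye(2)                                  # assumed process noise
print(unscented_propagate(m, V, Q))
```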

  3. Variational learning

Free energy:

F(q, \theta) = \langle \log P(X, Z | \theta) \rangle_{q(Z|X)} + H[q] = \log P(X | \theta) - KL[q(Z) \| P(Z | X, \theta)] \le \ell(\theta)

E-steps:

- Exact EM: q(Z) = argmax_q F = P(Z | X, \theta)
  - Saturates the bound: converges to a local maximum of the likelihood.
- (Factored) variational approximation: q(Z) = argmax_{q_1(Z_1) q_2(Z_2)} F = argmin_{q_1(Z_1) q_2(Z_2)} KL[q_1(Z_1) q_2(Z_2) \| P(Z | X, \theta)]
  - Increases the bound: converges, but not necessarily to ML.
- Other approximations: q(Z) \approx P(Z | X, \theta)
  - Usually no guarantees, but if learning converges it may be more accurate than the factored approximation. (A small numerical check of the free-energy identity follows.)
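
As a quick sanity check of the identity above, the sketch below evaluates both forms of the free energy for a tiny made-up discrete model (a binary latent with one observation); the probability tables and the arbitrary q are illustrative assumptions.

```python
import numpy as np

# Verify F(q) = <log P(x, z)>_q + H[q] = log P(x) - KL[q || P(z|x)] on a toy model.
P_z = np.array([0.4, 0.6])                       # assumed prior over a binary latent z
P_x1_given_z = np.array([0.9, 0.2])              # assumed P(x = 1 | z) for each z

joint = P_z * P_x1_given_z                       # P(x = 1, z) for each z
log_px = np.log(joint.sum())
posterior = joint / joint.sum()

q = np.array([0.7, 0.3])                         # an arbitrary approximate posterior

F1 = np.sum(q * np.log(joint)) - np.sum(q * np.log(q))   # <log P(x,z)>_q + H[q]
kl = np.sum(q * np.log(q / posterior))
F2 = log_px - kl                                          # log P(x) - KL[q || p]

print(F1, F2, log_px)   # F1 == F2, and both are <= log P(x)
```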

  4. Approximating the posterior

Linearisation (or local Laplace, sigma-point and other such approaches) seems ad hoc. A more principled approach might look for an approximate q that is closest to P in some sense:

q = \underset{q \in \mathcal{Q}}{\mathrm{argmin}}\; D(P \leftrightarrow q)

Open choices:

- the form of the divergence D
- the nature of the constraint space \mathcal{Q}

Examples:

- Variational methods: D = KL[q \| P].
- Choosing \mathcal{Q} = {tree-factored distributions} leads to efficient message passing.
- Can we use other divergences?

  5. The other KL

What about the 'other' KL (q = argmin KL[P \| q])? For a factored approximation, the (clique) marginals obtained by minimising this KL are correct:

\underset{q_i}{\mathrm{argmin}}\; KL\Big[P(Z|X) \,\Big\|\, \prod_j q_j(Z_j|X)\Big] = \underset{q_i}{\mathrm{argmin}} \Big(-\int dZ\; P(Z|X) \sum_j \log q_j(Z_j|X)\Big)
  = \underset{q_i}{\mathrm{argmin}} \Big(-\int dZ\; P(Z|X) \log q_i(Z_i|X)\Big)
  = \underset{q_i}{\mathrm{argmin}} \Big(-\int dZ_i\; P(Z_i|X) \log q_i(Z_i|X)\Big)
  \Rightarrow q_i(Z_i|X) = P(Z_i|X)

and the marginals are what we need for learning (although if factored over disjoint sets, as in the variational approximation, some cliques will be missing).

Perversely, this means finding the best q for this KL is intractable! But it raises the hope that approximate minimisation might still yield useful results. (A small numerical illustration follows.)
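
A small numerical illustration of the claim, under the assumption of two binary variables with a made-up joint: sweeping over fully factored approximations q_1(z_1) q_2(z_2), KL[P || q] is minimised exactly at the marginals of P.

```python
import numpy as np

# For a made-up joint P over two binary variables, confirm that the factored q
# minimising KL[P || q1 q2] uses the true marginals of P.
P = np.array([[0.30, 0.10],
              [0.15, 0.45]])                 # P(z1, z2); rows index z1

def kl_P_q(a, b):                            # a = q1(z1=0), b = q2(z2=0)
    q = np.outer([a, 1 - a], [b, 1 - b])
    return np.sum(P * np.log(P / q))

grid = np.linspace(0.01, 0.99, 99)
vals = [(kl_P_q(a, b), a, b) for a in grid for b in grid]
best = min(vals)
print("grid optimum q1(0), q2(0):", best[1], best[2])
print("true marginals           :", P.sum(1)[0], P.sum(0)[0])   # they agree
```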

  6. Approximate optimisation

The posterior distribution in a graphical model is a (normalised) product of factors:

P(Z|X) = \frac{P(Z, X)}{P(X)} = \frac{1}{Z} \prod_{i=1}^N P(Z_i | \mathrm{pa}(Z_i)) \propto \prod_i f_i(Z_i)

where the Z_i are not necessarily disjoint. In the language of EP the f_i are called sites.

Consider q with the same factorisation, but with potentially approximated sites: q(Z) := \prod_{i=1}^N \tilde{f}_i(Z_i). We would like to minimise (at least in some sense) KL[P \| q].

Possible optimisations:

- \min_{\{\tilde{f}_i\}} KL\big[\prod_{i=1}^N f_i(Z_i) \,\big\|\, \prod_{i=1}^N \tilde{f}_i(Z_i)\big]   (global: intractable)
- \min_{\tilde{f}_i} KL\big[f_i(Z_i) \,\big\|\, \tilde{f}_i(Z_i)\big]   (local, fixed: simple, inaccurate)
- \min_{\tilde{f}_i} KL\big[f_i(Z_i) \prod_{j \ne i} \tilde{f}_j(Z_j) \,\big\|\, \tilde{f}_i(Z_i) \prod_{j \ne i} \tilde{f}_j(Z_j)\big]   (local, contextual: iterative, accurate) ← EP

  7. Expectation? Propagation?

EP is really two ideas:

- Approximation of factors.
  - Usually by "projection" onto exponential families.
  - This involves finding expected sufficient statistics, hence expectation.
- Local divergence minimisation in the context of the other factors.
  - This leads to a message passing approach, hence propagation.

  8. Local updates

Each EP update involves a KL minimisation:

\tilde{f}_i^{new}(Z_i) \leftarrow \underset{f \in \{\tilde{f}\}}{\mathrm{argmin}}\; KL\big[f_i(Z_i)\, q_{\neg i}(Z) \,\big\|\, f(Z_i)\, q_{\neg i}(Z)\big], \qquad q_{\neg i}(Z) := \prod_{j \ne i} \tilde{f}_j(Z_j)

Write q_{\neg i}(Z) = q_{\neg i}(Z_i)\, q_{\neg i}(Z_{\neg i} | Z_i), with Z_{\neg i} = Z \setminus Z_i. Then (dropping terms that do not depend on f):

\min_f KL\big[f_i(Z_i) q_{\neg i}(Z) \,\big\|\, f(Z_i) q_{\neg i}(Z)\big]
  = \max_f \int dZ_i\, dZ_{\neg i}\; f_i(Z_i)\, q_{\neg i}(Z) \log\big(f(Z_i)\, q_{\neg i}(Z)\big)
  = \max_f \int dZ_i\, dZ_{\neg i}\; f_i(Z_i)\, q_{\neg i}(Z_i)\, q_{\neg i}(Z_{\neg i}|Z_i) \big[\log\big(f(Z_i)\, q_{\neg i}(Z_i)\big) + \log q_{\neg i}(Z_{\neg i}|Z_i)\big]
  = \max_f \int dZ_i\; f_i(Z_i)\, q_{\neg i}(Z_i) \log\big(f(Z_i)\, q_{\neg i}(Z_i)\big) \int dZ_{\neg i}\, q_{\neg i}(Z_{\neg i}|Z_i)
  = \min_f KL\big[f_i(Z_i)\, q_{\neg i}(Z_i) \,\big\|\, f(Z_i)\, q_{\neg i}(Z_i)\big]

q_{\neg i}(Z_i) is sometimes called the cavity distribution.

  9. Expectation Propagation (EP)

Input: f_1(Z_1) ... f_N(Z_N)
Initialise: \tilde{f}_1(Z_1) = argmin_{f \in \{\tilde{f}\}} KL[f_1(Z_1) \| f(Z_1)];  \tilde{f}_i(Z_i) = 1 for i > 1;  q(Z) \propto \prod_i \tilde{f}_i(Z_i)
repeat
    for i = 1 ... N do
        Delete:  q_{\neg i}(Z) \leftarrow q(Z) / \tilde{f}_i(Z_i) = \prod_{j \ne i} \tilde{f}_j(Z_j)
        Project: \tilde{f}_i^{new}(Z_i) \leftarrow argmin_{f \in \{\tilde{f}\}} KL[f_i(Z_i)\, q_{\neg i}(Z_i) \| f(Z_i)\, q_{\neg i}(Z_i)]
        Include: q(Z) \leftarrow \tilde{f}_i^{new}(Z_i)\, q_{\neg i}(Z)
    end for
until convergence
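
The Delete/Project/Include loop above can be sketched in a few lines for a scalar posterior with a Gaussian prior and non-Gaussian likelihood sites. Everything concrete here (Student-t sites, the quadrature grid, the data, the skip heuristic) is an illustrative assumption; the Project step does the moment matching described later by brute-force 1-D quadrature.

```python
import numpy as np
from scipy.stats import norm

# Toy EP loop: posterior over a scalar z with a Gaussian prior and Student-t
# likelihood sites, each approximated by a Gaussian site stored in natural
# parameters (precision r_i, precision-times-mean b_i).
def ep(x, nu=3.0, prior_var=4.0, n_iter=20):
    n = len(x)
    r = np.zeros(n)                    # site precisions
    b = np.zeros(n)                    # site precision-means
    r0, b0 = 1.0 / prior_var, 0.0      # the Gaussian prior is an exact site
    grid = np.linspace(-15, 15, 4001)  # quadrature grid for the Project step

    for _ in range(n_iter):
        for i in range(n):
            # Delete: cavity = full approximation minus site i (natural params subtract).
            r_cav = r0 + r.sum() - r[i]
            b_cav = b0 + b.sum() - b[i]
            if r_cav <= 0:             # skip non-positive cavities (a common heuristic)
                continue
            m_cav, v_cav = b_cav / r_cav, 1.0 / r_cav
            # Project: moment-match the tilted distribution f_i(z) * cavity(z).
            site = (1 + (x[i] - grid) ** 2 / nu) ** (-(nu + 1) / 2)   # unnormalised t
            tilted = site * norm.pdf(grid, m_cav, np.sqrt(v_cav))
            Z = np.trapz(tilted, grid)
            m_new = np.trapz(grid * tilted, grid) / Z
            v_new = np.trapz((grid - m_new) ** 2 * tilted, grid) / Z
            # Include: new site = matched posterior minus cavity (natural params).
            r[i] = 1.0 / v_new - r_cav
            b[i] = m_new / v_new - b_cav
    r_post, b_post = r0 + r.sum(), b0 + b.sum()
    return b_post / r_post, 1.0 / r_post   # approximate posterior mean and variance

print(ep(np.array([0.5, 1.0, 8.0])))        # the outlier at 8.0 is downweighted
```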

  10. Message Passing

- The cavity distribution (in a tree) can be further broken down into a product of terms from each neighbouring clique:

  q_{\neg i}(Z_i) = \prod_{j \in \mathrm{ne}(i)} M_{j \to i}(Z_j \cap Z_i)

- Once the i-th site has been approximated, the messages can be passed on to neighbouring cliques by marginalising to the shared variables (SSM example follows) ⇒ belief propagation.
- In loopy graphs, we can use loopy belief propagation. In that case the product above becomes an approximation to the true cavity distribution (or we can recast the approximation directly in terms of messages ⇒ later lecture).
- For some approximations (e.g. Gaussian) it may be possible to compute the true loopy cavity using the approximate sites, even if computing the exact message would have been intractable.
- In either case, message updates can be scheduled in any order.
- No guarantee of convergence (but see "power-EP" methods).

  11. EP for a NLSSM

[Figure: Markov chain ... z_{i-2}, z_{i-1}, z_i, z_{i+1}, z_{i+2} ... with observations x_{i-2} ... x_{i+2}.]

P(z_i | z_{i-1}) = \phi_i(z_i, z_{i-1}),  e.g. \exp(-\|z_i - h_s(z_{i-1})\|^2 / 2\sigma^2)
P(x_i | z_i) = \psi_i(z_i),  e.g. \exp(-\|x_i - h_o(z_i)\|^2 / 2\sigma^2)

Then f_i(z_i, z_{i-1}) = \phi_i(z_i, z_{i-1})\, \psi_i(z_i). As \phi_i and \psi_i are non-linear, inference is not generally tractable.

Assume \tilde{f}_i(z_i, z_{i-1}) is Gaussian. Then

q_{\neg i}(z_i, z_{i-1}) = \int_{z_1 \dots z_{i-2},\, z_{i+1} \dots z_n} \prod_{i' \ne i} \tilde{f}_{i'}(z_{i'}, z_{i'-1})
  = \underbrace{\int_{z_1 \dots z_{i-2}} \prod_{i' < i} \tilde{f}_{i'}(z_{i'}, z_{i'-1})}_{\alpha_{i-1}(z_{i-1})} \;\; \underbrace{\int_{z_{i+1} \dots z_n} \prod_{i' > i} \tilde{f}_{i'}(z_{i'}, z_{i'-1})}_{\beta_i(z_i)}

with both \alpha and \beta Gaussian. The EP projection is then

\tilde{f}_i(z_i, z_{i-1}) = \underset{f \in \mathcal{N}}{\mathrm{argmin}}\; KL\big[\phi_i(z_i, z_{i-1})\, \psi_i(z_i)\, \alpha_{i-1}(z_{i-1})\, \beta_i(z_i) \,\big\|\, f(z_i, z_{i-1})\, \alpha_{i-1}(z_{i-1})\, \beta_i(z_i)\big]

  12. NLSSM EP message updates

\tilde{f}_i(z_i, z_{i-1}) = \underset{f \in \mathcal{N}}{\mathrm{argmin}}\; KL\big[\underbrace{\phi_i(z_i, z_{i-1})\, \psi_i(z_i)\, \alpha_{i-1}(z_{i-1})\, \beta_i(z_i)}_{\hat{P}(z_{i-1}, z_i)} \,\big\|\, \underbrace{f(z_i, z_{i-1})\, \alpha_{i-1}(z_{i-1})\, \beta_i(z_i)}_{P(z_{i-1}, z_i)}\big]

Equivalently, find the Gaussian closest to the tilted distribution and divide out the context:

\tilde{P}(z_{i-1}, z_i) = \underset{P \in \mathcal{N}}{\mathrm{argmin}}\; KL\big[\hat{P}(z_{i-1}, z_i) \,\big\|\, P(z_{i-1}, z_i)\big], \qquad \tilde{f}_i(z_i, z_{i-1}) = \frac{\tilde{P}(z_{i-1}, z_i)}{\alpha_{i-1}(z_{i-1})\, \beta_i(z_i)}

The updated messages follow by marginalising:

\alpha_i(z_i) = \int_{z_1 \dots z_{i-1}} \prod_{i' \le i} \tilde{f}_{i'}(z_{i'}, z_{i'-1}) = \int_{z_{i-1}} \alpha_{i-1}(z_{i-1})\, \tilde{f}_i(z_i, z_{i-1}) = \frac{1}{\beta_i(z_i)} \int_{z_{i-1}} \tilde{P}(z_{i-1}, z_i)

\beta_{i-1}(z_{i-1}) = \int_{z_i \dots z_n} \prod_{i' \ge i} \tilde{f}_{i'}(z_{i'}, z_{i'-1}) = \int_{z_i} \beta_i(z_i)\, \tilde{f}_i(z_i, z_{i-1}) = \frac{1}{\alpha_{i-1}(z_{i-1})} \int_{z_i} \tilde{P}(z_{i-1}, z_i)

[Figure: factor graph around site i showing the incoming messages \alpha_{i-1} and \beta_i, the tilted \hat{P}, its Gaussian projection \tilde{P}, and the updated messages \alpha_i and \beta_{i-1}.]

  13. Moment Matching

Each EP update involves a KL minimisation:

\tilde{f}_i^{new}(Z_i) \leftarrow \underset{f \in \{\tilde{f}\}}{\mathrm{argmin}}\; KL\big[f_i(Z_i)\, q_{\neg i}(Z) \,\big\|\, f(Z_i)\, q_{\neg i}(Z)\big]

Usually both q_{\neg i}(Z_i) and \tilde{f} are in the same exponential family. Let q(x) = \frac{1}{Z(\theta)} e^{T(x) \cdot \theta}. Then

\underset{q}{\mathrm{argmin}}\; KL[p(x) \| q(x)] = \underset{\theta}{\mathrm{argmin}}\; KL\Big[p(x) \,\Big\|\, \tfrac{1}{Z(\theta)} e^{T(x) \cdot \theta}\Big]
  = \underset{\theta}{\mathrm{argmin}} \Big(-\int dx\, p(x) \log\big(\tfrac{1}{Z(\theta)} e^{T(x) \cdot \theta}\big)\Big)
  = \underset{\theta}{\mathrm{argmin}} \Big(-\int dx\, p(x)\, T(x) \cdot \theta + \log Z(\theta)\Big)

Setting the gradient with respect to \theta to zero:

\frac{\partial}{\partial \theta} = -\int dx\, p(x)\, T(x) + \frac{1}{Z(\theta)} \frac{\partial}{\partial \theta} \int dx\, e^{T(x) \cdot \theta}
  = -\langle T(x) \rangle_p + \frac{1}{Z(\theta)} \int dx\, e^{T(x) \cdot \theta}\, T(x)
  = -\langle T(x) \rangle_p + \langle T(x) \rangle_q = 0

So the minimum is found by matching expected sufficient statistics. For the Gaussian family this is moment matching.
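
The sketch below checks this numerically under illustrative assumptions: for a two-component Gaussian mixture p, directly minimising KL[p || N(m, v)] with a general-purpose optimiser recovers the mean and variance of p.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Check that argmin_q KL[p || q] over Gaussians q matches the moments of p.
x = np.linspace(-12, 12, 4001)
p = 0.3 * norm.pdf(x, -2.0, 0.7) + 0.7 * norm.pdf(x, 3.0, 1.5)   # assumed target p

mean_p = np.trapz(x * p, x)
var_p = np.trapz((x - mean_p) ** 2 * p, x)

def kl(params):                       # KL[p || N(m, v)] up to the constant int p log p
    m, log_v = params
    return -np.trapz(p * norm.logpdf(x, m, np.exp(0.5 * log_v)), x)

res = minimize(kl, x0=np.array([0.0, 0.0]))
print("optimised:", res.x[0], np.exp(res.x[1]))
print("moments  :", mean_p, var_p)    # the two agree: moment matching
```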

  14. Numerical issues

How do we calculate \langle T(x) \rangle_p? It is often analytically tractable, but even if not it requires only a (relatively) low-dimensional integral:

- Quadrature methods (see the sketch after this list).
  - Classical Gaussian quadrature (the same Gauss, but nothing to do with the distribution) gives an iterative version of sigma-point methods.
  - These give positive definite joint approximations, but are not guaranteed to give positive definite messages.
  - Heuristics include skipping non-positive-definite steps, or damping messages by interpolation or by exponentiating to a power < 1.
  - Other quadrature approaches (e.g. GP quadrature) may be more accurate, and may allow a formal constraint to the positive-definite cone.
- Laplace approximation.
  - Equivalent to Laplace propagation.
  - As long as messages remain positive definite, this will converge to the global Laplace approximation.
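
As an example of the quadrature route, the sketch below uses Gauss-Hermite quadrature to compute the zeroth, first and second moments of a tilted distribution lik(g) * N(g; m, v). The logistic likelihood and the cavity parameters are illustrative assumptions.

```python
import numpy as np

# Gauss-Hermite quadrature for moments of a tilted distribution lik(g) * N(g; m, v).
def tilted_moments(lik, m, v, order=40):
    x, w = np.polynomial.hermite.hermgauss(order)     # nodes/weights for weight e^{-x^2}
    g = m + np.sqrt(2.0 * v) * x                       # change of variables to N(m, v)
    f = lik(g)
    Z = np.sum(w * f) / np.sqrt(np.pi)                 # normaliser of the tilted dist.
    mean = np.sum(w * f * g) / np.sqrt(np.pi) / Z
    var = np.sum(w * f * (g - mean) ** 2) / np.sqrt(np.pi) / Z
    return Z, mean, var

# Illustrative site: logistic likelihood for an observation y = +1.
lik = lambda g: 1.0 / (1.0 + np.exp(-g))
print(tilted_moments(lik, m=0.5, v=2.0))
```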

  15. EP for Gaussian process classification

EP provides a successful framework for Gaussian-process modelling of non-Gaussian observations (e.g. for classification).

[Figure: inputs x_1 ... x_n (and a test input x'), jointly Gaussian function values g_1 ... g_n (and g') coupled through the kernel K, and observations y_1 ... y_n (and y').]

Recall:

- A GP defines a multivariate Gaussian distribution on any finite subset of random variables {g_1 ... g_n} drawn from a (usually uncountable) potential set indexed by "inputs" x_i.
- The Gaussian parameters depend on the inputs: \mu = [\mu(x_i)], \Sigma = [K(x_i, x_j)].
- If we think of the g's as function values, a GP provides a prior over functions.
- In a GP regression model, noisy observations y_i are conditionally independent given g_i.
- There are no parameters to learn (though often hyperparameters); instead, we make predictions on test data directly (see the sketch after this list) [assuming \mu = 0, with the matrix \Sigma incorporating diagonal observation noise]:

  P(y' | x', \mathcal{D}) = N\big(\Sigma_{x',X}\, \Sigma_{X,X}^{-1}\, y,\;\; \Sigma_{x',x'} - \Sigma_{x',X}\, \Sigma_{X,X}^{-1}\, \Sigma_{X,x'}\big)
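
A minimal sketch of that GP regression predictive formula, with an assumed squared-exponential kernel and made-up 1-D data.

```python
import numpy as np

# GP regression prediction: N(S_*X S_XX^{-1} y, S_** - S_*X S_XX^{-1} S_X*),
# where S includes diagonal observation noise. Kernel and data are illustrative.
def k(a, b, ell=1.0, sf2=1.0):
    return sf2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

X = np.array([-2.0, -0.5, 1.0, 2.5])
y = np.sin(X)
noise = 0.1
x_star = np.array([0.0, 2.0])

S_XX = k(X, X) + noise * np.eye(len(X))
S_sX = k(x_star, X)
mean = S_sX @ np.linalg.solve(S_XX, y)
cov = k(x_star, x_star) + noise * np.eye(len(x_star)) - S_sX @ np.linalg.solve(S_XX, S_sX.T)
print(mean, np.diag(cov))
```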

  16. GP EP updates

[Figure: factor graph linking g_1 ... g_n through the GP prior factor, with an observation factor y_i attached to each g_i.]

- We can write the GP joint on the g_i and y_i as a factor graph:

  P(g_1 \dots g_n, y_1 \dots y_n) = \underbrace{N(g_1 \dots g_n | 0, K)}_{f_0(G)} \prod_i \underbrace{N(y_i | g_i, \sigma^2)}_{f_i(g_i)}

- The same factorisation applies to a non-Gaussian P(y_i | g_i) (e.g. P(y_i = 1) = 1/(1 + e^{-g_i})).
- EP: approximate each non-Gaussian f_i(g_i) by a Gaussian \tilde{f}_i(g_i) = N(\tilde{\mu}_i, \tilde{\psi}_i^2).
- q_{\neg i}(g_i) can be constructed by the usual GP marginalisation. If \Sigma = K + \mathrm{diag}[\tilde{\psi}_1^2 \dots \tilde{\psi}_n^2], then

  q_{\neg i}(g_i) = N\big(\Sigma_{i,\neg i}\, \Sigma_{\neg i,\neg i}^{-1}\, \tilde{\mu}_{\neg i},\;\; K_{i,i} - \Sigma_{i,\neg i}\, \Sigma_{\neg i,\neg i}^{-1}\, \Sigma_{\neg i,i}\big)

- The EP updates thus require calculating Gaussian expectations of f_i(g)\, g and f_i(g)\, g^2 (and the normaliser; see the sketch after this list):

  \tilde{f}_i^{new}(g_i) \propto N\big(\hat{\mu}_i, \hat{\sigma}_i^2\big) \,/\, q_{\neg i}(g_i), \quad \text{with}\;\; Z_i = \int dg\, q_{\neg i}(g) f_i(g),\;\; \hat{\mu}_i = \tfrac{1}{Z_i} \int dg\, q_{\neg i}(g) f_i(g)\, g,\;\; \hat{\sigma}_i^2 = \tfrac{1}{Z_i} \int dg\, q_{\neg i}(g) f_i(g)\, g^2 - \hat{\mu}_i^2
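
For a probit likelihood f_i(g) = Phi(y_i g), those Gaussian expectations have a standard closed form involving the normal CDF. The sketch below computes the tilted moments and the corresponding updated site for one assumed cavity; the cavity parameters and label are illustrative.

```python
import numpy as np
from scipy.stats import norm

# EP site update for a probit likelihood f_i(g) = Phi(y_i * g), given a Gaussian
# cavity q_{-i}(g) = N(m, v). Uses the standard closed-form tilted moments.
def probit_site_update(y, m, v):
    z = y * m / np.sqrt(1.0 + v)
    ratio = norm.pdf(z) / norm.cdf(z)
    mu_hat = m + y * v * ratio / np.sqrt(1.0 + v)            # tilted mean
    var_hat = v - v ** 2 * ratio * (z + ratio) / (1.0 + v)   # tilted variance
    # New site = tilted Gaussian divided by the cavity, in natural parameters.
    site_prec = 1.0 / var_hat - 1.0 / v
    site_prec_mean = mu_hat / var_hat - m / v
    return mu_hat, var_hat, site_prec, site_prec_mean

# Illustrative cavity N(0.3, 1.5) and label y = +1.
print(probit_site_update(y=1, m=0.3, v=1.5))
```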

  17. EP GP prediction

[Figure: the GP factor graph extended with a test input x', latent g' and unobserved output y'.]

- Once the approximate site potentials have stabilised, they can be used to make predictions (see the sketch after this list).
- Introducing a test point changes K, but does not affect the marginal P(g_1 ... g_n) (by consistency of the GP).
- The unobserved output factor provides no information about g' (⇒ a constant factor on g').
- Thus no change is needed to the approximating potentials \tilde{f}_i.
- Predictions are obtained by marginalising the approximation. Let \tilde{\Psi} = \mathrm{diag}[\tilde{\psi}_1^2 \dots \tilde{\psi}_n^2]; then

  P(y' | x', \mathcal{D}) = \int dg'\; P(y' | g')\; N\big(g' \,\big|\, K_{x',X} (K_{X,X} + \tilde{\Psi})^{-1} \tilde{\mu},\;\; K_{x',x'} - K_{x',X} (K_{X,X} + \tilde{\Psi})^{-1} K_{X,x'}\big)
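
Continuing the probit sketch above, the remaining integral over g' has the closed form int Phi(g') N(g' | m_*, v_*) dg' = Phi(m_* / sqrt(1 + v_*)). The kernel blocks, site means and site variances below are purely illustrative placeholders, not the output of a real EP run.

```python
import numpy as np
from scipy.stats import norm

# EP-GP predictive for binary classification with a probit likelihood.
# K blocks, mu_tilde and psi2_tilde stand in for a kernel matrix and converged EP sites.
def predict(K_XX, K_sX, K_ss, mu_tilde, psi2_tilde):
    A = K_XX + np.diag(psi2_tilde)                     # K + Psi~
    m_star = K_sX @ np.linalg.solve(A, mu_tilde)       # predictive mean of g'
    v_star = K_ss - K_sX @ np.linalg.solve(A, K_sX.T)  # predictive covariance of g'
    return norm.cdf(m_star / np.sqrt(1.0 + np.diag(v_star)))   # P(y' = +1)

# Tiny illustrative numbers (not from a real EP run).
K_XX = np.array([[1.0, 0.5], [0.5, 1.0]])
K_sX = np.array([[0.8, 0.3]])
K_ss = np.array([[1.0]])
mu_tilde = np.array([0.7, -0.4])
psi2_tilde = np.array([0.9, 1.2])
print(predict(K_XX, K_sX, K_ss, mu_tilde, psi2_tilde))
```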

  18. Normalisers

- Approximate sites determined by moment matching are naturally normalised.
