Dynamic Sampling from Graphical Models

Dynamic Sampling from Graphical Models, Yitong Yin, Nanjing - PowerPoint PPT Presentation



  1. Dynamic Sampling from Graphical Models. Yitong Yin, Nanjing University. Joint work with Weiming Feng (Nanjing) and Nisheeth Vishnoi (EPFL).

  2. Graphical Model
  instance of graphical model: I = (V, E, [q], Φ)
  • V : variables
  • E ⊆ 2^V : constraints
  • [q] = {0, 1, …, q−1} : domain
  • Φ = (ϕ_v)_{v∈V} ∪ (ϕ_e)_{e∈E} : factors
  • Gibbs distribution µ over all σ ∈ [q]^V :
      μ(σ) ∝ ∏_{v∈V} ϕ_v(σ_v) · ∏_{e∈E} ϕ_e(σ_e)

  3. Graphical Model
  instance of graphical model: I = (V, E, [q], Φ)
  • Each v ∈ V is a variable with domain [q] and has a distribution ϕ_v : [q] → [0,1] over [q].
  • Each e ∈ E is a set of variables and corresponds to a constraint (factor) ϕ_e : [q]^e → [0,1].
  • Gibbs distribution µ over all σ ∈ [q]^V :
      μ(σ) ∝ ∏_{v∈V} ϕ_v(σ_v) · ∏_{e∈E} ϕ_e(σ_e)

  4. Graphical Model
  • Gibbs distribution µ over all σ ∈ [q]^V :
      μ(σ) ∝ ∏_{v∈V} ϕ_v(σ_v) · ∏_{e∈E} ϕ_e(σ_e)
    where each ϕ_v is a distribution over [q] and ϕ_e : [q]^e → [0,1].
  Rejection sampling view:
  • each v ∈ V independently samples X_v ∈ [q] according to ϕ_v ;
  • each e ∈ E is passed independently with probability ϕ_e(X_e) ;
  • X is accepted if all constraints e ∈ E are passed.
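The rejection sampling process on this slide can be sketched in Python as follows (an illustrative translation; the function name, data layout, and the `max_tries` cap are my assumptions, not from the talk):

```python
import random

def rejection_sample(variables, constraints, phi_v, phi_e, max_tries=100000):
    """Sample from the Gibbs distribution by rejection, as on the slide:
    each v draws X_v from phi_v independently, each constraint e is passed
    independently with probability phi_e(X_e), and X is accepted iff all
    constraints pass. Here phi_v maps v to a weight list over [q], and
    phi_e maps a constraint tuple to a function on value tuples."""
    for _ in range(max_tries):
        X = {v: random.choices(range(len(phi_v[v])), weights=phi_v[v])[0]
             for v in variables}
        if all(random.random() < phi_e[e](tuple(X[v] for v in e))
               for e in constraints):
            return X
    raise RuntimeError("no sample accepted within max_tries")
```

For instance, with a single hardcore edge (ϕ_e(1,1) = 0), an accepted sample never has both endpoints equal to 1.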

  5. Graphical Model
  • Gibbs distribution µ over all σ ∈ [q]^V on a graph G(V, E):
      μ(σ) ∝ ∏_{v∈V} ϕ_v(σ_v) · ∏_{e=(u,v)∈E} ϕ_e(σ_u, σ_v)
  • hardcore model: [q] = {0, 1}
      ϕ_e(σ_u, σ_v) = 0 if σ_u = σ_v = 1, and 1 otherwise
      ϕ_v(σ_v) = 1/(1+λ_v) if σ_v = 0, and λ_v/(1+λ_v) if σ_v = 1,
      where λ_v > 0 is the (local) fugacity
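The hardcore-model factors above translate directly into code (a small sketch; the function names are mine):

```python
def hardcore_phi_v(lam):
    """Vertex factor: the distribution (1/(1+lam), lam/(1+lam)) over {0, 1},
    where lam > 0 is the local fugacity lambda_v."""
    return [1.0 / (1.0 + lam), lam / (1.0 + lam)]

def hardcore_phi_e(sigma_u, sigma_v):
    """Edge factor: 0 if both endpoints take value 1, else 1, so only
    independent sets of the graph receive nonzero weight."""
    return 0.0 if (sigma_u == 1 and sigma_v == 1) else 1.0
```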

  6. Graphical Model
  • Gibbs distribution µ over all σ ∈ [q]^V on a graph G(V, E):
      μ(σ) ∝ ∏_{v∈V} ϕ_v(σ_v) · ∏_{e=(u,v)∈E} ϕ_e(σ_u, σ_v)
  • Ising/Potts model:
      (ferromagnetic)      ϕ_e(σ_u, σ_v) = 1 if σ_u = σ_v , and β_e ∈ [0,1] otherwise
      (anti-ferromagnetic) ϕ_e(σ_u, σ_v) = β_e ∈ [0,1] if σ_u = σ_v , and 1 otherwise
      ϕ_v is a distribution over [q] (arbitrary local fields)

  7. Dynamic Sampling
  • Gibbs distribution µ over all σ ∈ [q]^V :
      μ(σ) ∝ ∏_{v∈V} ϕ_v(σ_v) · ∏_{e∈E} ϕ_e(σ_e)
  current sample: X ~ µ
  dynamic update (yielding a new distribution µ′):
  • adding/deleting a constraint e
  • changing a factor ϕ_v or ϕ_e
  • adding/deleting an independent variable v
  Question: obtain X′ ~ µ′ from X ~ µ with small incremental cost.

  8. Dynamic Sampling
  instance of graphical model: I = (V, E, [q], Φ)
  • V : variables; E ⊆ 2^V : constraints; [q] = {0, 1, …, q−1} : domain;
    Φ = (ϕ_v)_{v∈V} ∪ (ϕ_e)_{e∈E} : factors
  • Gibbs distribution µ over all σ ∈ [q]^V :
      μ(σ) ∝ ∏_{v∈V} ϕ_v(σ_v) · ∏_{e∈E} ϕ_e(σ_e)
  current sample: X ~ µ

  9. Dynamic Sampling
  instance of graphical model: I = (V, E, [q], Φ)
  update: (D, Φ_D), where
  • D ⊆ V ∪ 2^V is the set of changed variables and constraints;
  • Φ_D = (ϕ_v)_{v∈V∩D} ∪ (ϕ_e)_{e∈2^V∩D} specifies the new factors.
  The update (D, Φ_D) transforms (V, E, [q], Φ) into (V, E′, [q], Φ′), where
      E′ = E ∪ (2^V ∩ D)
      Φ′ = (ϕ′_a)_{a∈V∪E′}, where each ϕ′_a is as specified in Φ_D if a ∈ D, and otherwise as in Φ.
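The transformation (V, E, [q], Φ) → (V, E′, [q], Φ′) amounts to a set union and a dictionary merge, which can be sketched as follows (illustrative representation: constraints are tuples of variables, Φ is a dict from variables/constraints to factors):

```python
def apply_update(E, Phi, D, Phi_D):
    """Apply an update (D, Phi_D): E' = E ∪ (constraints in D), and Phi'
    agrees with Phi_D on D and with Phi elsewhere. Variables are plain
    labels and constraints are tuples, so isinstance distinguishes them."""
    new_edges = [a for a in D if isinstance(a, tuple)]
    E_new = list(E) + [e for e in new_edges if e not in E]
    Phi_new = dict(Phi)
    Phi_new.update(Phi_D)   # factors specified in Phi_D override old ones
    return E_new, Phi_new
```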

  10. Dynamic Sampling
  update: (D, Φ_D) transforms (V, E, [q], Φ) into (V, E′, [q], Φ′).
  Input : a graphical model with Gibbs distribution µ, a sample X ~ µ, and an update (D, Φ_D)
  Output : X′ ~ µ′ where µ′ is the new Gibbs distribution
  (D, Φ_D) is fixed by an offline adversary, independently of X ~ µ.

  11. Dynamic Sampling
  Input : a graphical model with Gibbs distribution µ, a sample X ~ µ, and an update (D, Φ_D)
  Output : X′ ~ µ′ where µ′ is the new Gibbs distribution
  Applications:
  • inference/learning tasks where the graphical model changes dynamically
    • video de-noising
    • online learning with dynamic or streaming data
  • sampling/inference/learning algorithms that adaptively and locally change the joint distribution
    • stochastic gradient descent
    • the JSV algorithm for perfect matchings

  12. Dynamic Sampling
  Input : a graphical model with Gibbs distribution µ, a sample X ~ µ, and an update (D, Φ_D)
  Output : X′ ~ µ′ where µ′ is the new Gibbs distribution
  Goal: transform X ~ µ into X′ ~ µ′ by local changes.
  Current sampling techniques are not powerful enough:
  • µ may be changed significantly by dynamic updates;
  • Monte Carlo sampling does not know when to stop;
  • notions such as mixing time give worst-case estimates.

  13. Graphical Model (recap)
  instance of graphical model: I = (V, E, [q], Φ)
  • V : variables
  • E ⊆ 2^V : constraints
  • [q] = {0, 1, …, q−1} : domain
  • Φ = (ϕ_v)_{v∈V} ∪ (ϕ_e)_{e∈E} : factors
  • Gibbs distribution µ over all σ ∈ [q]^V :
      μ(σ) ∝ ∏_{v∈V} ϕ_v(σ_v) · ∏_{e∈E} ϕ_e(σ_e)

  14. Notations
  instance of graphical model: I = (V, E, [q], Φ)
  For D ⊆ V ∪ 2^V :
      vbl(D) ≜ (V ∩ D) ∪ (⋃_{e∈D∩E} e)   (involved variables)
  For R ⊆ V :
      E(R) ≜ { e ∈ E | e ⊆ R }   (internal constraints)
      δ(R) ≜ { e ∈ E ∖ E(R) | e ∩ R ≠ ∅ }   (boundary constraints)
      E+(R) ≜ { e ∈ E | e ∩ R ≠ ∅ } = E(R) ∪ δ(R)   (incident constraints)
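The three constraint sets E(R), δ(R), and E+(R) translate directly into code (a sketch with constraints represented as tuples of variables; the helper names are mine):

```python
def internal_constraints(E, R):
    """E(R): constraints contained entirely in R."""
    return [e for e in E if set(e) <= R]

def boundary_constraints(E, R):
    """delta(R): constraints that intersect R but are not contained in it."""
    return [e for e in E if set(e) & R and not set(e) <= R]

def incident_constraints(E, R):
    """E+(R) = E(R) ∪ delta(R): all constraints touching R."""
    return [e for e in E if set(e) & R]
```

On a path a-b-c-d with R = {a, b}, the edge (a,b) is internal, (b,c) is boundary, and (c,d) is neither.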

  15. Dynamic Sampler
  Input : a graphical model with Gibbs distribution µ, a sample X ~ µ, and an update (D, Φ_D)
  Output : X′ ~ µ′ where µ′ is the new Gibbs distribution
  Upon receiving update (D, Φ_D):
  • apply the changes (D, Φ_D) to the current graphical model;
  • R ← vbl(D) ≜ (V ∩ D) ∪ (⋃_{e∈D∩E} e);
  • while R ≠ ∅ : (X, R) ← Resample(X, R);

  16. Dynamic Sampler
  Upon receiving update (D, Φ_D):
  • apply the changes (D, Φ_D) to the current graphical model;
  • R ← vbl(D) ≜ (V ∩ D) ∪ (⋃_{e∈D∩E} e);
  • while R ≠ ∅ : (X, R) ← Resample(X, R);
  Resample(X, R):
  • each e ∈ E+(R) computes κ_e = min_{x_e : x_{e∩R} = X_{e∩R}} ϕ_e(x_e) / ϕ_e(X_e);
  • each v ∈ R resamples X_v ∈ [q] independently according to ϕ_v ;
  • each e ∈ E+(R) is passed independently with prob. κ_e · ϕ_e(X_e) (otherwise e is violated);
  • R ← ⋃_{violated e∈E} e ;
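The Resample subroutine and its driving while-loop can be sketched as follows. This is an illustrative translation under my own data layout (values in range(q), ϕ_v as weight lists, ϕ_e as functions on value tuples), not the authors' code; in particular, κ_e is computed here by brute-force enumeration over the part of e outside R, which is only feasible for small constraints.

```python
import random
from itertools import product

def resample_round(X, R, E, phi_v, phi_e, q):
    """One Resample(X, R) round: compute kappa_e for incident constraints,
    resample the variables in R, then filter each incident constraint."""
    incident = [e for e in E if set(e) & R]            # E+(R)
    kappa = {}
    for e in incident:
        outside = [v for v in e if v not in R]         # e \ R
        cur = phi_e[e](tuple(X[v] for v in e))
        # minimum of phi_e over assignments agreeing with X on e ∩ R
        # (minimizing over the non-resampled part), computed before resampling
        lo = min(phi_e[e](tuple(dict(zip(outside, vals)).get(v, X[v])
                                for v in e))
                 for vals in product(range(q), repeat=len(outside)))
        kappa[e] = 1.0 if lo == cur else lo / cur      # lo <= cur always holds
    for v in R:                                        # resample v from phi_v
        X[v] = random.choices(range(q), weights=phi_v[v])[0]
    violated = [e for e in incident                    # pass w.p. kappa_e * phi_e(X_e)
                if random.random() >= kappa[e] * phi_e[e](tuple(X[v] for v in e))]
    return X, set().union(*[set(e) for e in violated])

def dynamic_sampler(X, R, E, phi_v, phi_e, q):
    """Drive Resample until R is empty, as in the slide's while-loop."""
    while R:
        X, R = resample_round(X, R, E, phi_v, phi_e, q)
    return X
```

Note that for an internal constraint e ⊆ R the minimization is over an empty set of outside variables, so κ_e = 1 and e is simply passed with probability ϕ_e(X_e).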

  17. Resampling
  Resample(X, R):
  • each e ∈ E+(R) computes κ_e = min_{x_e : x_{e∩R} = X_{e∩R}} ϕ_e(x_e) / ϕ_e(X_e);
  • each v ∈ R resamples X_v ∈ [q] independently according to ϕ_v ;
  • each e ∈ E+(R) is passed independently with prob. κ_e · ϕ_e(X_e) (otherwise e is violated);
  • R ← ⋃_{violated e∈E} e ;
  A more "natural" algorithm?
  • each boundary constraint e ∈ δ(R) is violated independently with prob.
      1 − min_{x_e : x_{e∩R} = X_{e∩R}} ϕ_e(x_e) / ϕ_e(X_e);
  • each v ∈ R resamples X_v independently from ϕ_v ;
  • each non-violated incident constraint e ∈ E+(R) is violated independently with prob. 1 − ϕ_e(X_e);
  • the variables of all violated constraints form the new R.

  18. Resampling
  Resample(X, R):
  • each e ∈ E+(R) computes κ_e = min_{x_e : x_{e∩R} = X_{e∩R}} ϕ_e(x_e) / ϕ_e(X_e);
  • each v ∈ R resamples X_v ∈ [q] independently according to ϕ_v ;
  • each e ∈ E+(R) is passed independently with prob. κ_e · ϕ_e(X_e) (otherwise e is violated);
  • R ← ⋃_{violated e∈E} e ;
  A more "natural" algorithm?
  • each boundary constraint e ∈ δ(R) is violated independently with prob.
      1 − min_{x_e : x_{e∩R} = X_{e∩R}} ϕ_e(x_e) / ϕ_e(X_e);
  • each v ∈ R resamples X_v independently from ϕ_v ;   ← wrong distribution?
  • each non-violated incident constraint e ∈ E+(R) is violated independently with prob. 1 − ϕ_e(X_e);
  • the variables of all violated constraints form the new R.

  19. Dynamic Sampler
  Upon receiving update (D, Φ_D):
  • apply the changes (D, Φ_D) to the current graphical model;
  • R ← vbl(D) ≜ (V ∩ D) ∪ (⋃_{e∈D∩E} e);
  • while R ≠ ∅ : (X, R) ← Resample(X, R);
  Resample(X, R):
  • each e ∈ E+(R) computes κ_e = min_{x_e : x_{e∩R} = X_{e∩R}} ϕ_e(x_e) / ϕ_e(X_e);
  • each v ∈ R resamples X_v ∈ [q] independently according to ϕ_v ;
  • each e ∈ E+(R) is passed independently with prob. κ_e · ϕ_e(X_e) (otherwise e is violated);
  • R ← ⋃_{violated e∈E} e ;

  20. Correctness of Sampling
  Correctness : Assuming the input sample X ~ µ, upon termination the dynamic sampler returns a sample distributed according to the updated distribution µ′.

