  1. Probabilistic Graphical Models Lecture 8 – Junction Trees CS/CNS/EE 155 Andreas Krause

  2. Announcements: Homework 2 due next Wednesday (Nov 4) in class. Start early!!! Project milestones due Monday (Nov 9): 4 pages of writeup, NIPS format (http://nips.cc/PaperInformation/StyleFiles). Best project award!!

  3. Key questions: How do we specify distributions that satisfy particular independence properties? → Representation. How can we identify independence properties present in data? → Learning. How can we exploit independence properties for efficient computation? → Inference.

  4. Typical queries: Conditional distribution. Compute the distribution of some variables given values for others. [Figure: alarm network with nodes E, B, A, J, M.]
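
In symbols (standard definitions, not spelled out on the slide): a conditional query reduces to computing and renormalizing a marginal of the joint:

$$P(X \mid E = e) \;=\; \frac{P(X, e)}{P(e)} \;=\; \frac{P(X, e)}{\sum_x P(x, e)}$$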

  5. Typical queries: Maximization. MPE (Most probable explanation): given values for some variables, compute the most likely assignment to all remaining variables. MAP (Maximum a posteriori): compute the most likely assignment to some variables. [Figure: alarm network with nodes E, B, A, J, M.]
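
The two queries differ in whether the non-query variables are maximized or summed out (standard definitions, added here for reference):

$$\text{MPE:}\quad x^* = \arg\max_x P(x \mid e) \qquad\quad \text{MAP:}\quad y^* = \arg\max_y \sum_z P(y, z \mid e)$$

Here $Y$ is the set of query variables and $Z$ the remaining unobserved variables; the mix of max and sum in MAP is what pushes its complexity above MPE's (see the next slide).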

  6. Hardness of inference for general BNs. Computing conditional distributions: exact solution is #P-complete; approximate solution is NP-hard. Maximization: MPE is NP-complete; MAP is NP^PP-complete. Inference in general BNs is really hard. Is all hope lost?

  7. Inference. Can exploit structure (conditional independence) to efficiently perform exact inference in many practical situations. For BNs where exact inference is not possible, can use algorithms for approximate inference (later this term).

  8. Variable elimination algorithm. Given a BN and query P(X | E = e): Remove variables irrelevant to {X, e}. Choose an ordering of X_1, …, X_n. Set up initial factors: f_i = P(X_i | Pa_i). For i = 1:n, if X_i ∉ {X, E}: collect all factors f that include X_i, generate a new factor g by marginalizing out X_i, and add g to the set of factors. Finally, renormalize P(x, e) to get P(x | e).
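
A minimal sketch of this procedure in Python. The Factor class and the helper names are illustrative, not from the lecture; a factor maps tuples of variable values to nonnegative reals, and evidence is assumed to have been absorbed into the initial factors already (see slide 33).

```python
from itertools import product

class Factor:
    def __init__(self, vars, table):
        self.vars = tuple(vars)    # scope, e.g. ('A', 'B')
        self.table = dict(table)   # {(a_val, b_val): number}

def multiply(f, g, domains):
    """Pointwise product over the union of the two scopes."""
    vars = tuple(dict.fromkeys(f.vars + g.vars))  # ordered union
    table = {}
    for vals in product(*(domains[v] for v in vars)):
        assign = dict(zip(vars, vals))
        table[vals] = (f.table[tuple(assign[v] for v in f.vars)]
                       * g.table[tuple(assign[v] for v in g.vars)])
    return Factor(vars, table)

def marginalize(f, var):
    """Sum one variable out of a factor."""
    keep = tuple(v for v in f.vars if v != var)
    table = {}
    for vals, p in f.table.items():
        key = tuple(val for val, name in zip(vals, f.vars) if name != var)
        table[key] = table.get(key, 0.0) + p
    return Factor(keep, table)

def eliminate(factors, order, query_vars, domains):
    """Sum out each non-query variable in `order`, then combine what's left."""
    factors = list(factors)
    for x in order:
        if x in query_vars:
            continue
        related = [f for f in factors if x in f.vars]
        if not related:
            continue
        factors = [f for f in factors if x not in f.vars]
        g = related[0]
        for f in related[1:]:
            g = multiply(g, f, domains)
        factors.append(marginalize(g, x))
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f, domains)
    return result  # renormalize result.table to get P(x | e)
```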

  9. Variable elimination for polytrees. [Figure: worked example on a polytree.]

  10. Complexity of variable elimination. Tree graphical models: using the correct elimination order, factor sizes do not increase! Inference in linear time!! General graphical models: ultimately NP-hard… Need to understand what happens if there are loops.

  11. Variable elimination with loops. [Figure: network with nodes A, J, M, R, L.]

  12. Elimination as graph transformation: Moralization (connect the parents of each node, then drop edge directions). [Figure: student network with nodes Coherence, Difficulty, Intelligence, Grade, SAT, Letter, Job, Happy.]

  13. Elimination: Filling edges. Eliminating a variable connects all of its remaining neighbors (fill-in edges). [Figure: moralized student network.]

  14. Impact of elimination order: {C,D,S,I,L,H,J,G} vs. {G,C,D,S,I,L,H,J}. Different elimination orders induce different graphs! [Figure: the two induced graphs for the student network.]
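
A short sketch that makes the comparison concrete: run elimination on the moralized (undirected) graph and record the width of an ordering. The function name and the adjacency-dict representation are illustrative:

```python
def induced_width(adj, order):
    """Eliminate nodes of an undirected (moralized) graph in `order`,
    adding fill-in edges, and return the width of the ordering: the
    largest neighbor count at elimination time, i.e. the largest
    induced clique size minus one. `adj` is {node: set(neighbors)}."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    width = 0
    for v in order:
        nbrs = g[v]
        width = max(width, len(nbrs))
        for a in nbrs:              # fill-in: connect v's remaining
            for b in nbrs:          # neighbors pairwise
                if a != b:
                    g[a].add(b)
        for a in nbrs:
            g[a].discard(v)         # remove v from the graph
        del g[v]
    return width
```

Running this on the moralized student network with the two orderings above should produce different widths, which is exactly the point of the slide.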

  15. Induced graph and VE complexity. Theorem: all factors arising in VE are defined over cliques (fully connected subgraphs) of the induced graph, and all maximal cliques of the induced graph arise as factors in VE. Treewidth for an ordering = size of the largest clique of the induced graph minus 1. Treewidth of a graph = minimal treewidth under the optimal ordering. VE is exponential in treewidth! [Figure: induced graph for the student network.]

  16. Compact representation ⇒ small treewidth?

  17. Finding the optimal elimination order. Theorem: deciding whether there exists an elimination order with induced width at most K is NP-hard. Proof by reduction from MAX-CLIQUE. In fact, an elimination order can be found in time exponential in treewidth, so finding the optimal ordering is as hard as inference… For which graphs can we find the optimal elimination order?

  18. Finding the optimal elimination order. For trees we can find the optimal ordering (as seen before). A graph is called chordal if every cycle of length ≥ 4 has a chord (an edge between some pair of non-consecutive nodes). Every tree is chordal! We can find the optimal elimination ordering for chordal graphs.
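
A sketch of the standard chordality test via maximum-cardinality search (Tarjan and Yannakakis): the reverse of an MCS visiting order is a perfect elimination ordering exactly when the graph is chordal. Function names and the adjacency-dict representation are illustrative; the clique check below is the simple quadratic version, not the optimized one.

```python
def mcs_order(adj):
    """Maximum-cardinality search: repeatedly visit the unvisited node
    with the most already-visited neighbors. Returns visiting order."""
    visited, order = set(), []
    weight = {v: 0 for v in adj}
    while len(order) < len(adj):
        v = max((u for u in adj if u not in visited), key=lambda u: weight[u])
        order.append(v); visited.add(v)
        for u in adj[v]:
            if u not in visited:
                weight[u] += 1
    return order

def is_chordal(adj):
    """Chordal iff the reverse MCS order is a perfect elimination
    ordering: each node's not-yet-eliminated neighbors form a clique."""
    peo = list(reversed(mcs_order(adj)))
    pos = {v: i for i, v in enumerate(peo)}
    for i, v in enumerate(peo):
        later = [u for u in adj[v] if pos[u] > i]  # still in the graph
        for a in later:                            # must be pairwise adjacent
            for b in later:
                if a != b and b not in adj[a]:
                    return False
    return True
```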

  19. Summary so far. Variable elimination complexity is exponential in the induced width of the elimination ordering. Finding the optimal ordering is NP-hard (many good heuristics; exact for trees and chordal graphs). Ultimately, inference is NP-hard. The only difference between conditional probability queries and MPE is ∑ vs. max. Variable elimination is a building block for many exact and approximate inference techniques.

  20. Answering multiple queries. [Figure: chain X_1 - X_2 - X_3 - X_4 - X_5.]

  21. Reusing computation. [Figure: chain X_1 - X_2 - X_3 - X_4 - X_5.]

  22. Next: Will learn about an algorithm for efficiently computing all marginals P(X_i | E = e) given fixed evidence E = e. Need an appropriate data structure for storing the computation → Junction trees.

  23. Junction trees. A junction tree for a collection of factors is a tree where each node is a cluster of variables, every factor is contained in some cluster C_i, and the running intersection property holds: if X ∈ C_i and X ∈ C_j, and C_m is on the path between C_i and C_j, then X ∈ C_m. [Figure: junction tree with clusters CD, DIG, GIS, GJSL, HGJ for the student network.]
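
An equivalent way to state the running intersection property: for every variable, the clusters containing it form a connected subtree. A small sketch that checks this directly (names and data layout are illustrative):

```python
def satisfies_rip(clusters, edges):
    """Running intersection property: for every variable X, the clusters
    containing X must be connected in the tree. `clusters` is a dict
    {name: set_of_vars}; `edges` is a list of (name, name) tree edges."""
    adj = {c: set() for c in clusters}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    for x in set().union(*clusters.values()):
        holding = {c for c, vs in clusters.items() if x in vs}
        start = next(iter(holding))
        seen, frontier = {start}, [start]   # BFS restricted to `holding`
        while frontier:
            c = frontier.pop()
            for nb in adj[c]:
                if nb in holding and nb not in seen:
                    seen.add(nb); frontier.append(nb)
        if seen != holding:                 # some cluster unreachable: RIP fails
            return False
    return True
```

For instance, with the slide's clusters {'CD': {'C','D'}, 'DIG': {'D','I','G'}, 'GIS': {'G','I','S'}, 'GJSL': {'G','J','S','L'}, 'HGJ': {'H','G','J'}} and the edges assumed to form a chain in that order, the check passes.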

  24. VE constructs a junction tree. One clique C_i for each factor f_i created in VE; C_i is connected to C_j if f_i is used to generate f_j. Every factor is used only once → tree. Theorem: the resulting tree satisfies the running intersection property. [Figure: chain X_1 - X_2 - X_3 - X_4 - X_5.]

  25. Example: JT from VE, ordering {C,D,I,H,G,S,L}. [Figure: student network.]

  26. Constructing a JT from a chordal graph. [Figure: student network.]

  27. Junction trees and independence. Theorem: suppose T is a junction tree for graph G and factors F. Consider an edge C_i - C_j with separator S_{i,j}, and variables X and Y on opposite sides of the separator. Then X ⊥ Y | S_{i,j}. Furthermore, I(T) ⊆ I(G). [Figure: junction tree CD, DIG, GIS, GJSL, HGJ.]

  28. Variable elimination in junction trees. Associate each CPT with a clique; the potential ψ_C of clique C is the product of its assigned CPTs. [Figure: junction tree CD, DIG, GIS, GJSL, HGJ over the student network.]
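
A sketch of this initialization step, reusing the Factor helpers from the variable-elimination snippet above; each CPT is multiplied into one clique whose scope covers it, and cliques that receive no CPT start as the constant-1 potential (names are illustrative):

```python
from itertools import product

def ones_factor(scope, domains):
    """Constant-1 potential over `scope`."""
    scope = tuple(scope)
    return Factor(scope, {vals: 1.0
                          for vals in product(*(domains[v] for v in scope))})

def assign_potentials(clusters, cpts, domains):
    """Initialize clique potentials as products of their assigned CPTs."""
    potentials = {c: ones_factor(vs, domains) for c, vs in clusters.items()}
    for f in cpts:
        # any clique whose scope contains the CPT's scope can host it
        home = next(c for c, vs in clusters.items() if set(f.vars) <= vs)
        potentials[home] = multiply(potentials[home], f, domains)
    return potentials
```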

  29. VE as message passing. VE for computing P(X_i): pick a root (any clique containing X_i). Don't eliminate; only send messages recursively from the leaves to the root. Each clique multiplies incoming messages with its clique potential, then marginalizes out the variables not in the separator. The root is "ready" when it has received all messages. [Figure: junction tree CD, DIG, GIS, GJSL, HGJ.]
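
A sketch of the leaves-to-root pass, again reusing the Factor helpers defined earlier; the data layout (`tree`, `separators`) is illustrative:

```python
def collect(root, tree, potentials, separators, domains):
    """Leaves-to-root message passing on a junction tree. `tree` is
    {clique: set(neighbor cliques)}; `separators` maps frozenset({i, j})
    to the set of shared variables. Returns the root's combined belief."""
    def message(c, parent):
        belief = potentials[c]
        for nb in tree[c]:               # recurse into the subtrees first
            if nb != parent:
                belief = multiply(belief, message(nb, c), domains)
        if parent is None:
            return belief                # the root keeps its full scope
        sep = separators[frozenset((c, parent))]
        for v in set(belief.vars) - sep:
            belief = marginalize(belief, v)   # project onto the separator
        return belief
    return message(root, None)
```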

  30. Correctness of message passing. Theorem: when the root is ready (has received all messages), all variables in the root have correct potentials. Follows from the correctness of VE. So far, no gain in efficiency… [Figure: junction tree CD, DIG, GIS, GJSL, HGJ.]

  31. Does the choice of root affect messages? [Figure: the same junction tree, 1: CD, 2: DIG, 3: GIS, 4: GJSL, 5: HGJ, drawn twice with different roots.]

  32. Shenoy-Shafer algorithm. A message from clique i to neighbor j is ready to transmit once i has received messages from all of its other neighbors; leaves are always ready. While there exists a ready message, send it. Complexity? Theorem: at convergence, every clique has correct beliefs. [Figure: clique tree C_1 through C_6.]
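
A sketch of this asynchronous schedule, building on the Factor helpers and the `tree`/`separators` layout from the earlier snippets (illustrative names throughout):

```python
def shenoy_shafer(tree, potentials, separators, domains):
    """Send every directed message i->j once it is ready; at convergence
    each clique's belief is its potential times all incoming messages."""
    msgs = {}                                   # (i, j) -> Factor
    pending = {(i, j) for i in tree for j in tree[i]}
    while pending:
        # in a tree some message is always ready (leaves to start with)
        i, j = next((i, j) for (i, j) in pending
                    if all((k, i) in msgs for k in tree[i] if k != j))
        f = potentials[i]
        for k in tree[i]:
            if k != j:
                f = multiply(f, msgs[(k, i)], domains)
        sep = separators[frozenset((i, j))]
        for v in set(f.vars) - sep:             # project onto the separator
            f = marginalize(f, v)
        msgs[(i, j)] = f
        pending.discard((i, j))
    beliefs = {}
    for i in tree:
        b = potentials[i]
        for k in tree[i]:
            b = multiply(b, msgs[(k, i)], domains)
        beliefs[i] = b
    return beliefs
```

With two messages per tree edge, every clique ends up with its correct belief, which is the efficiency gain over re-running VE per query.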

  33. Inference using VE. Want to incorporate evidence E = e: multiply all cliques containing evidence variables with the indicator potential 1_{E=e}, then perform variable elimination.
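
In practice, multiplying by the indicator and then summing the observed variable out is the same as simply restricting each factor to the observed value; a sketch using the Factor class from above:

```python
def restrict(f, var, value):
    """Condition a factor on observed `var` = `value`: keep only the
    consistent rows and drop the variable. Equivalent, for inference,
    to multiplying by the indicator potential and summing `var` out."""
    if var not in f.vars:
        return f
    keep = tuple(v for v in f.vars if v != var)
    table = {}
    for vals, p in f.table.items():
        assign = dict(zip(f.vars, vals))
        if assign[var] == value:
            table[tuple(assign[v] for v in keep)] = p
    return Factor(keep, table)
```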

  34. Summary so far. Junction trees represent the distribution, are constructed using an elimination order, and make the complexity of inference explicitly visible. Can implement variable elimination on junction trees to compute correct beliefs on all nodes. Now: Belief propagation, an important alternative to VE on junction trees; will later generalize to approximate inference! Key difference: messages are obtained by division rather than multiplication.
