An Ising Model inspired Extension of the Product-based MP Framework for SAT - PowerPoint PPT Presentation



  1. An Ising Model inspired Extension of the Product-based MP Framework for SAT
  Oliver Gableske, Institute of Theoretical Computer Science, Ulm University, Germany
  oliver@gableske.net, https://www.gableske.net
  SAT 2014, 17.07.2014
  Outline: Background, Message Passing, Modifying the PMPF, Conclusions

  2. Background and Motivation (1)
  General context:
  - Message Passing (MP) algorithms are used in CNF-SAT solving.
  - They provide biases for the variables of a given CNF formula, β(v) ∈ [−1.0, +1.0].
  - A variable bias estimates the variable's marginal assignment over all solutions.
  - It realizes a variable ordering heuristic (via the absolute value of the bias) and a value ordering heuristic (via the sign of the bias).

  3. Background and Motivation (2)
  Message Passing algorithms are comprised of two parts:
  - The MP framework governs the overall process to compute biases. It defines the message types, how the messages are updated (iterations), how long the updates are performed (convergence), and how the equilibrium messages are used to compute biases.
  - The MP heuristic influences the overall process to compute biases. It specifies the messages and the equations to compute biases.
  An MP heuristic is applied in an MP framework in order to compute biases.

  4. Background and Motivation (3)
  Last year's paper explained that all product-based MP algorithms currently available for SAT (e.g. the BP, SP, and EM variants) can be represented with the same MP framework (PMPF) in conjunction with the respective MP heuristic's equations.
  It furthermore showed that all MP heuristics can be combined into a single heuristic (ρσPMP) which interpolates between the "original" equations (ρ = 0, σ = 0 gives the BP equations; ρ = 1, σ = 0 gives the SP equations, ...).
  The final result was that all MP algorithms can be realized by applying the same MP heuristic (ρσPMP) in the same MP framework (PMPF).

  5. Background and Motivation (4)
  Current situation:
  - We have a rather flexible MP heuristic (ρσPMP). Only one heuristic has to be implemented to obtain all of the product-based MP algorithms. This allows parameter tuning (adapting ρ, σ for a given type of CNF formula) and helps us understand the theoretical connections between the various MP heuristics.
  - We have a totally static MP framework (PMPF). The PMPF does influence the MP algorithm, but we cannot adapt it.
  One possible goal to improve the situation: extend the PMPF in a reasonable (theoretically motivated) way, increasing its flexibility in practice and its usefulness in theory.

  6. Message Passing on a conceptual level (1)
  Example: F = (v1 ∨ v2 ∨ v3) ∧ (v1 ∨ ¬v2 ∨ v3) ∧ (¬v1 ∨ ¬v2 ∨ ¬v3)
  It is helpful to understand F as a factor graph.
  [Figure: factor graph of F with variable nodes v1, v2, v3 and clause nodes c1, c2, c3.]
  - Undirected, bipartite graph.
  - Two types of nodes: variable nodes (circles) and clause nodes (squares).
  - Two types of edges: positive edges (solid) and negative edges (dashed).
  - Edges constitute literal occurrences.
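  A minimal sketch (not from the talk) of how the example formula and its factor-graph structure might be represented in code; the signed-integer literal encoding and the occurrence lists are assumptions made for illustration, not the PMPF data layout.

```python
# Minimal sketch (assumption): the example formula F encoded with signed integers,
# +i for v_i and -i for its negation. The occurrence lists play the role of the
# factor graph's edges.

from collections import defaultdict

# F = (v1 or v2 or v3) and (v1 or not v2 or v3) and (not v1 or not v2 or not v3)
clauses = [
    [+1, +2, +3],   # c1
    [+1, -2, +3],   # c2
    [-1, -2, -3],   # c3
]

# occ[l] = indices of the clauses containing literal l; a positive (solid) edge
# between v and c corresponds to +v in c, a negative (dashed) edge to -v in c.
occ = defaultdict(list)
for c_idx, clause in enumerate(clauses):
    for lit in clause:
        occ[lit].append(c_idx)

print(dict(occ))  # {1: [0, 1], 2: [0], 3: [0, 1], -2: [1, 2], -1: [2], -3: [2]}
```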

  7. Message Passing on a conceptual level (2)
  Example: F = (v1 ∨ v2 ∨ v3) ∧ (v1 ∨ ¬v2 ∨ v3) ∧ (¬v1 ∨ ¬v2 ∨ ¬v3)
  An MP algorithm sends messages along the edges: clauses and variables "negotiate" about possible assignments. Assume variable v is contained in clause c as literal l. The MP framework defines two types of messages.

  8. Message Passing on a conceptual level (3)
  Example: F = (v1 ∨ v2 ∨ v3) ∧ (v1 ∨ ¬v2 ∨ v3) ∧ (¬v1 ∨ ¬v2 ∨ ¬v3)
  [Figure: disrespect messages δ1, δ2, δ3 sent from the variable nodes towards the clause nodes c1, c2, c3.]
  1. Disrespect messages (from variable nodes towards clause nodes): δ(l, c) ∈ [0.0, 1.0], the chance that l will not satisfy c.
  Intuitive meaning of δ(l, c) ≈ 1.0: variable v tells clause c that it cannot satisfy it via its occurrence l.

  9. Message Passing on a conceptual level (4)
  Example: F = (v1 ∨ v2 ∨ v3) ∧ (v1 ∨ ¬v2 ∨ v3) ∧ (¬v1 ∨ ¬v2 ∨ ¬v3)
  [Figure: warning messages ω1, ω2, ω3 sent from the clause nodes c1, c2, c3 towards a variable node.]
  2. Warning messages (from clause nodes towards variable nodes): ω(c, v) ∈ [0.0, 1.0], the chance that no other literal in c can satisfy c.
  Intuitive meaning of ω(c, v) ≈ 1.0: clause c tells variable v that it needs it to be satisfied.

  10. Message Passing on a conceptual level (5)
  Example: F = (v1 ∨ v2 ∨ v3) ∧ (v1 ∨ ¬v2 ∨ v3) ∧ (¬v1 ∨ ¬v2 ∨ ¬v3)
  The MP framework defines the warning computation as
  $\omega(c, v) = \prod_{l \in c \setminus \{v, \bar{v}\}} \delta(l, c)$
  [Figure: in the example, the warning ω(c1, v2) is the product of the disrespect messages δ(v1, c1) and δ(v3, c1).]
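  A small sketch of this warning update; the function name and the dictionary keyed by (literal, clause index) are illustrative assumptions.

```python
# Sketch of omega(c, v) = product over l in c \ {v, not v} of delta(l, c).
# The (literal, clause-index) keyed delta dictionary is an illustrative assumption.

def warning(clause, c_idx, var, delta):
    """omega(c, v): product of delta(l, c) over all literals l of c not on variable var."""
    prod = 1.0
    for lit in clause:
        if abs(lit) != var:
            prod *= delta[(lit, c_idx)]
    return prod

# Example: c1 = (v1 or v2 or v3) with arbitrarily chosen delta values.
c1 = [+1, +2, +3]
delta = {(+1, 0): 0.8, (+2, 0): 0.5, (+3, 0): 0.9}
print(warning(c1, 0, 2, delta))  # omega(c1, v2) = delta(v1, c1) * delta(v3, c1) ≈ 0.72
```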

  11. Message Passing on a conceptual level (6)
  The MP framework defines the cavity freedom value computations.
  1. Cavity freedom of l to satisfy c:
  $S(l, c) = \prod_{d \in C_{\neg l}} [1 - \omega(d, \mathrm{Var}(l))] \in [0.0, 1.0]$
  2. Cavity freedom of l to not satisfy c:
  $U(l, c) = \prod_{d \in C_{l} \setminus \{c\}} [1 - \omega(d, \mathrm{Var}(l))] \in [0.0, 1.0]$
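  A sketch of these two products, assuming occurrence lists keyed by literal (occ[l] = indices of clauses containing l) and warnings keyed by (clause index, variable); all names are illustrative, not part of the PMPF definition.

```python
# Sketch of the cavity freedom values S(l, c) and U(l, c); occ and omega follow the
# conventions of the earlier sketches and are assumptions, not the PMPF data layout.

def cavity_S(lit, c_idx, occ, omega):
    """S(l, c): product of [1 - omega(d, Var(l))] over clauses d containing the negated literal."""
    var = abs(lit)
    prod = 1.0
    for d in occ.get(-lit, []):
        prod *= 1.0 - omega[(d, var)]
    return prod

def cavity_U(lit, c_idx, occ, omega):
    """U(l, c): product of [1 - omega(d, Var(l))] over clauses d containing l itself, except c."""
    var = abs(lit)
    prod = 1.0
    for d in occ.get(lit, []):
        if d != c_idx:
            prod *= 1.0 - omega[(d, var)]
    return prod

# Example: literal +3 in clause c1 (index 0); in F, v3 also occurs positively in c2
# (index 1) and negatively in c3 (index 2).
occ = {+3: [0, 1], -3: [2]}
omega = {(1, 3): 0.4, (2, 3): 0.25}
print(cavity_S(+3, 0, occ, omega))  # 1 - 0.25 = 0.75
print(cavity_U(+3, 0, occ, omega))  # 1 - 0.4  = 0.6
```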

  12. Message Passing on a conceptual level (7)
  In summary, the MP framework defines that the δ values are used to compute the ω values, and the ω values are used to compute the S, U values.
  However, the MP algorithms do not update the values arbitrarily. The MP framework governs the overall process to update values: following a random clause permutation π ∈ S_m, it updates one clause at a time (clause-update). For all i ∈ {1, ..., m}:
  1. for all l ∈ c_π(i): update δ(l, c_π(i))
  2. for all v ∈ Var(c_π(i)): update ω(c_π(i), v)
  3. for all l ∈ c_π(i): update S(l, c_π(i)), U(l, c_π(i))
  Updating all clauses once is called an iteration (z = 0, 1, 2, ...). Multiple iterations form a cycle (y = 1, 2, 3, ...). The cycle of iterations ends once an abort condition holds. A sketch of one such iteration is given below.
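  A sketch of a single iteration under this clause-update order; the data layout, the delta_update hook, and the handling of not-yet-initialized values are assumptions, not the PMPF implementation.

```python
import random

def one_iteration(clauses, occ, delta, omega, S, U, delta_update):
    """Visit every clause once in a random permutation pi; per clause, update its
    delta values (heuristic-defined), then its omega values, then its S/U values."""
    pi = list(range(len(clauses)))
    random.shuffle(pi)                                  # random clause permutation pi in S_m
    for c_idx in pi:
        clause = clauses[c_idx]
        # 1. update delta(l, c) for all literals l of the clause
        for lit in clause:
            delta[(lit, c_idx)] = delta_update(S.get((lit, c_idx), 1.0),
                                               U.get((lit, c_idx), 1.0))  # defaults cover the first pass
        # 2. update omega(c, v) for all variables v of the clause
        for lit in clause:
            var = abs(lit)
            prod = 1.0
            for other in clause:
                if abs(other) != var:
                    prod *= delta[(other, c_idx)]
            omega[(c_idx, var)] = prod
        # 3. update S(l, c) and U(l, c) for all literals l of the clause
        for lit in clause:
            var = abs(lit)
            s = u = 1.0
            for d in occ.get(-lit, []):                 # clauses with the negated literal
                s *= 1.0 - omega.get((d, var), 0.0)     # missing warning treated as 0.0
            for d in occ.get(lit, []):                  # clauses with l itself, except c
                if d != c_idx:
                    u *= 1.0 - omega.get((d, var), 0.0)
            S[(lit, c_idx)], U[(lit, c_idx)] = s, u
```

  The delta_update argument is the hook through which the MP heuristic enters; the framework part above stays fixed.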

  13. Message Passing on a conceptual level (8)
  Conceptually, a cycle y consists of iterations z = 0, 1, ..., ∗, and the MP heuristic "links" the consecutive iterations: the δ values of iteration z are computed from the S, U values of iteration z − 1.
  For example, the Belief Propagation (BP) heuristic defines
  ${}^{y}_{z}\delta_{\mathrm{BP}}(l, c) = \frac{{}^{y}_{z-1}U_{\mathrm{BP}}(l, c)}{{}^{y}_{z-1}U_{\mathrm{BP}}(l, c) + {}^{y}_{z-1}S_{\mathrm{BP}}(l, c)} = \frac{U}{U + S}$
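  A sketch of this BP update; the zero-denominator guard and the function name are assumptions.

```python
def delta_bp(S_prev, U_prev):
    """BP heuristic: delta(l, c) = U / (U + S), using the S and U values of the previous iteration."""
    denom = U_prev + S_prev
    if denom == 0.0:          # guard (assumption): the equation itself leaves 0/0 undefined
        return 0.5
    return U_prev / denom

print(delta_bp(S_prev=0.75, U_prev=0.6))  # 0.6 / 1.35 ≈ 0.444
```

  A function like this could be passed as the delta_update hook in the iteration sketch above.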

  14. Message Passing on a conceptual level (9)
  The MP framework defines how to compute the biases from the equilibrium warnings ${}^{y}_{*}\omega(c, v)$.
  1. Compute the variable freedom to be assigned true (T) or false (F):
  ${}^{y}T(v) = \prod_{c \in C_v^-} [1 - {}^{y}_{*}\omega(c, v)]$ and ${}^{y}F(v) = \prod_{c \in C_v^+} [1 - {}^{y}_{*}\omega(c, v)]$
  2. Compute magnetization values using T and F: ${}^{y}\mu^{+}(v), {}^{y}\mu^{-}(v), {}^{y}\mu^{\pm}(v) \in [0.0, 1.0]$. These give ${}^{y}\mu(v) = {}^{y}\mu^{+}(v) + {}^{y}\mu^{-}(v) + {}^{y}\mu^{\pm}(v)$.
  3. Compute the biases: ${}^{y}\beta^{+}(v) = \frac{{}^{y}\mu^{+}(v)}{{}^{y}\mu(v)}$, ${}^{y}\beta^{-}(v) = \frac{{}^{y}\mu^{-}(v)}{{}^{y}\mu(v)}$, ${}^{y}\beta(v) = {}^{y}\beta^{+}(v) - {}^{y}\beta^{-}(v)$.
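  A sketch of the framework-level part of this computation; the μ values are passed in because their equations belong to the MP heuristic, and all names and the zero guard are assumptions.

```python
# Sketch of the framework-level bias computation from equilibrium warnings.
# pos_clauses / neg_clauses hold the indices of clauses containing v positively / negatively.

def variable_freedom(var, pos_clauses, neg_clauses, omega):
    """T(v) and F(v): T uses the clauses in C_v^- (negative occurrences of v),
    F uses the clauses in C_v^+ (positive occurrences)."""
    T = 1.0
    for c in neg_clauses:
        T *= 1.0 - omega[(c, var)]
    F = 1.0
    for c in pos_clauses:
        F *= 1.0 - omega[(c, var)]
    return T, F

def biases(mu_plus, mu_minus, mu_pm):
    """Framework equations: beta+ = mu+/mu, beta- = mu-/mu, beta = beta+ - beta-."""
    mu = mu_plus + mu_minus + mu_pm
    if mu == 0.0:                         # guard (assumption) against an all-zero magnetization
        return 0.0, 0.0, 0.0
    beta_plus, beta_minus = mu_plus / mu, mu_minus / mu
    return beta_plus, beta_minus, beta_plus - beta_minus
```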

  15. Message Passing on a conceptual level (10)
  Here, all computations are defined by the MP framework except the μ equations; the exact μ equations are defined by the MP heuristic.
  The Belief Propagation (BP) heuristic defines
  ${}^{y}\mu^{+}_{\mathrm{BP}}(v) = {}^{y}T_{\mathrm{BP}}(v)$, ${}^{y}\mu^{-}_{\mathrm{BP}}(v) = {}^{y}F_{\mathrm{BP}}(v)$, ${}^{y}\mu^{\pm}_{\mathrm{BP}}(v) = 0$
  Using the BP heuristic in the MP framework then results in
  ${}^{y}\beta_{\mathrm{BP}}(v) = \frac{{}^{y}T_{\mathrm{BP}}(v) - {}^{y}F_{\mathrm{BP}}(v)}{{}^{y}T_{\mathrm{BP}}(v) + {}^{y}F_{\mathrm{BP}}(v)}$
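  A minimal sketch of this closed form; the guard for T + F = 0 is an assumption.

```python
def beta_bp(T, F):
    """BP bias: mu+ = T, mu- = F, mu+- = 0, hence beta_BP = (T - F) / (T + F)."""
    if T + F == 0.0:          # guard (assumption); the formula itself leaves 0/0 undefined
        return 0.0
    return (T - F) / (T + F)

print(beta_bp(T=0.9, F=0.3))  # (0.9 - 0.3) / 1.2 ≈ 0.5, a bias towards true
```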

  16. Heuristics vs. Frameworks (1)
  In summary, we have the following set of equations (defined by either the MP framework or the MP heuristic).
  During the iterations we compute:
  - disrespect messages δ(l, c)
  - warning messages ω(c, v)
  - cavity freedom values S(l, c), U(l, c)
  After convergence we compute:
  - variable freedom values T(v), F(v)
  - variable magnetization values μ+(v), μ−(v), μ±(v), μ(v)
  - variable bias values β+(v), β−(v), β(v)
  If we want to extend the MP framework, we must not touch the MP heuristic equations; we must influence the equations used during the iterations.

  17. Heuristics vs. Frameworks (2)
  The focus lies on the warning message, currently defined as
  $\omega(c, v) = \prod_{l \in c \setminus \{v, \bar{v}\}} \delta(l, c)$
  How can we modify this equation in a meaningful way? Consult statistical mechanics and the Ising model: it has served in the derivation of the SP equations and provides concepts that have been ignored so far. However, the theory involved is quite substantial. In the following, modifications of ω are presented, but the "bottom-up" explanations of the underlying ideas are omitted.
