  1. Foundations of Artificial Intelligence. 8. Satisfiability and Model Construction: DPLL Procedure, Phase Transitions, Local Search, State of the Art. Joschka Boedecker, Wolfram Burgard, Frank Hutter, Bernhard Nebel, and Michael Tangermann. Albert-Ludwigs-Universität Freiburg, May 29, 2019

  2. Motivation
SAT solving is the best available technology for practical solutions to many NP-hard problems.
Formal verification:
- Verification of software: ruling out unintended states (null-pointer exceptions, etc.), proving that the program computes the right solution
- Verification of hardware (Pentium bug, etc.)
Practical approach: encode the problem into SAT and exploit the rapid progress in SAT solving.
Further applications: solving CSP instances in practice, solving graph coloring problems in practice, ...

  3. Contents
1. The SAT Problem
2. Davis-Putnam-Logemann-Loveland (DPLL) Procedure
3. "Average" Complexity of the Satisfiability Problem
4. Local Search Procedures
5. State of the Art

  5. Logical deduction vs. satisfiability
Propositional logic: typical algorithmic questions.
Logical deduction
- Given: a logical theory (set of propositions)
- Question: does a given proposition logically follow from this theory?
- Reducible to unsatisfiability, which is coNP-complete (complementary to the NP problems)
Satisfiability of a formula (SAT)
- Given: a logical theory
- Wanted: a model of the theory
- Example: configurations that fulfill the constraints given in the theory
- Can be "easier" because it is enough to find a single model

  6. The Satisfiability Problem (SAT)
Given: a propositional formula ϕ in CNF.
Wanted: a model of ϕ, or a proof that no such model exists.

  8. SAT and CSP
SAT can be formulated as a constraint satisfaction problem (→ search):
- CSP variables = symbols of the alphabet
- Domain of values = { T, F }
- Constraints are given by the clauses
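This CSP view can be sketched directly: each propositional symbol becomes a CSP variable over {T, F}, and each clause becomes a constraint requiring at least one satisfied literal. A minimal brute-force sketch, not from the lecture; the tuple encoding of literals and the function name are our own illustrative choices:

```python
from itertools import product

# A clause is a set of literals; a literal is a (variable, polarity) pair,
# e.g. ("a", True) for a and ("a", False) for ¬a.

def satisfiable_bruteforce(clauses):
    """Treat SAT as a CSP: variables over {T, F}, one constraint per clause.
    Enumerate all assignments and return the first model, or None."""
    variables = sorted({v for clause in clauses for (v, _) in clause})
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        # Every clause (constraint) needs at least one satisfied literal.
        if all(any(assignment[v] == pol for (v, pol) in clause)
               for clause in clauses):
            return assignment
    return None

# The formula from Example (1) below: {{a, b, ¬c}, {¬a, ¬b}, {c}, {a, ¬b}}
delta = [{("a", True), ("b", True), ("c", False)},
         {("a", False), ("b", False)},
         {("c", True)},
         {("a", True), ("b", False)}]
print(satisfiable_bruteforce(delta))  # {'a': True, 'b': False, 'c': True}
```

Exhaustive enumeration is exponential in the number of variables, which is exactly what the inference rules of DPLL (next section) try to avoid.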

  12. The DPLL algorithm
The DPLL algorithm (Davis, Putnam, Logemann, Loveland, 1962) corresponds to backtracking with inference in CSPs.
Recursive call DPLL(∆, l) with
- ∆: set of clauses
- l: partial variable assignment
Result: a satisfying assignment that extends l, or "unsatisfiable" if no such assignment exists. First call: DPLL(∆, ∅).
Inference in DPLL:
- Simplify: if variable v is assigned a value d, then all clauses containing v are simplified immediately (corresponds to forward checking)
- Variables in unit clauses (= clauses with only one literal) are immediately assigned (corresponds to the minimum-remaining-values ordering in CSPs)

  14. The DPLL Procedure
DPLL function: given a set of clauses ∆ defined over a set of variables Σ, return "satisfiable" if ∆ is satisfiable; otherwise return "unsatisfiable".
1. If ∆ = ∅, return "satisfiable".
2. If □ ∈ ∆, return "unsatisfiable".
3. Unit-propagation rule: if ∆ contains a unit clause C, assign the truth value to the variable in C that satisfies C, simplify ∆ to ∆′, and return DPLL(∆′).
4. Splitting rule: select from Σ a variable v that has not been assigned a truth value. Assign one truth value t to it, simplify ∆ to ∆′, and call DPLL(∆′).
   a. If the call returns "satisfiable", then return "satisfiable".
   b. Otherwise, assign the other truth value to v in ∆, simplify to ∆′′, and return DPLL(∆′′).
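The four rules can be sketched as a short recursive procedure. This is an illustrative Python sketch, not the lecture's exact pseudocode: clauses are frozensets of (variable, polarity) pairs (our own encoding), and the model is returned instead of the string "satisfiable":

```python
def simplify(clauses, var, value):
    """Assign var := value: drop satisfied clauses, shorten the rest."""
    result = []
    for clause in clauses:
        if (var, value) in clause:                   # clause is satisfied
            continue
        result.append(clause - {(var, not value)})   # remove falsified literal
    return result

def dpll(clauses, assignment):
    if not clauses:                                  # 1. empty set: satisfiable
        return assignment
    if frozenset() in clauses:                       # 2. empty clause: dead end
        return None
    for clause in clauses:                           # 3. unit-propagation rule
        if len(clause) == 1:
            (var, value), = clause
            return dpll(simplify(clauses, var, value),
                        {**assignment, var: value})
    (var, _) = next(iter(clauses[0]))                # 4. splitting rule
    for value in (True, False):
        result = dpll(simplify(clauses, var, value),
                      {**assignment, var: value})
        if result is not None:
            return result
    return None

# Example (1) from the next slides: {{a, b, ¬c}, {¬a, ¬b}, {c}, {a, ¬b}}
delta = [frozenset({("a", True), ("b", True), ("c", False)}),
         frozenset({("a", False), ("b", False)}),
         frozenset({("c", True)}),
         frozenset({("a", True), ("b", False)})]
print(dpll(delta, {}))   # a model: a ↦ T, b ↦ F, c ↦ T
```

Simplification here plays the role of forward checking: satisfied clauses disappear, and falsified literals are pruned, which is what creates new unit clauses for rule 3.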

  22. Example (1)
∆ = {{a, b, ¬c}, {¬a, ¬b}, {c}, {a, ¬b}}
1. Unit-propagation rule: c ↦ T, giving {{a, b}, {¬a, ¬b}, {a, ¬b}}
2. Splitting rule:
2a. a ↦ F, giving {{b}, {¬b}}
3a. Unit-propagation rule: b ↦ T, giving {□} (empty clause: this branch is unsatisfiable)
2b. a ↦ T, giving {{¬b}}
3b. Unit-propagation rule: b ↦ F, giving {} (satisfiable)

  26. Example (2)
∆ = {{a, ¬b, ¬c, ¬d}, {b, ¬d}, {c, ¬d}, {d}}
1. Unit-propagation rule: d ↦ T, giving {{a, ¬b, ¬c}, {b}, {c}}
2. Unit-propagation rule: b ↦ T, giving {{a, ¬c}, {c}}
3. Unit-propagation rule: c ↦ T, giving {{a}}
4. Unit-propagation rule: a ↦ T, giving {} (satisfiable)
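In Example (2), unit propagation alone decides satisfiability: once d is forced, b, c, and finally a follow without any splitting. A small self-contained sketch of the propagation loop (literals encoded as (variable, polarity) pairs, an illustrative choice rather than the lecture's notation):

```python
def unit_propagate(clauses):
    """Repeatedly apply the unit-propagation rule; return the forced
    assignment and the simplified clause set."""
    assignment = {}
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return assignment, clauses
        (var, value), = unit
        assignment[var] = value
        clauses = [c - {(var, not value)}        # drop the falsified literal
                   for c in clauses
                   if (var, value) not in c]     # keep only unsatisfied clauses

# ∆ from Example (2): {{a, ¬b, ¬c, ¬d}, {b, ¬d}, {c, ¬d}, {d}}
delta = [frozenset({("a", True), ("b", False), ("c", False), ("d", False)}),
         frozenset({("b", True), ("d", False)}),
         frozenset({("c", True), ("d", False)}),
         frozenset({("d", True)})]
assignment, remaining = unit_propagate(delta)
print(assignment)   # d, b, c, then a are all forced to True
print(remaining)    # [] -- no clauses left, so ∆ is satisfiable
```

Formulas like this, where propagation cascades to a full assignment, are solved in linear time; the exponential cost of DPLL comes only from the splitting rule.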
