Learning a SAT Solver from Single-Bit Supervision


Learning a SAT Solver from Single-Bit Supervision
Daniel Selsam¹, Matthew Lamm¹, Benedikt Bünz¹, Percy Liang¹, Leonardo de Moura², David L. Dill¹
¹Stanford University  ²Microsoft Research
March 22nd, 2018

Setup

Train:
- Input: SAT problem P
- Output: 𝟙{P is satisfiable}

Test:
- P → NeuroSAT → unsat, or
- P → NeuroSAT → sat, together with a decoded solution
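The training signal above is literally a single bit per problem, so the training objective reduces to binary cross-entropy between the network's scalar logit and that bit. A minimal sketch (the function name is ours, not from the slides):

```python
import math

def single_bit_loss(logit: float, is_sat: bool) -> float:
    """Binary cross-entropy between sigmoid(logit) and the one-bit label."""
    p = 1.0 / (1.0 + math.exp(-logit))   # predicted P(satisfiable)
    y = 1.0 if is_sat else 0.0
    eps = 1e-12                          # guard against log(0)
    return -(y * math.log(p + eps) + (1.0 - y) * math.log(1.0 - p + eps))
```

A confident correct prediction (large logit with a sat label) incurs a small loss; a confident wrong one incurs a large loss.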

Background: SAT

- A propositional formula can be represented as an AND of ORs.
  - example: (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2), with clauses labeled c1 and c2
- Jargon:
  - x1, ¬x1, x2, ¬x2 are all literals
  - c1, c2 are both clauses
- A formula is satisfiable if there exists a satisfying assignment.
  - example: the formula above is satisfiable (e.g. by the assignment 11)
- The SAT problem: given a formula,
  - determine if it is satisfiable
  - if it is, find a satisfying assignment
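The definitions above can be made concrete with a few lines of code. This sketch uses the common DIMACS convention (variable i as the signed integer i, its negation as -i) and a brute-force check, which is fine for tiny formulas:

```python
from itertools import product

def is_satisfiable(n_vars, clauses):
    """Brute-force SAT check. Literals are signed ints (DIMACS style):
    i means x_i is true, -i means x_i is false."""
    for assignment in product([False, True], repeat=n_vars):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # the formula holds iff every clause has at least one true literal
        if all(any(lit_true(lit) for lit in clause) for clause in clauses):
            return True
    return False
```

For the slide's example, `is_satisfiable(2, [[1, -2], [-1, 2]])` returns True (the assignment 11 works), while a direct contradiction like `[[1], [-1]]` returns False.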

Machine learning for SAT

- Goal: train a neural network to predict satisfiability.
  - (we'll discuss decoding solutions later)
- Two design challenges:
  - what problems do we train on?
  - with what kind of architecture?

Training data

- Issue: some problem distributions might be easy to classify based on superficial properties.
  - want: difficult problems, to force the network to learn something general
- We define a distribution SR(n) over problems such that:
  - problems come in pairs: one unsat, one sat
  - the two differ only by the negation of a single literal in a single clause
- To sample a pair from SR(n):
  - keep sampling random clauses until the conjunction becomes unsat
  - flip a single literal in the final clause to make it sat
  - return the pair
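The flip in the last step is sound: any assignment satisfying the clauses before the final one must falsify every literal of that final clause (otherwise the formula would still be sat), so negating any one of its literals makes the formula satisfiable again. A sketch of the sampler, with two labeled simplifications: the clause-length distribution here (uniform 1..4) is not the paper's, and a naive brute-force check stands in for a real SAT solver:

```python
import random
from itertools import product

def is_satisfiable(n_vars, clauses):
    """Naive check; stand-in for a real solver (only viable for small n)."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return True
    return False

def sample_sr_pair(n, rng=None):
    """Sample an (unsat, sat) pair in the spirit of SR(n)."""
    rng = rng or random.Random(0)
    clauses = []
    while True:
        k = min(rng.randint(1, 4), n)                 # simplified length distribution
        variables = rng.sample(range(1, n + 1), k)    # distinct variables per clause
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
        if not is_satisfiable(n, clauses):
            break                                     # adding this clause made it unsat
    unsat = [list(c) for c in clauses]
    sat = [list(c) for c in clauses]
    sat[-1][0] = -sat[-1][0]                          # negate one literal in final clause
    return unsat, sat
```

The returned pair differs in exactly one literal, so a classifier cannot separate the labels by superficial statistics.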

Network architecture

Example formula: (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2), with clauses c1 and c2.

[Diagram: a bipartite graph with literal nodes x1, ¬x1, x2, ¬x2 and clause nodes c1, c2; each clause is connected to the literals it contains, and each literal to its complement.]

NeuroSAT:
- maintain an embedding at every node
- iteratively pass messages along the edges of the graph
- after T time steps, map each literal embedding to a scalar "vote"
- average the votes to compute a single logit
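The four bullets above can be sketched as a forward pass. This is a structural sketch only: the real NeuroSAT uses learned MLP messages and LSTM updates, whereas the weights here are random and untrained, so the output logit is meaningless; it just shows the shape of the computation:

```python
import numpy as np

def neurosat_forward(n_vars, clauses, d=8, T=10, seed=0):
    """Structural sketch of NeuroSAT-style message passing."""
    rng = np.random.default_rng(seed)
    n_lits = 2 * n_vars                        # rows: x_1..x_n, then their negations
    def lit_index(lit):                        # DIMACS literal -> row index
        v = abs(lit) - 1
        return v if lit > 0 else n_vars + v
    # adjacency M[l, c] = 1 iff literal l occurs in clause c
    M = np.zeros((n_lits, len(clauses)))
    for c, clause in enumerate(clauses):
        for lit in clause:
            M[lit_index(lit), c] = 1.0
    L = rng.normal(size=(n_lits, d))           # literal embeddings
    C = rng.normal(size=(len(clauses), d))     # clause embeddings
    W1 = rng.normal(size=(d, d)) / np.sqrt(d)  # stand-in message weights
    W2 = rng.normal(size=(d, d)) / np.sqrt(d)
    flip = np.concatenate([np.arange(n_vars) + n_vars, np.arange(n_vars)])
    for _ in range(T):
        C = np.tanh(M.T @ L @ W1)              # clauses hear from their literals
        L = np.tanh(M @ C @ W2 + L[flip])      # literals hear from clauses + complement
    w_vote = rng.normal(size=d)
    votes = L @ w_vote                         # one scalar vote per literal
    return votes.mean()                        # average -> single logit
```

Note the graph encodes only structure: the same network applies to any number of variables and clauses, which is what later allows testing on larger problems than seen in training.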

Experiment

- Datasets:
  - Train: SR(U(10, 40))
  - Test: SR(40)
- SR(40):
  - 40 variables (a trillion possible assignments)
  - ≈ 200 clauses
  - ≈ 1,000 literal occurrences
  - uniformly random: a tangled, structureless mess
  - every problem a single bit away from flipping its label
  - (caveat: easy for state-of-the-art solvers)
- Results: NeuroSAT predicts satisfiability with 85% accuracy.

NeuroSAT in action

[A sequence of animation slides; the figures are not preserved in this transcript.]

Decoding satisfying assignments

[A sequence of animation slides; the figures are not preserved in this transcript.]

Percent of satisfiable problems in SR(40) solved: 70%
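The decoding procedure itself is not spelled out in this transcript; in the paper, assignments are recovered by partitioning the literal embeddings into two clusters and trying the two induced truth assignments. A rough sketch, with a plain 2-means as our stand-in for the clustering step (embedding layout follows the row convention used above: positives first, then negations):

```python
import numpy as np

def decode_assignments(L, n_vars, iters=20, seed=0):
    """Cluster literal embeddings into two groups, then return the two
    candidate assignments (one per choice of which cluster means 'true').
    L has shape (2*n_vars, d): row i is x_{i+1}, row n_vars+i its negation."""
    rng = np.random.default_rng(seed)
    centers = L[rng.choice(len(L), 2, replace=False)].copy()
    for _ in range(iters):                               # plain 2-means
        d2 = ((L[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for c in range(2):
            if (labels == c).any():
                centers[c] = L[labels == c].mean(0)
    pos = labels[:n_vars]            # cluster label of each positive literal
    a1 = (pos == 0)                  # candidate: cluster 0 means 'true'
    a2 = (pos == 1)                  # candidate: cluster 1 means 'true'
    return a1, a2
```

Both candidates are then checked against the formula; the 70% figure above counts problems where one of them satisfies it.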

Running for more rounds

[Figure not preserved in this transcript.]

Scaling to bigger problems

[Figure not preserved in this transcript.]

Generalizing to other domains

- NeuroSAT generalizes to problems from other domains.
- First we generate random graphs.
- For each graph, we generate:
  - k-coloring problems (3 ≤ k ≤ 5)
  - k-dominating-set problems (2 ≤ k ≤ 4)
  - k-clique problems (3 ≤ k ≤ 5)
  - k-cover problems (4 ≤ k ≤ 6)
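Each of these graph problems reaches NeuroSAT as a CNF formula via a standard reduction. As one example, here is the usual encoding of k-coloring (the variable numbering is ours; the slides don't specify which encoding was used):

```python
from itertools import combinations

def coloring_to_cnf(n_nodes, edges, k):
    """Encode graph k-coloring as CNF. Variable var(v, c) means
    'node v gets color c'; literals are DIMACS-style signed ints."""
    def var(v, c):
        return v * k + c + 1                   # 1-based variable index
    clauses = []
    for v in range(n_nodes):
        clauses.append([var(v, c) for c in range(k)])    # node gets some color
        for c1, c2 in combinations(range(k), 2):         # ...and at most one
            clauses.append([-var(v, c1), -var(v, c2)])
    for u, w in edges:                                   # adjacent nodes differ
        for c in range(k):
            clauses.append([-var(u, c), -var(w, c)])
    return n_nodes * k, clauses
```

For a triangle with k = 3 this yields 9 variables and 21 clauses; with k = 2 the resulting formula is unsatisfiable, matching the fact that a triangle is not 2-colorable.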
