
  1. Machine Learning for SAT Holger H. Hoos Department of Computer Science University of British Columbia Canada Dagstuhl Seminar on Theory and Practice of SAT Solving 2015/04/21

  2. How can machine learning methods help to ...
- build better SAT solvers
- predict (and understand) solver performance
Disclaimer: Lots of literature; no comprehensive survey, just an overview of some work in the area.
Holger Hoos: Machine learning for SAT

  3. Part 1

  4. Automatic configuration of SAT solvers
Hutter, HH, Stützle (2007); Hutter, HH, Leyton-Brown, Stützle (2009); Hutter, Lindauer, Balint, Bayless, HH, Leyton-Brown (in preparation)
Goal: Find parameter settings of a SAT solver that achieve optimised performance for a specific distribution of instances.
Approaches (work for arbitrary problems, solvers):
- stochastic local search in configuration space (ParamILS)
- sequential model-based optimisation (SMAC)
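The local-search idea behind ParamILS can be sketched as follows. This is a toy, hypothetical simplification (the real configurator adds adaptive capping, intensification and a proper iterated-local-search schedule); `evaluate` stands in for running the solver on a benchmark set and measuring a cost such as PAR10:

```python
import random

def configure(param_space, evaluate, iterations=500, restart_prob=0.05, seed=0):
    """Toy ParamILS-style configurator: hill-climbing over one-parameter
    exchanges in a discrete configuration space, with occasional restarts."""
    rng = random.Random(seed)
    random_config = lambda: {p: rng.choice(vs) for p, vs in param_space.items()}
    current = random_config()
    current_cost = evaluate(current)
    best, best_cost = dict(current), current_cost
    for _ in range(iterations):
        if rng.random() < restart_prob:          # escape local optima
            current = random_config()
            current_cost = evaluate(current)
        # one-exchange neighbourhood: change the value of a single parameter
        p = rng.choice(list(param_space))
        neighbour = dict(current, **{p: rng.choice(param_space[p])})
        neighbour_cost = evaluate(neighbour)
        if neighbour_cost <= current_cost:       # accept non-worsening moves
            current, current_cost = neighbour, neighbour_cost
        if current_cost < best_cost:
            best, best_cost = dict(current), current_cost
    return best, best_cost
```

For example, `configure({'restarts': [10, 100, 1000], 'phase': [0, 1]}, cost_fn)` would search the six possible configurations of these two hypothetical parameters.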

  5. Lingeling on software verification (2013 Configurable SAT Solver Competition data)
average running time 3.32 → 0.16 CPU sec; rank 4 → 1 on Industrial SAT+UNSAT

  6. Clasp-3.0.4-p8 on N-Rooks (2014 Configurable SAT Solver Competition data)
PAR10 705 → 5 CPU sec; rank 5 → 1 on Industrial SAT+UNSAT
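The PAR10 scores used here (and below) penalise timeouts at ten times the cutoff. A minimal sketch, with `float('inf')` marking unsolved instances:

```python
def par10(runtimes, cutoff):
    """Penalised average runtime: runs finishing within the cutoff count
    their actual time; timeouts count as 10 * cutoff (PAR10)."""
    return sum(t if t < cutoff else 10 * cutoff for t in runtimes) / len(runtimes)

# e.g. one solved instance and one timeout at a 900 s cutoff:
# par10([10.0, float('inf')], 900) == (10 + 9000) / 2 == 4505.0
```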

  7. Knuth’s sat13 on a diverse set of instances (very easy to medium; trained on very easy only)
[Scatter plot: running time, optimised vs default configuration, in gmems]
mean running time 0.572 → 0.402 gmems; geometric average speedup: 1.414-fold

  8. Knuth’s sat13 on a diverse set of instances (TAOCP testing instances; trained on very easy)
[Scatter plot: running time, optimised vs default configuration, in gmems]
mean running time 47.4 → 36.9 gmems; geometric average speedup: 1.357-fold

  9. Citations to key publications on algorithm configuration
[Bar chart: citations per year, 2007–2014, for GGA (Ansotegui et al. 09), I/F-Race (Balaprakash et al. 07), SMAC (Hutter et al. 11) and ParamILS (Hutter et al. 09)]
(Data from Google Scholar)

  10. Part 2

  11. Automatic per-instance solver selection
Xu, Hutter, HH, Leyton-Brown (2007–2012)
Goal: Given a set S of SAT solvers and an instance i to be solved, find the solver from S that can be expected to work best on i.
Approaches (work for arbitrary problems, solvers):
- for each solver, learn a function that predicts solver performance from instance features; select the solver with the best predicted performance (SATzilla 2007–2009)
- use a cost-based classification method to select a solver based on instance features (SATzilla 2011–present)
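The first (regression-based) approach can be sketched as follows; a hypothetical single-feature linear predictor per solver stands in for SATzilla's actual feature set and learned runtime models:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y ≈ a*x + b on a single instance feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def select_solver(training, feature):
    """Learn one runtime predictor per solver from (feature, runtime) pairs,
    then pick the solver with the lowest predicted runtime on the new instance."""
    models = {s: fit_linear([x for x, _ in runs], [y for _, y in runs])
              for s, runs in training.items()}
    return min(models, key=lambda s: models[s][0] * feature + models[s][1])
```

For instance, with hypothetical data where one solver's runtime grows with the feature and another's shrinks, the selected solver flips as the feature value crosses the break-even point.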

  12. 2012 SAT Challenge
Rank  Application            Hard Combinatorial   Random (SAT only)
1     SATzilla2012 APP       SATzilla2012 COMB    CCASat
2     SATzilla2012 ALL       SATzilla2012 ALL     SATzilla2012 RAND
3     Industrial SAT Solver  ppfolio2012          SATzilla2012 ALL
4     lingeling (2011)       interactSAT_c        Sparrow (2011)
5     interactSAT            pfolioUZK            EagleUP (2011)
6     glucose                aspeed-crafted       sattime2012
7     SINN                   Clasp-crafted        ppfolio2012
2/3 first, 3/3 second, 3/3 third places

  13. 2012 SAT Challenge: Single best solver vs SATzilla
[Scatter plot: SATzilla2012 APP runtime vs lingeling2011 runtime, in CPU sec]

  14. Part 3

  15. Automatic construction of parallel solver portfolios (ACPP)
Lindauer, HH, Leyton-Brown, Schaub (2012; under review)
Goal: Given one or more parametrised SAT solvers, build a parallel portfolio (from solvers and configurations) with optimised performance for a specific distribution of instances.
Approaches (work for arbitrary problems, solvers):
- configure the joint parameter space
- iteratively build the portfolio by greedily adding components that achieve the largest performance improvement
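The greedy approach can be sketched as follows, under the idealised assumption (mine, for illustration) that a portfolio's runtime on an instance is the best runtime among its components, i.e. perfect parallel execution with no overhead:

```python
def greedy_portfolio(perf, k):
    """perf maps component name -> list of runtimes over a fixed instance set.
    Iteratively add the component that most improves total portfolio runtime,
    where the portfolio's runtime per instance is the min over its components."""
    portfolio, current = [], None
    for _ in range(k):
        def total_with(c):
            if current is None:
                return sum(perf[c])
            return sum(min(a, b) for a, b in zip(current, perf[c]))
        best = min((c for c in perf if c not in portfolio), key=total_with)
        portfolio.append(best)
        current = (list(perf[best]) if current is None
                   else [min(a, b) for a, b in zip(current, perf[best])])
    return portfolio
```

The first component chosen is the best single solver; later picks reward complementarity rather than stand-alone strength, which is what makes portfolios outperform their parts.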

  16. Parallel portfolio based on Lingeling (v.276) on 2012 SAT Comp. Application instances
[Scatter plot: runtime Greedy-MT4 vs runtime Conf-SP, in sec]

  17. Parallel portfolio based on clasp (v.2.0.2) on 2012 SAT Comp. Application instances
[Scatter plot: runtime Greedy-MT4 vs runtime Conf-ST, in sec]

  18. Sequential vs parallel solvers on 2012 SAT Comp. Application instances
Solver                       # time-outs   PAR10
Sequential solvers:
  pfolioUZK                  150           4656
  glucose-2.1                55            1778
  SATzilla-2012-APP          38            1289
Parallel solvers with default config:
  ppfolio+CS                 46            1506
  pfolioUZK-MP(8)+CS         35            1168
  Plingeling(aqw)+CS (2013)  32            1058
ACPP solvers:
  Global-MP(8)               35            1172
  ParHydra4-MT(8)            29            992
(ACPP solvers constructed based on the solver set underlying pfolioUZK)

  19. Part 4

  20. Scaling of running time with instance size
Mu & Hoos (IJCAI-15)
Goal: Study the empirical time complexity of solving phase-transition random 3-SAT instances using high-performance SAT solvers.
Methodology:
- fit parametric models to running time data
- challenge models by extrapolation
- assess models using bootstrap confidence intervals
(Hoos 2009; Hoos & Stützle 2014)
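The model-fitting step can be sketched as least-squares fits in log space, which is one standard way (an assumption here, not necessarily the study's exact procedure) to fit the two model families that appear on the following slides:

```python
import math

def fit_models(ns, times):
    """Fit both model families by ordinary least squares in log space:
    exponential t = a * b**n   via  log t = log a + n * log b,
    polynomial  t = a * n**b   via  log t = log a + b * log n."""
    def ols(xs, ys):
        m = len(xs)
        mx, my = sum(xs) / m, sum(ys) / m
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        return slope, my - slope * mx
    logt = [math.log(t) for t in times]
    sb, sa = ols(ns, logt)                          # exponential fit
    exp_model = lambda n, a=math.exp(sa), b=math.exp(sb): a * b ** n
    pb, pa = ols([math.log(n) for n in ns], logt)   # polynomial fit
    poly_model = lambda n, a=math.exp(pa), b=pb: a * n ** b
    return exp_model, poly_model
```

Both models are then "challenged" by comparing their extrapolations against running times measured on larger instances, with bootstrap resampling supplying confidence intervals around each fit.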

  21. Scaling of running time for WalkSAT/SKC
[Plot: time (CPU sec) vs n, with support data, challenge data (with confidence intervals), and bootstrap intervals for both models]
Exp. model: 6.89157e-04 × 1.00798^n
Poly. model: 8.83962e-11 × n^3.18915
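Plugging a challenge size into the two fitted models from this slide shows why extrapolation discriminates between them; at n = 1000 their predictions have clearly diverged:

```python
# Fitted models for WalkSAT/SKC running time (CPU sec), constants from the slide
exp_model  = lambda n: 6.89157e-04 * 1.00798 ** n
poly_model = lambda n: 8.83962e-11 * n ** 3.18915

# At the largest challenge size the exponential model predicts several
# times the polynomial model's running time:
print(exp_model(1000), poly_model(1000))  # roughly 1.95 vs 0.33 CPU sec
```

Which prediction the measured challenge data falls into is what decides between the model families; for the SLS solvers it favours the polynomial model.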

  22. Scaling of running time for march_hi
[Plot: CPU time (sec) vs n, with support and challenge data for satisfiable and unsatisfiable instances, and bootstrap intervals for the exp. model on satisfiable instances]
Exp. model for sat.: 8.33113e-06 × 1.03119^n

  23. Main findings:
- median running times of SLS-based solvers (WalkSAT/SKC, BalancedZ, probSAT) scale polynomially (degree ≈ 3)
- median running times of complete solvers (kcnfs, march_hi, march_br) scale exponentially (base ≈ 1.03)
- scaling models for the SLS-based solvers are very similar, but the march variants scale better than kcnfs

  24. Take-home message: Machine learning techniques ...
- let us model & predict solver behaviour
- help us build better solvers
- give us interesting insights into solver performance
If you care about SAT solving in practice: use these techniques!
If you only care about theorems: you may be inspired by the results thus obtained.

  25. Holger H. Hoos, "∃∀: Empirical Algorithmics", Cambridge University Press (nearing completion)

  26. References (1):
– Hoos, H. H. (2009). A bootstrap approach to analysing the scaling of empirical run-time data with problem size. Technical Report TR-2009-16, University of British Columbia, June 2009. http://www.cs.ubc.ca/hoos/Publ/Hoos09.pdf
– Hoos, H. H.; Leyton-Brown, K.; Schaub, T.; Schneider, M. (2012). Algorithm configuration for portfolio-based parallel SAT-solving. Proceedings of the First Workshop on Combining Constraint Solving with Mining and Learning (CoCoMile-12), pp. 7–12.
– Hoos, H. H.; Stützle, T. (2014). On the empirical scaling of run-time for finding optimal solutions to the travelling salesman problem. European Journal of Operational Research 238: 87–94.
– Hutter, F.; Hoos, H.; Leyton-Brown, K.; Stützle, T. (2009). ParamILS: An automatic algorithm configuration framework. Journal of Artificial Intelligence Research 36: 267–306.
– Hutter, F.; Hoos, H. H.; Stützle, T. (2007). Automatic algorithm configuration based on local search. Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI-07), pp. 1152–1157.
– Hutter, F.; Lindauer, M.; Balint, A.; Bayless, S.; Hoos, H. H.; Leyton-Brown, K. (2015). The Configurable SAT Solver Challenge (CSSC). Under review.
– Hutter, F.; Xu, L.; Hoos, H. H.; Leyton-Brown, K. (2014). Algorithm runtime prediction: Methods & applications. Artificial Intelligence, pp. 79–111, January 2014.
– Lindauer, M.; Hoos, H. H.; Leyton-Brown, K.; Schaub, T. (2015). Automatic construction of parallel portfolios via algorithm configuration. Under review.
– Mu, Z.; Hoos, H. H. (2015). On the empirical time complexity of random 3-SAT at the phase transition. Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI-15), to appear. Preprint: http://www.cs.ubc.ca/~hoos/Publ/MuHoo15-preprint.pdf
