SLIDE 1 Machine Learning for SAT
Holger H. Hoos
Department of Computer Science University of British Columbia Canada
Dagstuhl Seminar on Theory and Practice of SAT Solving 2015/04/21
SLIDE 2 How can machine learning methods help to ...
- build better SAT solvers
- predict (and understand) solver performance
Disclaimer: There is a lot of literature; this is not a comprehensive survey, just an overview of some work in the area.
Holger Hoos: Machine learning for SAT 2
SLIDE 3 (Part 1)
SLIDE 4 Automatic configuration of SAT solvers
Hutter, HH, Stützle (2007); Hutter, HH, Leyton-Brown, Stützle (2009); Hutter, Lindauer, Balint, Bayless, HH, Leyton-Brown (in preparation)
Goal: Find parameter settings of a SAT solver that achieve optimised performance on a specific distribution of instances.
Approaches (applicable to arbitrary problems and solvers):
- stochastic local search in configuration space (ParamILS)
- sequential model-based optimisation (SMAC)
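The local-search approach can be illustrated with a minimal iterated-local-search sketch over a toy configuration space. All parameter names and the cost function below are synthetic stand-ins for real solver parameters and measured runtimes; this is not ParamILS itself, just the core idea of perturb-then-descend over configurations:

```python
import random

# Toy configuration space: each configuration assigns a value to each
# parameter. cost() stands in for measured solver runtime on a benchmark
# set (purely synthetic here); lower is better.
PARAMS = {"restart": [50, 100, 200], "decay": [0.75, 0.9, 0.95], "phase": [0, 1]}

def cost(cfg):
    # Synthetic surrogate for average running time on a benchmark set.
    return abs(cfg["restart"] - 100) / 100 + abs(cfg["decay"] - 0.9) + cfg["phase"]

def neighbours(cfg):
    # One-exchange neighbourhood: change a single parameter's value.
    for p, values in PARAMS.items():
        for v in values:
            if v != cfg[p]:
                n = dict(cfg)
                n[p] = v
                yield n

def iterated_local_search(start, restarts=20, seed=0):
    rng = random.Random(seed)
    best = start
    for _ in range(restarts):
        # Perturb the incumbent, then descend to a local optimum.
        cur = dict(best)
        p = rng.choice(list(PARAMS))
        cur[p] = rng.choice(PARAMS[p])
        improved = True
        while improved:
            improved = False
            for n in neighbours(cur):
                if cost(n) < cost(cur):
                    cur, improved = n, True
                    break
        if cost(cur) < cost(best):
            best = cur
    return best

default = {"restart": 50, "decay": 0.75, "phase": 1}
tuned = iterated_local_search(default)
print(tuned, cost(tuned))
```

On this separable toy cost, the descent reaches the global optimum; on real solvers, the same loop is driven by actual runtime measurements, which is where most of the engineering effort lies.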
SLIDE 5 Lingeling on software verification
(2013 Configurable SAT Solver Competition data)
average running time 3.32 → 0.16 CPU sec; rank 4 → 1 on Industrial SAT+UNSAT
SLIDE 6 Clasp-3.0.4-p8 on N-Rooks
(2014 Configurable SAT Solver Competition data)
PAR10 705 → 5 CPU sec; rank 5 → 1 on Industrial SAT+UNSAT
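PAR10 (penalised average runtime) is the standard metric here: runs that finish within the cutoff count their actual time, and timed-out runs are penalised at 10 times the cutoff. A minimal sketch with hypothetical runtimes:

```python
def par10(runtimes, cutoff):
    # Penalised average runtime: runs exceeding the cutoff (or None,
    # marking a timeout) are counted as 10 * cutoff.
    penalised = [t if t is not None and t <= cutoff else 10 * cutoff
                 for t in runtimes]
    return sum(penalised) / len(penalised)

# Hypothetical runs with a 300 CPU sec cutoff: two solved, one timeout.
print(par10([10.0, 50.0, None], cutoff=300))  # (10 + 50 + 3000) / 3 = 1020.0
```

The heavy penalty explains why configuration can shrink PAR10 so dramatically: eliminating even a few timeouts dominates the score.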
SLIDE 7 Knuth’s sat13 on diverse set of instances
(very easy to medium; trained on very easy only)
[Scatter plot: running time optimised vs. running time default, both in gmems, log-log scale from 10^-3 to 10^1]
mean running time 0.572 → 0.402 gmems; geometric average speedup: 1.414-fold
SLIDE 8 Knuth’s sat13 on diverse set of instances
(TAOCP testing instances; trained on very easy)
[Scatter plot: running time optimised vs. running time default, both in gmems, log-log scale from 10^-3 to 10^3]
mean running time 47.4 → 36.9 gmems; geometric average speedup: 1.357-fold
SLIDE 9 Citations to key publications on algorithm configuration
[Plot: citation counts over time for ParamILS (Hutter et al. 09), SMAC (Hutter et al. 11), I/F-Race (Balaprakash et al. 07), GGA (Ansotegui et al. 09); data from Google Scholar]
SLIDE 10 (Part 2)
SLIDE 11 Automatic per-instance solver selection
Xu, Hutter, HH, Leyton-Brown (2007–2012)
Goal: Given a set S of SAT solvers and an instance i to be solved, select the solver from S that can be expected to perform best on i.
Approaches (applicable to arbitrary problems and solvers):
- for each solver, learn a function that predicts solver performance from instance features; select the solver with the best predicted performance (SATzilla 2007–2009)
- use a cost-sensitive classification method to select a solver based on instance features (SATzilla 2011–present)
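The regression-based variant can be sketched as follows. This is illustrative only: one synthetic feature, two hypothetical solvers "A" and "B", and a plain least-squares line in place of SATzilla's actual feature set and models:

```python
# Per-instance selection sketch: for each solver, fit a least-squares line
# mapping one instance feature (here a synthetic clause/variable ratio) to
# runtime, then pick the solver with the best predicted runtime.

def fit_line(xs, ys):
    # Ordinary least squares for y = a + b * x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda x: a + b * x

# Synthetic training data: solver A is fast on low-ratio instances,
# solver B on high-ratio ones.
ratios    = [3.0, 3.5, 4.0, 4.5, 5.0]
runtime_a = [1.0, 2.0, 4.0, 8.0, 16.0]
runtime_b = [16.0, 8.0, 4.0, 2.0, 1.0]

models = {"A": fit_line(ratios, runtime_a), "B": fit_line(ratios, runtime_b)}

def select(feature):
    # Pick the solver with the lowest predicted runtime.
    return min(models, key=lambda s: models[s](feature))

print(select(3.2), select(4.8))  # low ratio -> A, high ratio -> B
```

The later cost-sensitive classification approach replaces the per-solver regressors with pairwise classifiers weighted by the runtime difference at stake, but the selection interface is the same: features in, chosen solver out.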
SLIDE 12 2012 SAT Challenge
Rank | Application           | Hard Combinatorial | Random (SAT only)
-----+-----------------------+--------------------+-------------------
  1  | SATzilla2012 App      | SATzilla2012 COMB  | CCASat
  2  | SATzilla2012 ALL      | SATzilla2012 ALL   | SATzilla2012 RAND
  3  | Industrial SAT Solver | ppfolio2012        | SATzilla2012 ALL
  4  | lingeling (2011)      | interactSAT_c      | Sparrow (2011)
  5  | interactSAT           | pfolioUZK          | EagleUP (2011)
  6  | glucose               | aspeed-crafted     | sattime2012
  7  | SINN                  | Clasp-crafted      | ppfolio2012
SATzilla took 2 of 3 first places, 3 of 3 second places, and 3 of 3 third places.
SLIDE 13 2012 SAT Challenge: Single best solver vs SATzilla
[Scatter plot: lingeling2011 runtime vs. SATzilla2012 APP runtime, both in CPU sec, log-log scale]
SLIDE 14 (Part 3)
SLIDE 15 Automatic construction of parallel solver portfolios (ACPP)
Lindauer, HH, Leyton-Brown, Schaub (2012; under review)
Goal: Given one or more parametrised SAT solvers, build a parallel portfolio (from solvers and configurations) with optimised performance on a specific distribution of instances.
Approaches (applicable to arbitrary problems and solvers):
- configure in the joint parameter space
- iteratively build the portfolio by greedily adding components that achieve the largest performance improvement
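The greedy approach can be sketched as follows, under simplifying assumptions: synthetic runtime data, and idealised parallel execution in which the portfolio's time on an instance is the minimum over its components, with no overhead:

```python
# Greedy parallel-portfolio construction over synthetic runtime data.

runtimes = {  # configuration -> runtime on each of 4 training instances
    "cfg1": [1, 9, 9, 9],
    "cfg2": [9, 1, 9, 9],
    "cfg3": [9, 9, 1, 1],
    "cfg4": [2, 2, 2, 2],
}

def portfolio_time(components):
    # Total (idealised) runtime of the portfolio over all instances:
    # each instance is solved by the fastest component.
    n_instances = len(next(iter(runtimes.values())))
    return sum(min(runtimes[c][i] for c in components)
               for i in range(n_instances))

def greedy_portfolio(k):
    # Add, one core at a time, the component giving the biggest improvement.
    chosen = []
    for _ in range(k):
        best = min(runtimes, key=lambda c: portfolio_time(chosen + [c]))
        chosen.append(best)
    return chosen

print(greedy_portfolio(2))
```

On this data the greedy loop first picks the robust all-rounder (cfg4), then the specialist that complements it best (cfg3), which mirrors the intuition behind the approach: later additions are chosen for marginal contribution, not standalone strength.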
SLIDE 16 Parallel portfolio based on Lingeling (v.276)
[Plot: performance on 2012 SAT Competition Application instances]
SLIDE 17 Parallel portfolio based on clasp (v.2.0.2)
[Plot: performance on 2012 SAT Competition Application instances]
SLIDE 18 Sequential vs parallel solvers
[Plot: performance on 2012 SAT Competition Application instances]
SLIDE 19 (Part 4)
SLIDE 20 Scaling of running time with instance size
Mu & Hoos (IJCAI-15)
Goal: Study the empirical time complexity of solving phase-transition random 3-SAT instances using high-performance SAT solvers.
Methodology:
- fit parametric models to running time data
- challenge the models by extrapolation
- assess the models using bootstrap confidence intervals
(Hoos 2009; Hoos & Stützle 2014)
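The model-fitting step can be sketched via least squares in log space, a simplification of the full methodology. The data below is synthetic and exactly exponential by construction, so the fit recovers the generating law:

```python
import math

# An exponential scaling model t(n) = a * b**n is linear in log t vs. n;
# a polynomial model t(n) = a * n**c would be linear in log t vs. log n.

def fit_line(xs, ys):
    # Ordinary least squares for y = intercept + slope * x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

ns = [100, 150, 200, 250, 300]            # support instance sizes
times = [1e-4 * 1.03 ** n for n in ns]    # synthetic running times

log_a, log_b = fit_line(ns, [math.log(t) for t in times])
a, b = math.exp(log_a), math.exp(log_b)   # recovered model parameters

# "Challenge" the model by extrapolating well beyond the support sizes.
pred_500 = a * b ** 500
print(f"t(n) = {a:.3e} * {b:.5f}^n, predicted t(500) = {pred_500:.1f}")
```

In the actual methodology, both model families are fitted on small ("support") sizes, their extrapolations are compared against measurements at larger ("challenge") sizes, and bootstrap resampling over the runtime data yields confidence intervals for the predictions.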
SLIDE 21 Scaling of running time for WalkSAT/SKC
[Scatter plot: running time (CPU sec) vs. n (100–1000, log-log scale), showing support data and the fitted models:]
- Exp. model: 6.89157e-04 × 1.00798^n
- Poly. model: 8.83962e-11 × n^3.18915
- Exp. model bootstrap intervals
- Poly. model bootstrap intervals
SLIDE 22 Scaling of running time for March hi
[Scatter plot: CPU time (sec) vs. n (100–600, log scale), showing support and challenge data for satisfiable instances and the fitted model:]
- Exp. model for sat. instances: 8.33113e-06 × 1.03119^n
- Exp. model bootstrap intervals for sat. instances
SLIDE 23 Main findings:
- median running times of SLS-based solvers (WalkSAT/SKC, BalancedZ, probSAT) scale polynomially (degree ≈ 3)
- median running times of complete solvers (kcnfs, march_hi, march_br) scale exponentially (base ≈ 1.03)
- scaling models for SLS-based solvers are very similar, but the march variants scale better than kcnfs
SLIDE 24 Take-home message:
Machine learning techniques ...
- let us model & predict solver behaviour
- help us build better solvers
- give us interesting insights into solver performance
If you care about SAT solving in practice: use these techniques!
If you only care about theorems: you may still find inspiration in the results thus obtained.
SLIDE 25
Holger H. Hoos: Empirical Algorithmics
Cambridge University Press (nearing completion)
SLIDE 26 References (1):
– Hoos, H. H. (2009). A bootstrap approach to analysing the scaling of empirical run-time data with problem size. Technical Report TR-2009-16, University of British Columbia, June 2009. http://www.cs.ubc.ca/hoos/Publ/Hoos09.pdf
– Hoos, H. H.; Leyton-Brown, K.; Schaub, T.; Schneider, M. (2012). Algorithm configuration for portfolio-based parallel SAT-solving. Proceedings of the First Workshop on Combining Constraint Solving with Mining and Learning (CoCoMile 2012), pp. 7–12.
– Hoos, H. H.; Stützle, T. (2014). On the empirical scaling of run-time for finding optimal solutions to the travelling salesman problem. European Journal of Operational Research 238: 87–94.
– Hutter, F.; Hoos, H. H.; Leyton-Brown, K.; Stützle, T. (2009). ParamILS: An automatic algorithm configuration framework. Journal of Artificial Intelligence Research 36: 267–306.
– Hutter, F.; Hoos, H. H.; Stützle, T. (2007). Automatic algorithm configuration based on local search. Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI 2007), pp. 1152–1157.
– Hutter, F.; Lindauer, M.; Balint, A.; Bayless, S.; Hoos, H. H.; Leyton-Brown, K. (2015). The Configurable SAT Solver Challenge (CSSC). Under review.
– Hutter, F.; Xu, L.; Hoos, H. H.; Leyton-Brown, K. (2014). Algorithm runtime prediction: Methods & applications. Artificial Intelligence, pp. 79–111, January 2014.
– Mu, Z.; Hoos, H. H. (2015). On the empirical time complexity of random 3-SAT at the phase transition. Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI-15), to appear. (Preprint available at http://www.cs.ubc.ca/~hoos/Publ/MuHoo15-preprint.pdf.)
SLIDE 27 References (2):
– Xu, L.; Hutter, F.; Hoos, H. H.; Leyton-Brown, K. (2007). SATzilla-07: The design and analysis of an algorithm portfolio for SAT. Proceedings of the 13th International Conference on Principles and Practice of Constraint Programming (CP 2007), pp. 712–727.
– Xu, L.; Hutter, F.; Hoos, H. H.; Leyton-Brown, K. (2009). SATzilla2009: An automatic algorithm portfolio for SAT. SAT Competition 2009: Solver Descriptions.
– Xu, L.; Hutter, F.; Hoos, H. H.; Leyton-Brown, K. (2012). Evaluating component solver contributions to portfolio-based algorithm selectors. Proceedings of the 15th International Conference on Theory and Applications of Satisfiability Testing (SAT 2012), pp. 228–241.
– Xu, L.; Hutter, F.; Hoos, H. H.; Leyton-Brown, K. (2012). SATzilla2012: Improved algorithm selection based on cost-sensitive classification models. SAT Challenge 2012: Solver Descriptions.