Grid-Based SAT Solving with Iterative Partitioning and Clause Learning
Antti E. J. Hyvärinen, Tommi Junttila, and Ilkka Niemelä
Department of Information and Computer Science, Aalto University, School of Science, Finland
September 16, 2011
Motivation
◮ We study the solving of challenging SAT instances
◮ Using distributed computing with approx. 64 cores and a 1 h time limit on each running job
◮ In a setting which adapts straightforwardly to conflict-driven clause learning SAT solvers
[Scatter plot: Part-Tree-Learn time vs. MiniSat 2.2.0 time, both axes 5000–20000 s]
◮ The system is based on MiniSat 2.2.0
◮ In the plot, × marks sat instances and the other symbol unsat instances
Randomness in SAT Solver Runtimes
[Plot: cumulative run time distributions P(T ≤ t) against time t (in seconds, log scale)]
Modern SAT solvers exhibit high variance in the run time for solving a single SAT instance. The run time distributions for two instances (green and red) are shown above.
Algorithm Portfolios
Many modern parallel SAT solvers are based on having several algorithms compete on solving the same instance.
[Plot: simulated speedup against the number of CPUs N, for N up to 300]
The obtained speedup depends heavily on the formula (no learned-clause sharing in the simulated results above).
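A portfolio speedup simulation like the one above can be sketched as follows: a portfolio of N solvers races on the same instance, so it finishes at the minimum of N independent draws from the run time distribution. The lognormal distribution below is a hypothetical stand-in; the real distributions are instance-specific.

```python
import random

def portfolio_speedup(n_cpus, trials=2000, seed=0):
    """Estimate the expected speedup of an algorithm portfolio:
    n_cpus independent runs race on one instance, so the portfolio
    finishes at the minimum of n_cpus draws from the distribution."""
    rng = random.Random(seed)
    # Hypothetical heavy-tailed run time distribution (lognormal).
    draw = lambda: rng.lognormvariate(5.0, 2.0)
    seq = sum(draw() for _ in range(trials)) / trials            # 1 CPU
    par = sum(min(draw() for _ in range(n_cpus))
              for _ in range(trials)) / trials                   # N CPUs
    return seq / par

for n in (1, 4, 16, 64):
    print(n, round(portfolio_speedup(n), 1))
```

With a heavy-tailed distribution the minimum of many draws is far below the mean, which is why portfolios help at all; how fast the speedup grows with N depends entirely on the distribution, matching the slide's observation.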
Divide-and-conquer approaches
◮ Algorithm portfolios do not always scale
◮ Dividing the search space has been suggested as an alternative
◮ A formula φ can be divided into n derived formulas φ ∧ Co1, . . . , φ ∧ Con s.t. φ ≡ (φ ∧ Co1) ∨ . . . ∨ (φ ∧ Con).
◮ For example, pick log2 n variables from φ and conjoin them in all polarities with φ. If n = 4, then
  Co1 = (x1 ∧ x2), Co2 = (x1 ∧ ¬x2), Co3 = (¬x1 ∧ x2), Co4 = (¬x1 ∧ ¬x2)
[Diagram: partition tree with φ at the root and children φ ∧ Co1, φ ∧ Co2, φ ∧ Co3, φ ∧ Co4]
Formulas φ ∧ Coi can be solved independently in parallel.
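The polarity-based splitting can be sketched as below; the DIMACS-style encoding of clauses as lists of signed integers is an assumption for illustration.

```python
from itertools import product

def derive_formulas(cnf, split_vars):
    """Split a CNF formula (list of integer clauses, DIMACS-style)
    into 2**k derived formulas by conjoining every polarity
    combination of the k chosen variables as unit clauses."""
    derived = []
    for polarity in product((1, -1), repeat=len(split_vars)):
        constraint = [[s * v] for s, v in zip(polarity, split_vars)]
        derived.append(cnf + constraint)   # phi ∧ Co_i
    return derived

# phi = (x1 ∨ x2) split on x1, x2 yields four derived formulas
parts = derive_formulas([[1, 2]], [1, 2])
print(len(parts))   # 4
print(parts[0])     # [[1, 2], [1], [2]]
```

The disjunction of the derived formulas is equivalent to φ because the constraints enumerate all polarity combinations, so each can be handed to an independent solver.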
Risks in Divide-and-Conquer
In particular for unsat instances the divide-and-conquer approach might result in worse speedup than the algorithm portfolio approach. Suppose that a formula φ with run time distribution q(t) is divided into n derived formulas φi, each having run time distribution q(n^α t), 0 ≤ α ≤ 1.
[Left plot: expected run time ET (in s) against the number of derived formulas n, for α = 0.5, 0.6, 0.7, 0.8, 0.9. Right plot: run time distributions P(T ≤ t) of the original formula, a portfolio, div&conq with q(t/n), and experimental div&conq]
◮ In pathological cases the run time increases (left)
◮ In practice, divide-and-conquer is no better than a portfolio (right)
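The pathological behaviour can be reproduced with a small simulation: for an unsat instance all n derived formulas must be solved, so the approach finishes at the maximum of n draws from the scaled distribution q(n^α t). The heavy-tailed base distribution below is an assumed stand-in, not the paper's data.

```python
import random

def expected_dc_runtime(n, alpha, trials=3000, seed=1):
    """Estimate the divide-and-conquer expectation from the slide:
    each of the n derived formulas runs n**alpha times faster in
    distribution, but (for unsat) all n must finish, so one trial
    takes the maximum of the n scaled draws."""
    rng = random.Random(seed)
    # Hypothetical heavy-tailed base distribution q.
    draw = lambda: rng.lognormvariate(0.0, 2.0)
    total = 0.0
    for _ in range(trials):
        total += max(draw() / n ** alpha for _ in range(n))
    return total / trials

# With alpha = 0.5 and a heavy tail, the expectation grows with n
for n in (1, 4, 16, 64):
    print(n, round(expected_dc_runtime(n, 0.5), 2))
```

The maximum of many heavy-tailed draws grows faster than the n^α speedup shrinks it, which is exactly the "increased expected run time" risk the slide warns about.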
On the Iterative Partitioning and Learning
The increase in run time can be avoided by running the original formula and the derived formulas simultaneously. This can be generalized recursively.
[Diagram: partition trees with the root timed out (t/o) and derived formulas 1–4, several of them unsat]
◮ Initially, say, 8 formulas are running simultaneously
◮ As unsat or timeout results are obtained, it would be nice to also collect the learned clauses to further constrain subsequent derived formulas, as indicated by the arrows (right tree).
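The status bookkeeping for such a partition tree can be sketched as follows; the tuple encoding of nodes is a hypothetical simplification of the scheme.

```python
def tree_result(node):
    """Determine the status of a partition-tree node, encoded as
    (own solver status, list of child nodes). A node is decided if
    its own run finished, if any derived formula is sat, or if all
    children (which together cover its search space) are unsat."""
    status, children = node
    if status in ("sat", "unsat"):
        return status
    kids = [tree_result(c) for c in children]
    if "sat" in kids:
        return "sat"     # a model of phi ∧ Co_i is a model of phi
    if kids and all(k == "unsat" for k in kids):
        return "unsat"   # the children partition the search space
    return "unknown"

# Root timed out, but the derived formulas were all shown unsat
root = ("t/o", [("unsat", []), ("t/o", [("unsat", []), ("unsat", [])])])
print(tree_result(root))   # unsat
```

This is why running the original formula and the derived formulas simultaneously avoids the increased-run-time problem: whichever finishes first, root or a full level of the tree, decides the instance.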
Learned Clauses
As the constraints used in divide-and-conquer are not (necessarily) logical consequences of the original formula, the clauses learned by the solvers might not be either.
◮ Let φ = (x1 ∨ x2 ∨ x3) ∧ (x2 ∨ ¬x3) and Co1 = ¬x1.
◮ Then a modern SAT solver simplifies φ ∧ ¬x1 to (x2 ∨ x3) ∧ (x2 ∨ ¬x3)
◮ Now branching on ¬x2 results in the learned clause (x2), which is not a logical consequence of φ, as x1, ¬x2, ¬x3 satisfies φ.
[Diagram: conflict graph for the decision ¬x2@1, implying both x3 and ¬x3 and hence the conflict λ]
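The claim on this slide is small enough to verify by brute force; the helper below is an illustrative sketch using the same signed-integer clause encoding as before.

```python
from itertools import product

def is_consequence(clause, formula, n_vars):
    """Brute-force check: clause is a logical consequence of the
    formula iff every model of the formula satisfies the clause."""
    def sat(cls, assign):
        return any(assign[abs(l)] == (l > 0) for l in cls)
    for bits in product((False, True), repeat=n_vars):
        assign = dict(enumerate(bits, start=1))    # var -> value
        if all(sat(c, assign) for c in formula) and not sat(clause, assign):
            return False   # a model of the formula falsifies the clause
    return True

phi = [[1, 2, 3], [2, -3]]            # (x1∨x2∨x3) ∧ (x2∨¬x3)
print(is_consequence([2], phi, 3))            # False: x1,¬x2,¬x3 is a model
print(is_consequence([2], phi + [[-1]], 3))   # True: (x2) follows from phi ∧ ¬x1
```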
Assumption Tagging
One approach is to tag constraints with new assumption literals which will be forced false when solving the derived formula.
◮ Let a be a new variable not occurring in φ, and Co1 = (a ∨ ¬x1)
◮ Simplification can now be avoided, and φ ∧ Co1 = (x1 ∨ x2 ∨ x3) ∧ (x2 ∨ ¬x3) ∧ (a ∨ ¬x1).
◮ The solver may now learn either (x1 ∨ x2) or (a ∨ x2), consequences of φ and φ ∧ Co1, respectively.
[Diagram: conflict graph with the assumption ¬a@1 and the decision ¬x2@2, implying ¬x1, x3, and ¬x3 and hence the conflict λ]
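The tagging transformation itself is mechanical and can be sketched as below; the function names and the choice to allocate assumption variables above n_vars are assumptions for illustration, not the system's actual interface.

```python
def tag_constraints(cnf, constraints, n_vars):
    """Tag each partitioning constraint clause with a fresh
    assumption literal a_i. Solving under the assumption ¬a_i
    activates the constraint without letting the preprocessor
    simplify it away; learned clauses mentioning a_i are sound only
    for the derived formula, a_i-free ones are sound for phi."""
    tagged, assumptions = list(cnf), []
    for i, clause in enumerate(constraints):
        a = n_vars + 1 + i                 # fresh variable
        tagged.append([a] + clause)        # (a ∨ Co_i clause)
        assumptions.append(-a)             # force a false when solving
    return tagged, assumptions

phi = [[1, 2, 3], [2, -3]]
tagged, assume = tag_constraints(phi, [[-1]], 3)
print(tagged)   # [[1, 2, 3], [2, -3], [4, -1]]
print(assume)   # [-4]
```

This matches the slide's example: the constraint ¬x1 becomes the clause (a ∨ ¬x1), solved under the assumption ¬a.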
Results on Assumption Tagging
Clause sharing with assumption tagging does not give us speedup!
[Two scatter plots comparing MiniSat 2.2.0 with assumption tagging against MiniSat 2.2.0 with no learned clauses: left, decisions (log scale); right, run time (in s)]
◮ Max. 100 000 literals worth of learned clauses per instance
◮ Left: decisions per derived formula decrease
◮ Right: run time is roughly equal
◮ Top line: many memory-outs
Flag Tagging
To allow for both the simplification and clause sharing, one can set a flag on the constraints. The flag is then inherited by the learned clause at conflict analysis.
◮ Let φ = (x1 ∨ x2 ∨ x3) ∧ (x2 ∨ ¬x3) and Co1 = ¬x1.
◮ Then a SAT solver simplifies φ ∧ ¬x1 to (x2 ∨ x3)u ∧ (x2 ∨ ¬x3), where u stands for "unsafe"
◮ Branching on ¬x2 now results in (x2)u.
[Diagram: conflict graph for the decision ¬x2@1, implying both x3 and ¬x3 and hence the conflict λ]
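The flag inheritance can be illustrated with a single resolution step of the kind conflict analysis performs; the pair encoding (literals, unsafe flag) is an assumed simplification of the real clause representation.

```python
def resolve(c1, c2, pivot):
    """One resolution step as used in conflict analysis. A clause is
    a (literal list, unsafe flag) pair; the resolvent inherits the
    'unsafe' flag if either antecedent descends from a partitioning
    constraint, so unsafe learned clauses are never shared as
    consequences of the original formula phi."""
    lits1, unsafe1 = c1
    lits2, unsafe2 = c2
    lits = sorted(set(l for l in lits1 + lits2 if abs(l) != pivot))
    return (lits, unsafe1 or unsafe2)      # flag is inherited

# (x2 ∨ x3) is unsafe after simplification with Co1 = ¬x1;
# resolving it with the safe clause (x2 ∨ ¬x3) on x3 gives (x2)u
learned = resolve(([2, 3], True), ([2, -3], False), pivot=3)
print(learned)   # ([2], True)
```

A learned clause whose flag stays clear was derived entirely from φ's own clauses and can therefore be shared across the whole partition tree.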
Results on Flag Tagging
[Two scatter plots comparing MiniSat 2.2.0 with flag tagging against MiniSat 2.2.0 with no learned clauses: left, decisions (log scale); right, run time (in s)]
Again max. 100 000 literals worth of learned clauses
◮ While the number of independent learned clauses is smaller, the approach now also gives speed-up
Experimental Setup
[Diagram: partition tree of running jobs (U); learned clauses c with φ ⊨ c are collected from unsat and timed-out nodes]
Experimental Setup
[Diagram: as unsat and timeout results arrive, the tree is simplified and the collected clauses c with φ ⊨ c constrain the newly submitted jobs (U)]
Results (1)
[Two scatter plots of solving times (in s): left, Part-Tree-Learn against Part-Tree; right, Part-Tree-Learn against CL-SDSAT; both axes 5000–20000 s]
◮ Adding learning to the partition-tree approach gives clear speed-up and solves more instances (left fig.)
◮ The approach usually beats a clause-sharing portfolio approach based on the same solver (right fig.)
Results (2)
Unsolved instances from SAT-COMP 2009
Run times in s; — = not solved within the limit.

Name                      Type   plingeling  Part-Tree-Learn  Part-Tree
9dlx vliw at b iq8        Unsat     3256.41               —          —
9dlx vliw at b iq9        Unsat     5164.00               —          —
AProVE07-25               Unsat           —         9967.24    9986.58
dated-5-19-u              Unsat     4465.00         2522.40    4104.30
eq.atree.braun.12.unsat   Unsat           —         4691.99    5247.13
eq.atree.braun.13.unsat   Unsat           —         9972.47   12644.24
gss-24-s100               Sat       2929.92         3492.01    1265.33
gss-26-s100               Sat      18173.00        10347.41   16308.65
gus-md5-14                Unsat           —        13890.05   13466.18
ndhf xits 09 UNSAT        Unsat           —         9583.10   11769.23
rbcl xits 09 UNKNOWN      Unsat           —         9818.59    8643.21
rpoc xits 09 UNSAT        Unsat           —         8635.29    9319.52
sortnet-8-ipc5-h19-sat    Sat       2699.62         4303.93   20699.58
total-10-17-u             Unsat     3672.00         4447.26    5952.43
total-5-15-u              Unsat           —        18670.33   21467.79
Conclusions
Iterative partitioning is an approach based on search-space splitting for solving hard SAT instances, and
◮ Does not suffer from the increased expected run time problem
◮ Is faster than a comparable, efficient portfolio solver
◮ To my knowledge, the first time this has been achieved in