
Concurrent Clause Strengthening

Siert Wieringa and Keijo Heljanko

Department of Information and Computer Science
Aalto University, School of Science and Technology
siert.wieringa@aalto.fi
July 10, 2013


Introduction

◮ Modern SAT solvers rely on many techniques outside the core CDCL search procedure.

◮ For example, preprocessing and inprocessing, but also conflict clause strengthening.

◮ The solver must decide when, and to what extent, it should apply such techniques.

◮ Instead of interleaving additional reasoning with search, both can be executed concurrently.


Using concurrency

◮ Avoids difficult-to-design heuristics for deciding when to switch between tasks.

◮ Exploits the availability of multi-core hardware.

◮ Provides a true division of work without dividing the search space.

◮ Concurrent clause strengthening yields surprisingly consistent performance improvements.


Clause strengthening

◮ Strengthening a clause means removing redundant literals.

  Given: a clause c such that F ⊨ c.
  Find: a subclause c′ ⊆ c such that F ⊨ c′.

◮ Finding a c′ of minimal length is an NP-hard problem.

◮ MiniSAT minimizes all conflict clauses with respect to the clauses used in their derivation.
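As a toy illustration (not from the paper), a brute-force entailment check confirms that a strict subclause can still be implied by the formula. The formula, function name, and encoding below are invented for this example; literals are signed integers as in DIMACS.

```python
from itertools import product

def implies(clauses, c, variables):
    """Brute-force check that F entails c: every total assignment
    satisfying all clauses of F also satisfies c.
    Exponential, for toy formulas only."""
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        def sat(clause):
            return any(model[abs(l)] == (l > 0) for l in clause)
        if all(sat(cl) for cl in clauses) and not sat(c):
            return False
    return True

# F = (-1 v 2) and (-2 v 3), i.e. 1 implies 2 and 2 implies 3.
F = [[-1, 2], [-2, 3]]
print(implies(F, [-1, -2, 3], [1, 2, 3]))  # True: the full clause is entailed
print(implies(F, [-1, 3], [1, 2, 3]))      # True: so is the strict subclause
```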


The solver-reducer architecture

[Diagram: the SOLVER and REDUCER threads, connected by the work set and the result queue.]

◮ Two concurrently executing threads.

◮ The SOLVER is a conventional CDCL solver.

◮ The REDUCER provides a clause strengthening algorithm.

◮ Communication occurs solely by passing clauses through the work set and the result queue.


Basic operation


◮ Whenever the SOLVER learns a clause it writes a copy of that clause to the work set.

◮ The REDUCER reads its input clauses from the work set, and writes clauses it has strengthened to the result queue.

◮ The SOLVER frequently introduces clauses from the result queue into its learnt clause database.

◮ The REDUCER has its own copy of the problem clauses as well as its own learnt clause database.
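The message flow between the two threads can be sketched with standard queues. This is an illustrative mock-up, not the paper's implementation: all names are invented, and the "strengthening" stub merely drops the last literal, where the real REDUCER runs its propagation-based algorithm.

```python
import queue
import threading

def run_pipeline(learnt_clauses):
    """Mock solver-reducer loop: the calling thread plays the SOLVER,
    pushing learnt clauses into the work set; a second thread plays
    the REDUCER, writing 'strengthened' clauses to the result queue."""
    work = queue.Queue()     # stands in for the (bounded, sorted) work set
    results = queue.Queue()  # unbounded FIFO result queue

    def reducer():
        while True:
            c = work.get()
            if c is None:            # sentinel: the solver is done
                return
            # Stub strengthening: just drop the last literal.
            results.put(c[:-1] if len(c) > 1 else c)

    t = threading.Thread(target=reducer)
    t.start()
    for c in learnt_clauses:         # the SOLVER's side of the loop
        work.put(c)
    work.put(None)
    t.join()
    return [results.get() for _ in range(results.qsize())]

print(run_pipeline([[1, 2, 3], [4]]))  # [[1, 2], [4]]
```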


The REDUCER’s algorithm


◮ Assign a literal of input clause c to false, then perform unit propagation.

◮ Remove from c literals that became assigned false during unit propagation.

◮ Repeat until all literals of c are assigned false, or a conflict arises.

◮ If a conflict arises then analyze, learn, and return the subclause c′ ⊆ c containing the literals “causing” the conflict.

◮ Otherwise, add c to the learnt clause database and return c.
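A minimal sequential sketch of this loop, invented for these notes: a real REDUCER uses watched literals and proper conflict analysis, whereas this naive version returns all literals assigned so far on a conflict rather than the exact conflict subclause.

```python
def unit_propagate(clauses, assignment):
    """Naive unit propagation over signed-integer literals.
    `assignment` is the set of literals currently true.
    Returns (assignment, conflict)."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                       # clause already satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return assignment, True        # clause falsified: conflict
            if len(unassigned) == 1:
                assignment.add(unassigned[0])  # unit clause: propagate
                changed = True
    return assignment, False

def strengthen(clauses, c):
    """Assign the literals of c to false one at a time, propagating
    after each; drop literals propagation already falsified, and stop
    early when a conflict arises."""
    assignment, kept = set(), []
    for lit in c:
        if -lit in assignment:
            continue          # lit already false: redundant, drop it
        if lit in assignment:
            kept.append(lit)  # lit was forced true: negating it conflicts
            return kept
        kept.append(lit)
        assignment.add(-lit)
        assignment, conflict = unit_propagate(clauses, assignment)
        if conflict:
            return kept
    return kept

# F encodes 1 -> 2 and 2 -> 3; the "learnt" clause (-1 v -2 v 3)
# strengthens to its strict subclause (-1 v 3).
print(strengthen([[-1, 2], [-2, 3]], [-1, -2, 3]))  # [-1, 3]
```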


The work set


◮ As the REDUCER learns, it becomes stronger but slower.

◮ The REDUCER usually cannot keep up with the supply of clauses from the SOLVER.

◮ How should the work set be implemented?

◮ FIFO: tends to deliver clauses to the REDUCER that are old, and often no longer interesting.

◮ LIFO: strong clauses may never be delivered, as they quickly shift backwards in the queue.


Sorting the work set


◮ We can use clause length or LBD as an approximation of clause quality.

◮ As the average length changes, clauses that were relatively long when learnt may seem short when they are old.

◮ Solution: limit the capacity.

◮ If the SOLVER adds a clause to a full work set then this clause replaces the oldest clause.

◮ If the REDUCER requests a clause from a non-empty work set it receives the best clause.
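The bounded work set described above can be sketched as follows. This is a simplified illustration with invented names: it ranks by clause length as MiniRed does (GlucoRed uses LBD instead), and it uses a single coarse lock rather than the actual synchronization of the implementation.

```python
import threading
from collections import deque

class WorkSet:
    """Bounded work set: inserting into a full set discards the
    oldest clause, and removal returns the best (shortest) clause."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.clauses = deque()          # oldest clause at the left
        self.lock = threading.Lock()

    def add(self, clause):
        with self.lock:
            if len(self.clauses) == self.capacity:
                self.clauses.popleft()  # full: discard the oldest clause
            self.clauses.append(clause)

    def get(self):
        with self.lock:
            if not self.clauses:
                return None
            best = min(self.clauses, key=len)  # shortest = best quality
            self.clauses.remove(best)
            return best
```

For example, with capacity 2, adding three clauses discards the first, and `get` then returns the shortest remaining clause.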


Keeping it simple


◮ The REDUCER only returns clauses that are strict subclauses of its inputs.

◮ The REDUCER does not share its learnt clauses.

◮ The REDUCER assigns literals in the order they appear in the input clause.

◮ The SOLVER has no mechanism for deleting clauses for which a subclause is found in the result queue.

◮ The result queue is a simple unbounded FIFO queue.


Implementation

◮ MiniRed is based on MiniSAT 2.2.0.

◮ GlucoRed is based on Glucose 2.1 / 2.2.

◮ The base solvers were modified as little as possible.

◮ The code added to both solvers is identical, except that MiniRed sorts its work set by clause length, while GlucoRed sorts its work set by LBD.


Average clause length experiment

[Diagram: clause flow through the solver-reducer pipeline, annotated with average clause lengths (56.8, 91.3, 38.1, 27.6, 15.3, 32.9, 32.9); 34.6% of clauses are discarded from the work set and 30.2% are not reduced.]

◮ Average over 367 benchmarks.

◮ MiniRed with default settings.

◮ Work set capacity of 1000 clauses.


Performance testing

◮ The set Competition contains 547 application track benchmarks (Competition 2011 / Challenge 2012).

◮ The set Simplified contains 501 benchmarks resulting from running SatElite on the Competition set.

◮ These slides present results only for the Simplified set.

◮ 900 second wall clock time limit.

◮ 1800 second CPU time limit.


MiniRed scatter plot

[Scatter plot: MiniRed vs. MiniSAT wall clock time (s), both axes logarithmic from 1 to 1000, with x/2 and x/4 reference lines; satisfiable and unsatisfiable instances marked separately.]


GlucoRed scatter plot

[Scatter plot: GlucoRed vs. Glucose wall clock time (s), both axes logarithmic from 1 to 1000, with x/2 and x/4 reference lines; satisfiable and unsatisfiable instances marked separately.]


UNSAT benchmarks - wall clock time cactus

[Cactus plot: UNSAT instances solved vs. wall clock time (s, up to 900). Instances solved: MiniSAT 164, MiniRed 222, Glucose 220, GlucoRed 237.]


UNSAT benchmarks - CPU time cactus

[Cactus plot: UNSAT instances solved vs. CPU time (s, up to 1800). Instances solved: MiniSAT 191, MiniRed 222, Glucose 232, GlucoRed 237.]


SAT benchmarks - wall clock time cactus

[Cactus plot: SAT instances solved vs. wall clock time (s, up to 900). Instances solved: MiniSAT 150, MiniRed 159, Glucose 155, GlucoRed 147.]


Results discussion

◮ Concurrent clause strengthening is strong on unsatisfiable benchmarks.

                       GlucoRed   PeneLoPe
                       2-core     2-core   4-core   8-core
  UNSAT  Wall clock      237        227      231      247
         CPU             237        227      221      217
  SAT    Wall clock      147        142      160      164
         CPU             149        142      154      149

◮ Portfolio solvers exhibit orthogonal behavior.

◮ The two approaches can be combined!


Conclusions

◮ Concurrent clause strengthening is a simple technique, providing significant performance improvements.

◮ It is particularly strong on unsatisfiable benchmarks.

◮ It uses concurrency to aid CDCL search, rather than to parallelize it.

◮ The basic idea can be exploited in many ways, e.g. concurrent inprocessing.


Availability

◮ Source code for MiniRed and GlucoRed is available from: http://bitbucket.org/siert

◮ MiniRed and GlucoRed have been integrated in ZZ: http://bitbucket.org/niklaseen

◮ The ZZ framework by Niklas Eén provides the Bip model checker, including e.g. the PDR and BMC algorithms.


Capacity of the work set

[Plot: number of clauses in the work set (up to 1000) against clause insertions (up to 140,000), for three benchmarks labelled c7nidw, f8b, and IBM in the original figure.]

◮ The default work set capacity is 1000 clauses.


Average clause length experiment (2)

[Diagram: clause flow as in the previous experiment, comparing default settings with clause minimization disabled; discarded from the work set: 34.6% vs. 40.3%; not reduced: 30.2% vs. 16.4%. Average clause lengths annotated: 56.8, 32.9, 143.4, 27.4, 38.1, 44.4, 32.9, 27.4, 91.3, 27.6, 15.3, 287.9, 25.0, 12.9.]

◮ Average over 367 benchmarks.

◮ Clause minimization disabled in the SOLVER.

◮ The total number of clauses generated increased by 17%.