

SLIDE 1

IIT Bombay - CFDVS

SAT Solvers

Aditya Parameswaran and Luv Kumar, 1st June 2005 and 14th June 2005

Aditya Parameswaran, Luv Kumar: SAT Solvers, 1

SLIDE 2

Outline

  • Outline
  • The Satisfiability Problem
  • Types of Solvers
  • The basic DPLL Framework
  • Variations to the DPLL Algorithm
      – Branching Heuristics
      – Deduction Mechanism
      – Conflict Analysis and Learning
      – Other Techniques
  • Stålmarck's Algorithm
  • More Details And Examples
      – Comparison of BerkMin and Chaff on Branching Heuristics
      – Comparison of Chaff and SATO in BCP
  • Local Search
  • Phase Transitions
  • More Details on Stålmarck's Method
      – Equivalence Classes
  • References

SLIDE 3

The Satisfiability Problem

◮ Given a Boolean propositional formula, does there exist an
  assignment of values to its variables such that the formula
  evaluates to true?
◮ We consider CNF formulae, C1 ∧ C2 ∧ C3 ∧ C4 ∧ . . . , where
  each Ci is a disjunction of literals such as a, ¬b, c, . . .
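As a minimal runnable sketch of the problem (the function and encoding below are our own illustration, not taken from any solver), satisfiability can be decided by enumerating all 2^n assignments:

```python
from itertools import product

def brute_force_sat(clauses, variables):
    """Return a satisfying assignment, or None if none exists.

    Each clause is a list of literals (variable, phase), where
    phase True stands for v and phase False for ¬v."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        # the formula is satisfied iff every clause has a true literal
        if all(any(assignment[v] == phase for v, phase in clause)
               for clause in clauses):
            return assignment
    return None

# (a ∨ ¬b) ∧ (b ∨ c) ∧ (¬a ∨ ¬c)
f = [[("a", True), ("b", False)],
     [("b", True), ("c", True)],
     [("a", False), ("c", False)]]
model = brute_force_sat(f, ["a", "b", "c"])
```

This is exponential in the number of variables; the rest of the talk is about doing better in practice.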

SLIDE 4

The Satisfiability Problem

History of the SAT Problem

◮ Davis and Putnam: resolution-based procedure, 1960; Davis,
  Logemann and Loveland proposed the search-based (DPLL)
  algorithm, 1962
◮ Proved NP-complete by Cook, 1971
◮ Stålmarck's algorithm: patented, 1995
◮ Conflict-driven learning and non-chronological backtracking
  in GRASP, 1996
◮ Local Search, 1997
◮ SATO: DPLL-based, 1997
◮ Chaff: DPLL-based, 2001
◮ BerkMin: DPLL-based, 2002

SLIDE 5

Types of Solvers

Two types:

◮ Complete: find a solution or prove that none exists, e.g.
  resolution, search (DPLL), Stålmarck's method, BDDs
◮ Stochastic: cannot prove unsatisfiability, but can often find a
  solution quickly, e.g. local search

SLIDE 6

The basic DPLL Framework

Algorithm:

DPLL(formula, assignment) {
    status = deduce(formula, assignment);
    if (status == SAT)
        return SAT;
    else if (status == CONF)
        return CONF;
    else {
        v = new_variable(formula, assignment);
        a = new_assignment(formula, assignment, v, 0);
        if (DPLL(formula, a) == SAT)
            return SAT;
        else {
            a = new_assignment(formula, assignment, v, 1);
            return DPLL(formula, a);
        }
    }
}
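The recursion can be made concrete as follows (a toy illustration, not any solver's actual code); deduce() is realised as unit propagation over clauses given as lists of integer literals, DIMACS-style (v for a variable, -v for its negation):

```python
def dpll(clauses, assignment=frozenset()):
    """Return a set of true literals satisfying the clauses, or None."""
    assignment = set(assignment)
    # deduce(): repeat the unit clause rule until fixpoint or conflict
    while True:
        unit = None
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                      # clause already satisfied
            free = [lit for lit in clause if -lit not in assignment]
            if not free:
                return None                   # conflict: every literal false
            if len(free) == 1:
                unit = free[0]                # unit clause found
                break
        if unit is None:
            break
        assignment.add(unit)
    if all(any(lit in assignment for lit in clause) for clause in clauses):
        return assignment                     # SAT: every clause satisfied
    # branch on the first unassigned variable, value 0 first, then 1
    v = next(abs(l) for cl in clauses for l in cl
             if l not in assignment and -l not in assignment)
    return dpll(clauses, assignment | {-v}) or dpll(clauses, assignment | {v})
```

The `or` on the last line mirrors the two recursive calls in the pseudocode: if the 0-branch returns None, the 1-branch is tried.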

SLIDE 7

Variations to the DPLL Algorithm

Iterative Algorithm

while (true) {
    decide_next_branch();            // branching heuristics
    while (true) {
        status = deduce();           // deduction mechanism
        if (status == CONF) {
            bl = analyze_conflict(); // conflict analysis
            if (bl == 0)
                return UNSAT;
            else
                backtrack(bl);
        } else if (status == SAT)
            return SAT;
        else
            break;
    }
}

SLIDE 8

Variations to the DPLL Algorithm

  • Outline
  • The Satisfiability Problem
  • Types of Solvers
  • The basic DPLL Framework
  • Variations to the DPLL Algorithm
      – Branching Heuristics
      – Deduction Mechanism
      – Conflict Analysis and Learning
      – Other Techniques
  • Stålmarck's Algorithm
  • More Details And Examples
      – Comparison of BerkMin and Chaff on Branching Heuristics
      – Comparison of Chaff and SATO in BCP
  • Local Search
  • Phase Transitions
  • More Details on Stålmarck's Method
      – Equivalence Classes
  • References

SLIDE 9

Variations to the DPLL Algorithm

Branching Heuristics

◮ Branching heuristics alter the search path and thereby change
  the structure of the search tree
◮ In the original DPLL paper, the branching variable was chosen
  at random
◮ DLCS (Dynamic Largest Combined Sum)
    • Introduced in GRASP
    • Pick the variable with the largest number of occurrences in
      unresolved clauses
    • State dependent, so considerable bookkeeping work is required
      as assignments change
    • A static statistic: it does not take the search history into
      consideration
    • Requires differentiating between learned and original clauses

Aditya Parameswaran, Luv Kumar: SAT Solvers, 9

slide-10
SLIDE 10

Variations to the DPLL Algorithm

◮ VSIDS (Variable State Independent Decaying Sum)
    • Introduced in Chaff
    • Branch on the variable with the highest score
    • The initial score is the number of clauses containing the variable
    • Scores are periodically divided by a constant
    • A score is incremented by a constant when a new clause
      containing the variable is added
    • State independent: scores need not be recomputed as
      assignments change
    • Focuses the search on recently added clauses
◮ BerkMin method
    • Similar to the Chaff method
    • When a conflict occurs, all literals in the clauses responsible
      for the conflict have their scores increased
    • Scores are periodically divided by a constant
    • Branch on one of the variables present in the last added clause
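The two scoring schemes can be sketched together as decaying literal counters (a toy illustration with names of our own; the real Chaff and BerkMin data structures are more elaborate):

```python
class DecayingScores:
    """VSIDS-style literal scores: literals are integers v / -v,
    so each polarity of a variable is scored separately."""
    def __init__(self, clauses):
        self.score = {}
        for clause in clauses:
            for lit in clause:
                # initial score: occurrences of the literal in the formula
                self.score[lit] = self.score.get(lit, 0) + 1

    def on_added_clause(self, clause, bump=1):
        # bump the literals of a newly added (e.g. learned) clause
        for lit in clause:
            self.score[lit] = self.score.get(lit, 0) + bump

    def decay(self, factor=2):
        # periodic division shifts the focus to recent clauses
        for lit in self.score:
            self.score[lit] /= factor

    def pick(self, assigned_vars):
        # branch on the highest-scoring literal of an unassigned variable
        free = [l for l in self.score if abs(l) not in assigned_vars]
        return max(free, key=lambda l: self.score[l]) if free else None
```

BerkMin's variant would additionally restrict pick() to literals of the most recently added clause and bump everything responsible for a conflict.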

SLIDE 11

Variations to the DPLL Algorithm

Deduction Mechanism

◮ The most important part of the solver (maximum time is spent
  here)
◮ The function deduce() checks whether the current assignment
  of values to variables lets us draw conclusions about the
  satisfiability of the formula
◮ It returns SAT if the assignment satisfies the formula and
  CONF if it causes a conflict; otherwise it returns UNKNOWN
  and updates the program state
◮ All SAT solvers use the unit clause rule
◮ The unit clause rule states that if all but one of the literals in
  a clause are assigned false, the remaining literal must be
  assigned true
◮ Such clauses are called unit clauses, the unassigned literal is a
  unit literal, and the process of repeatedly assigning unit
  literals true is called BCP (Boolean Constraint Propagation)
◮ The Counter Method is used in GRASP, the Head-Tail List
  Method comes from SATO, and Two-Literal Watch was first
  used in Chaff

SLIDE 12

Variations to the DPLL Algorithm

Deduction Mechanism (contd.)

◮ Counter Method
    • Simple method of keeping track of which clauses are satisfied,
      unit, or conflicting
    • Each clause keeps 3 counters: the number of literals assigned 1,
      the number assigned 0, and the total number of literals
    • When the number of 0-valued literals equals the total, the
      clause is conflicting
    • When the number of 0-valued literals equals the total minus
      one and the number of 1-valued literals is 0, it is a unit clause
    • With m clauses, n variables, and an average of l literals per
      clause, an assignment updates ml/n counters on average
    • Undoing an assignment takes the same number of operations
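The per-clause state can be sketched as follows (an illustration with our own names, not GRASP's code):

```python
class ClauseCounters:
    """Counter-method bookkeeping for a single clause: track how many
    literals are assigned 1 and 0 against the clause length."""
    def __init__(self, n_literals):
        self.total = n_literals
        self.ones = 0
        self.zeros = 0

    def assign(self, value):
        # a literal of this clause was just assigned the given value
        if value:
            self.ones += 1
        else:
            self.zeros += 1

    def unassign(self, value):
        # undo on backtrack: the same number of operations as assign
        if value:
            self.ones -= 1
        else:
            self.zeros -= 1

    def is_conflicting(self):
        return self.zeros == self.total

    def is_unit(self):
        return self.zeros == self.total - 1 and self.ones == 0
```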

SLIDE 13

Variations to the DPLL Algorithm

◮ Head-Tail List method
    • Each clause maintains two pointers, the head and the tail pointer
    • Each variable maintains 4 linked lists (pos-head, neg-head,
      pos-tail, neg-tail), each containing pointers to the clauses
      whose head/tail literal is that variable in the positive/negative
      phase
    • If v is assigned 1, both pos-lists of v are ignored
    • For each clause in the neg-head list of v, the solver searches
      forward from the head for a literal that does not evaluate to 0,
      such that:
      1. If it first encounters a literal evaluating to 1, the clause is
         satisfied, and it stops
      2. If it first encounters a free literal u that is not the tail
         literal, it moves the head pointer to u
      3. If all literals before the tail are assigned 0 and the tail
         literal is unassigned, the clause is a unit clause with the tail
         literal as its unit literal
      4. If all literals including the tail literal are assigned 0, the
         clause is conflicting
    • The same is repeated for the neg-tail list, scanning backwards
    • On average m/n clauses are updated per assignment, and the
      same on backtracking

SLIDE 14

Variations to the DPLL Algorithm

◮ Two-Literal Watch method
    • Two literals are “watched” in each clause
    • Each variable has two lists, pos-watched and neg-watched,
      containing pointers to all the clauses where the variable is
      watched in positive or negative phase
    • When a variable v is assigned 1, then for each clause in its
      neg-watched list we search for a literal that is not set to 0:
      1. If there is such a literal u and it is not the other watched
         literal, we make u the new watched literal
      2. If the only such u is the other watched literal and it is free,
         the clause is a unit clause
      3. If the only such u is the other watched literal and it evaluates
         to 1, the clause is satisfied
      4. If all literals in the clause are assigned 0 and no such u exists,
         the clause is conflicting
    • Backtracking takes constant time, since the watched literals
      need not be moved
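One propagation step of the scheme can be sketched as follows (a toy illustration with a data layout of our own choosing; Chaff's implementation differs):

```python
def watch_bcp(clauses, watches, watched, assignment, lit_false):
    """Process all clauses watching lit_false, which just became false.

    clauses   : list of clauses (lists of integer literals v / -v)
    watched   : clause index -> pair of watched literal positions
    watches   : literal -> list of clause indices watching it
    assignment: set of literals currently assigned true
    Returns ("ok", unit_literals) or ("conflict", clause_index)."""
    units = []
    for ci in list(watches.get(lit_false, [])):
        clause = clauses[ci]
        w0, w1 = watched[ci]
        if clause[w0] != lit_false:          # let w0 be the falsified watch
            w0, w1 = w1, w0
        other = clause[w1]
        # case 1: look for a replacement literal that is not assigned 0
        for i, lit in enumerate(clause):
            if i not in (w0, w1) and -lit not in assignment:
                watched[ci] = (i, w1)
                watches[lit_false].remove(ci)
                watches.setdefault(lit, []).append(ci)
                break
        else:
            if other in assignment:          # case 3: other watch is 1
                continue                     # clause satisfied
            if -other in assignment:         # case 4: all literals are 0
                return ("conflict", ci)
            units.append(other)              # case 2: other watch is free
    return ("ok", units)
```

Backtracking needs no work here: unassigning a variable can only make watched literals free again, so the watch lists stay valid.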

SLIDE 15

Variations to the DPLL Algorithm

Conflict Analysis and Learning

◮ Conflict analysis tries to find the reason for a conflict and to
  resolve it
◮ The original DPLL paper used chronological backtracking, in
  which the last assignment that has not yet been flipped is
  flipped
◮ Non-chronological backtracking need not flip the last
  assignment and can backtrack to an earlier decision level
◮ This was first introduced in GRASP
◮ Conflict-directed learning incorporates learned conflict clauses
◮ Conflict clauses are obtained using implication graphs
◮ An implication graph depicts all the variables whose values are
  either decided or implied
◮ Directed edges are drawn from the variables that make up the
  unit clause implying the new variable (its antecedents)

SLIDE 16

Variations to the DPLL Algorithm

In the conflict clause, we include the following:

◮ The antecedents of the conflict that are decisions made at the
  current decision level
◮ The antecedents of the conflict at previous decision levels
◮ Recursively, this procedure is applied to the antecedents that
  are part of the current decision level but are not decisions

The backtrack level is the maximum of the decision levels of all
the variables in the conflict clause

SLIDE 17

Variations to the DPLL Algorithm

Other Techniques

◮ In the deduce() function, some solvers use the pure literal rule
  in addition to the unit clause rule
◮ Some solvers use random restarts in order to discard the
  current search tree and start over
◮ Some solvers have a preprocess() function that tries to draw
  conclusions before the main algorithm starts
◮ Learned clauses are deleted after a given period, to bound
  memory usage

SLIDE 18

Stålmarck's Algorithm

◮ Essentially a breadth-first method: we try both sides of a
  branch to find forced assignments
◮ To prove validity of a formula, we assume the formula to be
  false and try to arrive at a contradiction
◮ All formulae are represented using → and ⊥ (written as 0 in
  the triplets)
◮ A formula is represented using a set of triplets
◮ A triplet (x, y, z) is an abbreviation for x ↔ (y → z)
◮ Example: p → (q → p) is represented as
    • (b1, q, p)
    • (b2, p, b1)
◮ A terminal triplet is one that gives a contradiction, like
    • (0, q, 1)
    • (1, 1, 0)
    • (0, 0, x)

SLIDE 19

Stålmarck's Algorithm

◮ Triggering rules used to manipulate formulae (v/c means
  “substitute c for v”):
    • (r1) (0, y, z) ⇒ y/1, z/0
    • (r2) (x, y, 1) ⇒ x/1
    • (r3) (x, 0, z) ⇒ x/1
    • (r4) (x, 1, z) ⇒ x/z
    • (r5) (x, y, 0) ⇒ x/¬y
    • (r6) (x, x, z) ⇒ x/1
    • (r7) (x, y, y) ⇒ x/1
◮ Dilemma rule: from a triplet set T, branch on a variable x;
  derivation D1 takes T[x/1] to U[S1] and derivation D2 takes
  T[x/0] to V[S0]; the result is T[S], where S is the intersection
  of the assignments produced by the two derivations
    • If one of the derivations leads to a contradiction, the result of
      applying the dilemma rule is the result of the other derivation
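The constant-assigning rules can be sketched as a propagation loop (our own toy encoding; the substitution rules r4 and r5 are omitted for simplicity):

```python
def propagate(triplets, vals):
    """Apply rules r1, r2, r3, r6, r7 to triplets (x, y, z), each
    meaning x ↔ (y → z). 0 and 1 are constants, strings are
    variables; vals maps variables to 0/1. Returns the extended
    vals, or "contradiction"."""
    def val(t):
        return vals.get(t, t) if isinstance(t, str) else t

    changed = True
    while changed:
        changed = False
        for x, y, z in triplets:
            x, y, z = val(x), val(y), val(z)
            if x == 0:
                forced = [(y, 1), (z, 0)]    # r1: y → z must be false
            elif z == 1 or y == 0:
                forced = [(x, 1)]            # r2, r3: y → z must be true
            elif x == y or y == z:
                forced = [(x, 1)]            # r6, r7
            else:
                forced = []
            for t, v in forced:
                if not isinstance(t, str):   # forcing a constant
                    if t != v:
                        return "contradiction"
                elif t not in vals:
                    vals[t] = v
                    changed = True
                elif vals[t] != v:
                    return "contradiction"
    return vals
```

On the earlier example, assuming b2 = 0 (i.e. p → (q → p) is false) propagates to a contradiction, so the formula is valid.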

SLIDE 20

More Details And Examples

Chaff Branching Algorithm (VSIDS)

  1. Each variable, in each polarity, has a counter initialized to
     the number of instances of that variable in that polarity
  2. When a clause is added, each literal in the clause has its
     counter incremented
  3. At each decision, the unassigned variable and polarity with
     the highest count is chosen; ties are broken at random by
     default
  4. All counters are periodically divided by 2

BerkMin Branching Algorithm

  1. Each variable, in each polarity, has a counter initialized to
     the number of instances of that variable in that polarity
  2. On a conflict, the scores of the variables in the conflict
     clause and of those responsible for the conflict are
     incremented
  3. At each decision, the unassigned variable and polarity with
     the highest count is chosen from the most recently added
     clause in the clause database; ties are broken at random by
     default
  4. All counters are periodically divided by 4

SLIDE 21

More Details And Examples

BCP Algorithms: SATO List Method

If (neg_assigned(literal)) {
    from-pos-head(literal);
    from-pos-tail(literal);
}
If (pos_assigned(literal)) {
    from-neg-head(literal);
    from-neg-tail(literal);
}

from-pos-head(literal) {
    for all the clauses in the pos-head list
        for i = head_lit + 1 to tail_lit
            if (truth_val(ith literal) == unknown)
                if (ith literal == tail literal)
                    unit clause, add to unit clause stack
                    exclude clause; return
                else
                    move head pointer to ith literal; return
            else if (truth_val(ith literal) == TRUE)
                exclude clause; return
        conflict
}

SLIDE 22

More Details And Examples

BCP Algorithms: Two-Literal Watch Method

If (neg_assigned(literal))
    for all the clauses in the pos-watched list
        if there is a literal != 0 and != other watched literal
            move the watch pointer to that literal
            if (literal == 1) then exclude clause
        else if (other watched literal == undefined)
            unit clause, add to unit clause list
            exclude clause
        else if (other watched literal == 1)
            exclude clause
        else
            conflict
    end for
If (pos_assigned(literal))
    for all the clauses in the neg-watched list
        ...

SLIDE 23

Local Search

◮ A cost function is assigned to every variable assignment; the
  procedure tries to minimise the cost function by moving to
  “neighbouring” assignments
◮ A greedy procedure is one which always tries to decrease the
  cost function
◮ A move which does not change the cost function is a sideways
  move; a sequence of such moves is a plateau
◮ There is a probability of getting stuck in a local minimum. To
  get out of a local minimum, the following procedures are
  adopted:
      1. Random Walk: with probability p, a random variable in an
         unsatisfied clause is flipped; with probability (1 − p) the
         greedy algorithm is followed
      2. Random Noise: same as above, with no restriction on which
         variable is flipped
◮ In most cases, if the initial assignment is close to the global
  minimum, the solution can be obtained by minimising the
  cost function

SLIDE 24

Local Search

GSAT Algorithm

a = set of clauses
for i = 1 to MAX_TRIES
    T = a randomly generated truth assignment
    for j = 1 to MAX_FLIPS
        if T satisfies a then return T
        p = a propositional variable such that flipping its truth
            assignment gives the largest increase in the total number
            of clauses of a satisfied by T
        T = T with the truth assignment of p reversed
    end for
end for
return "no satisfying assignment found"
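A runnable version of the procedure (a toy illustration; the clause encoding and names are our own):

```python
import random

def gsat(clauses, n_vars, max_tries=10, max_flips=100, seed=0):
    """Clauses are lists of integer literals v / -v over variables
    1..n_vars. Returns a satisfying {var: bool} dict or None."""
    rng = random.Random(seed)

    def n_satisfied(T):
        return sum(any((lit > 0) == T[abs(lit)] for lit in cl)
                   for cl in clauses)

    for _ in range(max_tries):
        # a randomly generated truth assignment
        T = {v: rng.choice([False, True]) for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if n_satisfied(T) == len(clauses):
                return T
            # greedily flip the variable giving the largest number of
            # satisfied clauses (ties broken by variable order)
            def gain(v):
                T[v] = not T[v]
                s = n_satisfied(T)
                T[v] = not T[v]
                return s
            best = max(range(1, n_vars + 1), key=gain)
            T[best] = not T[best]
    return None
```

The random-walk variant from the previous slide would, with probability p, instead flip a random variable of an unsatisfied clause.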

SLIDE 25

Phase Transitions

◮ Clauses are created randomly, of fixed length
◮ The parameters of interest are the number of clauses (L), the
  number of variables (N), and the number of literals per
  clause (K)
◮ Formulae with few clauses are underconstrained and usually
  satisfiable
◮ Formulae with many clauses are overconstrained and usually
  unsatisfiable
◮ Satisfiability of formulae at either extreme is easy to determine
◮ A phase transition tends to occur in between, when the
  problems are critically constrained and it is difficult to
  determine their satisfiability
◮ For random 2-SAT, the transition has been proven to occur at
  L/N = 1
◮ For random 3-SAT, the transition has been experimentally
  found to occur at L/N ≈ 4.3
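The transition can be observed with a small experiment (a toy sketch: brute-force checking and tiny N of our own choosing; real experiments use a complete solver on far larger instances):

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    # K = 3 distinct variables per clause, each negated with prob. 1/2
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def is_satisfiable(clauses, n_vars):
    # brute force: feasible only for small n_vars
    for bits in product([False, True], repeat=n_vars):
        T = dict(zip(range(1, n_vars + 1), bits))
        if all(any((l > 0) == T[abs(l)] for l in cl) for cl in clauses):
            return True
    return False

def sat_fraction(n_vars, ratio, samples=30, seed=0):
    """Fraction of random 3-SAT formulae with L/N = ratio that are
    satisfiable."""
    rng = random.Random(seed)
    L = int(ratio * n_vars)
    return sum(is_satisfiable(random_3sat(n_vars, L, rng), n_vars)
               for _ in range(samples)) / samples
```

For N = 8 the fraction should sit near 1 at L/N = 1 and near 0 at L/N = 8, with the drop between them around the critical ratio.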

SLIDE 26

Phase Transitions

SLIDE 27

More Details on Stålmarck's Method

Equivalence Classes

◮ Used to show relationships between variables
◮ Two formulae a and b are said to be in the same equivalence
  class, denoted a ∼ b, if they have the same truth value
◮ It is not necessary to store the complementary class
◮ Initially there are the individual single-variable equivalence
  classes, plus the ⊤ and ⊥ classes
◮ As the proof progresses, each propagation rule merges at least
  two equivalence classes
◮ Instead of branching on true/false, we can branch by merging
  equivalence classes (A ≡ B or A ≡ ¬B)
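Class merging can be sketched with a union-find structure (a toy illustration; the names are our own):

```python
class EquivClasses:
    """Union-find over formula names; TOP and BOT are the constant
    classes, and merging them signals a contradiction."""
    def __init__(self):
        self.parent = {}

    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def merge(self, a, b):
        """Merge the classes of a and b; False means contradiction."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return True
        if {ra, rb} == {"TOP", "BOT"}:
            return False
        if rb in ("TOP", "BOT"):    # keep constants as representatives
            ra, rb = rb, ra
        self.parent[rb] = ra
        return True
```

Branching as on the slide would run one copy of the structure per branch and intersect the resulting equivalences; the bookkeeping for complementary classes is omitted here.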

SLIDE 28

More Details on Stålmarck's Method

Hardness, Proof Depth and Saturation

◮ A proof is said to be k-hard if there is no proof involving
  fewer than k applications of the dilemma rule, and there
  exists a proof involving exactly k applications
◮ Proof depth is defined as follows:
    • Simple rules have depth 0
    • The composition of two proofs, from R1 to R2 and from R2 to
      R3, has as its depth the maximum of the depths of the two
      proofs
    • If the two branches of a dilemma rule application have depths
      d1 and d2, the depth of the whole proof is max(d1, d2) + 1
◮ Equivalently, proof depth is the maximum number of
  simultaneously open branches
◮ A relation is k-saturated iff proofs of depth k or less do not
  add any equivalences between the subformulae

SLIDE 29

References

  1. M. Davis and H. Putnam, A computing procedure for
     quantification theory, Journal of the ACM, 1960
  2. M. Davis, G. Logemann and D. Loveland, A machine program
     for theorem proving, Communications of the ACM, 1962
  3. H. Zhang, SATO: An efficient propositional prover, Conf. on
     Automated Deduction, 1997
  4. H. Zhang and M. Stickel, An efficient algorithm for unit
     propagation, Int. Symp. on AI and Mathematics, 1996
  5. E. Goldberg and Y. Novikov, BerkMin: A fast and robust SAT
     solver, DATE, 2002
  6. J. P. Marques-Silva and K. A. Sakallah, GRASP: A new search
     algorithm for satisfiability, IEEE Int. Conf. on Tools with AI,
     1996
  7. M. Sheeran and G. Stålmarck, A tutorial on Stålmarck's proof
     procedure for propositional logic
  8. M. Moskewicz, C. Madigan, Y. Zhao, L. Zhang and S. Malik,
     Chaff: Engineering an efficient SAT solver, 39th Design

SLIDE 30

References

     Automation Conf., 2001
  9. B. Selman, H. Levesque and D. Mitchell, A new method for
     solving hard satisfiability problems, National Conf. on AI, July
     1992
 10. SAT course material,
     http://www.inf.ed.ac.uk/teaching/courses/propm/papers/Zhang/
 11. SAT solving, basic principles and directions,
     http://www-ti.informatik.uni-tuebingen.de/∼fmg/teaching/verif0405/BMC.pdf
 12. S. Malik and L. Zhang, The Quest for Efficient Boolean
     Satisfiability Solvers
 13. S. Malik, Quest for Efficient Boolean Satisfiability Solvers
     presentation,
     www.inf.ed.ac.uk/teaching/courses/propm/papers/cav-cade2002.pdf

All of these are available at www.cse.iitb.ac.in/∼adityagp/sat/
