Slide 1

Boolean Satisfiability
Ari Mermelstein (with advisement from Professor Gu)
CSc 76010 Parallel Scientific Computing, 2017

Outline:

◮ Statement of the Problem
◮ Example problem instance
◮ Naive algorithm (pseudocode example)
◮ Motivation
◮ DPLL algorithm (Unit Propagation, Pure Literals, DPLL with heuristics added, How to Parallelize)
◮ Johnson’s Randomized Approximation Algorithm (How to parallelize)

Slide 2

What is Satisfiability?

Given:

◮ a set X of n Boolean variables, X = {x1, x2, · · · , xn}, where each xi ∈ {0, 1};

◮ a set C = {c1, c2, · · · , cm} of m clauses in Conjunctive Normal Form (CNF). Each clause is a disjunction of literals over some of the variables; for a variable x ∈ X, a literal is either x or its negation ¬x.

Question: does an assignment σ: X → {0, 1} exist such that every clause c ∈ C evaluates to true?

Slide 3

Example

X = {x1, x2, x3}

C = {x1 ∨ ¬x2, ¬x3 ∨ ¬x1, x3 ∨ x1 ∨ x2}

The set of clauses C is also known as a formula.

Slide 4

Naive algorithm

Result: a Boolean true or false
Input: a set of variables X[1..n]
Input: a formula F[1..m]

Let satisfied[1..m] and assign[1..n] be new arrays;
Initialize(assign);            // set all assignments to 0
while true do
    Initialize(satisfied);
    for i = 1 to m do
        satisfied[i] = evaluate(F[i]);
    end
    if all c ∈ C are satisfied then
        return true;
    end
    if NotAllOnes(assign) then
        Increment(assign);     // advance the assignment by one,
                               // treating assign like a binary number
    else
        return false;          // all 2^n assignments have been tried
    end
end
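The naive loop above can be made runnable; this sketch assumes the signed-integer clause encoding (+i for xi, −i for ¬xi, which is our convention, not the slides'), with itertools.product standing in for the Increment binary counter.

```python
from itertools import product

def naive_sat(n, formula):
    """Try all 2^n assignments; return True as soon as one satisfies
    every clause, and False if none does."""
    for bits in product([0, 1], repeat=n):       # assign[1..n] as a binary counter
        sigma = {i + 1: bits[i] for i in range(n)}
        def lit_true(lit):
            return sigma[abs(lit)] == (1 if lit > 0 else 0)
        if all(any(lit_true(l) for l in clause) for clause in formula):
            return True                          # all c in C are satisfied
    return False

F = [[1, -2], [-3, -1], [3, 1, 2]]               # the example instance
```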

Slide 5

Example

X = {x1, x2, x3}

C = {x1 ∨ ¬x2, ¬x3 ∨ ¬x1, x3 ∨ x1 ∨ x2}

◮ We start with x1 = 0, x2 = 0, and x3 = 0. With this assignment we obtain 0 ∨ 1, 1 ∨ 1, 0 ∨ 0 ∨ 0, which evaluates to 1, 1, 0. This assignment does not satisfy the formula.

◮ We move on to x1 = 0, x2 = 0, and x3 = 1. We obtain 0 ∨ 1, 0 ∨ 1, 1 ∨ 0 ∨ 0, which evaluates to 1, 1, 1. The formula is satisfiable!

Slide 6

Comment and Motivation for presentation

The algorithm on the previous slides takes Θ(2^n) time to run. We need heuristics and approximations to solve this problem faster. There are two approaches:

◮ modified brute force using heuristics, such as the DPLL algorithm (Davis, Putnam, Logemann, Loveland, 1962);

◮ randomized approximation algorithms, such as Johnson’s MAX-SAT approximation algorithm (1974).

Slide 7

DPLL Algorithm (1962)

The idea of the DPLL algorithm is to start with a formula F and a partial interpretation I, where initially I = ∅. Evaluate F using the assignments in I. If every clause in F is true under I, then F is satisfiable. If at least one clause is definitely false under I, then F is not satisfiable under this partial interpretation, and the branch fails. If there are clauses whose value cannot yet be determined, we pick an assignment for some variable xi ∉ I, add it to I, and recursively repeat the process.

Slide 8

Pseudocode for DPLL-SAT(X,F, I)

Result: a Boolean true or false
Input: a set of variables X[1..n]
Input: a formula F
Input: a partial interpretation I

if I ⇒ F then
    return true;
else if I ⇒ ¬F then
    return false;
else
    choose xi ∉ I;
    return DPLL-SAT(X, F, I ∪ {xi = 1}) or DPLL-SAT(X, F, I ∪ {xi = 0});
end
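The same recursion in runnable Python (a sketch under our signed-integer literal encoding; the helper names are ours): I ⇒ F becomes "every clause has a literal made true by I", and I ⇒ ¬F becomes "some clause has all of its literals made false by I".

```python
def implies_true(clause, I):
    """Is the clause already satisfied by the partial interpretation I?"""
    return any(abs(l) in I and I[abs(l)] == (1 if l > 0 else 0) for l in clause)

def implies_false(clause, I):
    """Does I falsify every literal of the clause?"""
    return all(abs(l) in I and I[abs(l)] == (0 if l > 0 else 1) for l in clause)

def dpll(n, formula, I):
    if all(implies_true(c, I) for c in formula):       # I => F
        return True
    if any(implies_false(c, I) for c in formula):      # I => not F
        return False
    xi = next(v for v in range(1, n + 1) if v not in I)  # choose xi not in I
    return dpll(n, formula, {**I, xi: 1}) or dpll(n, formula, {**I, xi: 0})
```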

Slide 9

Analysis of the above algorithm

At the moment this is just an elegant way of expressing the naive algorithm above. However, we should add two heuristics:

◮ Unit Propagation

◮ Choosing assignments for pure literals

Slide 10

Unit Propagation

◮ We view F as having the unit clauses from I added into it.

◮ Given these assignments, we can remove from the clauses of F literals that have no bearing on the final result.

◮ This is accomplished by removing the negation of each assigned literal from the clauses of F.

Slide 11

Example

Suppose that F = {x1 ∨ ¬x2, ¬x3 ∨ ¬x1, x3 ∨ x1 ∨ x2} and I = {x3 = 1}.

◮ That tells us that x3 is true, and hence ¬x3 is false.

◮ The literal ¬x3 has no bearing on the truth value of the clauses it appears in (we can remove "false" from a disjunction).

◮ So we can simplify F to F = {x1 ∨ ¬x2, ¬x1, x3 ∨ x1 ∨ x2}.
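The simplification step can be sketched as follows (our encoding and function name): each assignment in I falsifies one literal, and that literal is deleted wherever it occurs. As in the slides' example, clauses that I already satisfies are left in place.

```python
def remove_falsified_literals(formula, I):
    """Delete from every clause the literals that the partial
    interpretation I (dict: variable -> 0/1) makes false."""
    false_lits = {(-v if bit == 1 else v) for v, bit in I.items()}
    return [[l for l in clause if l not in false_lits] for clause in formula]

F = [[1, -2], [-3, -1], [3, 1, 2]]      # the formula from this slide
```

With I = {x3 = 1}, the falsified literal is −3 (i.e. ¬x3), so the second clause shrinks to [−1], matching the simplified F above.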

Slide 12

Pure Literals

◮ If a variable only appears positively (i.e. for a given variable xi, ¬xi never appears in F), then we lose nothing by deciding to set it to true.

◮ If a variable only appears negatively (i.e. xi itself never appears in F), then we lose nothing by deciding to set it to false.

Slide 13

Example

Suppose that F = {x1 ∨ ¬x2, ¬x3 ∨ x1, x3 ∨ x1 ∨ x2}. We see that x1 only appears positively, so we choose to set it to 1 and update I = I ∪ {x1 = 1}.
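Detecting pure literals is a single scan over the formula; a sketch under our signed-integer encoding (the function name is ours):

```python
def pure_literal_assignments(formula):
    """Return the forced choices for pure variables: a variable is pure if
    it occurs with only one polarity across the whole formula."""
    literals = {l for clause in formula for l in clause}
    choices = {}
    for lit in literals:
        if -lit not in literals:                  # only one polarity appears
            choices[abs(lit)] = 1 if lit > 0 else 0
    return choices

F = [[1, -2], [-3, 1], [3, 1, 2]]                 # x1 appears only positively
```

For the formula on this slide it returns {1: 1}: set x1 = 1.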

Slide 14

Full-DPLL-SAT(X, F, I)

if I ⇒ F then
    return true;
else if I ⇒ ¬F then
    return false;
else
    F, I = unitPropagation(F, I);
    if I is inconsistent then
        // I contains both xi and ¬xi for some xi ∈ X
        return false;
    end
    F, I = dealWithPureLiterals(F, I);
    if F = ∅ then
        // all clauses were "unit propagated" away
        return true;
    end
    choose xi ∉ I;
    return Full-DPLL-SAT(X, F, I ∪ {xi = 1}) or Full-DPLL-SAT(X, F, I ∪ {xi = 0});
end
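A self-contained Python sketch of the full procedure (our encoding and helper names; unit propagation here works on unit clauses directly rather than rewriting F, which is equivalent):

```python
def lit_true(l, I):  return abs(l) in I and I[abs(l)] == (1 if l > 0 else 0)
def lit_false(l, I): return abs(l) in I and I[abs(l)] == (0 if l > 0 else 1)

def full_dpll(n, formula, I):
    I = dict(I)
    # Unit propagation: an unsatisfied clause with a single unassigned
    # literal forces that literal's value; a clause with no true literal
    # and no unassigned literal is falsified (I is inconsistent with F).
    changed = True
    while changed:
        changed = False
        for clause in formula:
            if any(lit_true(l, I) for l in clause):
                continue
            unassigned = [l for l in clause if abs(l) not in I]
            if not unassigned:
                return False                       # I => not F
            if len(unassigned) == 1:
                l = unassigned[0]
                I[abs(l)] = 1 if l > 0 else 0
                changed = True
    if all(any(lit_true(l, I) for l in clause) for clause in formula):
        return True                                # I => F
    # Pure literals: among clauses not yet satisfied, set every variable
    # that occurs with only one polarity.
    live = {l for c in formula for l in c
            if abs(l) not in I and not any(lit_true(x, I) for x in c)}
    for l in live:
        if -l not in live:
            I[abs(l)] = 1 if l > 0 else 0
    if all(any(lit_true(l, I) for l in clause) for clause in formula):
        return True
    # Branch on an unassigned variable, as in the pseudocode above.
    xi = next(v for v in range(1, n + 1) if v not in I)
    return (full_dpll(n, formula, {**I, xi: 1})
            or full_dpll(n, formula, {**I, xi: 0}))
```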

Slide 15

How to make DPLL work in Parallel

◮ Broadcast the initial F to all processors.

◮ Allow multiple processors to work on different partial interpretations in parallel, doing unit propagation as needed.

◮ Each processor communicates its result to the master processor. If any processor answers "yes," yes is returned and the other processors stop.

Slide 16

Parallel DPLL (cont.)

The broadcast of information to all processors takes:

    Tcomm1 = tstartup + |F| · tdata

The collection of information from all of the processors takes:

    Tcomm2 = tstartup + tdata

Slide 17

Randomized α-Approximation Algorithm: Definition

An algorithm is said to be a randomized α-approximation algorithm for a particular problem if it runs in polynomial time and produces a solution whose (expected) value is ≥ α · OPT, where OPT is the value of an optimal solution to the original problem.

Slide 18

Randomized-SAT-Approximation(X, F)

Result: an assignment σ of variables that approximately satisfies F
Input: a set of variables X[1..n]
Input: a set of clauses F[1..m]

Let σ[1..n] be a new array;
for i = 1 to n do
    toss = flip-a-fair-coin();   // HEADS or TAILS, each with probability 1/2
    if toss = HEADS then
        σ[i] = 1;
    else                         // toss = TAILS
        σ[i] = 0;
    end
end
return σ;
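As runnable Python (a sketch; the counting helper and our clause encoding are assumptions, not from the slides), with random.randint(0, 1) playing the fair coin:

```python
import random

def random_assignment(n, rng=random):
    """Set each variable to 0 or 1 independently with probability 1/2."""
    return {i: rng.randint(0, 1) for i in range(1, n + 1)}

def count_satisfied(formula, sigma):
    """How many clauses does the assignment sigma satisfy?"""
    def lit_true(l):
        return sigma[abs(l)] == (1 if l > 0 else 0)
    return sum(any(lit_true(l) for l in clause) for clause in formula)

F = [[1, -2], [-3, -1], [3, 1, 2]]   # the running example
```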

Slide 19

Analysis

Johnson (1974) proves that this approximation algorithm is a 1/2-approximation for satisfiability: it returns an assignment that, in expectation, satisfies at least half of the clauses in F. The reason is that a clause with k literals is falsified only when all k of its literals receive the wrong value, which happens with probability 2^(−k) ≤ 1/2; so each clause is satisfied with probability ≥ 1/2, and by linearity of expectation at least m/2 clauses are satisfied. Clearly this algorithm runs in Θ(n) time.

Slide 20

How to get rid of randomization

We can use the Method of Conditional Expectations to turn this algorithm into a deterministic one (i.e. without randomization).

Slide 21

Pseudocode for Deterministic-Approximation(X,F)

Let σ[1..n] be a new array;
for i = 1 to n do
    // Tentatively set σ[i] = 1 and compute the expected number of
    // satisfied clauses, given σ[1..i] and random values for the rest.
    σ[i] = 1;
    totalWeightIfTrue = 0;
    for j = 1 to m do
        if σ[1..i] satisfies F[j] then
            weight-if-true = 1;
        else
            k = number of literals of F[j] not yet assigned by σ[1..i];
            weight-if-true = 1 − 1/2^k;
        end
        totalWeightIfTrue += weight-if-true;
    end
    // Repeat the same code with σ[i] = 0 to obtain totalWeightIfFalse
    if totalWeightIfTrue > totalWeightIfFalse then
        σ[i] = 1;
    else
        σ[i] = 0;
    end
end
return σ;
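A runnable sketch of the derandomized algorithm (our encoding and helper names). Here k counts the literals of a clause that σ has not yet assigned, which is what the conditional expectation 1 − 1/2^k requires:

```python
def expected_weight(formula, sigma):
    """Expected number of satisfied clauses when the variables fixed in
    sigma keep their values and the rest are set uniformly at random."""
    total = 0.0
    for clause in formula:
        if any(abs(l) in sigma and sigma[abs(l)] == (1 if l > 0 else 0)
               for l in clause):
            total += 1.0                  # already satisfied: weight 1
        else:
            k = sum(1 for l in clause if abs(l) not in sigma)
            total += 1.0 - 0.5 ** k       # satisfied with probability 1 - 1/2^k
    return total

def deterministic_approximation(n, formula):
    """Method of conditional expectations: fix each variable to whichever
    value keeps the conditional expectation at least as large."""
    sigma = {}
    for i in range(1, n + 1):
        w_true = expected_weight(formula, {**sigma, i: 1})
        w_false = expected_weight(formula, {**sigma, i: 0})
        sigma[i] = 1 if w_true > w_false else 0
    return sigma

F = [[1, -2], [-3, -1], [3, 1, 2]]        # the running example
```

On the running example this returns an assignment satisfying all three clauses, comfortably above the m/2 guarantee.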

Slide 22

How to Parallelize

1. First, we scatter parts of the formula array to each processor.

2. Each processor computes weight-if-true and weight-if-false for each clause in its part of F.

3. Each processor sends its values back to the master processor.

4. The master processor does a parallel sum to decide on the correct assignment.
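The four steps can be sketched as follows. For clarity the sketch simulates the p processors sequentially: the list slicing plays the scatter, the per-chunk sums are what each processor would compute, and the final sums are the master's gather-and-add; function names and the clause encoding are ours.

```python
def clause_weight(clause, sigma):
    """Conditional expected contribution of one clause (1 if satisfied,
    else 1 - 1/2^k with k unassigned literals)."""
    if any(abs(l) in sigma and sigma[abs(l)] == (1 if l > 0 else 0)
           for l in clause):
        return 1.0
    k = sum(1 for l in clause if abs(l) not in sigma)
    return 1.0 - 0.5 ** k

def master_decides(formula, sigma, i, p):
    """Decide the value of variable i using p simulated processors."""
    chunks = [formula[j::p] for j in range(p)]            # step 1: scatter F
    partials = [(sum(clause_weight(c, {**sigma, i: 1}) for c in chunk),
                 sum(clause_weight(c, {**sigma, i: 0}) for c in chunk))
                for chunk in chunks]                      # steps 2-3: compute, send back
    w_true = sum(t for t, _ in partials)                  # step 4: master sums
    w_false = sum(f for _, f in partials)
    return 1 if w_true > w_false else 0

F = [[1, -2], [-3, -1], [3, 1, 2]]                        # the running example
```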

Slide 23

Analysis of Computation Time

Steps 2 and 4 contribute to the computation time:

◮ Step 2: Tcomp1 = Θ(nm/p), where p = the number of processors. This is because we have to calculate n assignments.

◮ Step 4: Tcomp2 = Θ(n log m). This is because we have to do n parallel sums.

◮ The total time is: Tcomp = Θ(nm/p + n log m)

Slide 24

Analysis of Communication Time

Steps 1 and 3 contribute to the communication time.

◮ Step 1: Tcomm1 = tstartup + (m/p) · tdata. This is because we have to scatter m/p data items to each processor.

◮ Step 3: Tcomm2 = tstartup + 2 · tdata, since each processor has to send two values back to the master.