SLIDE 1

Computer Science CPSC 322

Lecture 10: Stochastic Local Search (4.8)

SLIDE 2

Lecture Overview

  • Recap
  • Domain Splitting for Arc Consistency
  • Local Search
  • Stochastic Local Search (SLS)
  • Comparing SLS

SLIDE 3

Arc Consistency Algorithm

  • Go through all the arcs in the network
  • Make each arc consistent by pruning the appropriate domain, when needed
  • Reconsider arcs that could be turned back to being inconsistent by this pruning

  • Eventually reach a ‘fixed point’: all arcs consistent
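
Below is a minimal Python sketch of this procedure (in the style of AC-3). It assumes binary constraints given as a dict mapping each arc (x, y) to a boolean function; the names revise and ac3 are illustrative, not from the slides.

from collections import deque

def revise(domains, x, y, constraint):
    # Make arc <x,c> consistent: prune values of x with no support in y
    pruned = False
    for vx in list(domains[x]):
        if not any(constraint(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            pruned = True
    return pruned

def ac3(domains, constraints):
    # domains: var -> set of values; constraints: (x, y) -> c(vx, vy) -> bool
    todo = deque(constraints)            # start with all arcs in the network
    while todo:                          # fixed point: ToDoArcs becomes empty
        x, y = todo.popleft()
        if revise(domains, x, y, constraints[(x, y)]):
            # pruning Dom(x) can break arcs <z,c'> of other constraints on x
            for (z, w) in constraints:
                if w == x and z != y:
                    todo.append((z, w))
    return domains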

SLIDE 4
  • When AC reduces the domain of a variable X to make an arc 〈X,c〉 arc consistent, which arcs does it need to reconsider? AC does not need to reconsider the following arcs:

  • If arc 〈Y,c〉 was arc consistent before, it will still be arc consistent.
    "Consistent before" means each element yi in Y has an element xi in X that satisfies the constraint. Those xi would not be pruned from Dom(X), so arc 〈Y,c〉 stays consistent.
  • If an arc 〈X,ci〉 was arc consistent before, it will still be arc consistent: the domains of the Zi have not been touched.
  • Nothing changes for arcs of constraints not involving X.

Which arcs need to be reconsidered?

[Figure: constraint network over Z1, Z2, Z3, Y, X, A with constraints c1, c2, c3, c, c4; the arcs 〈Zi,ci〉 are highlighted ("THESE")]

SLIDE 5

Arc Consistency Algorithm: Complexity

  • Let's determine the worst-case complexity of this procedure (compare with DFS: O(d^n))
  • let the max size of a variable domain be d
  • let the number of variables be n
  • The max number of binary constraints is? (n · (n−1)) / 2
  • How many times, at worst, can the same arc be inserted in the ToDoArcs list? O(d)
  • How many steps are involved in checking the consistency of an arc? O(d^2)
  • Overall complexity: O(n^2 d^3)
  • Compare to O(d^n) for DFS. Arc consistency is MUCH faster.


So did we find a polynomial algorithm to solve CSPs? No: AC does not always solve the CSP. It is a way to possibly simplify the original CSP and make it easier to solve.

SLIDE 6

Arc Consistency Algorithm: Interpreting Outcomes

  • Three possible outcomes (when all arcs are arc consistent):
  • Each domain has a single value:
     e.g. the built-in AIspace example "Scheduling problem 1". We have a (unique) solution.
  • At least one domain is empty:
     We have no solution! All values are ruled out for this variable. E.g. try this graph (you can easily generate it by modifying Simple Problem 2).
  • Some domains have more than one value:
     There may be one solution, multiple ones, or none. Need to solve this new (usually simpler) CSP: same constraints, but the domains have been reduced.

SLIDE 7

Lecture Overview

  • Recap
  • Domain Splitting for Arc Consistency
  • Local Search
  • Stochastic Local Search (SLS)
  • Comparing SLS

SLIDE 8

Search vs. Domain Splitting

  • Arc consistency ends: some domains have more than one value → may or may not have a solution
  • A. Apply Depth-First Search with Pruning, or
  • B. Split the problem into a number of disjoint cases CSPi: for instance, a CSP with dom(X) = {x1, x2, x3, x4} becomes CSP1 with dom(X) = {x1, x2} and CSP2 with dom(X) = {x3, x4}
  • The solution to the CSP is the union of the solutions to the CSPi (option B is sketched below)
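
A hedged Python sketch of option B, reusing the ac3 sketch above. It splits one multi-valued domain in half and returns the first solution found (the union of solutions across all cases would be the full solution set); the name solve_by_splitting is illustrative, and values are assumed sortable.

def solve_by_splitting(domains, constraints):
    # Make this case arc consistent first (on a copy of the domains)
    domains = ac3({v: set(d) for v, d in domains.items()}, constraints)
    if any(len(d) == 0 for d in domains.values()):
        return None                      # a domain is empty: no solution here
    if all(len(d) == 1 for d in domains.values()):
        return {v: next(iter(d)) for v, d in domains.items()}
    # Some domain has more than one value: split it into two disjoint cases
    x = next(v for v, d in domains.items() if len(d) > 1)
    values = sorted(domains[x])
    for half in (values[:len(values) // 2], values[len(values) // 2:]):
        sub = dict(domains)
        sub[x] = set(half)
        solution = solve_by_splitting(sub, constraints)
        if solution is not None:
            return solution
    return None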

SLIDE 9

Example

Run “Scheduling Problem 2” in AIspace

  • Try splitting on E (select 4 first, then 2, then 3)

SLIDE 10

Another Example

"Crossword 1" in AIspace:

  • try splitting on D3 and then A3 (always "select half")

SLIDE 11

Domain Splitting

  • For each subCSP, which arcs have to be on the ToDoArcs list when we get the subCSP by splitting the domain of X?
  • A. arcs 〈Zi, r(Zi,X)〉
  • B. arcs 〈Zi, r(Zi,X)〉 and 〈X, r(Zi,X)〉
  • C. arcs 〈Zi, r(Zi,X)〉 and 〈Y, r(X,Y)〉
  • D. All arcs in the figure
  • E. All 8 arcs related to constraints involving X: (c1, c2, c3, c)

[Figure: constraint network over Z1, Z2, Z3, Y, X, A with constraints c1, c2, c3, c, c4]

SLIDE 12

Domain Splitting

  • For each subCSP, which arcs have to be on the ToDoArcs list when we get the subCSP by splitting the domain of X?

C. arcs 〈Zi, r(Zi,X)〉 and 〈Y, r(X,Y)〉

[Figure: the same constraint network, with the arcs 〈Zi,ci〉 ("THESE") and 〈Y,c〉 ("THIS") highlighted]

SLIDE 13

Searching by Domain Splitting

[Diagram: CSP, apply AC; if domains with multiple values, split on one, giving CSP1 (apply AC) and CSP2 (apply AC); if domains with multiple values, split on one, and so on]

SLIDE 14

Another Formulation of CSP as Search

Arc consistency with domain splitting:

  • States: vector (D(V1), …, D(Vn)) of remaining domains, with D(Vi) ⊆ dom(Vi) for each Vi
  • Start state: vector of original domains (dom(V1), …, dom(Vn))
  • Successor function: reduce one of the domains + run arc consistency
  • Goal state: vector of unary domains that satisfies all constraints
  • That is, only one value left for each variable
  • The assignment of each variable to its single value is a model
  • Solution: that assignment

SLIDE 15

Arc consistency + domain splitting: example

3 variables: A, B, C. Domains: all {1,2,3,4}. Constraints: A=B, B=C, A≠C.

[Search tree: AC on ({1,2,3,4}, {1,2,3,4}, {1,2,3,4}) prunes nothing; split A ∈ {1,3} vs. A ∈ {2,4}; AC gives ({1,3}, {1,3}, {1,3}) and ({2,4}, {2,4}, {2,4}); splitting B (B ∈ {1}, B ∈ {3}, B ∈ {2}, B ∈ {4}) and rerunning AC yields ({}, {}, {}) in every branch: no solution]
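
For instance, this example can be fed to the solve_by_splitting sketch from earlier; both_ways is a hypothetical helper that registers each constraint in both arc directions.

constraints = {}
def both_ways(x, y, c):
    constraints[(x, y)] = c
    constraints[(y, x)] = lambda a, b: c(b, a)

both_ways('A', 'B', lambda a, b: a == b)    # A = B
both_ways('B', 'C', lambda a, b: a == b)    # B = C
both_ways('A', 'C', lambda a, b: a != b)    # A != C

domains = {v: {1, 2, 3, 4} for v in 'ABC'}
print(solve_by_splitting(domains, constraints))   # -> None: no solution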

SLIDE 16

Arc consistency + domain splitting: another example

3 variables: A, B, C. Domains: all {1,2,3,4}. Constraints: A=B, B=C, A=C.

[Search tree: same splits as before; now AC reduces e.g. ({1,3}, {1}, {1,3}) to ({1}, {1}, {1}), and each of the four branches B ∈ {1}, {3}, {2}, {4} yields a solution]

SLIDE 17

Searching by Domain Splitting

[Diagram: the same splitting tree as before]

How many CSPs do we need to keep around at a time? Assume the solution is at depth m, with b children at each split.

  • A. O(bm)
  • B. O(b^m)
  • C. O(m^b)
  • D. O(b^2)
SLIDE 18

Searching by Domain Splitting

[Diagram: the same splitting tree as before]

How many CSPs do we need to keep around at a time? Assume the solution is at depth m, with b children at each split. Answer: O(bm), since this is DFS.

SLIDE 19

Systematically Solving CSPs: Summary

  • Build Constraint Network
  • Apply Arc Consistency
  • One domain is empty → no solution
  • Each domain has a single value → (unique) solution
  • Some domains have more than one value → either:
  • Apply Depth-First Search with Pruning, OR
  • Split the problem into a number of disjoint cases
  • Apply Arc Consistency to each case, and repeat

SLIDE 20

Limitation of Systematic Approaches

  • Many CSPs (scheduling, DNA computing, etc.) are simply too big for systematic approaches
  • If you have 10^5 vars with dom(vari) = 10^4:

Systematic Search: Branching factor b = ?  Solution depth d = ?  Complexity = ?
Constraint Network: Size = ?  Complexity of AC = ?

SLIDE 21

Limitation of Systematic Approaches

  • Many CSPs are simply too big for systematic approaches
  • If you have 10^5 vars with dom(vari) = 10^4:

Systematic Search: Branching factor b = 10^4; Solution depth d = 10^5; Time Complexity = O((10^4)^(10^5))
Constraint Network: Size = O(10^5 + 10^5 · 10^5); Time Complexity of AC = O((10^5)^2 · (10^4)^3)
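
A back-of-the-envelope check of these magnitudes, purely illustrative:

import math

n, d = 10**5, 10**4
digits_dfs = n * math.log10(d)      # log10 of d**n
network_size = n + n * n            # variables + worst-case binary constraints
ac_time = n**2 * d**3               # O(n^2 d^3)
print(f'DFS explores ~10^{digits_dfs:.0f} states')           # ~10^400000
print(f'network size ~{network_size:.1e}, AC ~{ac_time:.1e} steps')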

SLIDE 22

Learning Goals for CSP

  • Define possible worlds in terms of variables and their domains
  • Compute the number of possible worlds on real examples
  • Specify constraints to represent real-world problems, differentiating between:
  • Unary and k-ary constraints
  • List vs. function format
  • Verify whether a possible world satisfies a set of constraints (i.e., whether it is a model, a solution)
  • Implement the Generate-and-Test algorithm. Explain its disadvantages.
  • Solve a CSP by search (specify neighbors, states, start state, goal state). Compare strategies for CSP search. Implement pruning for DFS search in a CSP.
  • Define/read/write/trace/debug the arc consistency algorithm. Compute its complexity and assess its possible outcomes.
  • Define/read/write/trace/debug domain splitting and its integration with arc consistency.

SLIDE 23

Lecture Overview

  • Recap
  • Domain Splitting for Arc Consistency
  • Local Search
  • Stochastic Local Search (SLS)
  • Comparing SLS

SLIDE 24
Local Search: Motivation

  • Solving CSPs is NP-hard
  • Search space for many CSPs is huge
  • Exponential in the number of variables
  • Even arc consistency with domain splitting is often not enough
  • Alternative: local search
  • use algorithms that search the space locally, rather than systematically
  • often finds a solution quickly, but is not guaranteed to find a solution if one exists (thus, cannot prove that there is no solution)

SLIDE 25

Local Search

  • Idea:
  • Consider the space of complete assignments of values to variables (all possible worlds)
  • Neighbours of the current node are similar variable assignments
  • Move from one node to another according to a function that scores how good each assignment is
  • Useful method in practice
    – Best available method for many constraint satisfaction and constraint optimization problems

SLIDE 26

Local Search Problem: Definition

Definition: A local search problem consists of a:

CSP: a set of variables, domains for these variables, and constraints on their joint values. A node in the search space will be a complete assignment to all of the variables.

Neighbour relation: an edge in the search space will exist when the neighbour relation holds between a pair of nodes.

Scoring function: h(n), judges the cost of a node (want to minimize)
  • E.g. the number of constraints violated in node n.
  • E.g. the cost of a state in an optimization context.

SLIDE 27

Local Search Problem: Example

Definition: A local search problem consists of a:

CSP: a set of variables {V1, …, Vn}, each with domain Dom(Vi), and constraints on their joint values. A node in the search space will be a complete assignment to all of the variables.

Neighbour relation: the neighbours of a node with assignment A = {V1/v1, …, Vn/vn} are the nodes with assignments that differ from A in one value only.

Scoring function: h(n), judges the cost of a node (want to minimize); a sketch follows below
  • E.g. the number of constraints violated in node n.
  • E.g. the cost of a state in an optimization context.
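
A small Python sketch of this h and neighbour relation, using the same (x, y) → constraint-function representation as the earlier sketches (with each constraint listed once here, so violations are not double-counted):

def h(assignment, constraints):
    # Scoring function: number of violated constraints (0 means a solution)
    return sum(not c(assignment[x], assignment[y])
               for (x, y), c in constraints.items())

def neighbours(assignment, domains):
    # Assignments that differ from this one in exactly one variable's value
    for var, value in assignment.items():
        for other in domains[var]:
            if other != value:
                yield {**assignment, var: other}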

SLIDE 28

Search Space

[Figure: a node V1 = v1, V2 = v1, …, Vn = v1 and its neighbours, each differing in a single assignment, e.g. V1 = v2, V2 = v1, …, Vn = v1 or V1 = v1, V2 = vn, …, Vn = v1]

  • Only the current node is kept in memory at each step.
  • Very different from the systematic tree search approaches we have seen so far!
  • Local search does NOT backtrack!

SLIDE 29

Iterative Best Improvement

  • How to determine the neighbour node to be selected?
  • Iterative Best Improvement: select the neighbour that optimizes some evaluation function
  • Which strategy would make sense? Select the neighbour with…
  • A. Maximal number of constraint violations
  • B. Similar number of constraint violations as the current state
  • C. No constraint violations
  • D. Minimal number of constraint violations

SLIDE 30

Iterative Best Improvement

  • How to determine the neighbour node to be selected?
  • Iterative Best Improvement: select the neighbour that optimizes some evaluation function
  • Which strategy would make sense? Select the neighbour with… the minimal number of constraint violations.
  • Evaluation function: h(n) = number of constraint violations in state n
  • Greedy descent: evaluate h(n) for each neighbour, pick the neighbour n with minimal h(n) – minimize the number of unsatisfied constraints
  • Hill climbing: equivalent algorithm for maximization problems
  • Here: maximize the number of satisfied constraints

SLIDE 31

Hill Climbing

NOTE: Everything that will be said for Hill Climbing is also true for Greedy Descent

[Figure: evaluation/scoring function (number of satisfied constraints) plotted over the current state/possible world as a function of X1 and X2; assume both vars have integer domains]

SLIDE 32

Example: N-Queens

  • Put n queens on an n × n board with no two queens on the same row, column, or diagonal (i.e., attacking each other)

[Figure: positions a queen can attack]

SLIDE 33

Example: N-Queens as a Local Search Problem

CSP: N-queens CSP
  • One variable per column; domains {1, …, N} ⇒ the row where the queen in the i-th column sits
  • Constraints: no two queens in the same row, column or diagonal

Neighbour relation: the value of a single column differs

Scoring function: number of constraint violations (i.e., number of attacks); see the sketch below
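
The scoring function for N-queens, sketched in Python (conflicts is an illustrative name; rows[i] is the row of the queen in column i):

def conflicts(rows):
    # h = number of attacking pairs: same row, or same diagonal
    n, attacks = len(rows), 0
    for i in range(n):
        for j in range(i + 1, n):
            if rows[i] == rows[j] or abs(rows[i] - rows[j]) == j - i:
                attacks += 1
    return attacks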

SLIDE 34

Example: Greedy Descent for N-Queens

For each column, assign the queen randomly to a row (a number between 1 and N)
Repeat
  • For each column & each number: evaluate how many constraint violations changing the assignment to that number would yield
  • Choose the column and number that lead to the fewest violated constraints; change it
Until solved

Each cell lists h (i.e., #constraints unsatisfied) if you move the queen in that column into that cell. (This loop is sketched below.)
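
A sketch of this greedy-descent loop, built on the conflicts function above (max_steps is an illustrative safety bound, not from the slides):

import random

def greedy_descent_nqueens(n, max_steps=1000):
    rows = [random.randrange(n) for _ in range(n)]   # random initial assignment
    for _ in range(max_steps):
        current = conflicts(rows)
        if current == 0:
            return rows                              # solved
        # Evaluate h for every single-queen move (column, new row)
        moves = []
        for col in range(n):
            old = rows[col]
            for row in range(n):
                if row != old:
                    rows[col] = row
                    moves.append((conflicts(rows), col, row))
            rows[col] = old
        best_h, col, row = min(moves)
        if best_h >= current:
            return None       # local minimum: no move reduces h
        rows[col] = row       # take the move with fewest violations
    return None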

SLIDE 35

[Figure: board states labelled h = 5, h = ?, h = ?]

SLIDE 36

[Figure: board states labelled h = 5, h = ?, h = ?]

SLIDE 37

General Local Search Algorithm

Procedure Local-Search(V, dom, C)
   Inputs
      V: a set of variables
      dom: a function such that dom(X) is the domain of variable X
      C: set of constraints to be satisfied
   Output: complete assignment that satisfies the constraints
   Local
      A[V]: an array of values indexed by V
   repeat
      for each variable X do                       (random initialization)
         A[X] ← a random value in dom(X)
      while (stopping criterion not met & A is not a satisfying assignment)
         Select a variable Y and a value v ∈ dom(Y)   (local search step)
         Set A[Y] ← v
      if (A is a satisfying assignment) then
         return A
   until termination

SLIDE 38

General Local Search Algorithm

(Local-Search pseudocode as on the previous slide.)

Local search step: based on local information, e.g., for each neighbour evaluate how many constraints are unsatisfied. Greedy descent: select Y and v to minimize #unsatisfied constraints at each step.
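
A runnable Python rendering of this pseudocode, with the "select Y and v" step instantiated as greedy descent over the h sketched earlier (ties broken at random); max_steps and restarts are illustrative knobs standing in for the stopping/termination criteria:

import random

def local_search(variables, dom, constraints, max_steps=1000, restarts=10):
    for _ in range(restarts):                          # repeat ... until termination
        A = {X: random.choice(sorted(dom[X])) for X in variables}   # random init
        steps = 0
        while steps < max_steps and h(A, constraints) > 0:   # stopping criterion
            # Local search step: pick Y, v minimizing #unsatisfied constraints
            _, _, Y, v = min((h({**A, Y: v}, constraints), random.random(), Y, v)
                             for Y in variables for v in dom[Y])
            A[Y] = v
            steps += 1
        if h(A, constraints) == 0:
            return A                                   # satisfying assignment
    return None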

SLIDE 39

Example: N-Queens

[Figure: a run of greedy descent on 8-queens: initial state h = 17; after 5 steps, h = 1]

Each cell lists h (i.e. #constraints unsatisfied) if you move the queen from that column into the cell

SLIDE 40
Example: N-Queens

  • Which move should we pick in this situation?

SLIDE 41

The Problem of Local Minima

  • Which move should we pick in this situation?
  • Current cost: h = 1
  • No single move can improve on this
  • In fact, every single move only makes things worse (h ≥ 2)
  • Locally optimal solution
  • Since we are minimizing: local minimum

SLIDE 42

Problems with Iterative Best Improvement

  • It gets misled by locally optimal points (local maxima/minima)

[Figure: evaluation function plotted over X1 and X2, with local optima]

Most research in local search is about finding effective mechanisms for escaping local optima.

SLIDE 43

Lecture Overview

  • Recap
  • Domain Splitting for Arc Consistency
  • Local Search
  • Stochastic Local Search (SLS)
  • Comparing SLS

SLIDE 44

Stochastic Local Search

  • We will use greedy steps to find local minima
  • Move to neighbour with best evaluation function value
  • We will use randomness to escape local minima

SLIDE 45
Stochastic Local Search (SLS) for CSPs

  • Start node: random assignment
  • Goal: assignment with zero unsatisfied constraints
  • Heuristic function h: number of unsatisfied constraints
    – Lower values of the function are better
  • Stochastic local search is a mix of:
    – Greedy descent: move to the neighbour with lowest h
    – Random walk: take some random steps, i.e., move to a neighbour with some randomness
    – Random restart: reassigning values to all variables

(One way to mix these steps is sketched below.)
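
One common way to mix greedy descent with random walk, sketched in Python on top of the earlier h; p_random is an illustrative parameter, not from the slides:

import random

def sls_step(A, variables, dom, constraints, p_random=0.2):
    # With probability p_random take a random-walk step, else a greedy step
    if random.random() < p_random:
        Y = random.choice(list(variables))             # random walk
        v = random.choice(sorted(dom[Y]))
    else:
        # greedy descent: neighbour with lowest h, ties broken at random
        _, _, Y, v = min((h({**A, Y: v}, constraints), random.random(), Y, v)
                         for Y in variables for v in dom[Y])
    A[Y] = v
    return A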

SLIDE 46

General SLS Algorithm

(Local-Search pseudocode as above.)

Extreme case 1: random sampling. Restart at every step: the stopping criterion is always "true", so every iteration is a random restart.

SLIDE 47

General SLS Algorithm

(Local-Search pseudocode as above.)

Extreme case 2: greedy descent. Select the neighbour with the best h value (select at random among neighbours with the same h).

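Under the local_search sketch given earlier, these two extremes correspond to parameter settings (illustrative, not the slides' notation):

# Extreme case 1: random sampling, i.e. restart at every step
# (the inner loop's stopping criterion is immediately true)
def random_sampling(V, dom, C):
    return local_search(V, dom, C, max_steps=0, restarts=10_000)

# Extreme case 2: greedy descent, i.e. never restart; always take the
# best-h neighbour, breaking ties at random
def greedy_descent(V, dom, C):
    return local_search(V, dom, C, max_steps=10_000, restarts=1)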

SLIDE 48

Tracing SLS Algorithms in AIspace

  • Let's look at these algorithms in AIspace:
  • Greedy Descent
  • Random Sampling
  • Simple scheduling problem 2 in AIspace