Computer Science CPSC 322
Lecture 7
Search Wrap Up, Intro to Constraint Satisfaction Problems
1
Lecture Overview
Lecture 6 (more than a recap)
2
We showed that A* is optimal and complete, under certain conditions
3
A* properties
We showed that A* is optimal and complete, under certain conditions
4
A* properties
Which of the following conditions is not needed?
h(n) ≤ cost of the shortest path from n to a goal
We showed that A* is optimal and complete, under certain conditions
5
A* properties
Which of the following conditions is not needed?
h(n) ≤ cost of the shortest path from n to a goal
Let p’ be a suboptimal solution path, p* the optimal solution path, and p” any subpath of p* on the frontier.
We know that f(p*) < f(p’) because at a goal node h = 0, so f(goal) = cost(goal): both p* and p’ end at a goal, and p* is cheaper.
And because h is admissible (see proof in previous class), f(p”) ≤ f(p*).
Thus f(p”) ≤ f(p*) < f(p’): any subpath of the optimal solution path will be expanded before p’.
6
Run A* on this example (file “Astar” in the course syllabus) to see how A* starts off going down the suboptimal path (through N5) but then recovers and never expands it further, because there are always subpaths of the optimal path through N2 on the frontier with lower f-value.
7
A* is complete: it does not get caught in cycles.
Let f* be the cost of the optimal solution (unknown but finite if there exists a solution).
If every arc cost is at least ε > 0 (ε = minimal arc cost in the search graph), the f-value of paths going around a cycle keeps growing and would eventually exceed f*, so such paths stop being selected.
See how it works on the “misleading heuristic” problem in AISpace:
8
9
A* does not get caught in the cycle because the f-values of subpaths in the cycle eventually (at depth ≤ 55.4/6.9) exceed the cost of the optimal solution, 55.4 (N0->N6->N7->N8).
Cycle Checking
You can prune a path that ends in a node already on that path: this avoids cycles while leaving the search able to find multiple solutions.
In DFS-type algorithms a node on the expanded path was visited as part of the current path, not by a different path on the frontier, so if it is re-visited it is part of a cycle.
The check only looks at the current path (and with suitable marking it takes constant time, independent of path length).
10
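For concreteness, here is a minimal Python sketch (not from the slides; the graph, names, and goal test are illustrative) of DFS with cycle checking, where a neighbor is pruned if it already appears on the current path:

def dfs_with_cycle_check(graph, start, is_goal):
    # DFS over paths; prune any extension that revisits a node on the current path.
    frontier = [[start]]                      # stack of paths (LIFO)
    while frontier:
        path = frontier.pop()
        node = path[-1]
        if is_goal(node):
            return path
        for neighbor in graph.get(node, []):
            if neighbor in path:              # cycle check (linear in path length here;
                continue                      # marking nodes on the path makes it constant time)
            frontier.append(path + [neighbor])
    return None

# Tiny made-up graph containing the cycle N0 -> N2 -> N5 -> N3 -> N2
graph = {'N0': ['N2', 'N3'], 'N2': ['N5'], 'N5': ['N3'], 'N3': ['N2', 'G'], 'G': []}
print(dfs_with_cycle_check(graph, 'N0', lambda n: n == 'G'))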
Breadth First Search
Since BFS keeps multiple subpaths going, when a node is encountered for the second time it could be as part of expanding a different path (e.g. Node 2 while expanding N0->N3), so it is not necessarily a cycle.
11
Breadth First Search
The cycle for BFS happens when N2 is encountered for the second time while expanding the path N0->N2->N5->N3.
12
Depth First Search
Since DFS looks at one path at a time, when a node is encountered for the second time (e.g. Node 2 while expanding N0, N2, N5, N3) it is guaranteed to be part of a cycle.
13
If we only want one path to the solution, we can prune a path whose end node has already been reached via a previous path.
Multiple Path Pruning
14
If we only want one path to the solution, we can prune a path whose end node has already been reached via a previous path.
But can this pruning discard a path to the node that is shorter than the one found first?
Or are there algorithms that are guaranteed to always find the shortest path to any node in the search space?
Multiple Path Pruning
15
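A minimal sketch (illustrative, not from the slides) of multiple path pruning for a generic graph search: keep a set of nodes already reached by an expanded path and discard any later path that ends in one of them.

def search_with_multiple_path_pruning(graph, start, is_goal, pop):
    # Generic search; `pop` selects and removes the next path from the frontier
    # (FIFO -> BFS, LIFO -> DFS, min cost -> LCFS, min f -> A*).
    frontier = [[start]]
    closed = set()                            # end nodes of already expanded paths
    while frontier:
        path = pop(frontier)
        node = path[-1]
        if node in closed:                    # multiple path pruning: keep only the first path to a node
            continue
        closed.add(node)
        if is_goal(node):
            return path
        for neighbor in graph.get(node, []):
            frontier.append(path + [neighbor])
    return None

Whether the first path kept to each node is also the cheapest one is exactly the question the next slides address.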
“Whenever search algorithm X expands the first path p ending in node n, this is the lowest-cost path from the start node to n (if all costs ≥ 0).”
This is true for:
(i.e. algorithm X always finds the optimal path to any node n in the search space first)
16
“Whenever search algorithm X expands the first path p ending in node n, this is the lowest-cost path from the start node to n (if all costs ≥ 0).”
This is true for:
(i.e. algorithm X always finds the optimal path to any node n in the search space first)
17
construction
Below is the counter-example for A*: it expands the upper path to n first, so if we prune the second path at the bottom, we miss the optimal solution. Special conditions on the heuristic can recover the guarantee of LCFS for A*: the monotone restriction (see P&M text, Section 3.7.2).
18
Branch-and-Bound Search
One way to combine DFS with heuristic guidance from h(n) and f(n): keep an upper bound UB on the cost of the best solution found so far, and prune any path whose f-value reaches UB (pruning).
19
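A minimal Python sketch of this idea (illustrative; it assumes graph[node] is a list of (neighbor, arc_cost) pairs and h is a heuristic function): do DFS, remember the cost of the best solution found so far as an upper bound UB, and prune any path p with f(p) ≥ UB.

import math

def branch_and_bound(graph, start, is_goal, h, ub=math.inf):
    # DFS with f-based pruning; returns (best_cost, best_path).
    best_path = None

    def dfs(path, cost):
        nonlocal ub, best_path
        node = path[-1]
        if cost + h(node) >= ub:              # prune: this path cannot beat the best solution so far
            return
        if is_goal(node):
            ub, best_path = cost, list(path)  # found a better solution; tighten the bound
            return
        for neighbor, arc_cost in graph.get(node, []):
            if neighbor not in path:          # cycle check
                dfs(path + [neighbor], cost + arc_cost)

    dfs([start], 0)
    return ub, best_path

Passing a finite initial ub (a known overestimate of the optimal cost) is what the analysis slides below refer to when discussing completeness.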
Branch-and-Bound Analysis
20
When checking whether LB(p) ≥ UB, if the answer is yes but h(p) is an overestimate, we may prune a path that actually leads to the optimal solution.
Branch-and-Bound Analysis
21
Branch-and-Bound Analysis
22
Branch-and-Bound Analysis
IT DEPENDS on whether we can initialize UB to a finite value, i.e. whether we have a reliable overestimate of the solution cost; otherwise the search can keep following an infinite path, or go around in a cycle, forever.
23
Branch-and-Bound Search
One way to combine DFS with heuristic guidance: keep an upper bound UB on the cost of the best solution found so far, and prune any path whose f-value reaches UB (pruning).
24
            | Complete | Optimal | Time | Space
DFS         | N | N | O(b^m) | O(mb)
BFS         | Y | Y | O(b^m) | O(b^m)
IDS         | Y | Y | O(b^m) | O(mb)
LCFS (when arc costs available) | Y (costs > 0) | Y (costs >= 0) | O(b^m) | O(b^m)
Best First (when h available)   | N | N | O(b^m) | O(b^m)
A* (when arc costs > 0 and h admissible) | Y | Y | O(b^m), optimally efficient | O(b^m)
Branch-and-Bound | ? | ? | O(b^m) | O(mb)
(From uninformed, to uninformed but using arc cost, to informed / goal-directed.)
25
            | Complete | Optimal | Time | Space
DFS         | N | N | O(b^m) | O(mb)
BFS         | Y | Y | O(b^m) | O(b^m)
IDS         | Y | Y | O(b^m) | O(mb)
LCFS (when arc costs available) | Y (costs > 0) | Y (costs >= 0) | O(b^m) | O(b^m)
Best First (when h available)   | N | N | O(b^m) | O(b^m)
A* (when arc costs > 0 and h admissible) | Y | Y | O(b^m), optimally efficient | O(b^m)
Branch-and-Bound | N (Y with finite initial bound) | Y (if h admissible) | O(b^m) | O(mb)
(From uninformed, to uninformed but using arc cost, to informed / goal-directed.)
26
Dynamic Programming
Run one of the search algorithms we have seen so far on the backwards graph (arcs reversed), using the goal as the start state and n as the goal. [Example graph with nodes k, c, b, h, g, z and arc costs 2, 3, 1, 2, 4, 1]
27
Dynamic Programming
Run one of the search algorithms we have seen so far on the backwards graph (arcs reversed), using the goal as the start state and n as the goal. [Example graph with nodes k, c, b, h, g, z and arc costs 2, 3, 1, 2, 4, 1] Which algorithm should we use?
LCFS: it is the only one guaranteed to find the shortest path from s to any node, and it does not need h. We want multiple path pruning because we are only interested in the first, shortest path from each node to s.
Dynamic Programming
29
Run LCFS with MP pruning on the backwards graph (arcs reversed), using the goal as the start state.
To get the backward graph in AISpace: use “invert graph” in create mode.
LCFS on Inverted Graph (sample steps)
30
Dynamic Programming
31
Run LCFS with MP pruning on the backwards graph (arcs reversed), using the goal as the start state. Once we have dist(m), the cost to the goal, for every node, from any node n we just move to the neighbor m that minimizes cost(n, m) + dist(m), i.e. the neighbor that minimizes distance to goal.
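A minimal sketch of this idea (illustrative; it assumes the reversed graph is given as reversed_graph[node] = [(neighbor, arc_cost), ...]): LCFS with multiple path pruning from the goal gives dist(n) for every node, and the policy then just follows the cheapest neighbor.

import heapq

def dist_to_goal(reversed_graph, goal):
    # LCFS (Dijkstra-style) with multiple path pruning on the reversed graph.
    dist = {}
    frontier = [(0, goal)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if node in dist:                      # multiple path pruning: keep only the first (cheapest) path
            continue
        dist[node] = d
        for neighbor, arc_cost in reversed_graph.get(node, []):
            heapq.heappush(frontier, (d + arc_cost, neighbor))
    return dist

def next_step(graph, dist, node):
    # Greedy policy: move to the neighbor m minimizing cost(node, m) + dist(m).
    return min(graph[node], key=lambda mc: mc[1] + dist[mc[0]])[0]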
Lecture Overview
32
Iterative Deepening A* (IDA*)
Branch & Bound (B&B) can still get stuck in infinite (or extremely long) paths
Solution: apply the same idea as for Iterative Deepening
33
depth = 1, depth = 2, depth = 3, ...
Iterative Deepening DFS (IDS) in a Nutshell
– Depth-bounded depth-first search: for depth D, ignore any paths with longer length
If no goal, re-start from scratch and get to depth 2
If no goal, re-start from scratch and get to depth 3
If no goal, re-start from scratch and get to depth 4
34
Iterative Deepening A* (IDA*)
– But the “depth” bound is measured in terms of f (IDA* is a bit of a misnomer)
f(p) = cost(p) + h(p)
– Increase the bound to the minimum of the f-values that exceeded the previous bound
35
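A minimal Python sketch of IDA* (illustrative; it assumes graph[node] is a list of (neighbor, arc_cost) pairs and h is the heuristic): an f-bounded DFS that restarts with the smallest f-value that exceeded the previous bound.

import math

def ida_star(graph, start, is_goal, h):
    bound = h(start)                                   # initial bound: f of the start state
    while True:
        result = f_bounded_dfs(graph, [start], 0, bound, is_goal, h)
        if isinstance(result, list):
            return result                              # a solution path
        if result == math.inf:
            return None                                # no solution exists
        bound = result                                 # min f-value that exceeded the old bound

def f_bounded_dfs(graph, path, cost, bound, is_goal, h):
    node = path[-1]
    f = cost + h(node)
    if f > bound:
        return f                                       # report the f-value that broke the bound
    if is_goal(node):
        return path
    smallest_excess = math.inf
    for neighbor, arc_cost in graph.get(node, []):
        if neighbor in path:                           # cycle check
            continue
        result = f_bounded_dfs(graph, path + [neighbor], cost + arc_cost, bound, is_goal, h)
        if isinstance(result, list):
            return result
        smallest_excess = min(smallest_excess, result)
    return smallest_excess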
36
Numbers inside nodes are their f scores. The algorithm would have started with a bound of 1 (f of the start state). The current bound of 3 is the minimum of the f values found to exceed bound = 1 (i.e. 3 and 4) in that iteration.
4
37
f values found to exceed the bound of 4 in this iteration
6
38
8
6,7,8,12
39
8
11, 13, 12, 8. And so on…
Analysis of Iterative Deepening A* (IDA*)
It may re-expand paths multiple times (see slides on uninformed IDS)
40
            | Complete | Optimal | Time | Space
DFS         | N | N | O(b^m) | O(mb)
BFS         | Y | Y | O(b^m) | O(b^m)
IDS         | Y | Y | O(b^m) | O(mb)
LCFS (when arc costs available) | Y (costs > 0) | Y (costs >= 0) | O(b^m) | O(b^m)
Best First (when h available)   | N | N | O(b^m) | O(b^m)
A* (when arc costs > 0 and h admissible) | Y | Y | O(b^m), optimally efficient | O(b^m)
Branch-and-Bound | N (Y with finite initial bound) | Y (if h admissible) | O(b^m) | O(mb)
IDA* | ? | ? | ? | ?
Search Methods so far
(From uninformed, to uninformed but using arc cost, to informed / goal-directed.)
41
            | Complete | Optimal | Time | Space
DFS         | N | N | O(b^m) | O(mb)
BFS         | Y | Y | O(b^m) | O(b^m)
IDS         | Y | Y | O(b^m) | O(mb)
LCFS (when arc costs available) | Y (costs > 0) | Y (costs >= 0) | O(b^m) | O(b^m)
Best First (when h available)   | N | N | O(b^m) | O(b^m)
A* (when arc costs > 0 and h admissible) | Y | Y | O(b^m), optimally efficient | O(b^m)
Branch-and-Bound | N (Y with finite initial bound) | Y (if h admissible) | O(b^m) | O(mb)
IDA* (when arc costs > 0 and h admissible) | Y | Y | O(b^m) | O(mb)
Search Methods so far
(From uninformed, to uninformed but using arc cost, to informed / goal-directed.)
42
Heuristic DFS
Can we use heuristic information in DFS?
43
Heuristic DFS
Can we use heuristic information in DFS?
Yes: use it to decide the order in which neighbors are added to the frontier (the stack).
Can use heuristic guidance: h or f. A perfect heuristic f would solve the problem without any backtracking.
44
Heuristic DFS
Can we use heuristic information in DFS?
Yes: use it to decide the order in which neighbors are added to the frontier (the stack).
Can use heuristic guidance: h or f. A perfect heuristic f would solve the problem without any backtracking.
Does it have to be admissible?
45
Heuristic DFS
Can we use heuristic information in DFS?
Yes: use it to decide the order in which neighbors are added to the frontier (the stack).
Can use heuristic guidance: h or f. A perfect heuristic f would solve the problem without any backtracking.
Does the heuristic have to be admissible?
No: we are still doing DFS, i.e. following each path all the way to the end before trying any other, so the heuristic only influences which path is tried first.
46
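A minimal sketch of heuristic DFS (illustrative): ordinary DFS, except that when a path is expanded its neighbors are pushed onto the stack so that the lowest-h one is tried first. Since it is still DFS, the heuristic only affects the order in which paths are tried.

def heuristic_dfs(graph, start, is_goal, h):
    frontier = [[start]]                               # stack (LIFO), as in ordinary DFS
    while frontier:
        path = frontier.pop()
        node = path[-1]
        if is_goal(node):
            return path
        # Push neighbors in decreasing h, so the most promising (lowest h) is popped first
        for neighbor in sorted(graph.get(node, []), key=h, reverse=True):
            if neighbor not in path:                   # cycle check
                frontier.append(path + [neighbor])
    return None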
Heuristic DFS and More
some heuristic h’)
Yes
47
regular A*)?
Delete the worst paths (highest f) from the frontier (e.g. p1, .., pn below).
Back up the value of the deleted paths to a common ancestor (e.g. N below)
– a way to remember the potential value of the “forgotten” paths.
The corresponding subtrees get regenerated only when all other paths have been shown to be worse than the “forgotten” path
48
If we want to prune subpaths p1, p2, .., pn below and “back up” their value to their common ancestor p:

New h(p) = max[ Old h(p), min over i of ( (cost(pi) - cost(p)) + h(pi) ) ]

(cost(pi) - cost(p)) + h(pi) gives the estimated cost of the pruned subpath from p to pi.
Min [(cost(pi) - cost(p)) + h(pi)] gives the pruned subpath with the most promising estimated cost.
Taking the max with Old h(p) gives the tighter h value for p.
49
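For instance (made-up numbers): if Old h(p) = 5 and we prune p1 with cost(p1) - cost(p) = 3, h(p1) = 4 and p2 with cost(p2) - cost(p) = 2, h(p2) = 6, the estimates through the pruned subtrees are 3 + 4 = 7 and 2 + 6 = 8; their minimum is 7, and max(5, 7) = 7, so New h(p) = 7: p now remembers that nothing cheaper than 7 was found below it.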
Memory-bounded A*
Details of the algorithm are beyond the scope of this course, but the key idea is to keep the search depth manageable by the available memory.
MBA* is one of a family of algorithms for finding optimal solutions under memory limitations, choosing among a set of candidate solution paths, of which only a few fit in memory.
50
Fill in, for each algorithm, its Selection strategy, whether it is Complete and Optimal, and its Time and Space complexity:
DFS, BFS, IDS, LCFS, Best First, A*, B&B, IDA*, MBA*
Recap (Must Know How to Fill This)
51
            | Selection | Complete | Optimal | Time | Space
DFS         | LIFO | N | N | O(b^m) | O(mb)
BFS         | FIFO | Y | Y | O(b^m) | O(b^m)
IDS         | LIFO | Y | Y | O(b^m) | O(mb)
LCFS        | min cost | Y** | Y** | O(b^m) | O(b^m)
Best First  | min h | N | N | O(b^m) | O(b^m)
A*          | min f | Y** | Y** | O(b^m) | O(b^m)
B&B         | LIFO + pruning | Y** | Y** | O(b^m) | O(mb)
IDA*        | LIFO | Y** | Y** | O(b^m) | O(mb)
MBA*        | min f | Y** | Y** | O(b^m) | O(b^m)
** Needs conditions: you need to know what they are
Recap (Must Know How to Fill This)
52
            | Selection | Complete | Optimal | Time | Space
DFS         | LIFO | N | N | O(b^m) | O(mb)
BFS         | FIFO | Y | Y | O(b^m) | O(b^m)
IDS         | LIFO | Y | Y | O(b^m) | O(mb)
LCFS        | min cost | Y** | Y** | O(b^m) | O(b^m)
Best First  | min h | N | N | O(b^m) | O(b^m)
A*          | min f | Y** | Y** | O(b^m) | O(b^m)
B&B         | LIFO + pruning | Y** | Y** | O(b^m) | O(mb)
IDA*        | LIFO | Y** | Y** | O(b^m) | O(mb)
MBA*        | min f | Y** | Y** | O(b^m) | O(b^m)
Algorithms Often Used in Practice
** Needs conditions: you need to know what they are
53
Search in Practice
[Decision diagram for choosing among IDS, B&B, IDA*, and MBA*, based on: Many paths to solution, no ∞ paths? Informed? Large branching factor?]
These are indeed only general guidelines; specific problems might yield different choices.
Remember Deep Blue?
Deep Blue’s Results in the second tournament:
processors
per sec
positions per move reaching depth 14 routinely
Evaluation function (heuristic) based on 8000 features (e.g., sum of worth of pieces: pawn 1, rook 5, queen 10)
55
2001 (DMMT '01: Proceedings of the workshop on Data-driven methods in machine translation - Volume 14)
Research (2007)
finding salient curves.
to find heuristic functions that simultaneously are admissible, close to actual completion costs, and efficient to calculate…
both for optimal and suboptimal search, and also yields a better linear-space anytime heuristic search. RBFSCR is the first linear space best-first search robust enough to solve a variety of domains with varying operator costs.”
Learning Goals for Search
Identify real world examples that make use of deterministic, goal-driven search agents.
Assess the size of the search space of a given search problem.
Implement the generic solution to a search problem.
Apply basic properties of search algorithms: completeness, optimality, time and space complexity.
Select the most appropriate search algorithms for specific problems.
Define/read/write/trace/debug the different search algorithms covered.
Implement cycle checking and multiple path pruning for different algorithms.
Construct heuristic functions for specific search problems.
Formally prove A* optimality.
Understand the general ideas behind Dynamic Programming and MBA*.
57
[Course map: Environment (Static / Sequential) x Deterministic / Stochastic. Problem types: Query, Planning, Constraint Satisfaction. Representations: Logics, STRIPS, Vars + Constraints, Belief Nets, Decision Nets, Markov Processes. Reasoning techniques: Search, Arc Consistency, Variable Elimination, Value Iteration.]
First Part of the Course
58
Standard vs. Specialized Search
supports three problem-specific routines:
various problems
59
[Course map: Environment (Static / Sequential) x Deterministic / Stochastic. Problem types: Query, Planning, Constraint Satisfaction. Representations: Logics, STRIPS, Vars + Constraints, Belief Nets, Decision Nets, Markov Processes. Reasoning techniques: Search, Arc Consistency, Variable Elimination, Value Iteration.]
60
We will look at Search in Specific R&R Systems
61
[Course map: Environment (Static / Sequential) x Deterministic / Stochastic. Problem types: Query, Planning, Constraint Satisfaction. Representations: Logics, STRIPS, Vars + Constraints, Belief Nets, Decision Nets, Markov Processes. Reasoning techniques: Search, Arc Consistency, Variable Elimination, Value Iteration.]
We’ll start from CSPs
62
Lecture Overview
Lecture 6 (more than a recap)
63
We will look at Search for CSPs
64
Lecture Overview
65
CSPs: Crossword Puzzles - Proverb
Source: Michael Littman
66
Constraint Satisfaction Problems (CSP)
– state is defined by a set of variables Vi with values from domain Di
– goal test is a set of constraints specifying:
  1. allowable combinations of values for subsets of variables (hard constraints)
  2. preferences over values of variables (soft constraints)
67
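To make this concrete, here is a minimal Python sketch (illustrative, not the course's notation) of a CSP with variables, domains, and hard constraints, plus a check of whether a complete assignment satisfies all constraints:

# Variables and their domains
domains = {'A': [1, 2, 3], 'B': [1, 2, 3], 'C': [1, 2, 3]}

# Hard constraints: (scope, predicate over the values of the variables in the scope)
constraints = [
    (('A', 'B'), lambda a, b: a < b),       # A must be less than B
    (('B', 'C'), lambda b, c: b != c),      # B and C must differ
]

def satisfies_all(assignment, constraints):
    # True iff the complete assignment satisfies every hard constraint.
    return all(pred(*(assignment[v] for v in scope)) for scope, pred in constraints)

print(satisfies_all({'A': 1, 'B': 2, 'C': 3}, constraints))   # True
print(satisfies_all({'A': 2, 'B': 2, 'C': 3}, constraints))   # False (violates A < B)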
Dimensions of Representational Complexity (from lecture 2)
Inference / Planning)
Some other important dimensions of complexity:
Explicit State vs. Features (Lecture 2)
How do we model the environment?
world
2^30 = 1,073,741,824 states
69
Variables/Features and Possible Worlds
Soon, however, we will give a broader definition of state, so it is best to start distinguishing the two concepts.
70
Example (Lecture 2)
Mars Explorer Example
Variables and their domains:
  Weather: {S, C}
  Temperature: [-40, 40]
  Longitude: [0, 359]
  Latitude: [0, 179]
One possible world (state): {S, -30, 320, 210}
Number of possible (mutually exclusive) worlds (states): 2 x 81 x 360 x 180
Product of cardinality of each domain … always exponential in the number of variables
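(With these domain sizes, that product is 2 x 81 x 360 x 180 = 10,497,600 possible worlds.)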
71
Constraint Satisfaction Problems (CSP)
This allows general-purpose algorithms with more power than standard search algorithms: they can exploit the specific representation of the problem and the structure provided by the goal's set of constraints, *not* treated as a black box.
72