SLIDE 1 复旦大学大数据学院
School of Data Science, Fudan University
Informed Search
魏忠钰 (Zhongyu Wei)
March 6th, 2019
SLIDE 2
Informed Search
▪ In uninformed search, we never "look ahead" to the goal. E.g., we don't consider the cost of getting to the goal from the end of the current path.
▪ Often we have some other knowledge about the merit of nodes.
SLIDE 3
Search Heuristics
▪ A heuristic is:
▪ A function that estimates how close a state is to a goal
▪ Designed for a particular search problem
▪ Examples: Manhattan distance, Euclidean distance for pathing
▪ Manhattan distance: |y1 − y2| + |z1 − z2|
▪ Euclidean distance: sqrt((y1 − y2)^2 + (z1 − z2)^2)
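The two distance heuristics can be written directly as functions. A minimal sketch; the coordinate names y and z follow the slide's formulas, and the points are hypothetical.

```python
import math

def manhattan(p1, p2):
    """Manhattan distance: |y1 - y2| + |z1 - z2| (grid movement)."""
    (y1, z1), (y2, z2) = p1, p2
    return abs(y1 - y2) + abs(z1 - z2)

def euclidean(p1, p2):
    """Euclidean distance: sqrt((y1 - y2)^2 + (z1 - z2)^2) (straight line)."""
    (y1, z1), (y2, z2) = p1, p2
    return math.sqrt((y1 - y2) ** 2 + (z1 - z2) ** 2)

# For the same pair of points, Euclidean never exceeds Manhattan,
# e.g. manhattan((0, 0), (3, 4)) = 7 while euclidean((0, 0), (3, 4)) = 5.0
```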
SLIDE 4
Heuristic
SLIDE 5
Example: Pancake Problem
Cost: Number of pancakes flipped
SLIDE 6
Example: Heuristic Function
Heuristic: the number of the largest pancake that is still out of place
[Figure: pancake-flip state space, each state labeled with its h(x) value]
SLIDE 7
Greedy Search
SLIDE 8
Example: Heuristic Function
SLIDE 9
Greedy Search
▪ Expand the node that seems closest…
SLIDE 10
Greedy Search
▪ What can go wrong?
▪ From Iasi to Fagaras
SLIDE 11
Greedy Search
▪ What can go wrong?
▪ From Iasi to Fagaras
SLIDE 12
Greedy Search
▪ Strategy: expand a node that you think is closest to a goal state
▪ Heuristic: estimate of distance to nearest goal for each state
▪ A common case:
▪ Best-first takes you straight to the (wrong) goal
▪ Worst-case: like a badly-guided DFS
▪ Not complete
▪ Not optimal
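The greedy strategy can be sketched as a priority queue ordered by h(n) alone, ignoring the path cost g(n). A minimal sketch on a hypothetical graph (with a visited set so it terminates on finite graphs); note how it commits to whichever node looks closest and can return a suboptimal path.

```python
import heapq

def greedy_search(start, goal, neighbors, h):
    """Greedy best-first search: fringe ordered by h(n) only.

    neighbors(state) yields (successor, step_cost) pairs;
    h(state) is the heuristic estimate of distance to the goal."""
    fringe = [(h(start), start, [start])]
    visited = set()
    while fringe:
        _, node, path = heapq.heappop(fringe)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, _cost in neighbors(node):
            if nxt not in visited:
                heapq.heappush(fringe, (h(nxt), nxt, path + [nxt]))
    return None
```

On a graph where S→A→G costs 6 but S→B→G costs 3, a heuristic with h(A) < h(B) sends greedy down the expensive branch — exactly the "straight to the (wrong) goal" behavior above.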
SLIDE 13
A* Search
▪ Take into account the cost of getting to the node as well as our estimate of the cost of getting to the goal from the node.
▪ Evaluation function f(n): f(n) = g(n) + h(n)
▪ g(n) is the cost of the path represented by node n
▪ h(n) is the heuristic estimate of the cost of achieving the goal from n.
SLIDE 14 Quiz: Combining UCS and Greedy
- Uniform-cost orders by path cost, or backward cost g(n)
- Greedy orders by goal proximity, or forward cost h(n)
- A* Search orders by the sum: f(n) = g(n) + h(n)
[Figure: quiz graph with start S, goal G, intermediate nodes a–e, edge costs, and heuristic values h(n) at each node]
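A* is a small change to the greedy loop: order the fringe by f(n) = g(n) + h(n) instead of h(n) alone. A minimal sketch on a hypothetical graph (the quiz's own graph is not fully reproduced here):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: fringe ordered by f(n) = g(n) + h(n).

    Returns (cost, path) to the goal, or None if unreachable.
    Re-expands a state only when a strictly cheaper path to it is found."""
    fringe = [(h(start), 0, start, [start])]  # (f, g, state, path)
    best_g = {}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue  # already reached this state at least as cheaply
        best_g[node] = g
        for nxt, cost in neighbors(node):
            heapq.heappush(fringe,
                           (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None
```

With an admissible heuristic, A* rejects the tempting-but-expensive branch that greedy took, because the large g term eventually dominates.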
SLIDE 15 Is A* Optimal?
- What went wrong?
- Actual bad goal cost < estimated good goal cost
- We need estimates to be less than actual costs!
[Figure: S →1→ A →3→ G, plus a direct edge S →5→ G; h(S) = 7, h(A) = 6, h(G) = 0]

Here h(A) = 6 overestimates the true cost of 3 from A, so A* pops the direct goal (f = 5) before A (f = 7) and returns the suboptimal path of cost 5 instead of 4.
SLIDE 16
Admissible Heuristics
SLIDE 17 Idea: Admissibility
Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the fringe
Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs
SLIDE 18 Admissible Heuristics
- A heuristic h is admissible (optimistic) if: h(n) ≤ h*(n) for every node n,
- where h*(n) is the true cost to a nearest goal
- Examples:
- Coming up with admissible heuristics is most of what's involved in using A* in practice.
SLIDE 19
Optimality of A* Tree Search
SLIDE 20 Optimality of A* Tree Search Assume:
- A is an optimal goal node
- B is a suboptimal goal node
- h is admissible
Claim:
- A will be visited before B
SLIDE 21
Optimality of A* Tree Search: Blocking
Proof:
▪ Imagine B is on the fringe
▪ Some ancestor n of A is on the fringe, too
▪ Claim: n will be expanded before B
▪ f(n) is less than or equal to f(A):
  f(n) = g(n) + h(n)   (definition of f-cost)
  f(n) ≤ g(A)          (admissibility: h(n) is at most the remaining cost to A along the optimal path)
  g(A) = f(A)          (h = 0 at a goal)
SLIDE 22
Optimality of A* Tree Search: Blocking
Proof:
▪ Imagine B is on the fringe
▪ Some ancestor n of A is on the fringe, too
▪ Claim: n will be expanded before B
▪ f(n) is less than or equal to f(A)
▪ f(A) is less than f(B): f(A) = g(A) < g(B) = f(B), since B is suboptimal and h = 0 at any goal
SLIDE 23
Optimality of A* Tree Search: Blocking
Proof:
▪ Imagine B is on the fringe
▪ Some ancestor n of A is on the fringe, too (maybe A itself!)
▪ Claim: n will be expanded before B
  ▪ f(n) is less than or equal to f(A)
  ▪ f(A) is less than f(B)
  ▪ So n expands before B
▪ All ancestors of A expand before B
▪ A expands before B
▪ A* search is optimal
SLIDE 24
Properties of A*
[Figure: fringe growth for Uniform-Cost vs A*]
SLIDE 25 UCS vs A* Contours
- Uniform-cost expands equally in all "directions"
- A* expands mainly toward the goal, but does hedge its bets to ensure optimality
- Expand all the nodes with f less than or equal to the optimal cost

[Figure: expansion contours from Start to Goal for UCS vs A*]
SLIDE 26
A* History
▪ Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) first described the algorithm in 1968.
▪ A1 -> A2 -> A* (Optimal)
SLIDE 27 A* Applications
- Video games
- Pathing / routing problems
- Resource planning problems
- Robot motion planning
- Language analysis
- Machine translation
- Speech recognition
- …
SLIDE 28
Creating Heuristics
SLIDE 29 Creating Admissible Heuristics
▪ Most of the work in solving hard search problems optimally is in coming up with admissible heuristics
▪ Often, admissible heuristics are solutions to relaxed problems, where new actions are available
▪ Inadmissible heuristics are often useful too
SLIDE 30
Example: 8 Puzzle

[Figure: Start State, Goal State, Actions]
SLIDE 31
8 Puzzle I
▪ Heuristic: number of tiles misplaced
▪ Why is it admissible?
▪ h(start) = 8

[Figure: Start State, Goal State]
SLIDE 32
8 Puzzle II
▪ What if we had an easier 8-puzzle where any tile could slide in any direction at any time, ignoring other tiles?
▪ Total Manhattan distance
▪ Why is it admissible?
▪ h(start) = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18

[Figure: Start State, Goal State]
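Both 8-puzzle heuristics can be checked concretely. The slide's board figure is not reproduced in the text, so this sketch uses a standard 8-puzzle instance that gives the same values, h1 = 8 and h2 = 18:

```python
# Assumed board layout (0 = the blank); chosen to match the slide's
# reported values h1(start) = 8 and h2(start) = 18.
START = ((7, 2, 4), (5, 0, 6), (8, 3, 1))
GOAL = ((0, 1, 2), (3, 4, 5), (6, 7, 8))

def misplaced_tiles(state, goal=GOAL):
    """h1: number of tiles (excluding the blank) not on their goal square."""
    return sum(1 for i in range(3) for j in range(3)
               if state[i][j] != 0 and state[i][j] != goal[i][j])

def total_manhattan(state, goal=GOAL):
    """h2: sum over tiles of Manhattan distance to the tile's goal square."""
    pos = {goal[i][j]: (i, j) for i in range(3) for j in range(3)}
    total = 0
    for i in range(3):
        for j in range(3):
            t = state[i][j]
            if t != 0:
                gi, gj = pos[t]
                total += abs(i - gi) + abs(j - gj)
    return total
```

Both are admissible because each counted unit corresponds to at least one move the real puzzle must make; h2 dominates h1 on every state.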
SLIDE 33
How to design admissible heuristic functions?
SLIDE 34
From relaxed problems
A tile can move from square A to square B if A is adjacent to B and B is blank.
▪ Constraint 1: A and B are adjacent
▪ Constraint 2: B is blank
▪ Relaxed problem 1: A tile can move from square A to square B if A is adjacent to B.
▪ Relaxed problem 2: A tile can move from square A to square B if B is blank.
▪ Relaxed problem 3: A tile can move from square A to square B.
Heuristic functions can be generated automatically from a formal expression of the original problem!
SLIDE 35 From sub-problems
▪ The task is to get tiles 1, 2, 3, and 4 into their correct positions, without worrying about what happens to the other tiles.
▪ The cost of solving the sub-problem is definitely no more than the cost of solving the complete problem, so it yields an admissible heuristic.
SLIDE 36
The comparison of different heuristic functions
▪ Effective branching factor (c*) is computed from the solution depth d and the number of generated nodes N in the tree:
  N + 1 = 1 + c* + (c*)^2 + ⋯ + (c*)^d
▪ h1: # of tiles misplaced
▪ h2: Total Manhattan distance
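Given N and d, c* can be recovered numerically, e.g. by bisection on the polynomial above. A sketch; the test values (52 nodes at depth 5, giving c* ≈ 1.92) are a standard worked example, not from this slide:

```python
def effective_branching_factor(n_nodes, depth, tol=1e-6):
    """Solve N + 1 = 1 + c + c^2 + ... + c^d for c by bisection.

    n_nodes: number of nodes generated (N); depth: solution depth (d)."""
    target = n_nodes + 1

    def total(c):
        # Size of a uniform tree of branching factor c and depth d.
        return sum(c ** k for k in range(depth + 1))

    lo, hi = 1.0, float(n_nodes)  # c* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

A better heuristic yields a smaller c* for the same problem, which is why c* is a convenient single-number comparison between h1 and h2.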
SLIDE 37
Trivial Heuristics, Dominance
▪ Dominance: ha ≥ hc if ∀n: ha(n) ≥ hc(n)
▪ Heuristics form a semi-lattice:
▪ Max of admissible heuristics is admissible
▪ Trivial heuristics
▪ Bottom of lattice is the zero heuristic (what does this give us?) ▪ Top of lattice is the exact heuristic
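The "max of admissible heuristics is admissible" fact is easy to operationalize: combine component heuristics pointwise. A sketch with hypothetical heuristic values:

```python
def max_heuristic(*heuristics):
    """Pointwise max of heuristics. If every component is admissible, so is
    the max (each value is still <= the true cost), and the combination
    dominates every component."""
    return lambda state: max(h(state) for h in heuristics)

# Hypothetical component heuristics over two states:
h1 = {'A': 2, 'B': 4}.get
h2 = {'A': 3, 'B': 1}.get
h = max_heuristic(h1, h2)  # h('A') = 3, h('B') = 4
```

This sits between the lattice's extremes: the zero heuristic (bottom, degenerates A* to UCS) and the exact heuristic h* (top, expands only optimal-path nodes).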
SLIDE 38
8 Puzzle III
▪ How about using the actual cost as a heuristic?
▪ Would it be admissible?
▪ Yes
▪ Would we save on nodes expanded?
▪ Yes
▪ What’s wrong with it?
▪ More computational cost.
▪ With A*: a trade-off between quality of estimate and work per node
▪ As heuristics get closer to the true cost, you will expand fewer nodes but usually do more work per node to compute the heuristic itself
SLIDE 39
Learning for heuristic functions
▪ A heuristic can also be learned from features of a state: with features y1(o), …, yn(o) of state o and learned weights d1, …, dn,
  h(o) = d1·y1(o) + ⋯ + dn·yn(o)
SLIDE 40
Tree Search: Extra Work!
▪ Failure to detect repeated states can cause exponentially more work.

[Figure: a small search tree vs the corresponding state graph]
SLIDE 41 Graph Search
▪ In BFS, for example, we shouldn’t bother expanding the circled nodes (why?)
[Figure: BFS search tree rooted at S, with already-expanded (circled) states repeated deeper in the tree]
SLIDE 42
Graph Search
▪ Idea: never expand a state twice
▪ How to implement:
  ▪ Tree search + set of expanded states ("closed set")
  ▪ Expand the search tree node-by-node, but...
  ▪ Before expanding a node, check to make sure its state has never been expanded before
  ▪ If not new, skip it; if new, add it to the closed set
▪ Important: store the closed set as a set, not a list
▪ Can graph search wreck completeness? Why/why not?
  ▪ No: we only skip states that have already been expanded, so any reachable goal is still found
▪ How about optimality?
SLIDE 43
A* Gone Wrong with an admissible heuristic

State space graph (one reading of the figure): S→A = 1, S→B = 1, A→C = 1, B→C = 2, C→G = 3, with h(S) = 2, h(A) = 4, h(B) = 1, h(C) = 1, h(G) = 0

Search tree (g+h): S (0+2); children A (1+4) and B (1+1); B's child C (3+1) with child G (6+0); A's child C (2+1) with child G (5+0)

Graph search expands B's copy of C first (f = 4 beats A's f = 5), closes C, and never reconsiders the cheaper path through A, so it returns the suboptimal goal at cost 6 instead of 5.
SLIDE 44
Consistency of Heuristics
▪ Main idea: estimated heuristic costs ≤ actual costs
▪ Admissibility: heuristic cost ≤ actual cost to goal
  ▪ h(A) ≤ actual cost from A to G
▪ Consistency: heuristic "arc" cost ≤ actual cost for each arc
  ▪ h(A) − h(C) ≤ cost(A to C)
▪ Consequences of consistency:
  ▪ The f value along a path never decreases: h(A) ≤ cost(A to C) + h(C)
  ▪ A* graph search is optimal
[Figure: A →1→ C →3→ G, with h(A) = 4, h(C) = 1; h(A) − h(C) = 3 > cost(A to C) = 1, so h is admissible but not consistent]
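Both properties can be checked mechanically over a small graph. A sketch using an A–C–G example as read from the slide's figure (the exact edge costs and h-values are assumptions):

```python
def is_admissible(h, h_star):
    """h(n) <= h*(n) for every state, where h* is the true cost-to-goal."""
    return all(h[s] <= h_star[s] for s in h)

def is_consistent(edges, h):
    """h(u) - h(v) <= cost(u, v) for every directed arc (u, v, cost)."""
    return all(h[u] - h[v] <= c for u, v, c in edges)

# Assumed reading of the figure: A --1--> C --3--> G
edges = [('A', 'C', 1), ('C', 'G', 3)]
h = {'A': 4, 'C': 1, 'G': 0}
h_star = {'A': 4, 'C': 3, 'G': 0}  # true costs: A->C->G = 4, C->G = 3
# h is admissible (4 <= 4, 1 <= 3, 0 <= 0) yet inconsistent:
# the arc A->C "drops" the estimate by 3 while costing only 1.
```

This is exactly the situation on the previous slide: admissibility alone is not enough once graph search starts closing states.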
SLIDE 45 Optimality of A* Search
▪ Sketch: consider what A* does with a consistent heuristic:
▪ Fact 1: In tree search, A* expands nodes in increasing total f value (f-contours)
▪ Fact 2: For every state s, nodes that reach s optimally are expanded before nodes that reach s sub-optimally
▪ Result: A* graph search is optimal

[Figure: nested f-contours f1 ≤ f2 ≤ f3]
SLIDE 46
Optimality
▪ Tree search:
  ▪ A* is optimal if heuristic is admissible
  ▪ UCS is a special case (h = 0)
▪ Graph search:
  ▪ A* optimal if heuristic is consistent
  ▪ UCS optimal (h = 0 is consistent)
▪ Consistency implies admissibility
SLIDE 47
A*: Summary
▪ A* uses both backward costs and (estimates of) forward costs
▪ A* is optimal with admissible / consistent heuristics
▪ Heuristic design is key: often use relaxed problems
SLIDE 48
Tree Search Pseudo-Code
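The pseudo-code image on this slide is not reproduced in the text, so here is one possible rendering: a generic tree search driven by a priority function (priority(g, n) = g gives UCS; g + h(n) gives A*). A sketch, not the slide's exact pseudo-code; note there is no duplicate detection, so it can revisit states and may not terminate on graphs with cycles.

```python
import heapq

def tree_search(start, is_goal, expand, priority):
    """Generic priority-queue tree search (no repeated-state detection).

    expand(state) yields (child_state, step_cost);
    priority(g, state) ranks fringe nodes (g for UCS, g + h for A*)."""
    counter = 0  # unique tie-breaker so the heap never compares paths
    fringe = [(priority(0, start), 0, counter, [start])]
    while fringe:
        _, g, _, path = heapq.heappop(fringe)
        state = path[-1]
        if is_goal(state):
            return g, path  # goal test on pop, as in UCS/A*
        for child, cost in expand(state):
            counter += 1
            heapq.heappush(fringe,
                           (priority(g + cost, child), g + cost, counter,
                            path + [child]))
    return None
```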
SLIDE 49
Graph Search Pseudo-Code
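Again the pseudo-code image is not reproduced, so here is a possible rendering: the same loop plus a closed set, so each state is expanded at most once. A sketch under the same assumed interfaces as the tree-search version.

```python
import heapq

def graph_search(start, is_goal, expand, priority):
    """Priority-queue graph search: tree search + a closed SET.

    expand(state) yields (child_state, step_cost);
    priority(g, state) ranks fringe nodes (g for UCS, g + h for A*)."""
    counter = 0  # unique tie-breaker so the heap never compares paths
    fringe = [(priority(0, start), 0, counter, [start])]
    closed = set()  # a set, not a list: O(1) membership tests
    while fringe:
        _, g, _, path = heapq.heappop(fringe)
        state = path[-1]
        if is_goal(state):
            return g, path
        if state in closed:
            continue  # state already expanded: skip it
        closed.add(state)
        for child, cost in expand(state):
            if child not in closed:
                counter += 1
                heapq.heappush(fringe,
                               (priority(g + cost, child), g + cost, counter,
                                path + [child]))
    return None
```

With a consistent heuristic (including h = 0, i.e. UCS) the first expansion of any state is via an optimal path, so closing states is safe.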
SLIDE 50 Quiz
Consider the state space graph shown below. A is the start state and H is the goal state. The cost of each edge is shown on the graph. Each edge can be traversed in both directions. You are required to use a graph search algorithm.
- 1. Provide the search path using the depth-first search algorithm (using alphabetical order to break ties). Specify the status of the frontier (node with depth) along the search.
- 2. Provide the search path using the uniform cost search algorithm (using alphabetical order to break ties). Specify the status of the frontier (node with cost) along the search.