Informed Search
Philipp Koehn, Artificial Intelligence, 24 September 2015
Heuristic, from Wikipedia: any approach to problem solving, learning, or discovery that employs a practical method
– hill-climbing
– simulated annealing
– genetic algorithms (briefly)
– local search in continuous spaces (very briefly)
function TREE-SEARCH(problem, fringe) returns a solution, or failure
    fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
    loop do
        if fringe is empty then return failure
        node ← REMOVE-FRONT(fringe)
        if GOAL-TEST[problem] applied to STATE(node) succeeds return node
        fringe ← INSERT-ALL(EXPAND(node, problem), fringe)
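A minimal executable sketch of this scheme, assuming a FIFO fringe (which yields breadth-first search); the `goal_test` and `expand` callables are illustrative, not from the slides:

```python
from collections import deque

def tree_search(initial_state, goal_test, expand):
    """Generic tree search: the fringe policy determines the strategy.
    Here the fringe is a FIFO queue, giving breadth-first search."""
    fringe = deque([initial_state])          # INSERT(MAKE-NODE(INITIAL-STATE), fringe)
    while fringe:                            # empty fringe -> failure
        state = fringe.popleft()             # REMOVE-FRONT(fringe)
        if goal_test(state):                 # GOAL-TEST applied to STATE(node)
            return state
        fringe.extend(expand(state))         # INSERT-ALL(EXPAND(node), fringe)
    return None                              # failure

# usage: search for 5 by counting up from 0
print(tree_search(0, lambda s: s == 5, lambda s: [s + 1]))  # -> 5
```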
Best-first search uses an evaluation function for each node — an estimate of “desirability” ⇒ expand the most desirable unexpanded node

Implementation: fringe is a queue sorted in decreasing order of desirability

Special cases:
– greedy search
– A∗ search
Greedy search uses h(n) = estimate of cost from n to the closest goal
Greedy search is not complete — it can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …
It is complete in a finite space with repeated-state checking
A∗ search evaluates nodes by combining:
– g(n) = cost so far to reach n
– h(n) = estimated cost to goal from n
– f(n) = g(n) + h(n) = estimated total cost of path through n to goal

A∗ requires an admissible heuristic:
– h(n) ≤ h∗(n), where h∗(n) is the true cost from n
– also require h(n) ≥ 0, so h(G) = 0 for any goal G
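A compact executable sketch of A∗ (the tiny graph and heuristic table are invented for illustration; the algorithm follows the f(n) = g(n) + h(n) evaluation):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: always expand the node with smallest f(n) = g(n) + h(n)."""
    # frontier entries: (f, g, state, path)
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}                      # cheapest known cost-so-far per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# toy graph: the direct A-C edge (cost 5) is not optimal
graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
h = {"A": 2, "B": 1, "C": 0}                 # admissible: never overestimates
print(a_star("A", "C", lambda s: graph[s], lambda s: h[s]))  # -> (2, ['A', 'B', 'C'])
```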
Suppose a suboptimal goal G2 is in the fringe, and let n be an unexpanded node on a shortest path to an optimal goal G1. Then

f(G2) = g(G2)   since h(G2) = 0
      > g(G1)   since G2 is suboptimal
      ≥ f(n)    since h is admissible

so A∗ will never select G2 for expansion before n
Is A∗ complete? Yes, unless there are infinitely many nodes with f ≤ f(G)

With C∗ the optimal solution cost:
– A∗ expands all nodes with f(n) < C∗
– A∗ expands some nodes with f(n) = C∗
– A∗ expands no nodes with f(n) > C∗
A heuristic h is consistent if, for every node n and successor n′ reached by action a,

h(n) ≤ c(n, a, n′) + h(n′)

If h is consistent, then f is nondecreasing along any path:

f(n′) = g(n′) + h(n′)
      = g(n) + c(n, a, n′) + h(n′)
      ≥ g(n) + h(n)
      = f(n)
Admissible heuristics for the 8-puzzle:
– h1(n) = number of misplaced tiles
– h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile)
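Both heuristics are cheap to compute; a sketch, assuming states are represented as 9-tuples with 0 for the blank (the representation and the example state are assumptions):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 denotes the blank

def h1(state):
    """Number of misplaced tiles (blank not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        g = GOAL.index(t)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

state = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # an example scrambled state
print(h1(state), h2(state))           # -> 6 14; h2 >= h1 holds for every state
```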
If h2(n) ≥ h1(n) for all n (both admissible) → h2 dominates h1 and is better for search

Typical search costs:
– d = 14: IDS = 3,473,941 nodes; A∗(h1) = 539 nodes; A∗(h2) = 113 nodes
– d = 24: IDS ≈ 54,000,000,000 nodes; A∗(h1) = 39,135 nodes; A∗(h2) = 1,641 nodes

Given any admissible heuristics ha and hb, h(n) = max(ha(n), hb(n)) is also admissible and dominates both ha and hb
Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem:
– if the rules of the 8-puzzle are relaxed so that a tile can move anywhere ⇒ h1(n) gives the shortest solution
– if the rules are relaxed so that a tile can move to any adjacent square ⇒ h2(n) gives the shortest solution

Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem
E.g., for the travelling salesperson problem, let h(n) = cost of the minimum spanning tree of the remaining cities; this
– can be computed in O(n²)
– is a lower bound on the shortest (open) tour
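The O(n²) lower bound here is the minimum-spanning-tree (MST) cost of the remaining cities; a Prim's-algorithm sketch (the city coordinates are invented for illustration):

```python
import math

def mst_cost(points):
    """Prim's algorithm, O(n^2): total weight of the minimum spanning tree.
    This is a lower bound on the shortest open tour through the points."""
    n = len(points)
    if n == 0:
        return 0.0
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = [False] * n
    best = [math.inf] * n        # cheapest edge connecting each node to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(u, v))
    return total

cities = [(0, 0), (0, 3), (4, 0)]
print(mst_cost(cities))          # -> 7.0 (edges 0-1 and 0-2)
```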
Greedy search:
– incomplete and not always optimal

A∗ search:
– complete and optimal
– also optimally efficient (up to tie-breaks, for forward search)
In many optimization problems, the path is irrelevant; the goal state itself is the solution:
– find an optimal configuration, e.g., TSP
– find a configuration satisfying constraints, e.g., a timetable

In such cases, use iterative improvement algorithms → keep a single “current” state, try to improve it
This works even for very large n, e.g., n = 1 million
function HILL-CLIMBING(problem) returns a state that is a local maximum
    inputs: problem, a problem
    local variables: current, a node
                     neighbor, a node
    current ← MAKE-NODE(INITIAL-STATE[problem])
    loop do
        neighbor ← a highest-valued successor of current
        if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
        current ← neighbor
    end
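The same loop in executable form; the 1-D integer objective and neighbor function are toy assumptions:

```python
def hill_climbing(initial, neighbors, value):
    """Greedy local search: move to the best neighbor until none improves."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):   # local maximum reached
            return current
        current = best

# toy objective with a single peak at x = 7 over the integers
value = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(0, neighbors, value))  # -> 7
```

With a multi-modal objective the same loop stops at whichever local maximum is reachable from the start state, which is exactly the weakness the next slides address.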
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
    inputs: problem, a problem
            schedule, a mapping from time to “temperature”
    local variables: current, a node
                     next, a node
                     T, a “temperature” controlling prob. of downward steps
    current ← MAKE-NODE(INITIAL-STATE[problem])
    for t ← 1 to ∞ do
        T ← schedule[t]
        if T = 0 then return current
        next ← a randomly selected successor of current
        ∆E ← VALUE[next] − VALUE[current]
        if ∆E > 0 then current ← next
        else current ← next only with probability e^(∆E/T)
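An executable sketch; the geometric cooling schedule and the toy objective are assumptions, not from the slides:

```python
import math
import random

def simulated_annealing(initial, neighbor, value, schedule):
    """Accept any uphill move; accept a downhill move with prob. e^(dE/T)."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:                           # schedule exhausted -> return
            return current
        nxt = neighbor(current)
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
        t += 1

random.seed(0)
value = lambda x: -(x - 7) ** 2              # maximum at x = 7
schedule = lambda t: max(0.0, 2.0 * 0.99 ** t - 0.01)  # cools to 0 after ~500 steps
result = simulated_annealing(0, lambda x: x + random.choice([-1, 1]), value, schedule)
print(result)                                # usually near the maximum at x = 7
```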
At a fixed temperature T, the state occupation probability reaches the Boltzmann distribution

p(x) = αe^(E(x)/kT)

If T is decreased slowly enough ⇒ always reach the best state x∗, because

e^(E(x∗)/kT) / e^(E(x)/kT) = e^((E(x∗)−E(x))/kT) ≫ 1 for small T
– 6-D state space defined by (x1, y1), (x2, y2), (x3, y3)
– objective function f(x1, y1, x2, y2, x3, y3) = sum of squared distances from each city to the nearest airport
Discretization methods turn the continuous space into a discrete one, e.g., an empirical gradient considers a ±δ change in each coordinate
∇f = (∂f/∂x1, ∂f/∂y1, ∂f/∂x2, ∂f/∂y2, ∂f/∂x3, ∂f/∂y3)

to increase/reduce f, e.g., by x ← x + α∇f(x)
Newton–Raphson (1664, 1690) iterates x ← x − H⁻¹ ∇f(x) to solve ∇f(x) = 0, where H is the Hessian with Hij = ∂²f/∂xi∂xj
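As an illustration of the gradient update, a sketch for a simplified one-airport version of the problem (the cities, step size α, and iteration count are assumptions); since we minimize f, each step goes against the gradient, x ← x − α∇f(x):

```python
def grad_step(x, y, cities, alpha=0.1):
    """One gradient-descent step on f(x, y) = sum of squared distances
    from (x, y) to each city; df/dx = sum of 2(x - cx), similarly for y."""
    dfdx = sum(2 * (x - cx) for cx, cy in cities)
    dfdy = sum(2 * (y - cy) for cx, cy in cities)
    return x - alpha * dfdx, y - alpha * dfdy

cities = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]
x, y = 0.0, 0.0
for _ in range(100):
    x, y = grad_step(x, y, cities)
print(round(x, 3), round(y, 3))   # converges to the centroid (1.0, 1.0)
```

For the sum-of-squared-distances objective the minimizer is the centroid of the cities, which makes convergence easy to check.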
– exhaustive exploration of the search space
– search with heuristics: A∗
– hill-climbing
– simulated annealing
– genetic algorithms (briefly)
– local search in continuous spaces (very briefly)