Intro to AI: Lecture 2 Volker Sorge States and Actions Background for Search Breadth First Search Depth First Search Iterative Deepening Uniform Cost Search
States
In problem solving, states represent configurations of the problem. Depending on the problem, states can refer to:

◮ The current progress towards a solution (e.g., in search)
◮ The state of the environment that is manipulated (e.g., game play)
◮ The internal state of an agent (e.g., knowledge)

Some important concepts are:
Initial State  The start state of the problem.
Goal State  The solution of the problem or final state.
State Space  The space of all possible states.
Actions
Actions are transitions from one valid state into another.

◮ An action has to be applicable in the current state of the problem.
◮ An action has an effect that changes the state.

Some important concepts are:
Path  A sequence of actions leading from one state to another.
Solution  A path leading from the initial state to a goal state.
Search Problem
We define a search problem as:
◮ A set of states S.
◮ An initial state s ∈ S.
◮ A set of goal or final states F ⊆ S.
◮ An action mapping A ⊆ S × S from a state to a (set of) new state(s).

What we are often interested in is the size of the state space |S|, which can of course be infinite.
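This definition can be encoded directly; a minimal sketch in Python (the class and field names are illustrative, not from the lecture):

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Hashable, List

State = Hashable  # any hashable value can serve as a state

@dataclass(frozen=True)
class SearchProblem:
    initial: State                               # the initial state s ∈ S
    goals: FrozenSet[State]                      # the goal states F ⊆ S
    successors: Callable[[State], List[State]]   # the action mapping A ⊆ S × S

    def is_goal(self, state: State) -> bool:
        return state in self.goals
```

Note that the set of states S stays implicit here: it is whatever the successor function can reach from the initial state.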
Example: 15 Puzzle
[Figure: two 4×4 boards — a scrambled initial configuration (tiles in reading order 13 10 11 6, 5 7 4 8, 1 12 2 14, 3 15 9 plus the blank) and the goal configuration with tiles 1–15 in order.]
◮ A potential initial state on the left.
◮ The goal state on the right.
◮ Actions are valid moves of the tiles.
◮ States are all valid configurations of the puzzle. Mathematically they correspond to all even permutations.
◮ State space size is therefore 16!/2 = 10,461,394,944,000.
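The state space size can be checked with a quick computation; a sketch in Python:

```python
import math

# Only half of all 16! tile arrangements are reachable from the goal:
# the even permutations.
state_space_size = math.factorial(16) // 2
print(state_space_size)  # 10461394944000
```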
Graph: Definition
In this section we need the concept of a graph, a very common data structure. Graphs are used to represent relationships between objects, for example, points on a map together with the roads that connect them. Generally, a graph is a collection of objects together with links between some of these objects. More formally, we can define a graph G as a pair (V, E), where

◮ V is a set of vertices, and
◮ E ⊆ V × V is a set of edges.
Graph: Example
Source: Wikipedia
V = {1, 2, 3, 4, 5, 6}
E = {(1, 2), (1, 5), (2, 3), (2, 5), (3, 4), (4, 5), (4, 6)}
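A common way to store such a graph is an adjacency list; a sketch in Python for the example above (treating the edges as undirected, as in the figure):

```python
# Vertices and edges of the example graph
vertices = [1, 2, 3, 4, 5, 6]
edges = [(1, 2), (1, 5), (2, 3), (2, 5), (3, 4), (4, 5), (4, 6)]

# Build the adjacency list; for an undirected graph each edge is
# recorded in both directions.
adjacency = {v: set() for v in vertices}
for u, v in edges:
    adjacency[u].add(v)
    adjacency[v].add(u)

print(sorted(adjacency[2]))  # neighbours of vertex 2: [1, 3, 5]
```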
Search
Search can then be defined via:

◮ A graph G = (V, E), where states are vertices, V = S, and actions are edges, E = A.
◮ A method expanding unexplored vertices, i.e., moving from a vertex to vertices not yet visited.
◮ Explored: the set of already expanded vertices.
◮ Frontier: a set of nodes yet to be expanded.

The way we expand the Frontier defines a search strategy.
Search Tree
◮ Search algorithms effectively construct a tree while searching the graph.
◮ Level 0 of the tree consists of the start state s.
◮ Level n of the tree are the nodes whose shortest path to s contains n edges.
◮ A leaf is reached as soon as the next node has already been explored (is an element of Explored).
Properties of Search Strategies
Completeness  If there exists a solution, will we find it?
Optimality  Will we find the shortest possible solution?
Time Complexity  How long does it take to find a solution?
Space Complexity  How much memory will we need to find a solution?

For the latter two we are usually interested in the worst possible case.
Breadth-First Search
◮ Search from the start node in a “concentric” way.
◮ Expand all nodes of level n before expanding any nodes on level n + 1.
Algorithm: BFS
Input: Graph G = (V, E), start state s, set of goal states F
Output: Path P ⊆ E

begin
    let Frontier = [s];
    let Explored = {s};
    while Frontier ≠ [] do
        let v::Frontier = Frontier;
        if v ∈ F then
            return Path(s,v)
        else
            let Frontier = Frontier@[v1, . . . , vn], where (v, vi) ∈ E and vi ∉ Explored;
            let Explored = {v1, . . . , vn} ∪ Explored;
        end
    end
    return failure;
end

Usually Frontier is implemented as a FIFO queue. Observe that despite the algorithm being in pseudo-code, we use some OCaml notation: ::, @, let.
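The pseudo-code can be rendered as runnable Python; this sketch uses a FIFO queue for the Frontier and a parent map to reconstruct Path(s,v) (the function and variable names are our own):

```python
from collections import deque

def bfs(adjacency, start, goals):
    """Breadth-first search; returns a path from start to a goal, or None."""
    frontier = deque([start])
    explored = {start}
    parent = {start: None}          # for reconstructing Path(s, v)
    while frontier:
        v = frontier.popleft()      # FIFO: expand shallowest node first
        if v in goals:
            path = []
            while v is not None:    # walk the parent links back to start
                path.append(v)
                v = parent[v]
            return path[::-1]
        for w in adjacency[v]:
            if w not in explored:   # only expand unexplored vertices
                explored.add(w)
                parent[w] = v
                frontier.append(w)
    return None                     # failure
```

On the example graph from the earlier slide, bfs(adjacency, 1, {6}) returns the shallowest path [1, 5, 4, 6].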
Example
Source http://goeurope.about.com
Getting from Paris to Rome? (Explore in alphabetical order!)
◮ Solution: Paris→Copenhagen→Prague→Vienna→Rome
◮ Have to explore: every node except Naples.
Search Tree
Level 0: Paris
Level 1: Amsterdam, Copenhagen, London, Munich, Madrid
Level 2: Prague, Edinburgh, Lisbon, Marseille
Level 3: Vienna, Milan
Level 4: Rome
Level 5: Naples
Analysis
Complete  Yes, even for infinite state space.
Optimal  Yes, will always find the shallowest solution.
Time Complexity  Let b be the branching factor of the tree, i.e., the maximum number of nodes expanded in one move. Let d be the depth of the solution. Then time complexity is O(b^d).
Space Complexity  All nodes in the tree have to be kept. Hence space complexity is the same as the time complexity: O(b^d).
Depth-First Search
◮ Search from the start node along one path as far as possible before backtracking.
◮ Expand the first node of level n as deep as possible before expanding the next node on level n.
Algorithm: DFS
Input: Graph G = (V, E), start state s, set of goal states F
Output: Path P ⊆ E

begin
    let Frontier = [s];
    let Explored = {};
    while Frontier ≠ [] do
        let v::Frontier = Frontier;
        if v ∈ F then
            return Path(s,v)
        else
            let Explored = {v} ∪ Explored;
            let Frontier = [v1, . . . , vn]@Frontier, where (v, vi) ∈ E and vi ∉ Explored;
        end
    end
    return failure;
end
Usually Frontier is implemented by a stack.
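A runnable Python sketch of the same idea, with the Frontier as an explicit stack of (vertex, path) pairs (the names are our own):

```python
def dfs(adjacency, start, goals):
    """Depth-first search; returns a (not necessarily shortest) path, or None."""
    frontier = [(start, [start])]   # stack of (vertex, path from start)
    explored = set()
    while frontier:
        v, path = frontier.pop()    # LIFO: expand the deepest node first
        if v in goals:
            return path
        if v in explored:
            continue
        explored.add(v)
        # reversed so the first-listed neighbour is popped (expanded) first
        for w in reversed(adjacency[v]):
            if w not in explored:
                frontier.append((w, path + [w]))
    return None                     # failure
```

Carrying the path on the stack keeps memory proportional to the current branch rather than the whole tree, matching the space analysis on the next slide.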
Example
Source http://goeurope.about.com
Getting from Paris to Rome? (Explore in alphabetical order!)
◮ Solution: Paris→Copenhagen→Prague→Vienna→Rome
◮ Have to explore: Paris, Amsterdam, Copenhagen, Prague, Vienna, Rome
Analysis
Complete  Not for infinite state space.
Optimal  No, will always find the ‘left-most’ solution.
Time Complexity  Let b be the branching factor of the tree, i.e., the maximum number of nodes expanded in one move. Let d be the depth of the solution. Then time complexity is O(b^d).
Space Complexity  Only the nodes on the most recent path and the successors of each node on that path are kept. Hence space complexity is linear in the depth of the solution: O(b·d).
Iterative Deepening
◮ As we have seen, depth first search is neither complete nor optimal.
◮ But there are situations where we
  ◮ cannot keep the full search tree around,
  ◮ still want an optimal solution.
◮ The idea is to use a depth-bounded version of depth first search, i.e.,
  ◮ Explore to one level.
  ◮ If no solution is found, restart, exploring to the next level.
Algorithm: Iterative Deepening
Input: Graph G = (V, E), start state s, set of goal states F
Output: Path P ⊆ E

begin
    let d = 0; let NextLayer = true;
    while NextLayer do
        let d = d + 1; let NextLayer = false;
        let Frontier = [s]; let Explored = {s};
        while Frontier ≠ [] do
            let v::Frontier = Frontier;
            if depth(v) = d then
                let NextLayer = true
            else if v ∈ F then
                return Path(s,v)
            else
                let Explored = {v} ∪ Explored;
                let Frontier = [v1, . . . , vn]@Frontier, where (v, vi) ∈ E and vi ∉ Explored and depth(vi) ≤ d;
            end
        end
    end
    return failure;
end
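Iterative deepening can be sketched in Python as depth-limited DFS restarted with an increasing bound (the recursive helper and the max_depth cap are our own additions, to keep the sketch terminating on finite graphs):

```python
def iterative_deepening(adjacency, start, goals, max_depth=50):
    """Depth-limited DFS restarted with an increasing depth bound."""
    for limit in range(max_depth + 1):
        found = _dls(adjacency, start, goals, [start], limit)
        if found is not None:
            return found
    return None                     # failure within max_depth

def _dls(adjacency, v, goals, path, limit):
    # Depth-limited search; the current path doubles as the Explored set.
    if v in goals:
        return path
    if limit == 0:
        return None                 # depth bound reached
    for w in adjacency[v]:
        if w not in path:           # avoid cycles along the current path
            found = _dls(adjacency, w, goals, path + [w], limit - 1)
            if found is not None:
                return found
    return None
```

Because each restart explores strictly level by level, the first path found is a shallowest one, as in BFS, while memory use stays as small as in DFS.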
Analysis
Complete  Yes, even for infinite state space.
Optimal  Yes, will always find the shallowest solution (if the iteration factor is 1!).
Time Complexity  Interestingly enough, the time complexity is the same as for DFS. Although we seemingly do a lot of work twice, in reality only the last layer of the exploration is really costly, and every previous layer only adds a negligible factor. Thus time complexity is O(b^d).
Space Complexity  Only the nodes on the most recent path and the successors of each node are kept. Hence space complexity is the same as for DFS: O(b·d).
Uniform Cost Search
◮ So far we have treated all edges in our graphs equally.
◮ That is, an optimal solution was one with the smallest number of edges.
◮ In other words, the cost to expand a node was always the same, i.e., equal to 1.
◮ We now assign costs to our edges (e.g., distances between cities), i.e., E ⊆ V × V × N, where N are the non-negative integers.
◮ Uniform Cost Search is similar to BFS, only that it expands the cheapest node first.
Algorithm: Uniform Cost Search
Input: Graph G = (V, E), start state s, set of goal states F
Output: Path P ⊆ E

begin
    let Frontier = {s};
    let Explored = {s};
    while Frontier ≠ {} do
        let v ∈ Frontier, such that Path(s,v) has lowest cost;
        let Frontier = Frontier\{v};
        if v ∈ F then
            return Path(s,v)
        else
            let Explored = {v} ∪ Explored;
            let Frontier = Frontier ∪ {v1, . . . , vn}, where (v, vi) ∈ E and vi ∉ Explored;
        end
    end
    return failure;
end

Here Frontier is usually implemented as a priority queue.
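A runnable Python sketch using a binary heap (heapq) as the priority queue for the Frontier; here the adjacency list maps each vertex to (neighbour, cost) pairs (the names are our own):

```python
import heapq

def uniform_cost_search(adjacency, start, goals):
    """Expand the cheapest frontier node first; returns (cost, path) or None."""
    frontier = [(0, start, [start])]    # heap of (path cost, vertex, path)
    explored = set()
    while frontier:
        cost, v, path = heapq.heappop(frontier)     # lowest-cost path first
        if v in goals:
            return cost, path
        if v in explored:
            continue                    # a cheaper path to v was already expanded
        explored.add(v)
        for w, c in adjacency[v]:
            if w not in explored:
                heapq.heappush(frontier, (cost + c, w, path + [w]))
    return None                         # failure
```

The lazy-deletion check (skipping an already-explored vertex on pop) replaces the pseudo-code's explicit "lowest-cost element of Frontier" selection.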
Example
Source http://goeurope.about.com
Getting from Paris to Rome? (Explore with respect to costs!)

◮ Solution: Paris→Madrid→Marseilles→Milan→Rome (Cost: 2,285)
◮ Have to explore in order: Paris, London, Amsterdam, Munich, Edinburgh, Madrid, Copenhagen, Lisbon, Prague, Marseilles, Vienna, Milan, Rome