Utrecht University, INFOB2KI 2019-2020, The Netherlands
ARTIFICIAL INTELLIGENCE: Pathfinding and search
Lecturer: Silja Renooij
These slides are part of the INFOB2KI Course Notes available from www.cs.uu.nl/docs/vakken/b2ki/schema.html


SLIDE 1

ARTIFICIAL INTELLIGENCE
Pathfinding and search

Lecturer: Silja Renooij
Utrecht University, The Netherlands
INFOB2KI 2019-2020

These slides are part of the INFOB2KI Course Notes available from www.cs.uu.nl/docs/vakken/b2ki/schema.html

SLIDE 2

Pathfinding

‘Physical’ world: open space or structured?

SLIDE 3

Contents

  • Dijkstra, BFS, DFS, DLS, IDS
  • Differences with the elective course ‘Algoritmiek’:
    – Theory all by example: Dijkstra in detail, others more superficially. Some more in-depth in Project B
    – Emphasis on the difference between tree-search and graph-search
    – Emphasis on comparison of properties rather than proofs of properties

SLIDE 4

Pathfinding in Romania

SLIDE 5

Requirements for search

  • 1. States S: abstract representation of (a set of) real state(s)
    – E.g. all cities on a map, or all possible paths in a grid or (waypoint) graph
    – A problem typically assigns an initial state (e.g. "in Arad") and a goal state (e.g. "in Zerind")
  • 2. Actions (operators/production rules) S → S’: abstract representation of a combination of real actions
    – E.g. change from one path to another by a change of links
    – Each abstract action should be "easier" than the original problem; e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.

SLIDE 6

State space for search

The state space represents all states reachable from the initial state by any sequence of actions. A tree or graph with nodes S and (un)directed edges S → S’ is used as the state space representation. A search algorithm is applied to the state space representation to find a solution (an abstract representation of a solution in the real world):
  – for guaranteed realizability, a sequence of actions maps the initial state to the goal state
  – e.g. a list of connections = a real path

SLIDE 7

Searching the state space

  • 3. Strategy
    – Defines how to search through the state space
    – E.g. systematic (desirable, not always possible):
      • All states are visited
      • No state is visited more than once

Search algorithms:
  • differ in the employed strategy
  • build a search tree to explore the state space
  • often "create" the state space while searching (the space is often too large to represent all solutions)
  • create (portions of) all paths while finding a good path between two points

SLIDE 8

Different representations

  • The real world is absurdly complex
    → we often represent the world in a simplified way
  • In addition, the search space is abstracted for problem solving in the world. An abstract representation of the world is not the same as a state space representation suitable for problem solving!
  • The map of Romania is represented as a graph with cities as nodes, and edges representing existing connections between cities. We can also use grids or waypoint graphs to represent the real world.
  • As state space representation for pathfinding we can use, e.g.:
    – the same structure as the domain representation (nodes in the grid / (waypoint) graph), or
    – nodes to represent entire paths (lists of connections) on this grid/graph, …
SLIDE 9

Dijkstra’s algorithm

  • Solves the single‐source shortest path problem using GRAPH‐SEARCH
  • Source state → start node of the search
  • Each connection is associated with a cost, often called the step‐cost
  • The cost‐so‐far is the cost to reach a node n along a given path from the start node (often called path‐cost, denoted g(n))
  • Result: for each n, a path with minimal g(n)

Edsger W. Dijkstra, 1930‐2002

SLIDE 10

Dijkstra’s algorithm

Bookkeeping:
  • open list (seen, but not processed)
  • closed list (completely processed)
  • cost‐so‐far & connections (path) followed to get here

Init: open list ← (start node, cost‐so‐far = 0)
Iteration: process the node from the open list with the smallest cost‐so‐far
Terminate:
  • when the open list is empty
  • follow back connections to retrieve a path
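The bookkeeping above can be sketched in a few lines of Python (a minimal sketch, not the course's reference implementation; the dictionary-based graph encoding and the function name are illustrative):

```python
import heapq

def dijkstra(graph, start):
    """Single-source shortest paths with an open list (priority queue,
    ordered by cost-so-far) and a closed list of processed nodes.
    graph: dict mapping node -> list of (successor, step_cost) pairs."""
    open_list = [(0, start)]                  # (cost-so-far, node)
    cost_so_far = {start: 0}                  # best g(n) found per node
    parent = {start: None}                    # connection followed to get here
    closed = set()
    while open_list:                          # terminate when open list is empty
        g, node = heapq.heappop(open_list)    # smallest cost-so-far first
        if node in closed:
            continue                          # stale queue entry, already processed
        closed.add(node)
        for succ, step in graph.get(node, []):
            new_g = g + step
            if succ not in cost_so_far or new_g < cost_so_far[succ]:
                cost_so_far[succ] = new_g
                parent[succ] = node
                heapq.heappush(open_list, (new_g, succ))
    return cost_so_far, parent

# Fragment of the Romania map used in the example slides:
romania = {
    'Arad': [('Zerind', 75), ('Sibiu', 140), ('Timisoara', 118)],
    'Zerind': [('Oradea', 71)],
    'Sibiu': [('Oradea', 151), ('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Rimnicu Vilcea': [('Pitesti', 97)],
    'Fagaras': [('Bucharest', 211)],
    'Pitesti': [('Bucharest', 101)],
}
cost, parent = dijkstra(romania, 'Arad')
```

Following `parent` back from Bucharest reproduces the minimal-cost path of the example walkthrough (cost 418, via Pitesti).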

SLIDE 11

Dijkstra’s algorithm: example

Open: (Arad,0)
Closed:

SLIDE 12

Dijkstra’s algorithm: example

Open: (Z,75) A, (S,140) A, (T,118) A
Closed: (Arad,0)

SLIDE 13

Dijkstra’s algorithm: example

Open: (S,140) A, (T,118) A, (O,146) Z
Closed: (Arad,0), (Z,75) A

SLIDE 14

Dijkstra’s algorithm: example

Open: (S,140) A, (O,146) Z, (L,229) T
Closed: (Arad,0), (Z,75) A, (T,118) A

SLIDE 15

Dijkstra’s algorithm: example

Open: (O,146) Z, (L,229) T, (R,220) S, (F,239) S
Closed: (Arad,0), (Z,75) A, (T,118) A, (S,140) A

→ 140 + 151 > 146, so O is not updated

SLIDE 16

Dijkstra’s algorithm: example

Open: (L,229) T, (R,220) S, (F,239) S
Closed: (Arad,0), (Z,75) A, (T,118) A, (S,140) A, (O,146) Z

SLIDE 17

Dijkstra’s algorithm: example

Open: (L,229) T, (F,239) S, (C,366) R, (P,317) R
Closed: (Arad,0), (Z,75) A, (T,118) A, (S,140) A, (O,146) Z, (R,220) S

SLIDE 18

Dijkstra’s algorithm: example

Open: (F,239) S, (C,366) R, (P,317) R, (M,299) L
Closed: (Arad,0), (Z,75) A, (T,118) A, (S,140) A, (O,146) Z, (R,220) S, (L,229) T

SLIDE 19

Dijkstra’s algorithm: example

Open: (C,366) R, (P,317) R, (M,299) L, (B,450) F
Closed: (Arad,0), (Z,75) A, (T,118) A, (S,140) A, (O,146) Z, (R,220) S, (L,229) T, (F,239) S

SLIDE 20

Dijkstra’s algorithm: example

Open: (C,366) R, (P,317) R, (B,450) F, (D,374) M
Closed: (Arad,0), (Z,75) A, (T,118) A, (S,140) A, (O,146) Z, (R,220) S, (L,229) T, (F,239) S, (M,299) L

SLIDE 21

Dijkstra’s algorithm: example

Open: (C,366) R, (B,450) F, (D,374) M, (B,418) P
Closed: (Arad,0), (Z,75) A, (T,118) A, (S,140) A, (O,146) Z, (R,220) S, (L,229) T, (F,239) S, (M,299) L, (P,317) R

→ 317 + 138 > 366, so C is not updated

SLIDE 22

Dijkstra’s algorithm: example

Open: (D,374) M, (B,418) P
Closed: (Arad,0), (Z,75) A, (T,118) A, (S,140) A, (O,146) Z, (R,220) S, (L,229) T, (F,239) S, (M,299) L, (P,317) R, (C,366) R

→ 366 + 120 > 374, so D is not updated

SLIDE 23

Dijkstra’s algorithm: example

Open: (B,418) P
Etc. (Dijkstra continues, we stop…)
Closed: (Arad,0), (Z,75) A, (T,118) A, (S,140) A, (O,146) Z, (R,220) S, (L,229) T, (F,239) S, (M,299) L, (P,317) R, (C,366) R, (D,374) M

SLIDE 24

Dijkstra’s algorithm: example

Retrieve path to … Bucharest:

(Arad,0), (Z,75) A, (T,118) A, (S,140) A, (O,146) Z, (R,220) S, (L,229) T, (F,239) S, (M,299) L, (P,317) R, (C,366) R, (D,374) M, (B,418) P

SLIDE 25

Dijkstra’s algorithm: summary

  • Start with paths of length 1 and expand the one with the lowest (non‐negative!) cost first
  • All paths will be found (and no path will be found more than once)
  • GRAPH‐SEARCH algorithm: i.e. uses the closed‐list concept to avoid loops
  • Recall: solves the single‐source shortest path problem
    → not aimed at finding a path to one specific goal…

SLIDE 26

TREE-SEARCH algorithms

Simulated exploration of the state space by generating successors of already‐explored states (a.k.a. expanding):
  – initial state → start node of the tree
  – expand: function that generates leaves for successors → fringe (= frontier = open list) with all leaves
  – goal state: used in the goal test

Note: goal‐test only when a node is considered for expansion, not already upon generation!

SLIDE 27

Pseudocode general TREE-SEARCH

# fringe = frontier = open list
Note: goal‐test upon expansion; sometimes more efficient upon generation!
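The pseudocode on this slide is an image in the original deck and is not reproduced here. A minimal sketch of the general scheme, with the strategy passed in as the function that removes the next node from the fringe (all names are illustrative):

```python
def tree_search(initial_state, goal_test, successors, strategy):
    """General TREE-SEARCH: no closed list, so repeated states are possible.
    `strategy` removes and returns the next node from the fringe."""
    fringe = [initial_state]
    while fringe:
        state = strategy(fringe)          # choose node according to strategy
        if goal_test(state):              # goal-test upon expansion
            return state
        fringe.extend(successors(state))  # expand: generate successors
    return None                           # fringe empty: no solution

# A FIFO strategy (pop from the front) gives breadth-first behaviour:
tree = {'A': ['B', 'C'], 'B': [], 'C': []}
found = tree_search('A', lambda s: s == 'C', lambda s: tree[s],
                    lambda fringe: fringe.pop(0))
```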

SLIDE 28

Implementation: states vs. nodes

  • (As before) a state S is an abstract representation of a physical configuration in the problem domain; it corresponds to a node in the state space representation
  • A node x in a search tree constructed upon exploring the state space is a data structure containing info on:
    – state S
    – parent node
    – action
    – path cost g(x)
    – depth, …
  • Expand: function that
    – creates new nodes
    – fills in the various node fields
    – uses the problem‐associated Successor‐Fn to generate successors

SLIDE 29

TREE-SEARCH example

Node for initial state "Arad" in the fringe

SLIDE 30

TREE-SEARCH example

  • The node related to state "Arad" is expanded and removed from the fringe
  • Nodes related to states "Sibiu" (~ path from Arad to Sibiu), "Timisoara" and "Zerind" are generated and added to the fringe

SLIDE 31

TREE-SEARCH example

Suppose the strategy selects the node related to state "Sibiu" for expansion:
  • among others, a node related to state "Arad" is generated and added to the fringe
→ oops… there’s a loop!

SLIDE 32

Repeated states

Failure to detect repeated states can turn a linear problem into an exponential one!

SLIDE 33

Avoiding loops

Method 1:
  • Don’t add a node to the fringe if we generated a node for the associated state before
    → What if there are multiple paths to a node and we want to be sure to get the shortest? (e.g. to Oradea)

Method 2: GRAPH-SEARCH
  • Don’t add a node to the fringe if we expanded a node for the associated state before
  • Keep a closed (= explored = visited) list of expanded states (cf. Dijkstra)
  • This is not for free: it takes up time and memory!

SLIDE 34

GRAPH-SEARCH
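The GRAPH-SEARCH pseudocode on this slide is an image in the original deck. A minimal sketch of the idea, with a FIFO fringe chosen for concreteness (names and encoding are illustrative):

```python
from collections import deque

def graph_search(initial_state, goal_test, successors):
    """TREE-SEARCH plus a closed list of explored states.
    Goal-test upon expansion; FIFO fringe for concreteness."""
    fringe = deque([initial_state])
    closed = set()
    while fringe:
        state = fringe.popleft()
        if goal_test(state):
            return state
        if state in closed:
            continue                      # already expanded: skip
        closed.add(state)
        for succ in successors(state):
            if succ not in closed:
                fringe.append(succ)
    return None

# A state space with a loop: TREE-SEARCH could revisit 'A' forever,
# GRAPH-SEARCH terminates thanks to the closed list.
space = {'A': ['S', 'A'], 'S': ['A', 'B'], 'B': []}
found = graph_search('A', lambda s: s == 'B', lambda s: space[s])
```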

SLIDE 35

TREE-SEARCH vs GRAPH-SEARCH

GRAPH‐SEARCH = TREE‐SEARCH + a closed list of explored states

GRAPH‐SEARCH is used to prevent searching redundant paths; search algorithms can be implemented as either one.

NB: "tree" vs "graph" in this context refers to:
  • the structure underlying the state space
It does not refer to:
  • the structure built during search (a tree in both cases)
  • the simplified representation of the problem domain
    – e.g. the road map

SLIDE 36

Search strategies

  • A search strategy defines the order of node expansion
  • Strategies are evaluated along the following dimensions:
    – completeness: does it always find a solution if one exists? (solution quality)
    – optimality: does it always find a least‐cost solution? (solution quality)
    – time complexity: how long does it take to find a solution? (search complexity)
    – space complexity: how much memory is needed? (search complexity)
  • Time and space complexity are measured in terms of:
    – b: maximum branching factor of the search tree (max # successors)
    – d: depth of the optimal (least‐cost) solution (start with d = 0)
    – m: maximum length of a path in the state space (may be ∞)
    – total number of nodes generated (= time)
    – maximum number of nodes in memory (= space)

SLIDE 37

Uninformed search strategies

Use only the information available in the problem definition (a.k.a. blind search):

  • Breadth‐first search (BFS)
  • Uniform‐cost search (UCS)
  • Depth‐first search (DFS)
  • Depth‐limited search (DLS)
  • Iterative deepening search (IDS)

Can only generate successors and distinguish a goal state from a non‐goal state.

SLIDE 38

Breadth-first search

  • Expand the shallowest unexpanded node
  • Implementation:
    – the fringe is a FIFO queue, i.e., new successors go at the end
    – ties: (in this case) queued in alphabetical order (→ subsequently expanded in alphabetical order too)

fringe = A

SLIDE 39

Breadth-first search

fringe = BC

SLIDE 40

Breadth-first search

fringe = CDE

SLIDE 41

Breadth-first search

fringe = DEFG
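For the example tree on these slides (A with children B, C; B with D, E; C with F, G), the fringe evolution A → BC → CDE → DEFG can be reproduced directly. A sketch; the trace helper is not part of the slides:

```python
from collections import deque

def bfs_fringe_trace(tree, root):
    """Record the FIFO fringe after each expansion. Successors are
    appended at the end, already in alphabetical order (tie rule)."""
    fringe = deque([root])
    trace = [''.join(fringe)]
    while fringe:
        node = fringe.popleft()               # expand shallowest node
        fringe.extend(tree.get(node, []))     # new successors go at the end
        trace.append(''.join(fringe))
    return trace

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
trace = bfs_fringe_trace(tree, 'A')  # starts with 'A', 'BC', 'CDE', 'DEFG'
```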

SLIDE 42

Properties of BFS

Assumption: goal‐test upon generation, finite d

  • Complete? Yes, as long as b is finite
  • Optimal? Yes, if step costs are equal (shallowest = optimal)
  • Time? 1 + b + b^2 + b^3 + … + b^d = O(b^d)
    (if goal‐test upon expansion: + b(b^d − 1) = O(b^(d+1)))
  • Space? Same as time; these costs typically dominate the costs for the closed list in GRAPH‐SEARCH

Exponential space is the bigger problem (even more than time) → works only for smaller instances

SLIDE 43

Uniform-cost search

Incorporates step costs:

  • Expand the least‐cost unexpanded node
  • Like Dijkstra, but now with a goal test
  • Implementation:
    – fringe = priority queue, ordered by path cost g(n)
  • Equivalent to breadth‐first search if step costs are all equal
  • ! But step costs need not be equal (even though the name may suggest otherwise) !
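The difference with plain Dijkstra is only the goal test on the node selected for expansion. A minimal sketch (the graph encoding and names are illustrative):

```python
import heapq

def uniform_cost_search(graph, start, is_goal):
    """Expand the least-cost node first; stop when a goal node is
    selected for expansion, which guarantees a minimal-cost path."""
    fringe = [(0, start, [start])]            # priority queue ordered by g(n)
    closed = set()
    while fringe:
        g, node, path = heapq.heappop(fringe)
        if is_goal(node):                     # goal-test upon expansion
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for succ, step in graph.get(node, []):
            if succ not in closed:
                heapq.heappush(fringe, (g + step, succ, path + [succ]))
    return None

# Fragment of the Romania map from the Dijkstra example slides:
romania = {
    'Arad': [('Zerind', 75), ('Sibiu', 140), ('Timisoara', 118)],
    'Zerind': [('Oradea', 71)],
    'Sibiu': [('Oradea', 151), ('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Rimnicu Vilcea': [('Pitesti', 97)],
    'Fagaras': [('Bucharest', 211)],
    'Pitesti': [('Bucharest', 101)],
}
g, path = uniform_cost_search(romania, 'Arad', lambda s: s == 'Bucharest')
```

Unlike Dijkstra, the search stops as soon as Bucharest is expanded, instead of computing shortest paths to every city.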

SLIDE 44

Properties of UCS

Assumptions: goal‐test upon expansion, finite d; ε = minimal step cost; C* = cost of the cheapest solution

  • Complete? Yes, if ε > 0 and b is finite
  • Optimal? Yes: nodes are expanded in increasing order of g(n)
  • Time? O(b^⌈C*/ε⌉) (# of nodes with path cost g ≤ C*)
  • Space? O(b^⌈C*/ε⌉); these costs typically dominate the costs for the closed list in GRAPH‐SEARCH

SLIDE 45

Depth-first search

  • Expand the deepest unexpanded node
  • Implementation:
    – fringe = LIFO queue (a stack), i.e., put successors at the front
    – ties: (in this case) stacked in reverse alphabetical order (→ subsequently expanded in alphabetical order!)

fringe = A

SLIDE 46

Depth-first search

fringe = BC

SLIDE 47

Depth-first search

fringe = DEC

SLIDE 48

Depth-first search

fringe = HIEC

SLIDE 49

Depth-first search

fringe = IEC
(done at H)

SLIDE 50

Depth-first search

fringe = EC
(done at I → done at D)

SLIDE 51

Depth-first search

fringe = JKC

SLIDE 52

Depth-first search

fringe = KC
(done at J)

SLIDE 53

Depth-first search

fringe = C
(done at K → done at E → done at B)

SLIDE 54

Depth-first search

fringe = FG

SLIDE 55

Depth-first search

fringe = LMG

SLIDE 56

Depth-first search

fringe = MG
(done at L)

Etc.
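For the example tree on these slides (A with children B, C; B with D, E; C with F, G; D with H, I; E with J, K; F with L, M), the fringe sequence A, BC, DEC, HIEC, IEC, EC, JKC, KC, C, FG, LMG, MG can be reproduced directly. A sketch; the trace helper is not part of the slides:

```python
def dfs_fringe_trace(tree, root):
    """Record the LIFO fringe after each expansion. Successors are put
    at the front, in alphabetical order, so the deepest node is next."""
    fringe = [root]
    trace = [''.join(fringe)]
    while fringe:
        node = fringe.pop(0)                  # take from the front (stack top)
        fringe = tree.get(node, []) + fringe  # successors go at the front
        trace.append(''.join(fringe))
    return trace

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'E': ['J', 'K'], 'F': ['L', 'M']}
trace = dfs_fringe_trace(tree, 'A')
```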

SLIDE 57

Properties of DFS

  • Complete? No, unless GRAPH‐SEARCH in a finite state space with finite d
  • Optimal? No, it finds the "left‐most" solution, regardless of cost
  • Time? O(b^m)
    – terrible if m is much larger than d
    – but if solutions are dense, may be much faster than breadth‐first
  • Space? O(bm), in case of tree search ("black" nodes are removed from memory)

This advantage may be lost in GRAPH‐SEARCH due to the costs for the closed list!

SLIDE 58

Depth-limited search

= depth‐first search with depth limit l, i.e., nodes at depth l do not generate successors
→ solves the infinite path problem (m = ∞)

Recursive implementation:
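The recursive implementation on the slide is an image in the original deck. A minimal sketch, distinguishing a cutoff (depth limit hit somewhere) from a genuine failure (names are illustrative):

```python
def depth_limited_search(state, goal_test, successors, limit):
    """Recursive DLS: returns the goal state, 'cutoff' if the depth
    limit was reached somewhere, or None if no goal exists below."""
    if goal_test(state):
        return state
    if limit == 0:
        return 'cutoff'                  # nodes at depth l generate no successors
    cutoff_occurred = False
    for succ in successors(state):
        result = depth_limited_search(succ, goal_test, successors, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None

tree = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
succ = lambda s: tree[s]
```

With the goal 'D' at depth 2, limit l = 1 yields 'cutoff', while l = 2 finds the goal.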

SLIDE 59

Properties of DLS

Note: DFS is a special case of DLS with l = m (possibly ∞)

  • Complete? Not if l < d (do we know d??)
  • Optimal? Not if d < l
  • Time? O(b^l)
  • Space? O(bl)

Again, the advantage may be lost in GRAPH‐SEARCH due to the costs for the closed list!

SLIDE 60

Iterative deepening search

  • Repeats DFS for an increasing depth limit
→ finds the best depth limit
→ combines the benefits of BFS and DFS
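The scheme above can be sketched by wrapping depth-limited search in a loop over increasing limits (a minimal sketch; the DLS helper is repeated here to keep the fragment self-contained, and all names are illustrative):

```python
def depth_limited(state, goal_test, successors, limit):
    # DLS helper: returns the goal state, 'cutoff', or None (no goal below).
    if goal_test(state):
        return state
    if limit == 0:
        return 'cutoff'
    cutoff = False
    for succ in successors(state):
        r = depth_limited(succ, goal_test, successors, limit - 1)
        if r == 'cutoff':
            cutoff = True
        elif r is not None:
            return r
    return 'cutoff' if cutoff else None

def iterative_deepening_search(root, goal_test, successors):
    """Repeat DLS for limit = 0, 1, 2, ...; the first goal found is at
    the shallowest possible depth (like BFS), in DFS-like space."""
    limit = 0
    while True:
        result = depth_limited(root, goal_test, successors, limit)
        if result != 'cutoff':
            return result        # goal found, or None: no goal at any depth
        limit += 1

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': [], 'D': [], 'E': []}
found = iterative_deepening_search('A', lambda s: s == 'E', lambda s: tree[s])
```

When no cutoff occurred at some limit, the whole finite tree has been searched, so the loop also terminates (with None) when no goal exists.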

SLIDE 61

Iterative deepening search, limit = 0

SLIDE 62

Iterative deepening search, limit = 1

SLIDE 63

Iterative deepening search, limit = 2

SLIDE 64

Iterative deepening search, limit = 3

SLIDE 65

Iterative deepening search

  • Number of nodes generated in a depth‐limited search (DLS) to depth d with branching factor b:
    N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d
  • Number of nodes generated in an iterative deepening search (IDS) to depth d with branching factor b:
    N_IDS = (d+1)b^0 + d·b^1 + (d−1)b^2 + … + 3b^(d−2) + 2b^(d−1) + 1b^d
  • For b = 10, d = 5:
    – N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
    – N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
    – Overhead = (123,456 − 111,111)/111,111 = 11%
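The counts above can be checked directly:

```python
b, d = 10, 5

# N_DLS = b^0 + b^1 + ... + b^d
n_dls = sum(b**i for i in range(d + 1))

# N_IDS = (d+1)*b^0 + d*b^1 + ... + 2*b^(d-1) + 1*b^d
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))

overhead = (n_ids - n_dls) / n_dls   # about 11%
```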

SLIDE 66

Properties of IDS

Assumption: finite d

  • Complete? Yes (inherited from BFS; the same assumptions apply)
  • Optimal? Yes (inherited from BFS; the same assumptions apply)
  • Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)
  • Space? O(bd) (inherited from DFS, but with the maximum depth restricted to d; the same observations w.r.t. GRAPH‐SEARCH apply)

SLIDE 67

Summary of search algorithms

This overview assumes TREE‐SEARCH, with goal‐test upon expansion, and finite solution depth d.

Recall (!):
  • most yes’s and no’s depend on additional assumptions
  • space complexity may be different if GRAPH‐SEARCH is employed

Is there one best algorithm?

SLIDE 68

Summary pathfinding and uninformed search

  • Algorithms find a path to the goal
  • Problem‐specific ingredients are used only for:
    – goal test
    – path cost (used in the solution, and sometimes for expansion)