SLIDE 1

A* and Weighted A* Search

Maxim Likhachev Carnegie Mellon University

SLIDE 2

• 1. Construct a graph representing the planning problem
• 2. Search the graph for a (hopefully, close-to-optimal) path

The two steps above are often interleaved

Planning as Graph Search Problem

SLIDE 3

• 1. Construct a graph representing the planning problem (future lectures)
• 2. Search the graph for a (hopefully, close-to-optimal) path (three next lectures)

The two steps above are often interleaved

Planning as Graph Search Problem

SLIDE 4

  • Cell decomposition
  • X-connected grids
  • lattice-based graphs
  • Skeletonization of the environment/C-Space
  • Visibility graphs
  • Voronoi diagrams
  • Probabilistic roadmaps

Examples of Graph Construction

(lattice-based graphs replicate an action template online)

SLIDE 5

Examples of Graph Construction (same list as Slide 4)


Will all be covered later

SLIDE 6

Examples of Search-based Planning


  • 1. Construct a graph representing the planning problem
  • 2. Search the graph for a (hopefully, close-to-optimal) path

The two steps are often interleaved

motion planning for autonomous vehicles in 4D (<x,y,orientation,velocity>)

running Anytime Incremental A* (Anytime D*) on multi-resolution lattice

[Likhachev & Ferguson, IJRR’09]; part of the efforts by the Tartan Racing team from CMU in the Urban Challenge 2007 race

SLIDE 7

Examples of Search-based Planning


(planning as graph search: same two steps as Slide 6)

8-dim foothold planning for quadrupeds using R* graph search

SLIDE 8

[Example graph: edges sstart→s2 (cost 1), s2→s1 (2), s2→s4 (1), s1→sgoal (2), s4→s3 (3), s3→sgoal (1)]

Searching Graphs for a Least-cost Path

• Once a graph is constructed (from skeletonization or uniform cell decomposition or adaptive cell decomposition or lattice or whatever else), we need to search it for a least-cost path

SLIDE 9

• Many searches work by computing optimal g-values for relevant states
  – g(s): an estimate of the cost of a least-cost path from sstart to s
  – optimal values satisfy: g(s) = min over s'' in pred(s) of ( g(s'') + c(s'', s) )

[Example graph with optimal g-values: g(sstart)=0, g(s2)=1, g(s4)=2, g(s1)=3, g(s3)=5, g(sgoal)=5; c(s1,sgoal) denotes the cost of the edge from s1 to sgoal]

Searching Graphs for a Least-cost Path
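The optimality condition above is easy to check programmatically. A minimal sketch in Python; the edge costs are assumptions read off the slides' figure (sstart→s2 = 1, s2→s1 = 2, s2→s4 = 1, s1→sgoal = 2, s4→s3 = 3, s3→sgoal = 1):

```python
# Edges of the slides' example graph: (s'', s) -> c(s'', s)
EDGES = {
    ("sstart", "s2"): 1, ("s2", "s1"): 2, ("s2", "s4"): 1,
    ("s1", "sgoal"): 2, ("s4", "s3"): 3, ("s3", "sgoal"): 1,
}
# Optimal g-values from the slide
G_OPT = {"sstart": 0, "s2": 1, "s1": 3, "s4": 2, "s3": 5, "sgoal": 5}

def satisfies_optimality(g, edges):
    """True iff g(s) = min over s'' in pred(s) of g(s'') + c(s'', s)."""
    for s in g:
        preds = [(u, c) for (u, v), c in edges.items() if v == s]
        if preds and g[s] != min(g[u] + c for u, c in preds):
            return False  # the start state has no predecessors and is skipped
    return True

print(satisfies_optimality(G_OPT, EDGES))  # True
```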

SLIDE 10

(same content as Slide 9)

Searching Graphs for a Least-cost Path

why?

SLIDE 11

• Least-cost path is a greedy path computed by backtracking:
  – start with sgoal and from any state s move to the predecessor state s' such that
    s' = argmin over s'' in pred(s) of ( g(s'') + c(s'', s) )

[Example graph with g-values: g(sstart)=0, g(s2)=1, g(s4)=2, g(s1)=3, g(s3)=5, g(sgoal)=5]

Searching Graphs for a Least-cost Path
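The greedy backtracking rule can be sketched as follows; the graph, costs, and g-values are assumptions read off the slides' figure:

```python
EDGES = {("sstart", "s2"): 1, ("s2", "s1"): 2, ("s2", "s4"): 1,
         ("s1", "sgoal"): 2, ("s4", "s3"): 3, ("s3", "sgoal"): 1}
G = {"sstart": 0, "s2": 1, "s1": 3, "s4": 2, "s3": 5, "sgoal": 5}

def backtrack(g, edges, s_start, s_goal):
    """Greedy backtracking: from s, step to the predecessor s'
    minimizing g(s') + c(s', s), until s_start is reached."""
    path = [s_goal]
    while path[-1] != s_start:
        s = path[-1]
        preds = [(u, c) for (u, v), c in edges.items() if v == s]
        best = min(preds, key=lambda uc: g[uc[0]] + uc[1])
        path.append(best[0])
    return path[::-1]  # reverse into start-to-goal order

print(backtrack(G, EDGES, "sstart", "sgoal"))  # ['sstart', 's2', 's1', 'sgoal']
```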

SLIDE 12

• Computes optimal g-values for relevant states

at any point of time:
  – g(s): the cost of a shortest path from sstart to s found so far
  – h(s): an (under)estimate of the cost of a shortest path from s to sgoal

A* Search

SLIDE 13

• Computes optimal g-values for relevant states (g(s) and h(s) as in Slide 12)

A* Search

heuristic function: one popular heuristic function is Euclidean distance

SLIDE 14

• Heuristic function must be:
  – admissible: for every state s, h(s) ≤ c*(s,sgoal), where c*(s,sgoal) is the minimal cost from s to sgoal
  – consistent (satisfies the triangle inequality): h(sgoal) = 0 and, for every s ≠ sgoal, h(s) ≤ c(s,succ(s)) + h(succ(s))
  – admissibility follows from consistency, and often consistency follows from admissibility

A* Search

SLIDE 15

• Computes optimal g-values for relevant states

Main function:
  g(sstart) = 0; all other g-values are infinite; OPEN = {sstart};
  ComputePath();
  publish solution;

ComputePath function:
  while (sgoal is not expanded)
    remove s with the smallest [f(s) = g(s)+h(s)] from OPEN;
    expand s;

[graph: g(sstart)=0, all other g-values = ∞; h(sstart)=3, h(s2)=2, h(s1)=1, h(s4)=2, h(s3)=1, h(sgoal)=0]

OPEN: set of candidates for expansion; for every expanded state, g(s) is optimal (if heuristics are consistent)

A* Search

SLIDE 16

• Computes optimal g-values for relevant states (ComputePath pseudocode and graph as in Slide 15)

A* Search

SLIDE 17

• Computes optimal g-values for relevant states

ComputePath function:
  while (sgoal is not expanded)
    remove s with the smallest [f(s) = g(s)+h(s)] from OPEN;
    insert s into CLOSED;
    for every successor s' of s such that s' not in CLOSED
      if g(s') > g(s) + c(s,s')
        g(s') = g(s) + c(s,s');
        insert s' into OPEN;

[graph and h-values as in Slide 15; g(sstart)=0, all other g-values = ∞]

CLOSED: set of states that have already been expanded; the if-test tries to decrease g(s') using the found path from sstart to s

A* Search
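A runnable sketch of this pseudocode in Python, with OPEN as a binary heap; stale heap entries are skipped instead of being re-prioritized, a common implementation shortcut. The graph and heuristic values are assumptions read off the slides' figure; ties on f are broken by state name here, which happens to reproduce the expansion order traced on the following slides:

```python
import heapq

def a_star(edges, h, s_start, s_goal):
    """A* per the pseudocode above: repeatedly remove the state with the
    smallest f = g + h from OPEN, put it in CLOSED, and relax the g-values
    of its successors that are not yet in CLOSED."""
    succ = {}
    for (u, v), c in edges.items():
        succ.setdefault(u, []).append((v, c))
    g = {s_start: 0}
    open_heap = [(h[s_start], s_start)]   # entries are (f, state)
    closed, order = set(), []
    while open_heap:
        _, s = heapq.heappop(open_heap)
        if s in closed:
            continue                      # stale entry left by a g-update
        order.append(s)
        if s == s_goal:
            return g[s], order
        closed.add(s)
        for sp, c in succ.get(s, []):
            if sp not in closed and g[s] + c < g.get(sp, float("inf")):
                g[sp] = g[s] + c
                heapq.heappush(open_heap, (g[sp] + h[sp], sp))
    return float("inf"), order

EDGES = {("sstart", "s2"): 1, ("s2", "s1"): 2, ("s2", "s4"): 1,
         ("s1", "sgoal"): 2, ("s4", "s3"): 3, ("s3", "sgoal"): 1}
H = {"sstart": 3, "s2": 2, "s1": 1, "s4": 2, "s3": 1, "sgoal": 0}

cost, order = a_star(EDGES, H, "sstart", "sgoal")
print(cost, order)  # 5 ['sstart', 's2', 's1', 's4', 'sgoal']
```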

SLIDE 18

• Computes optimal g-values for relevant states (pseudocode as in Slide 17)

CLOSED = {}, OPEN = {sstart}; next state to expand: sstart

[g(sstart)=0; all other g-values = ∞]

A* Search

SLIDE 19

• Computes optimal g-values for relevant states (pseudocode as in Slide 17)

CLOSED = {}, OPEN = {sstart}; next state to expand: sstart

g(s2) > g(sstart) + c(sstart,s2)

[g(sstart)=0; all other g-values = ∞]

A* Search

SLIDE 20

• Computes optimal g-values for relevant states (pseudocode as in Slide 17)

[g(sstart)=0, g(s2)=1; all other g-values = ∞]

A* Search

SLIDE 21

• Computes optimal g-values for relevant states (pseudocode as in Slide 17)

CLOSED = {sstart}, OPEN = {s2}; next state to expand: s2

[g(sstart)=0, g(s2)=1; all other g-values = ∞]

A* Search

SLIDE 22

• Computes optimal g-values for relevant states (pseudocode as in Slide 17)

CLOSED = {sstart,s2}, OPEN = {s1,s4}; next state to expand: s1

[g(sstart)=0, g(s2)=1, g(s1)=3, g(s4)=2; g(s3)=g(sgoal)=∞]

A* Search

SLIDE 23

• Computes optimal g-values for relevant states (pseudocode as in Slide 17)

CLOSED = {sstart,s2,s1}, OPEN = {s4,sgoal}; next state to expand: s4

[g(sstart)=0, g(s2)=1, g(s1)=3, g(s4)=2, g(sgoal)=5; g(s3)=∞]

A* Search

SLIDE 24

• Computes optimal g-values for relevant states (pseudocode as in Slide 17)

CLOSED = {sstart,s2,s1,s4}, OPEN = {s3,sgoal}; next state to expand: sgoal

[g(sstart)=0, g(s2)=1, g(s1)=3, g(s4)=2, g(s3)=5, g(sgoal)=5]

A* Search

SLIDE 25

• Computes optimal g-values for relevant states (pseudocode as in Slide 17)

CLOSED = {sstart,s2,s1,s4,sgoal}, OPEN = {s3}; done

[final g-values: g(sstart)=0, g(s2)=1, g(s1)=3, g(s4)=2, g(s3)=5, g(sgoal)=5]

A* Search

SLIDE 26

• Computes optimal g-values for relevant states (pseudocode as in Slide 17; final g-values as in Slide 25)

for every expanded state, g(s) is optimal; for every other state, g(s) is an upper bound; we can now compute a least-cost path

A* Search

SLIDE 27

(same as Slide 26)

A* Search

SLIDE 28

(same as Slide 26)

why?

A* Search

SLIDE 29

• Is guaranteed to return an optimal path (in fact, optimal g-values for every expanded state): optimal in terms of the solution
• Performs a provably minimal number of state expansions required to guarantee optimality: optimal in terms of the computations

[graph with final g-values as in Slide 25]

A* Search

SLIDE 30

• Is guaranteed to return an optimal path (in fact, optimal g-values for every expanded state): optimal in terms of the solution

Sketch of proof by induction for h = 0:
  – assume all previously expanded states have optimal g-values
  – next state to expand is s: f(s) = g(s) is the minimum among states in OPEN
  – OPEN separates expanded states from never-seen states
  – thus, any path to s via a state in OPEN or an unseen state would be worse than g(s) (assuming positive costs)

A* Search

[graph with g-values as in Slide 24]

CLOSED = {sstart,s2,s1,s4} OPEN = {s3,sgoal} next state to expand: sgoal

SLIDE 31

  • A* Search: expands states in the order of f = g+h values

(ComputePath pseudocode as in Slide 17; the loop over successors constitutes the expansion of s)

Effect of the Heuristic Function

SLIDE 32

• A* Search: expands states in the order of f = g+h values

Sketch of proof of optimality by induction for consistent h:
  1. assume all previously expanded states have optimal g-values
  2. next state to expand is s: f(s) = g(s)+h(s) is the minimum among states in OPEN
  3. assume g(s) is suboptimal
  4. then there must be at least one state s' on an optimal path from start to s that is in OPEN but wasn't expanded
  5. so g(s') + h(s') ≥ g(s) + h(s)
  6. but g(s') + c*(s',s) < g(s), and consistency gives h(s') ≤ c*(s',s) + h(s), so g(s') + h(s') ≤ g(s') + c*(s',s) + h(s) < g(s) + h(s)
  7. this contradicts 5, thus g(s) must be optimal

Effect of the Heuristic Function

SLIDE 33

• A* Search: expands states in the order of f = g+h values
• Dijkstra’s: expands states in the order of f = g values (pretty much)
• Intuitively: f(s) is an estimate of the cost of a least-cost path from start to goal via s

Effect of the Heuristic Function

(g(s): the cost of a shortest path from sstart to s found so far; h(s): an (under)estimate of the cost of a shortest path from s to sgoal)

SLIDE 34

• A* Search: expands states in the order of f = g+h values
• Dijkstra’s: expands states in the order of f = g values (pretty much)
• Weighted A*: expands states in the order of f = g+εh values, ε > 1 = bias towards states that are closer to goal

Effect of the Heuristic Function

(g(s): the cost of a shortest path from sstart to s found so far; h(s): an (under)estimate of the cost of a shortest path from s to sgoal)
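A small experiment contrasting the three orderings; the grid, its size, and the tie-breaking toward larger g are illustrative assumptions, not from the slides. On an obstacle-free grid the Manhattan heuristic is perfect, so A* and weighted A* both expand few states while Dijkstra's expands nearly the whole grid; weighted A* pays off mainly when h has local minima (e.g. around obstacles):

```python
import heapq

def grid_search(start, goal, size, eps):
    """f = g + eps*h on a 4-connected size-by-size obstacle-free grid with
    unit edge costs and a Manhattan heuristic. eps = 0 gives Dijkstra's,
    eps = 1 plain A*, eps > 1 weighted A*. Ties on f are broken toward
    larger g, a common refinement. Returns (path cost, expansions)."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g = {start: 0}
    heap = [(eps * h(start), 0, start)]   # entries are (f, -g, state)
    closed = set()
    while heap:
        _, _, s = heapq.heappop(heap)
        if s in closed:
            continue                      # stale entry
        if s == goal:
            return g[s], len(closed)
        closed.add(s)
        x, y = s
        for sp in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= sp[0] < size and 0 <= sp[1] < size:
                if sp not in closed and g[s] + 1 < g.get(sp, 1 << 30):
                    g[sp] = g[s] + 1
                    heapq.heappush(heap, (g[sp] + eps * h(sp), -g[sp], sp))
    return None

results = {eps: grid_search((0, 0), (9, 9), 10, eps) for eps in (0, 1, 5)}
for eps, (cost, n) in results.items():
    print(f"eps={eps}: cost={cost}, expansions={n}")
```

All three find a cost-18 path; the expansion counts show the informed searches doing far less work than the uninformed one.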

SLIDE 35

Effect of the Heuristic Function


  • Dijkstra’s: expands states in the order of f = g values

What are the states expanded?

SLIDE 36

Effect of the Heuristic Function


  • A* Search: expands states in the order of f = g+h values

What are the states expanded?

SLIDE 37

Effect of the Heuristic Function


  • A* Search: expands states in the order of f = g+h values

for large problems this results in A* quickly running out of memory (memory: O(n))

SLIDE 38

Effect of the Heuristic Function

• Weighted A* Search: expands states in the order of f = g+εh values, ε > 1 = bias towards states that are closer to goal

key to finding a solution fast: shallow minima of the function h(s) - h*(s)

what states are expanded? – research question

SLIDE 39

Effect of the Heuristic Function

• Weighted A* Search:
  – trades off optimality for speed
  – ε-suboptimal: cost(solution) ≤ ε·cost(optimal solution)
  – in many domains, it has been shown to be orders of magnitude faster than A*
  – the research question becomes: develop a heuristic function that has shallow local minima

SLIDE 40

Effect of the Heuristic Function

• Weighted A* Search (same properties as Slide 39)

Is it guaranteed to expand no more states than A*?

SLIDE 41

Effect of the Heuristic Function

• Constructing anytime search based on weighted A*:
  – find the best path possible given some amount of time for planning
  – do it by running a series of weighted A* searches with decreasing ε:

ε = 2.5: 13 expansions, solution = 11 moves
ε = 1.5: 15 expansions, solution = 11 moves
ε = 1.0: 20 expansions, solution = 10 moves
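The naive anytime scheme above can be sketched as follows, restarting weighted A* from scratch with each smaller ε (this restart cost is exactly the inefficiency discussed next). The toy graph is the one from the earlier slides, an assumption; its expansion counts differ from the slides' grid example:

```python
import heapq

EDGES = {("sstart", "s2"): 1, ("s2", "s1"): 2, ("s2", "s4"): 1,
         ("s1", "sgoal"): 2, ("s4", "s3"): 3, ("s3", "sgoal"): 1}
H = {"sstart": 3, "s2": 2, "s1": 1, "s4": 2, "s3": 1, "sgoal": 0}
SUCC = {}
for (u, v), c in EDGES.items():
    SUCC.setdefault(u, []).append((v, c))

def weighted_a_star(eps):
    """One weighted A* run: expand in order of f = g + eps*h."""
    g = {"sstart": 0}
    heap = [(eps * H["sstart"], "sstart")]
    closed = set()
    while heap:
        _, s = heapq.heappop(heap)
        if s in closed:
            continue
        if s == "sgoal":
            return g[s]
        closed.add(s)
        for sp, c in SUCC.get(s, []):
            if sp not in closed and g[s] + c < g.get(sp, float("inf")):
                g[sp] = g[s] + c
                heapq.heappush(heap, (g[sp] + eps * H[sp], sp))
    return float("inf")

# Anytime loop: each answer is eps-suboptimal; later iterations tighten it.
best = float("inf")
for eps in (2.5, 1.5, 1.0):
    best = min(best, weighted_a_star(eps))
    print(f"eps={eps}: best cost so far {best} (<= {eps} x optimal)")
```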

SLIDE 42

Effect of the Heuristic Function

• Constructing anytime search based on weighted A* (same ε sequence and expansion counts as Slide 41)

• Inefficient because:
  – many state values remain the same between search iterations
  – we should be able to reuse the results of previous searches

SLIDE 43

Effect of the Heuristic Function

• Constructing anytime search based on weighted A* (same ε sequence and expansion counts as Slide 41)

• ARA* (will be explained in a later lecture):
  – an efficient version of the above that reuses state values within any search iteration
  – we will learn it next lecture, after we learn about the incremental version of A*

SLIDE 44

Effect of the Heuristic Function

• Useful properties to know:
  – if h1(s), h2(s) are consistent, then h(s) = max(h1(s), h2(s)) is consistent
  – if A* uses ε-consistent heuristics: h(sgoal) = 0 and h(s) ≤ ε·c(s,succ(s)) + h(succ(s)) for all s ≠ sgoal, then A* is ε-suboptimal: cost(solution) ≤ ε·cost(optimal solution)
  – weighted A* is A* with ε-consistent heuristics
  – if h1(s), h2(s) are consistent, then h(s) = h1(s) + h2(s) is ε-consistent

SLIDE 45

Effect of the Heuristic Function

• Useful properties to know (same as Slide 44)

Proof?

SLIDE 46

Effect of the Heuristic Function

• Useful properties to know (same as Slide 44)

Proof? What is ε? Proof?

SLIDE 47

Examples of Heuristic Function

• For grid-based navigation:
  – Euclidean distance
  – Manhattan distance: h(x,y) = abs(x-xgoal) + abs(y-ygoal)
  – Diagonal distance: h(x,y) = max(abs(x-xgoal), abs(y-ygoal))
  – More informed distances?

• Robot arm planning:
  – End-effector distance
  – Any others?
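The grid heuristics above as one-liners; the admissibility caveats in the comments depend on the grid's connectivity and edge costs, an assumption not spelled out on the slide:

```python
def manhattan(x, y, xg, yg):
    # admissible on 4-connected grids with unit edge costs
    return abs(x - xg) + abs(y - yg)

def diagonal(x, y, xg, yg):
    # admissible on 8-connected grids where diagonal moves also cost 1
    return max(abs(x - xg), abs(y - yg))

def euclidean(x, y, xg, yg):
    # straight-line distance: a lower bound for any metric cost map
    return ((x - xg) ** 2 + (y - yg) ** 2) ** 0.5

print(manhattan(0, 0, 3, 4), diagonal(0, 0, 3, 4), euclidean(0, 0, 3, 4))  # 7 4 5.0
```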

SLIDE 48

Examples of Heuristic Function

(same heuristics as Slide 47)

SLIDE 49

Examples of Heuristic Function

• For grid-based navigation: (same heuristics as Slide 47)

• Autonomous door opening:
  – Heuristic function?

SLIDE 50

Memory Issues

• A* performs a provably minimal number of expansions (O(n)) for finding a provably optimal solution
• Memory requirements of A* (O(n)) can be improved though
• Memory requirements of weighted A* are often, but not always, better

SLIDE 51

Memory Issues

• Alternatives:
  – Depth-First Search (w/o coloring all expanded states):
    • explores one possible path at a time, avoiding loops and keeping in memory only the best path discovered so far
    • complete and optimal (assuming finite state-spaces)
    • memory: O(bm), where b = max. branching factor, m = max. path length
    • complexity: O(b^m), since it will repeatedly re-expand states

SLIDE 52

Memory Issues

• Depth-First Search (same properties as Slide 51)
• Example:
  – graph: a 4-connected grid of 40 by 40 cells; start: center of the grid
  – A* expands up to 800 states, whereas DFS may expand way over 4^20 > 10^12 states

SLIDE 53

Memory Issues

• Depth-First Search alternative and example (same as Slide 52)

What if goal is few steps away in a huge state-space?

SLIDE 54

Memory Issues

• Alternatives:
  – IDA* (Iterative Deepening A*):
    1. set fmax = 1 (or some other small value)
    2. execute the (previously explained) DFS that does not expand states with f > fmax
    3. if DFS returns a path to the goal, return it
    4. otherwise, set fmax = fmax + 1 (or a larger increment) and go to step 2
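A runnable sketch of these steps on the toy graph from the earlier slides (an assumption); it uses the step-4 "larger increment" option by raising fmax to the smallest f-value that exceeded the bound:

```python
EDGES = {("sstart", "s2"): 1, ("s2", "s1"): 2, ("s2", "s4"): 1,
         ("s1", "sgoal"): 2, ("s4", "s3"): 3, ("s3", "sgoal"): 1}
H = {"sstart": 3, "s2": 2, "s1": 1, "s4": 2, "s3": 1, "sgoal": 0}

def ida_star(edges, h, s_start, s_goal):
    """Repeated depth-first searches bounded by f = g + h <= fmax; fmax
    starts at h(s_start) and is raised to the smallest overflowing f."""
    succ = {}
    for (u, v), c in edges.items():
        succ.setdefault(u, []).append((v, c))

    def dfs(s, g, fmax, on_path):
        f = g + h[s]
        if f > fmax:
            return None, f          # report the overflowing f-value
        if s == s_goal:
            return g, f
        smallest = float("inf")
        for sp, c in succ.get(s, []):
            if sp in on_path:       # avoid looping
                continue
            cost, over = dfs(sp, g + c, fmax, on_path | {sp})
            if cost is not None:
                return cost, over
            smallest = min(smallest, over)
        return None, smallest

    fmax = h[s_start]
    while True:
        cost, over = dfs(s_start, 0, fmax, {s_start})
        if cost is not None:
            return cost
        if over == float("inf"):
            return None             # no path exists
        fmax = over

print(ida_star(EDGES, H, "sstart", "sgoal"))  # 5
```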

SLIDE 55

Memory Issues

• IDA* (steps as in Slide 54):
  – complete and optimal in any state-space (with positive costs)
  – memory: O(bl), where b = max. branching factor, l = length of the optimal path
  – complexity: O(k·b^l), where k is the number of times DFS is called