Global A-Optimal Robot Exploration in SLAM (ICRA 2005), Robert Sim and Nicholas Roy. PowerPoint PPT presentation.



SLIDE 1

Global A-Optimal Robot Exploration in SLAM

ICRA 2005 Robert Sim and Nicholas Roy Presented by Andy Matuschak

SLIDE 2

The Problem

SLIDE 3

The Problem

  • How do we interpret our sensors’ data?
  • How can we avoid obstacles and hazards?
  • What’s the best path to explore the terrain?
  • What do we do in the event of a Martian attack on our instruments?


SLIDE 5

The Problem

  • How do we interpret our sensors’ data?
  • How can we avoid obstacles and hazards?
  • What’s the best path to explore the terrain?
  • And what does “best” mean?
  • What do we do in the event of a Martian attack on our instruments?

SLIDE 6

What is “best”? Formulation

In general, a state is represented by the positions of the features in the environment together with the robot’s position.
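The equation images on this slide were not captured; a standard SLAM state parameterization consistent with the surrounding text (the symbols are my assumption, not necessarily the authors’ notation) is:

```latex
x = \begin{bmatrix} x_r \\ m_1 \\ \vdots \\ m_N \end{bmatrix},
\qquad
x_r = (x,\, y,\, \theta),
\qquad
m_i = (m_{i,x},\; m_{i,y})
```

Here the robot pose carries position and heading, and each feature is a 2-D point.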

SLIDE 7

What is “best”? Formulation

  • And takes range and bearing measurements:
  • But the measurements are noisy! We need a measurement noise model.
  • The robot takes some action at every step:
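The measurement and action equations were likewise images on the slide; a conventional range-and-bearing model matching the bullets (again, my notation) would be:

```latex
z_t = \begin{bmatrix} r_t \\ \phi_t \end{bmatrix} = h(x_t) + w_t,
\qquad
w_t \sim \mathcal{N}(0, R),
\qquad
x_{t+1} = f(x_t, u_t)
```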

SLIDE 8
What is “best”? Using SLAM

  • SLAM (“Simultaneous Localization and Mapping”) gives the posterior state distribution.
  • Assumes noise is Gaussian.
  • Makes increasingly better state estimates.
  • Produces a state estimate with a mean and a covariance.
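In the Gaussian setting, the posterior named in the bullets takes the form (notation assumed, not taken from the slides):

```latex
p(x_t \mid z_{1:t}, u_{1:t}) = \mathcal{N}\!\left(\mu_t,\, \Sigma_t\right)
```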

SLIDE 9
What is “best”? Entropic Analysis

  • We need something to minimize!
  • How much does each move help?
  • We can try analyzing the system’s entropy.

SLIDE 10
What is “best”? Entropic Analysis

  • Entropy of a distribution:
  • Relative entropy after taking some action:
  • Now, if we have d prior components:
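The entropy expressions on this slide were images; for the Gaussian posterior that SLAM maintains here, the standard d-dimensional forms consistent with the bullets are (my reconstruction, not the slide’s exact notation):

```latex
H(p) = \tfrac{1}{2}\,\ln\!\left((2\pi e)^{d}\,\lvert\Sigma\rvert\right),
\qquad
\Delta H(a) = H\!\left(p(x)\right) - H\!\left(p(x \mid a)\right)
```

Only the determinant of the covariance varies with the action, which motivates minimizing it.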
SLIDE 11
What is “best”? D-Optimal

  • We want to maximize information gain, so we want to minimize entropy.
  • Given that the Gaussian entropy depends on the covariance only through its determinant, we minimize the covariance determinant.
  • This is called “d-optimal” minimization.

SLIDE 12
What is “best”? D-Optimal

  • The determinant is proportional to the volume of a hyperellipsoid whose diameter along each dimension is an eigenvalue of the covariance.
  • It can be sent to zero by minimizing a single dimension, even while uncertainty in the other dimensions stays large.
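A minimal NumPy sketch (mine, not from the paper) of this failure mode: collapsing one eigenvalue drives the determinant toward zero even though total uncertainty remains large, while the trace is not fooled:

```python
import numpy as np

# Two toy 3x3 covariance matrices; for diagonal matrices the diagonal
# entries are exactly the eigenvalues.
collapsed = np.diag([1e-6, 100.0, 100.0])  # one direction squeezed to ~zero
balanced = np.diag([1.0, 1.0, 1.0])        # uniformly small uncertainty

# D-optimal score (determinant): the collapsed matrix looks far "better",
# despite huge uncertainty in two of its three directions.
print(np.linalg.det(collapsed))  # ~0.01
print(np.linalg.det(balanced))   # ~1.0

# A-optimal score (trace = total variance) exposes the problem.
print(np.trace(collapsed))  # ~200
print(np.trace(balanced))   # 3.0
```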

SLIDE 13
What is “best”? A-Optimal

  • Idea: minimize the mean error instead of the overall variance.
  • Minimize the trace instead of the determinant:
  • For a constant feature count, the trace is proportional to the mean variance.
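The trace criterion in the bullets can be written out explicitly (my notation): for a d-dimensional state, the trace is the sum of the per-dimension variances (equivalently, the covariance eigenvalues), so dividing by the constant d gives the mean variance:

```latex
\operatorname{tr}(\Sigma) = \sum_{i=1}^{d} \Sigma_{ii} = \sum_{i=1}^{d} \lambda_i,
\qquad
\overline{\sigma^{2}} = \frac{1}{d}\,\operatorname{tr}(\Sigma)
```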

SLIDE 14
What is “best”? A-Optimal

  • The change may seem arbitrary, but:

[supporting figure not captured]

SLIDE 15

And now, for exploration…

SLIDE 16

Greedy Exploration


  • At every step, pick the single “best” action.
  • Fast!
  • Simple!
  • Not very effective!
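The greedy rule above can be sketched in a few lines (my sketch with a stand-in scoring model, not the authors’ code): score each candidate action by its predicted covariance trace and take the minimum:

```python
def greedy_action(actions, predicted_trace):
    """One-step lookahead: pick the action with the lowest predicted trace."""
    return min(actions, key=predicted_trace)

# Toy usage: four headings scored by a stand-in prediction model.
predicted = {"north": 3.2, "east": 1.5, "south": 2.8, "west": 4.0}
best = greedy_action(predicted, predicted.get)  # "east"
```

This is exactly why greedy is fast and simple: one evaluation per candidate action, no lookahead beyond a single step.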
SLIDE 17

Global Exploration

  • Idea: pick the “best” sequence of actions.
  • Optimally accurate!
  • Clearly intractable without manipulation.

SLIDE 18

Pruning the Search

  • Discretize the environment into a grid.
  • The robot can move to its 8-connected neighbors.
  • Find the best path that doesn’t cross itself.
  • Repeat until uncertainty is low enough.

SLIDE 19

Algorithmic Idea

  • Because paths don’t cross, each point has one best covariance trace.
  • For each point, we store the best trace and the last point visited in the best path to it.
  • Only update these if the trace along some other path is lower.
  • Use a priority queue for the states to speed up convergence.
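A schematic sketch of this bookkeeping (mine, not the authors’ implementation): a Dijkstra-style relaxation over the 8-connected grid that stores each cell’s best trace and its parent, updating only when some other path achieves a lower trace. The real algorithm propagates the SLAM covariance and enforces the no-crossing rule by walking the parent list; here `trace_after_move` is a stand-in additive cost and the crossing check is omitted for brevity:

```python
import heapq

def best_paths(grid_size, start, trace_after_move, start_trace=0.0):
    best = {start: start_trace}     # best known covariance trace per cell
    parent = {start: None}          # previous cell on the path achieving it
    queue = [(start_trace, start)]  # priority queue speeds up convergence
    while queue:
        trace, cell = heapq.heappop(queue)
        if trace > best.get(cell, float("inf")):
            continue  # stale entry; a better path was already found
        x, y = cell
        for dx in (-1, 0, 1):       # 8-connected neighbors
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nxt = (x + dx, y + dy)
                if not (0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size):
                    continue
                new_trace = trace_after_move(nxt, trace)
                if new_trace < best.get(nxt, float("inf")):
                    best[nxt] = new_trace  # only update on improvement
                    parent[nxt] = cell
                    heapq.heappush(queue, (new_trace, nxt))
    return best, parent

# Toy usage: every move costs 1, so the best trace at a cell equals its
# 8-connected (Chebyshev) distance from the start.
best, parent = best_paths(3, (0, 0), lambda cell, tr: tr + 1.0)
```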

SLIDES 20-25

[figure-only slides; images not captured]

slide-26
SLIDE 26

Convergence

  • No state can be repeated on any trajectory.
  • Entropy goes down with every measurement, since we prune bad paths.
  • Therefore, the algorithm converges.

SLIDE 27

Time Analysis

  • Assume a priority queue with linear search.
  • For s positions, we have O(s²) updates.
  • Checking the m-length “parent” list at each step is O(m).
  • But the list is bounded by s (no repeats!), so the total running time is O(s³).
  • This can be much better with faster queues.

SLIDE 28

Performance

[performance figure not captured]
SLIDE 29

Performance

[performance figure not captured]
SLIDE 30

My Thoughts

  • Changing the meaning of “best” had a huge impact. What other “best”s are good?
  • How much do we lose by not allowing loops in our paths?
  • What if making an observation is expensive?

SLIDE 31

Questions?
