Introduction to Mobile Robotics: Bayes Filter, Particle Filter, and Monte Carlo Localization (PowerPoint PPT Presentation)


SLIDE 1

Introduction to Mobile Robotics
Bayes Filter – Particle Filter and Monte Carlo Localization

Wolfram Burgard

SLIDE 2

Motivation

§ Estimating the state of a dynamical system is a fundamental problem
§ The recursive Bayes filter is an effective approach to estimating the belief about the state of a dynamical system
§ How do we represent this belief?
§ How do we maximize it?
§ Particle filters are a way to efficiently represent an arbitrary (non-Gaussian) distribution
§ Basic principle:
  § A set of state hypotheses ("particles")
  § Survival of the fittest

SLIDE 3

Bayes Filters

Notation: z = observation, u = action, x = state

Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)

(Bayes)        = η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)
(Markov)       = η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)
(Total prob.)  = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, u_t) dx_{t−1}
(Markov)       = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, u_t) dx_{t−1}
(Markov)       = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, z_{t−1}) dx_{t−1}
               = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) Bel(x_{t−1}) dx_{t−1}
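The recursion above can be sanity-checked numerically. Below is a minimal sketch of the Bayes filter on a 1-D grid (a histogram filter); the cyclic five-cell world, the deterministic "move right" motion model, and the sensor likelihood are illustrative assumptions, not from the slides.

```python
import numpy as np

def predict(bel, motion_model):
    # Bel⁻(x_t) = Σ_{x_{t−1}} P(x_t | u_t, x_{t−1}) Bel(x_{t−1})
    return motion_model @ bel

def correct(bel_bar, likelihood):
    # Bel(x_t) = η P(z_t | x_t) Bel⁻(x_t), where η normalizes the belief
    bel = likelihood * bel_bar
    return bel / bel.sum()

n = 5
bel = np.zeros(n)
bel[1] = 1.0                                      # robot known to be in cell 1
motion_model = np.roll(np.eye(n), 1, axis=0)      # "move right" one cell (cyclic)
likelihood = np.array([0.1, 0.1, 0.6, 0.1, 0.1])  # sensor favors cell 2

bel = correct(predict(bel, motion_model), likelihood)
print(bel.argmax())  # → 2: motion and measurement agree on cell 2
```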

SLIDE 4

Probabilistic Localization

SLIDE 5

Function Approximation

§ Particle sets can be used to approximate functions
§ The more particles that fall into an interval, the higher the probability of that interval
§ How do we draw samples from a function/distribution?

SLIDE 6

Rejection Sampling

§ Let us assume that f(x) < a for all x
§ Sample x from a uniform distribution
§ Sample c from [0, a]
§ If f(x) > c, keep the sample; otherwise, reject the sample

[Figure: a rejected sample (c above f(x)) and an accepted sample (c below f(x′)) under the bound a]
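The procedure above can be sketched directly in Python. The target density f below is an assumed example (an unnormalized bump on [0, 1] bounded by a = 1), not from the slides.

```python
import random

def f(x):
    return 4.0 * x * (1.0 - x)   # example target: bump on [0, 1], maximum 1.0

def rejection_sample(f, a, n):
    samples = []
    while len(samples) < n:
        x = random.uniform(0.0, 1.0)   # sample x from a uniform distribution
        c = random.uniform(0.0, a)     # sample c from [0, a]
        if f(x) > c:                   # keep the sample ...
            samples.append(x)
        # ... otherwise reject it and try again
    return samples

random.seed(0)
samples = rejection_sample(f, a=1.0, n=10_000)
# more samples land where f is large, i.e. near x = 0.5
```

Note the trade-off: the closer the bound a is to the true maximum of f, the fewer samples are rejected.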

SLIDE 7

Importance Sampling Principle

§ We can even use a different distribution g to generate samples from f
§ Using an importance weight w, we can account for the "differences between g and f"
§ w = f / g
§ f is called the target
§ g is called the proposal
§ Pre-condition: f(x) > 0 → g(x) > 0
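A small numeric sketch of this principle: estimate an expectation under the target f using samples from the proposal g, weighted by w = f/g. The densities below are assumed for illustration (f(x) = 2x on [0, 1], g uniform on [0, 1], so the pre-condition g(x) > 0 wherever f(x) > 0 holds).

```python
import random

def f(x):          # target density: f(x) = 2x on [0, 1]
    return 2.0 * x

def g(x):          # proposal density: uniform on [0, 1]
    return 1.0

random.seed(0)
n = 100_000
xs = [random.uniform(0.0, 1.0) for _ in range(n)]
ws = [f(x) / g(x) for x in xs]        # importance weights w = f / g

# E_f[x] = ∫ x · 2x dx = 2/3, estimated by the self-normalized weighted average
estimate = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
```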

SLIDE 8

Particle Filter Representation

§ Set of weighted samples: S = {⟨x^[i], w^[i]⟩ | i = 1, …, n}, where x^[i] is a state hypothesis and w^[i] its importance weight
§ The samples represent the posterior

SLIDE 9

Importance Sampling with Resampling: Landmark Detection Example

SLIDE 10

Distributions

SLIDE 11

Distributions

Wanted: samples distributed according to p(x | z_1, z_2, z_3)

SLIDE 12

This is Easy!

We can draw samples from p(x | z_l) by adding noise to the detection parameters.

SLIDE 13

Importance Sampling

Target distribution f:
  p(x | z_1, z_2, …, z_n) = (∏_k p(z_k | x)) p(x) / p(z_1, z_2, …, z_n)

Sampling distribution g:
  p(x | z_l) = p(z_l | x) p(x) / p(z_l)

Importance weights w:
  f / g = p(x | z_1, z_2, …, z_n) / p(x | z_l) = p(z_l) ∏_{k≠l} p(z_k | x) / p(z_1, z_2, …, z_n)

SLIDE 14

Importance Sampling with Resampling

Weighted samples After resampling

SLIDE 15

Particle Filter Localization

1. Draw x^i_{t−1} from Bel(x_{t−1})
2. Draw x^i_t from p(x_t | x^i_{t−1}, u_t)
3. Importance factor for x^i_t: w^i_t ∝ p(z_t | x^i_t)
4. Re-sample
SLIDE 16

Rejection Sampling

(Recap, in the context of step 2: draw x^i_t from p(x_t | x^i_{t−1}, u_t))

§ Let us assume that f(x) < a for all x
§ Sample x from a uniform distribution
§ Sample c from [0, a]
§ If f(x) > c, keep the sample; otherwise, reject the sample

[Figure: a rejected sample (c above f(x)) and an accepted sample (c below f(x′)) under the bound a]
SLIDE 17

Importance Sampling Principle

(Recap, in the context of step 3: importance factor for x^i_t)

§ We can even use a different distribution g to generate samples from f
§ Using an importance weight w, we can account for the "differences between g and f"
§ w = f / g
§ f is called the target
§ g is called the proposal
§ Pre-condition: f(x) > 0 → g(x) > 0
SLIDE 18

Particle Filters

SLIDE 19

Sensor Information: Importance Sampling

Bel(x) ← α p(z | x) Bel⁻(x)

w ← Bel(x) / Bel⁻(x) = α p(z | x) Bel⁻(x) / Bel⁻(x) = α p(z | x)
SLIDE 20

Robot Motion

Bel⁻(x) ← ∫ p(x | u, x′) Bel(x′) dx′

SLIDE 21

Sensor Information: Importance Sampling

Bel(x) ← α p(z | x) Bel⁻(x)

w ← Bel(x) / Bel⁻(x) = α p(z | x) Bel⁻(x) / Bel⁻(x) = α p(z | x)

SLIDE 22

Robot Motion

Bel⁻(x) ← ∫ p(x | u, x′) Bel(x′) dx′

SLIDE 23

Particle Filter Algorithm

§ Sample the next generation of particles using the proposal distribution
§ Compute the importance weights: weight = target distribution / proposal distribution
§ Resampling: "Replace unlikely samples by more likely ones"

SLIDE 24

Particle Filter Algorithm

1.  Algorithm particle_filter(S_{t−1}, u_t, z_t):
2.    S_t = ∅, η = 0
3.    For i = 1, …, n                    // generate new samples
4.      Sample index j(i) from the discrete distribution given by w_{t−1}
5.      Sample x^i_t from p(x_t | x_{t−1}, u_t) using x^{j(i)}_{t−1} and u_t
6.      Compute importance weight w^i_t = p(z_t | x^i_t)
7.      Update normalization factor η = η + w^i_t
8.      Add to new particle set: S_t = S_t ∪ {⟨x^i_t, w^i_t⟩}
9.    For i = 1, …, n
10.     Normalize weights: w^i_t = w^i_t / η
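The algorithm above can be sketched in Python for an assumed 1-D toy system: the state shifts by u_t plus Gaussian noise, and z_t is a noisy reading of the state. Both noise models are illustrative assumptions, not part of the slides.

```python
import math
import random

def particle_filter(S_prev, u, z, motion_noise=0.5, meas_noise=1.0):
    S, eta = [], 0.0
    xs, ws = zip(*S_prev)
    for _ in range(len(S_prev)):
        # 4. sample index j(i) from the discrete distribution given by the weights
        x_prev = random.choices(xs, weights=ws, k=1)[0]
        # 5. sample x^i_t from p(x_t | x_{t−1}, u_t)
        x = x_prev + u + random.gauss(0.0, motion_noise)
        # 6. importance weight w^i_t = p(z_t | x^i_t), here a Gaussian likelihood
        w = math.exp(-0.5 * ((z - x) / meas_noise) ** 2)
        # 7. update normalization factor; 8. add to the new particle set
        eta += w
        S.append((x, w))
    # 9.-10. normalize the weights
    return [(x, w / eta) for x, w in S]

random.seed(1)
n = 1000
S = [(random.uniform(-10.0, 10.0), 1.0 / n) for _ in range(n)]  # uniform prior
S = particle_filter(S, u=1.0, z=3.0)    # commanded motion +1, observation 3
mean = sum(w * x for x, w in S)         # weighted posterior mean, near 3
```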

SLIDE 25

Particle Filter Algorithm

Bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t−1}, u_t) Bel(x_{t−1}) dx_{t−1}

§ Draw x^i_{t−1} from Bel(x_{t−1})
§ Draw x^i_t from p(x_t | x^i_{t−1}, u_t)
§ Importance factor for x^i_t:

w^i_t = target distribution / proposal distribution
      = η p(z_t | x_t) p(x_t | x_{t−1}, u_t) Bel(x_{t−1}) / (p(x_t | x_{t−1}, u_t) Bel(x_{t−1}))
      ∝ p(z_t | x_t)

SLIDE 26

Resampling

§ Given: a set S of weighted samples
§ Wanted: random samples, where the probability of drawing x_i is given by w_i
§ Typically done n times with replacement to generate the new sample set S′

SLIDE 27

Resampling

[Figure: two roulette wheels with sectors of size w_1, w_2, w_3, …, w_{n−1}, w_n]

§ Roulette wheel: binary search, O(n log n)
§ Stochastic universal sampling (systematic resampling): linear time complexity, easy to implement, low variance

SLIDE 28

Resampling Algorithm

1.  Algorithm systematic_resampling(S, n):
2.    S′ = ∅, c_1 = w^1
3.    For i = 2, …, n                    // generate cdf
4.      c_i = c_{i−1} + w^i
5.    u_1 ~ U(0, 1/n], i = 1             // initialize threshold
6.    For j = 1, …, n                    // draw samples …
7.      While (u_j > c_i)                // skip until next threshold reached
8.        i = i + 1
9.      Insert: S′ = S′ ∪ {⟨x^i, 1/n⟩}
10.     Increment threshold: u_{j+1} = u_j + 1/n
11.   Return S′

Also called stochastic universal sampling.
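The pseudo-code above translates almost line by line into Python. Here S is a list of (sample, normalized weight) pairs; the three-sample set at the end is an assumed example.

```python
import random

def systematic_resampling(S, n):
    S_new = []
    # generate cdf
    c, acc = [], 0.0
    for _, w in S:
        acc += w
        c.append(acc)
    # initialize threshold: u_1 ~ U(0, 1/n]
    u = random.uniform(0.0, 1.0 / n)
    i = 0
    for _ in range(n):            # draw samples
        while u > c[i]:           # skip until next threshold reached
            i += 1
        S_new.append((S[i][0], 1.0 / n))   # insert with equal weight 1/n
        u += 1.0 / n              # increment threshold
    return S_new

random.seed(0)
S = [("a", 0.1), ("b", 0.6), ("c", 0.3)]
resampled = systematic_resampling(S, 10)
# "b" carries 60% of the weight, so it appears about 6 times in 10 draws
```

Because the thresholds are evenly spaced, the number of copies of each sample can deviate from its expectation by at most one; this is the source of the low variance mentioned on the slide.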

SLIDE 29

Particle Filters for Mobile Robot Localization

§ Each particle is a potential pose of the robot
§ The proposal distribution is the motion model of the robot (prediction step)
§ The observation model is used to compute the importance weight (correction step)

SLIDE 30

Motion Model

[Figure: particle set propagated from the "Start" pose]

SLIDE 31

Proximity Sensor Model (Reminder)

[Figures: measurement distributions for a laser sensor and a sonar sensor]

SLIDE 32

Mobile Robot Localization Using Particle Filters (1)

§ Each particle is a potential pose of the robot
§ The set of weighted particles approximates the posterior belief about the robot's pose (target distribution)

SLIDE 33

Mobile Robot Localization Using Particle Filters (2)

§ Particles are drawn from the motion model (proposal distribution)
§ Particles are weighted according to the observation model (sensor model)
§ Particles are resampled according to the particle weights

SLIDE 34

Mobile Robot Localization Using Particle Filters (3)

Why is resampling needed?

§ We only have a finite number of particles
§ Without resampling, the filter is likely to lose track of the "good" hypotheses
§ Resampling ensures that particles stay in the meaningful area of the state space

SLIDES 35–51

[Figure-only slides: no text content]

SLIDE 52

Sample-based Localization (Sonar)

SLIDE 53

Using Ceiling Maps for Localization

[Dellaert et al. 99]

SLIDE 54

Vision-based Localization

[Figure: ceiling map h(x), measurement z, and likelihood P(z | x)]

SLIDE 55

Under a Light

[Figures: measurement z and likelihood P(z | x)]

SLIDE 56

Next to a Light

[Figures: measurement z and likelihood P(z | x)]

SLIDE 57

Elsewhere

[Figures: measurement z and likelihood P(z | x)]

SLIDE 58

Global Localization Using Vision

SLIDE 59

Limitations

§ The approach described so far is able
  § to track the pose of a mobile robot and
  § to globally localize the robot
§ How can we deal with localization errors (i.e., the kidnapped robot problem)?

SLIDE 60

Approaches

§ Randomly insert a fixed number of samples with randomly chosen poses
§ This corresponds to the assumption that the robot can be teleported at any point in time to an arbitrary location
§ Alternatively, insert such samples in numbers inversely proportional to the average likelihood of the observations (the lower this likelihood, the higher the probability that the current estimate is wrong)
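The second approach can be sketched as follows. The toy 1-D state space [0, 10], the expected-likelihood reference value, and the replacement rule are illustrative assumptions, not from the slides.

```python
import random

def inject_random_particles(particles, avg_likelihood, expected_likelihood,
                            state_low=0.0, state_high=10.0):
    # the fraction of particles replaced grows as the average observation
    # likelihood drops below what a well-localized filter would produce
    frac = max(0.0, 1.0 - avg_likelihood / expected_likelihood)
    n_random = int(frac * len(particles))
    for k in range(n_random):
        particles[k] = random.uniform(state_low, state_high)  # random pose
    return particles, n_random

random.seed(0)
particles = [5.0] * 100   # filter is confident, but possibly wrong (kidnapped)
_, n = inject_random_particles(particles,
                               avg_likelihood=0.02, expected_likelihood=0.2)
# very low average likelihood → 90 of 100 particles replaced by random poses
```

Smoothing the likelihood estimates over time (as in augmented MCL) avoids overreacting to a single noisy measurement; that refinement is omitted here for brevity.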

SLIDE 61

Summary – Particle Filters

§ Particle filters are an implementation of recursive Bayesian filtering
§ They represent the posterior by a set of weighted samples
§ They can model arbitrary, and thus also non-Gaussian, distributions
§ New samples are drawn from the proposal distribution
§ Weights are computed to account for the difference between the proposal and the target
§ Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter

SLIDE 62

Summary – PF Localization

§ In the context of localization, the particles are propagated according to the motion model.
§ They are then weighted according to the likelihood model (likelihood of the observations).
§ In a re-sampling step, new particles are drawn with a probability proportional to the likelihood of the observation.
§ This leads to one of the most popular approaches to mobile robot localization.