
Development and Implementation of SLAM Algorithms

Kasra Khosoussi

Supervised by: Dr. Hamid D. Taghirad
Advanced Robotics and Automated Systems (ARAS), Industrial Control Laboratory
K.N. Toosi University of Technology

July 13, 2011


Outline

1. Introduction: SLAM Problem; Bayesian Filtering; Particle Filter
2. RBPF-SLAM
3. Monte Carlo Approximation of the Optimal Proposal Distribution: Introduction; LRS; LIS-1; LIS-2
4. Results: Simulation Results; Experiments on Real Data
5. Conclusion
6. Future Work


Introduction: SLAM Problem

Autonomous Mobile Robots

The ultimate goal of mobile robotics is to design autonomous mobile robots. “The ability to simultaneously localize a robot and accurately map its environment is a key prerequisite of truly autonomous robots.” SLAM stands for Simultaneous Localization and Mapping.


The SLAM Problem

Assumptions
  • No a priori knowledge about the environment (i.e., no map)
  • No independent position information (e.g., no GPS)
  • Static environment

Given
  • Observations of the environment
  • Control signals

Goal
  • Estimate the map of the environment (e.g., the locations of the features)
  • Estimate the position and orientation (pose) of the robot


Map Representation

  • Topological maps
  • Grid maps
  • Feature-based maps


Introduction: Bayesian Filtering

Probabilistic Methods

  • Probabilistic methods outperform deterministic algorithms
  • They describe the uncertainty in data, models, and estimates
  • Bayesian estimation

Notation
  • Robot pose s_t; the pose is assumed to be a Markov process with initial distribution p(s_0)
  • Feature location θ_i; map θ = {θ_1, ..., θ_N}
  • Observation z_t and control input u_t
  • Joint state x_t = [s_t, θ]^T
  • x_{1:t} ≜ {x_1, ..., x_t}

Filtering distribution: p(s_t, θ | z_{1:t}, u_{1:t})
Smoothing distribution: p(s_{0:t}, θ | z_{1:t}, u_{1:t})
MMSE estimates: x̂_t = E[x_t | z_{1:t}, u_{1:t}] and x̂_{0:t} = E[x_{0:t} | z_{1:t}, u_{1:t}]


State-Space Equations

Robot motion equation: s_t = f(s_{t-1}, u_t, v_t)
Observation equation: z_t = g(s_t, θ_{n_t}, w_t)

f(·,·,·) and g(·,·,·) are nonlinear functions; v_t and w_t are zero-mean white Gaussian noises with covariance matrices Q_t and R_t.
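To make f and g concrete, here is a minimal sketch of one common instantiation (an illustration, not necessarily the exact models used in the thesis): a planar unicycle motion model and a range-bearing observation model.

    import numpy as np

    def motion_model(s_prev, u, v, dt=1.0):
        """s_t = f(s_{t-1}, u_t, v_t): unicycle pose propagation.
        s = [x, y, heading]; u = [linear vel, angular vel]; v is noise on u."""
        x, y, phi = s_prev
        vel, omega = u[0] + v[0], u[1] + v[1]   # noise enters through the controls
        return np.array([x + vel * dt * np.cos(phi),
                         y + vel * dt * np.sin(phi),
                         phi + omega * dt])

    def observation_model(s, theta_n, w):
        """z_t = g(s_t, theta_{n_t}, w_t): range and bearing to feature theta_n = [mx, my]."""
        dx, dy = theta_n[0] - s[0], theta_n[1] - s[1]
        return np.array([np.hypot(dx, dy),                # range
                         np.arctan2(dy, dx) - s[2]]) + w  # bearing, plus noise w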


Bayes Filter

How do we estimate the posterior distribution recursively in time? With the Bayes filter:

1. Prediction:

   p(x_t | z_{1:t-1}, u_{1:t}) = ∫ p(x_t | x_{t-1}, u_t) p(x_{t-1} | z_{1:t-1}, u_{1:t-1}) dx_{t-1}   (1)

2. Update:

   p(x_t | z_{1:t}, u_{1:t}) = p(z_t | x_t) p(x_t | z_{1:t-1}, u_{1:t}) / ∫ p(z_t | x_t) p(x_t | z_{1:t-1}, u_{1:t}) dx_t   (2)

The motion model p(x_t | x_{t-1}, u_t) drives the prediction step and the observation model p(z_t | x_t) drives the update step. A similar recursive formula exists for the smoothing density.


Bayes Filter Cont’d.

Example: for linear-Gaussian models, the Bayes filter equations simplify to the Kalman filter equations.

But in general it is impossible to implement the exact Bayes filter, because it requires evaluating complex high-dimensional integrals. So we have to use approximations:
  • Extended Kalman Filter (EKF)
  • Unscented Kalman Filter (UKF)
  • Gaussian-Sum Filter
  • Extended Information Filter (EIF)
  • Particle Filter (a.k.a. Sequential Monte Carlo methods)
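For reference, a minimal sketch of that linear-Gaussian special case, for an assumed model x_t = A x_{t-1} + B u_t + v_t, z_t = C x_t + w_t (the matrices are illustrative): the prediction and update below are Eqs. (1) and (2) reduced to mean and covariance updates.

    import numpy as np

    def kalman_step(mu, P, u, z, A, B, C, Q, R):
        """One Bayes-filter recursion for a linear-Gaussian model: both the
        predictive and posterior densities stay Gaussian, so only the mean
        mu and covariance P need to be propagated."""
        # Prediction, Eq. (1): p(x_t | z_{1:t-1}, u_{1:t})
        mu_pred = A @ mu + B @ u
        P_pred = A @ P @ A.T + Q
        # Update, Eq. (2): p(x_t | z_{1:t}, u_{1:t})
        S = C @ P_pred @ C.T + R                 # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
        mu_post = mu_pred + K @ (z - C @ mu_pred)
        P_post = (np.eye(len(mu)) - K @ C) @ P_pred
        return mu_post, P_post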


Introduction: Particle Filter

Perfect Monte Carlo Sampling

Q: How do we compute expected values such as

   E_{p(x)}[h(x)] = ∫ h(x) p(x) dx

for any integrable function h(·)?

Perfect Monte Carlo (a.k.a. Monte Carlo integration)
1. Generate N i.i.d. samples {x^{[i]}}_{i=1}^{N} according to p(x)
2. Estimate the PDF as P_N(x) ≜ (1/N) Σ_{i=1}^{N} δ(x − x^{[i]})
3. Estimate E_{p(x)}[h(x)] ≈ ∫ h(x) P_N(x) dx = (1/N) Σ_{i=1}^{N} h(x^{[i]})

Convergence theorems hold as N → ∞ (by the central limit theorem and the strong law of large numbers), and the error decreases as O(N^{-1/2}) regardless of the dimension of x.

But it is usually impossible to sample directly from the filtering or smoothing distribution: it is high-dimensional, non-standard, and only known up to a constant.
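A toy illustration of steps 1-3 (the target p = N(0, 1) and the integrand h are illustrative choices): the true value of E[x²] is 1, and the printed error shrinks roughly as O(N^{-1/2}).

    import numpy as np

    rng = np.random.default_rng(0)
    h = lambda x: x ** 2                     # illustrative integrand; E[h] = 1 under N(0, 1)
    for N in (10**2, 10**4, 10**6):
        x = rng.standard_normal(N)           # step 1: N i.i.d. samples from p
        est = h(x).mean()                    # step 3: (1/N) * sum_i h(x[i])
        print(f"N={N:>7}: estimate={est:.4f}  error={abs(est - 1.0):.4f}")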


Importance Sampling (IS)

Idea: generate samples from another distribution π(x), called the importance function, and weight these samples according to w*(x^{[i]}) = p(x^{[i]}) / π(x^{[i]}):

   E_{p(x)}[h(x)] = ∫ h(x) (p(x) / π(x)) π(x) dx ≈ (1/N) Σ_{i=1}^{N} w*(x^{[i]}) h(x^{[i]})

But how do we do this recursively in time?
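A minimal sketch of the idea, with illustrative choices of target and importance function: the target p is known only up to a constant (as in filtering), so the weights are self-normalized.

    import numpy as np

    rng = np.random.default_rng(0)
    p_unnorm = lambda x: np.exp(-0.5 * (x - 2.0) ** 2)   # target, N(2, 1) up to a constant
    pi_pdf = lambda x: np.exp(-0.5 * (x / 3.0) ** 2) / (3.0 * np.sqrt(2.0 * np.pi))  # N(0, 9)

    x = rng.normal(0.0, 3.0, size=100_000)    # samples from the importance function
    w = p_unnorm(x) / pi_pdf(x)               # unnormalized weights w*(x[i])
    w /= w.sum()                              # self-normalization absorbs the constant
    print("estimate of E_p[x]:", (w * x).sum())   # true value: 2.0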


Sequential Importance Sampling (SIS)

Sampling from scratch from the importance function π(x_{0:t} | z_{1:t}, u_{1:t}) implies a computational cost that grows at every time step.

Q: How do we estimate p(x_{0:t} | z_{1:t}, u_{1:t}) using importance sampling recursively?

Sequential Importance Sampling: at time t, generate x_t^{[i]} according to the proposal distribution π(x_t | x_{0:t-1}^{[i]}, z_{1:t}, u_{1:t}) and merge it with the previous samples x_{0:t-1}^{[i]} drawn from π(x_{0:t-1} | z_{1:t-1}, u_{1:t-1}):

   x_{0:t}^{[i]} = {x_{0:t-1}^{[i]}, x_t^{[i]}} ∼ π(x_{0:t} | z_{1:t}, u_{1:t})

   w(x_{0:t}^{[i]}) = w(x_{0:t-1}^{[i]}) · p(z_t | x_t^{[i]}) p(x_t^{[i]} | x_{t-1}^{[i]}, u_t) / π(x_t^{[i]} | x_{0:t-1}^{[i]}, z_{1:t}, u_{1:t})
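A sketch of one SIS step, assuming the motion model is used as the proposal π so the weight recursion collapses to w_t = w_{t-1} · p(z_t | x_t); motion_sample and likelihood are hypothetical helpers standing in for p(x_t | x_{t-1}, u_t) and p(z_t | x_t).

    import numpy as np

    def sis_step(paths, log_w, u, z, motion_sample, likelihood):
        """One SIS step with the motion model as proposal pi, so the weight
        recursion reduces to w_t = w_{t-1} * p(z_t | x_t). Log-weights are
        used for numerical stability."""
        for path in paths:
            path.append(motion_sample(path[-1], u))              # x[i]_t ~ pi
        log_w = log_w + np.log([likelihood(z, path[-1]) for path in paths])
        return paths, log_w - np.max(log_w)   # weights are defined up to a constant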


Degeneracy and Resampling

Degeneracy problem: after a few steps, all but one of the particles (samples) have negligible normalized weight.

Resampling: eliminate particles with low normalized weights and multiply those with high normalized weights, in a probabilistic manner. Resampling, however, causes sample impoverishment.

The effective sample size (ESS) measures the degeneracy of SIS and can be used to avoid unnecessary resampling steps:

   N̂_eff = 1 / Σ_{i=1}^{N} w̃(x_{0:t}^{[i]})²

where w̃ denotes the normalized weights. Resampling is performed only if N̂_eff falls below a fixed threshold N_T.
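A sketch of the ESS test together with a systematic resampler, one common probabilistic scheme (the slides do not fix a particular one).

    import numpy as np

    def ess(w):
        """Effective sample size N_eff = 1 / sum_i w_i^2, for normalized weights."""
        return 1.0 / np.sum(w ** 2)

    def systematic_resample(rng, w):
        """Return N indices drawn so that low-weight particles tend to vanish
        and high-weight particles are duplicated."""
        N = len(w)
        positions = (rng.random() + np.arange(N)) / N
        return np.searchsorted(np.cumsum(w), positions)

    rng = np.random.default_rng(0)
    w = rng.random(100); w /= w.sum()            # stand-in normalized weights
    N_T = 0.5 * len(w)                           # fixed threshold
    if ess(w) < N_T:                             # resample only when needed
        keep = systematic_resample(rng, w)
        w = np.full(len(w), 1.0 / len(w))        # uniform weights after resampling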


Proposal Distribution

Selecting an appropriate proposal distribution π(x_t | x_{0:t-1}, z_{1:t}, u_{1:t}) plays an important role in the success of the particle filter.

  • The simplest and most common choice is the motion model (transition density) p(x_t | x_{t-1}, u_t)
  • p(x_t | x_{t-1}, z_t, u_t) is known as the optimal proposal distribution; it limits the degeneracy of the particle filter by minimizing the conditional variance of the unnormalized weights
  • The importance weights for the optimal proposal distribution can be obtained as

   w(x_{0:t}^{[i]}) = w(x_{0:t-1}^{[i]}) p(z_t | x_{t-1}^{[i]})

But in SLAM, neither p(x_t | x_{t-1}, z_t, u_t) nor p(z_t | x_{t-1}^{[i]}) can be computed in closed form, so we have to use approximations.


RBPF-SLAM

Rao-Blackwellized Particle Filter in SLAM

SLAM is a very high-dimensional problem, so estimating p(s_{0:t}, θ | z_{1:t}, u_{1:t}) with a plain particle filter can be very inefficient. We can factor the smoothing distribution into two parts:

   p(s_{0:t}, θ | z_{1:t}, u_{1:t}) = p(s_{0:t} | z_{1:t}, u_{1:t}) ∏_{k=1}^{M} p(θ_k | s_{0:t}, z_{1:t}, u_{1:t})

  • The motion model is used as the proposal distribution in FastSLAM 1.0
  • FastSLAM 2.0 linearizes the observation equation and approximates the optimal proposal distribution with a Gaussian distribution
  • FastSLAM 2.0 outperforms FastSLAM 1.0
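The factorization is what makes each particle cheap: conditioned on its sampled path, the M feature posteriors are independent, so a particle carries a pose-path hypothesis plus one small EKF per feature instead of a joint covariance. A structural sketch (field names are illustrative):

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Landmark:
        mean: np.ndarray    # 2-vector: EKF mean of the feature location theta_k
        cov: np.ndarray     # 2x2 EKF covariance

    @dataclass
    class Particle:
        path: list                                       # pose-path hypothesis s_{0:t}
        weight: float
        landmarks: dict = field(default_factory=dict)    # feature id k -> Landmark EKF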


FastSLAM 2.0

Weaknesses and assumptions of FastSLAM 2.0:
  • Linearization error
  • Gaussian approximation
  • Motion models must be linear with respect to the noise variable v_t


Monte Carlo Approximation of the Optimal Proposal Distribution: Introduction

FastSLAM 2.0 vs. the Proposed Algorithms

FastSLAM 2.0
1. Approximate the optimal proposal distribution with a Gaussian using the linearized observation equation, and sample from this Gaussian
2. Update the landmark EKFs for the observed features
3. Compute the importance weights using the linearized observation equation
4. Perform resampling

Proposed Algorithms
1. Sample from the optimal proposal distribution using Monte Carlo sampling methods
2. Update the landmark EKFs for the observed features
3. Compute the importance weights using Monte Carlo integration
4. Perform resampling only if it is necessary according to the ESS


MC Approximation of the Optimal Proposal Distribution

MC sampling methods such as importance sampling (IS) and rejection sampling (RS) can be used to sample from the optimal proposal distribution p(s_t | s_{t-1}^{[i]}, z_t, u_t). Instead of sampling directly from it:

  • We can generate M samples {s_t^{[i,j]}}_{j=1}^{M} according to another distribution q(s_t | s_{t-1}^{[i]}, z_t, u_t) and then weight those samples proportionally to p(s_t | s_{t-1}^{[i]}, z_t, u_t) / q(s_t | s_{t-1}^{[i]}, z_t, u_t): Local Importance Sampling (LIS)
  • Or we can generate M samples {s_t^{[i,j]}}_{j=1}^{M} according to q(s_t | s_{t-1}^{[i]}, z_t, u_t) and accept some of them as samples of the optimal proposal distribution according to the rejection sampling criterion: Local Rejection Sampling (LRS)

In both cases the motion model is used as the instrumental distribution: q(s_t | s_{t-1}^{[i]}, u_t, z_t) = p(s_t | s_{t-1}^{[i]}, u_t).


Local Rejection Sampling (LRS)

[Figure: rejection sampling, illustrated; graphics by M. Jordan]

Rejection Sampling
1. Generate u^{[i]} ∼ U[0, 1] and s_t^{[i,j]} ∼ p(s_t | s_{t-1}^{[i]}, u_t)
2. Accept s_t^{[i,j]} if

   u^{[i]} ≤ p(s_t^{[i,j]} | s_{t-1}^{[i]}, z_t, u_t) / (C · p(s_t^{[i,j]} | s_{t-1}^{[i]}, u_t)) = p(z_t | s_t^{[i,j]}) / max_j p(z_t | s_t^{[i,j]})
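A sketch of the accept test with the motion model as instrumental distribution; the bound C is taken so the ratio becomes p(z_t | s) / max_j p(z_t | s_t^{[i,j]}), as on the slide. The slide draws a single u^{[i]}, while this sketch draws one uniform per candidate, the textbook variant; motion_sample and likelihood are assumed helpers.

    import numpy as np

    def lrs_sample(rng, s_prev, u, z, motion_sample, likelihood, M):
        """Local rejection sampling for one parent particle: draw M candidates
        from the motion model; each is accepted with probability
        p(z_t | s) / max_j p(z_t | s_j)."""
        cand = [motion_sample(s_prev, u) for _ in range(M)]
        lik = np.array([likelihood(z, s) for s in cand])
        accept = rng.random(M) <= lik / lik.max()
        return [s for s, keep in zip(cand, accept) if keep], lik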


LRS Cont’d.

Now we have to compute the importance weights for the set of accepted samples, which requires p(z_t | s_{t-1}^{[i]}):

   p(z_t | s_{t-1}^{[i]}) = ∫ p(z_t | s_t) p(s_t | s_{t-1}^{[i]}, u_t) ds_t ≈ (1/M) Σ_{j=1}^{M} p(z_t | s_t^{[i,j]})   (MC integration)

  • p(z_t | s_t^{[i,j]}) can be approximated by a Gaussian
  • MC integration ⇒ a large number of local particles M is required
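Continuing the sketch above, the parent particle's new weight reuses the M candidate likelihoods:

    import numpy as np

    def lrs_weight(w_prev, lik):
        """w(s[i]_{0:t}) = w(s[i]_{0:t-1}) * p(z_t | s[i]_{t-1}), with the
        predictive likelihood replaced by the Monte Carlo estimate
        (1/M) * sum_j p(z_t | s[i,j]_t) over the M candidates."""
        return w_prev * np.mean(lik)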


Local Importance Sampling (LIS): LIS-1

1. Similar to LRS, generate M random samples {s_t^{[i,j]}}_{j=1}^{M} according to p(s_t | s_{t-1}^{[i]}, u_t)
2. Local IS weights: w^{(LIS)}(s_t^{[i,j]}) = p(z_t | s_t^{[i,j]}) ∝ p(s_t^{[i,j]} | s_{t-1}^{[i]}, z_t, u_t) / p(s_t^{[i,j]} | s_{t-1}^{[i]}, u_t)
3. Perform a local resampling among {s_t^{[i,j]}}_{j=1}^{M} using the weights w^{(LIS)}(s_t^{[i,j]})
4. Main weights: w(s_t^{*[i,j]}) = w(s_{t-1}^{[i]}) p(z_t | s_{t-1}^{[i]})
5. Approximate p(z_t | s_{t-1}^{[i]}) ≈ (1/M) Σ_{j=1}^{M} p(z_t | s_t^{[i,j]})   (MC integration)
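A sketch of LIS-1 for one parent particle, with the same assumed helpers; steps 1-5 map onto the lines below (keeping a single survivor per parent is one possible reading of the local resampling step).

    import numpy as np

    def lis1_step(rng, s_prev, w_prev, u, z, motion_sample, likelihood, M):
        """LIS-1: local IS with the motion model as instrumental distribution,
        a local resampling among the M candidates, and a main weight computed
        by MC integration."""
        cand = [motion_sample(s_prev, u) for _ in range(M)]
        lik = np.array([likelihood(z, s) for s in cand])   # local IS weights
        s_new = cand[rng.choice(M, p=lik / lik.sum())]     # local resampling
        w_new = w_prev * lik.mean()                        # w * p_hat(z_t | s_{t-1})
        return s_new, w_new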


Local Importance Sampling (LIS): LIS-2

  • Similar to LRS and LIS-1, generate M random samples {s_t^{[i,j]}}_{j=1}^{M} according to p(s_t | s_{t-1}^{[i]}, u_t)
  • The total weight of each generated particle in {s_t^{[i,j]}}_{(i,j)=(1,1)}^{(N,M)} is its local weight × its SIS weight
  • Instead of eliminating the local weights through local resamplings (as in LIS-1), LIS-2 computes the total weights
  • It is proved in the thesis that the total weights equal w(s_{0:t}^{[i,j]}) = w(s_{0:t-1}^{[i]}) p(z_t | s_t^{[i,j]})
  • Local weights are used in LIS-2 to filter out samples with low p(z_t | s_t^{[i,j]})
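A sketch of the LIS-2 bookkeeping, same assumed helpers: all N·M candidates are kept and each carries its total weight directly.

    import numpy as np

    def lis2_step(parents, weights, u, z, motion_sample, likelihood, M):
        """LIS-2: the total weight of candidate (i, j) is
        w(s[i]_{0:t-1}) * p(z_t | s[i,j]_t), so low-likelihood candidates are
        down-weighted rather than removed by a local resampling."""
        cand, total_w = [], []
        for s_prev, w_prev in zip(parents, weights):
            for _ in range(M):
                s = motion_sample(s_prev, u)
                cand.append(s)
                total_w.append(w_prev * likelihood(z, s))
        total_w = np.asarray(total_w)
        return cand, total_w / total_w.sum()   # normalized total weights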


Results: Simulation Results

Simulation

  • 50 Monte Carlo runs with different seeds
  • 200 m × 200 m simulated environment

[Figure: the simulated environment (axes in meters)]


Number of Resamplings

Table: Average number of resampling steps over 50 MC simulations

  N     FastSLAM 2.0   LRS (M = 50)   LIS-2 (M = 3)
  20    221.01         180.84         188.85
  30    229.79         186            193.13
  40    235.71         186.55         195.33
  50    237.80         188.96         196.32
  60    239.55         189.62         195.99
  70    240.10         189.40         197.85
  80    243.61         190.39         197.96
  90    244.16         190.94         198.73
  100   245.70         192.69         199.50


Runtime

Table: Average runtime over 50 MC simulations

  N     FastSLAM 2.0 (sec)   LRS (M = 50) (sec)   LIS-2 (M = 3) (sec)
  20    36                   318                  38
  30    52                   480                  56
  40    68                   635                  75
  50    84                   794                  93
  60    103                  953                  112
  70    117                  1106                 130
  80    134                  1265                 148
  90    149                  1413                 165
  100   167                  1588                 183


Mean Square Error

[Figure: average MSE of the estimated robot pose over time (m²) versus the number of particles, for FastSLAM 2.0, LRS, and LIS-2. The parameter M is set to 50 for LRS and 3 for LIS-2.]


Mean Square Error Cont’d.

[Figure: average MSE of the estimated robot position (m²) over 50 Monte Carlo simulations versus time (sec), for FastSLAM 2.0, LRS (L = 50), LRS (L = 20), LIS-2 (L = 20), and LIS-2 (L = 3).]


Results: Experiments on Real Data

Victoria Park Dataset

  • Victoria Park dataset by E. Nebot and J. Guivant
  • Large environment (200 m × 300 m)
  • More than 108,000 controls and observations

[Figure: Estimated robot path]


Conclusion

LRS and LIS can describe non-Gaussian (e.g. multi-modal) proposal distributions Nonlinear motion models Linearization error LRS and LIS have control over the accuracy of the approximation through M Much lower number of resampling steps in LRS and LIS-2 than in FastSLAM 2.0 ⇒ Slower rate of degeneracy ⇒ Better approximation

  • f the optimal proposal distribution ⇒ Sample impoverishment

problem Monte Carlo Integration in LRS and LIS-1 ⇒ Large M ⇒ High computational cost Accurate results of LRS come at the cost of large runtime LIS-2 (M=3) outperform FastSLAM 2.0 and LRS (M=50) for moderate number of particles (N)

  • K. Khosoussi (ARAS)

Development of SLAM Algorithms July 13, 2011 34 / 43

slide-127
SLIDE 127

Future Work

Table of Contents

1

Introduction SLAM Problem Bayesian Filtering Particle Filter

2

RBPF-SLAM

3

Monte Carlo Approximation of the Optimal Proposal Distribution Introduction LRS LIS-1 LIS-2

4

Results Simulation Results Experiments On Real Data

5

Conclusion

6

Future Work

  • K. Khosoussi (ARAS)

Development of SLAM Algorithms July 13, 2011 35 / 43

slide-128
SLIDE 128

Future Work

Future Work

• How to guess an “appropriate” value for M?
• More Monte Carlo runs
• A hybrid algorithm (linearization + Monte Carlo sampling)
• Repeat the simulations for the case of unknown data association

  • K. Khosoussi (ARAS)

Development of SLAM Algorithms July 13, 2011 36 / 43

slide-132
SLIDE 132

Papers

Papers

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2011: Monte-Carlo Approximation of The Optimal Proposal Distribution in RBPF-SLAM (submitted) . . .

  • K. Khosoussi (ARAS)

Development of SLAM Algorithms July 13, 2011 37 / 43

slide-133
SLIDE 133

Thank You

Thank You . . .

Figure: Melon: The Semi-autonomous Mobile Robot of K.N. Toosi Univ. of Tech.

  • K. Khosoussi (ARAS)

Development of SLAM Algorithms July 13, 2011 38 / 43

slide-135
SLIDE 135

Thank You

Thank you . . .

  • K. Khosoussi (ARAS)

Development of SLAM Algorithms July 13, 2011 39 / 43

slide-136
SLIDE 136

Thank You

Rao-Blackwellized Particle Filter in SLAM

SLAM is a very high-dimensional estimation problem, so estimating p(s0:t, θ | z1:t, u1:t) directly with a particle filter can be very inefficient. We can factor the smoothing distribution into two parts:

p(s0:t, θ | z1:t, u1:t) = p(s0:t | z1:t, u1:t) ∏_{k=1}^{M} p(θk | s0:t, z1:t, u1:t)

(here M denotes the number of features in the map)

• p(s0:t | z1:t, u1:t) is estimated with a particle filter
• each p(θk | s[i]0:t, z1:t, u1:t) is estimated with an EKF, conditioned on the robot path particle s[i]0:t
• A map (the estimated locations of the features) is therefore attached to each robot path particle (see the sketch below)
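As an illustration of this factorization, a minimal sketch (assuming 2D point landmarks and known data association; all names are hypothetical, not the thesis code) of the per-particle data structure: one robot path hypothesis plus one small, independent EKF per feature.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class LandmarkEKF:
    """Independent 2x2 EKF for one landmark theta_k, conditioned on the path."""
    mean: np.ndarray          # (2,) estimated landmark position
    cov: np.ndarray           # (2, 2) position covariance

    def update(self, innovation, H, R):
        """Standard EKF correction; H is the 2x2 measurement Jacobian."""
        S = H @ self.cov @ H.T + R              # innovation covariance
        K = self.cov @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.mean = self.mean + K @ innovation
        self.cov = (np.eye(2) - K @ H) @ self.cov

@dataclass
class Particle:
    """One hypothesis: a robot path sample plus a map of landmark EKFs."""
    pose: np.ndarray                                   # latest (x, y, theta) of s[i]_0:t
    weight: float = 1.0
    landmarks: dict[int, LandmarkEKF] = field(default_factory=dict)
```

This is the design choice that makes RBPF-SLAM tractable: the particle filter only spans the low-dimensional pose, while each landmark update touches a fixed-size 2x2 filter.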

  • K. Khosoussi (ARAS)

Development of SLAM Algorithms July 13, 2011 40 / 43

slide-142
SLIDE 142

Thank You

Importance Sampling

  • K. Khosoussi (ARAS)

Development of SLAM Algorithms July 13, 2011 41 / 43

slide-143
SLIDE 143

Thank You

Importance Sampling (IS)

Idea: generate samples from another distribution π(x), called the importance function, and weight these samples according to w∗(x[i]) = p(x[i]) / π(x[i]):

E_{p(x)}[h(x)] = ∫ h(x) (p(x) / π(x)) π(x) dx ≈ (1/N) ∑_{i=1}^{N} w∗(x[i]) h(x[i])

In practice we compute importance weights w(x) proportional to p(x)/π(x) and normalize them to estimate the expected value as:

E_{p(x)}[h(x)] ≈ ∑_{i=1}^{N} ( w(x[i]) / ∑_{j=1}^{N} w(x[j]) ) h(x[i])

But . . . how can this be done recursively in time? (a self-normalized IS sketch follows below)
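A minimal sketch of the self-normalized estimator above; the bimodal target p and the Gaussian importance function π are hypothetical choices, made only to show the draw–weight–normalize steps.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 100_000

def p(x):
    """Hypothetical (unnormalized) bimodal target density."""
    return norm.pdf(x, loc=-2, scale=0.5) + norm.pdf(x, loc=3, scale=1.0)

pi = norm(loc=0, scale=4)                 # importance function pi(x)
x = pi.rvs(size=N, random_state=rng)      # samples x[i] ~ pi
w = p(x) / pi.pdf(x)                      # weights w(x[i]) proportional to p/pi
w /= w.sum()                              # self-normalization

h = lambda t: t                           # estimate E_p[x]
print("E_p[h(x)] ~", np.sum(w * h(x)))    # self-normalized IS estimate
```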

  • K. Khosoussi (ARAS)

Development of SLAM Algorithms July 13, 2011 42 / 43

slide-147
SLIDE 147

Thank You

SIR
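This slide is title-only here. As a hedged sketch of what SIR (Sampling Importance Resampling) denotes in this context, one filter iteration: propose from the motion model, weight by the likelihood, then resample at every step. The `motion` and `likelihood` callbacks are hypothetical placeholders, not the thesis implementation.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Low-variance O(N) systematic resampling; returns ancestor indices."""
    n = len(weights)
    cum = np.cumsum(weights)
    cum[-1] = 1.0                                  # guard against round-off
    u = (rng.random() + np.arange(n)) / n          # one uniform draw, n strata
    return np.searchsorted(cum, u)

def sir_step(particles, motion, likelihood, rng):
    """One SIR iteration: propose, weight, resample unconditionally."""
    particles = motion(particles, rng)             # sample from the proposal
    w = likelihood(particles).astype(float)        # importance weights
    w /= w.sum()
    return particles[systematic_resample(w, rng)]
```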

  • K. Khosoussi (ARAS)

Development of SLAM Algorithms July 13, 2011 43 / 43