
CSE-571 Probabilistic Robotics

Probabilistic Robotics

  • Probabilities
  • Bayes rule
  • Bayes filters

Probabilistic Robotics

Key idea: Explicit representation of uncertainty

(using the calculus of probability theory)

  • Perception = state estimation
  • Action = utility optimization

Discrete Random Variables

  • X denotes a random variable.
  • X can take on a countable number of values in {x1, x2, …, xn}.

  • P(X=xi), or P(xi), is the probability that the random variable X takes on value xi.
  • P(·) is called the probability mass function.
  • E.g. P(Room) = ⟨0.7, 0.2, 0.08, 0.02⟩
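For concreteness, a discrete PMF such as P(Room) can be held in a Python dict. The room labels below are hypothetical stand-ins; only the probabilities come from the slide:

```python
# Discrete PMF mirroring P(Room) = <0.7, 0.2, 0.08, 0.02>.
# The room names are illustrative, not from the slides.
P_room = {"office": 0.7, "corridor": 0.2, "kitchen": 0.08, "lab": 0.02}

# A valid PMF is non-negative and sums to one.
assert all(p >= 0.0 for p in P_room.values())
assert abs(sum(P_room.values()) - 1.0) < 1e-9
```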

Continuous Random Variables

  • X takes on values in the continuum.
  • p(X=x), or p(x), is a probability density function.
  • E.g.

Pr(x ∈ (a, b)) = ∫_a^b p(x) dx

[Figure: a density p(x) plotted over x]
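The identity Pr(x ∈ (a, b)) = ∫_a^b p(x) dx can be checked numerically. The sketch below uses a standard Gaussian density and a midpoint Riemann sum; both helper functions are illustrative, not from the slides:

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Density of a Gaussian with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def prob_interval(pdf, a, b, n=100_000):
    """Pr(x in (a, b)), approximated by a midpoint Riemann sum of the density."""
    dx = (b - a) / n
    return sum(pdf(a + (i + 0.5) * dx) for i in range(n)) * dx

# Probability mass within one standard deviation of the mean:
print(round(prob_interval(gaussian_pdf, -1.0, 1.0), 3))  # 0.683
```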


Joint and Conditional Probability

  • P(X=x and Y=y) = P(x, y)
  • If X and Y are independent then P(x, y) = P(x) P(y)
  • P(x | y) is the probability of x given y:

      P(x | y) = P(x, y) / P(y)
      P(x, y) = P(x | y) P(y)

  • If X and Y are independent then P(x | y) = P(x)

Law of Total Probability, Marginals

Discrete case:

  Σx P(x) = 1        P(x) = Σy P(x, y)        P(x) = Σy P(x | y) P(y)

Continuous case:

  ∫ p(x) dx = 1      p(x) = ∫ p(x, y) dy      p(x) = ∫ p(x | y) p(y) dy
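These marginalization identities are easy to verify on a small joint distribution; the weather-flavored events and numbers below are made up for illustration:

```python
# Joint distribution P(x, y); the events and values are hypothetical.
P_xy = {
    ("sunny", "warm"): 0.40, ("sunny", "cold"): 0.10,
    ("rainy", "warm"): 0.10, ("rainy", "cold"): 0.40,
}

def marginal_x(joint):
    """P(x) = sum_y P(x, y)"""
    out = {}
    for (x, _y), p in joint.items():
        out[x] = out.get(x, 0.0) + p
    return out

def conditional_x_given_y(joint, y):
    """P(x | y) = P(x, y) / P(y), with P(y) obtained by marginalization."""
    p_y = sum(p for (_x, yy), p in joint.items() if yy == y)
    return {x: p / p_y for (x, yy), p in joint.items() if yy == y}

print(marginal_x(P_xy))                     # sunny and rainy are each 0.5
print(conditional_x_given_y(P_xy, "warm"))  # sunny: 0.8, rainy: 0.2
```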

Bayes Formula

P(x, y) = P(x | y) P(y) = P(y | x) P(x)

⇒  P(x | y) = P(y | x) P(x) / P(y) = likelihood · prior / evidence

  • Often causal knowledge is easier to obtain than diagnostic knowledge.
  • Bayes rule allows us to use causal knowledge.

Normalization

P(x | y) = η P(y | x) P(x)

η = P(y)⁻¹ = 1 / Σx' P(y | x') P(x')


Simple Example of State Estimation

  • Suppose a robot obtains measurement z.
  • What is P(open | z)?

Example

P(z | open) = 0.6        P(z | ¬open) = 0.3
P(open) = P(¬open) = 0.5

P(open | z) = P(z | open) P(open) / (P(z | open) P(open) + P(z | ¬open) P(¬open))
            = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3 ≈ 0.67

  • z raises the probability that the door is open.
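The same computation in a few lines of Python:

```python
# Sensor model and prior from the slide.
p_z_open, p_z_not_open = 0.6, 0.3   # P(z | open), P(z | ¬open)
p_open = p_not_open = 0.5           # P(open) = P(¬open)

# Bayes rule: posterior = likelihood * prior / evidence.
evidence = p_z_open * p_open + p_z_not_open * p_not_open
p_open_given_z = p_z_open * p_open / evidence
print(round(p_open_given_z, 2))  # 0.67
```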

Normalization

Algorithm:

  ∀x: aux_x|y = P(y | x) P(x)
  η = 1 / Σx aux_x|y
  ∀x: P(x | y) = η · aux_x|y
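The three steps map directly onto dictionary operations. A minimal sketch, with a function name of our choosing and the numbers reused from the door example:

```python
def bayes_normalized(likelihood, prior):
    """P(x|y) = eta * P(y|x) * P(x), computing eta without ever evaluating P(y).
    likelihood: dict x -> P(y|x); prior: dict x -> P(x)."""
    aux = {x: likelihood[x] * prior[x] for x in prior}  # step 1: aux_x|y
    eta = 1.0 / sum(aux.values())                       # step 2: eta
    return {x: eta * aux[x] for x in aux}               # step 3: P(x|y)

posterior = bayes_normalized({"open": 0.6, "closed": 0.3},
                             {"open": 0.5, "closed": 0.5})
print(posterior)  # open: 2/3, closed: 1/3
```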

Conditioning

  • Bayes rule and background knowledge:

P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

  • Which of these holds?

P(x | y) = ∫ P(x | y, z) P(z) dz ?
P(x | y) = ∫ P(x | y, z) P(z | y) dz ?
P(x | y) = ∫ P(x | y, z) P(y | z) dz ?

Conditioning

  • Bayes rule and background knowledge:

P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

P(x | y) = ∫ P(x | y, z) P(z | y) dz

Conditional Independence

P(x, y | z) = P(x | z) P(y | z)

  • Equivalent to P(x | z) = P(x | z, y) and P(y | z) = P(y | z, x)

Simple Example of State Estimation

  • Suppose our robot obtains another observation z2.
  • What is P(open | z1, z2)?

Recursive Bayesian Updating

P(x | z1,…,zn) = P(zn | x, z1,…,zn−1) P(x | z1,…,zn−1) / P(zn | z1,…,zn−1)

Markov assumption: zn is conditionally independent of z1,…,zn−1 given x.

P(x | z1,…,zn) = P(zn | x) P(x | z1,…,zn−1) / P(zn | z1,…,zn−1)
              = η P(zn | x) P(x | z1,…,zn−1)
              = η1…n [ ∏i=1…n P(zi | x) ] P(x)


Example: Second Measurement

P(z2 | open) = 0.5        P(z2 | ¬open) = 0.6
P(open | z1) = 2/3        P(¬open | z1) = 1/3

P(open | z2, z1) = P(z2 | open) P(open | z1) / (P(z2 | open) P(open | z1) + P(z2 | ¬open) P(¬open | z1))
                = (1/2 · 2/3) / (1/2 · 2/3 + 3/5 · 1/3) = 5/8 = 0.625

  • z2 lowers the probability that the door is open.
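Both worked examples fall out of one recursive update function; a sketch (the helper name is ours):

```python
def update(belief, likelihood):
    """One step of recursive Bayesian updating: Bel(x) <- eta * P(z|x) * Bel(x)."""
    unnorm = {x: likelihood[x] * belief[x] for x in belief}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

bel = {"open": 0.5, "closed": 0.5}
bel = update(bel, {"open": 0.6, "closed": 0.3})  # after z1: P(open|z1) = 2/3
bel = update(bel, {"open": 0.5, "closed": 0.6})  # after z2: P(open|z1,z2) = 0.625
print(round(bel["open"], 3))  # 0.625
```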

Bayes Filters: Framework

  • Given:
      • Stream of observations z and action data u
      • Sensor model P(z | x)
      • Action model P(x | u, x')
      • Prior probability of the system state P(x)
  • Wanted:
      • Estimate of the state X of a dynamical system
      • The posterior of the state is also called the belief:

Bel(xt) = P(xt | u1, z1, …, ut, zt)

dt = {u1, z1, …, ut, zt}

Bayes Filters

z = observation, u = action, x = state

Bel(xt) = P(xt | u1, z1, …, ut, zt)

(Bayes)        = η P(zt | xt, u1, z1, …, ut) P(xt | u1, z1, …, ut)

(Markov)       = η P(zt | xt) P(xt | u1, z1, …, ut)

(Total prob.)  = η P(zt | xt) ∫ P(xt | u1, z1, …, ut, xt−1) P(xt−1 | u1, z1, …, ut) dxt−1

(Markov)       = η P(zt | xt) ∫ P(xt | ut, xt−1) P(xt−1 | u1, z1, …, zt−1) dxt−1

               = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1

Bayes Filter Algorithm

1.  Algorithm Bayes_filter(Bel(x), d):
2.    η = 0
3.    If d is a perceptual data item z then
4.      For all x do
5.        Bel'(x) = P(z | x) Bel(x)
6.        η = η + Bel'(x)
7.      For all x do
8.        Bel'(x) = η⁻¹ Bel'(x)
9.    Else if d is an action data item u then
10.     For all x do
11.       Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
12.   Return Bel'(x)

Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
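A direct translation of steps 1–12 to Python over a finite state space: the integral in step 11 becomes a sum over states. The door-domain model numbers below are illustrative, not from this slide:

```python
def bayes_filter(bel, d, sensor_model=None, action_model=None):
    """One step of the discrete Bayes filter.
    bel: dict state -> Bel(x);  d: ("z", z) perceptual or ("u", u) action item.
    sensor_model(z, x) -> P(z|x);  action_model(x, u, x_prev) -> P(x|u,x')."""
    kind, value = d
    if kind == "z":  # steps 3-8: correction, then normalize by eta
        bel_new = {x: sensor_model(value, x) * bel[x] for x in bel}
        eta = sum(bel_new.values())
        return {x: p / eta for x, p in bel_new.items()}
    # steps 9-11: prediction; the integral over x' becomes a sum over states
    return {x: sum(action_model(x, value, xp) * bel[xp] for xp in bel)
            for x in bel}

# Hypothetical door-domain models:
P_z = {("sense_open", "open"): 0.6, ("sense_open", "closed"): 0.3}
P_move = {("open", "push", "open"): 1.0, ("open", "push", "closed"): 0.8,
          ("closed", "push", "open"): 0.0, ("closed", "push", "closed"): 0.2}

bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, ("z", "sense_open"), sensor_model=lambda z, x: P_z[(z, x)])
bel = bayes_filter(bel, ("u", "push"), action_model=lambda x, u, xp: P_move[(x, u, xp)])
print(bel)  # pushing further raises the probability that the door is open
```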

Markov Assumption

Underlying Assumptions

  • Static world
  • Independent noise
  • Perfect model, no approximation errors

p(xt | x1:t−1, z1:t−1, u1:t) = p(xt | xt−1, ut)

p(zt | x1:t, z1:t−1, u1:t) = p(zt | xt)

Dynamic Environments

  • Two possible locations x1 and x2
  • P(x1) = 0.99
  • P(z | x1) = 0.07, P(z | x2) = 0.09

[Plot: p(x1 | d) and p(x2 | d) versus the number of integrations]

Bayes Filters for Robot Localization

Representations for Bayesian Robot Localization

Discrete approaches (’95)
  • Topological representation (’95)
      • uncertainty handling (POMDPs)
      • occasional global localization, recovery
  • Grid-based, metric representation (’96)
      • global localization, recovery

Multi-hypothesis (’00)
  • multiple Kalman filters
  • global localization, recovery

Particle filters (’99)
  • sample-based representation
  • global localization, recovery

Kalman filters (late ’80s)
  • Gaussians, unimodal
  • approximately linear models
  • position tracking


Bayes Filters are Familiar!

  • Kalman filters
  • Particle filters
  • Hidden Markov models
  • Dynamic Bayesian networks
  • Partially Observable Markov Decision Processes (POMDPs)

Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1

Summary

  • Bayes rule allows us to compute probabilities that are hard to assess otherwise.
  • Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
  • Bayes filters are a probabilistic tool for estimating the state of dynamic systems.