

SLIDE 1

NA 568 - Winter 2020

Introduction

Maani Ghaffari January 8, 2020

SLIDE 2

Robotics Systems: How and Why?

https://www.youtube.com/watch?v=uFyT8zCg1Kk

SLIDE 3

Course Objective ◮ Teaching essential mathematics and algorithms behind the current robotics research ◮ Prepare you for research and development in robotics ◮ Solving real robotic problems

SLIDE 4

Logistics and Collaboration Policy

◮ All official course info is on Canvas.
◮ Join Piazza from Canvas.
◮ Your assignments must represent original work.
  ◮ You can discuss problems with other students at a conceptual level as much as you want.
  ◮ You CANNOT share non-trivial code or solutions.
◮ It is critical that you encounter unexpected challenges!
  ◮ Solving these problems is where you'll learn the material!

SLIDE 5

Probability Density Functions

Courtesy: T. Barfoot

SLIDE 6

Joint and Conditional Distribution

Let X and Y be two random variables.
◮ The joint distribution of X and Y is p(x, y) = p(X = x and Y = y).
◮ If X and Y are independent, then p(x, y) = p(x)p(y).
◮ The conditional probability of X given Y is

p(x|y) = p(x, y) / p(y),   for p(y) > 0.

SLIDE 7

Marginalization

◮ Given p(x, y), the marginal distribution of X can be computed by summing (or integrating) over Y.
◮ The law of total probability is the variant that uses the definition of conditional probability. For discrete random variables,

p(x) = ∑_{y∈Y} p(x, y) = ∑_{y∈Y} p(x|y) p(y),

and for continuous random variables,

p(x) = ∫_{y∈Y} p(x, y) dy = ∫_{y∈Y} p(x|y) p(y) dy.
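The discrete form above can be checked numerically. The joint distribution below is a hypothetical illustration (not from the slides): a binary X and a three-valued Y.

```python
# Numeric check of the law of total probability:
# p(x) = sum over y of p(x | y) p(y). Distributions are illustrative.

p_y = {"a": 0.5, "b": 0.3, "c": 0.2}          # p(y)
p_x_given_y = {                               # p(x | y)
    "a": {0: 0.9, 1: 0.1},
    "b": {0: 0.4, 1: 0.6},
    "c": {0: 0.5, 1: 0.5},
}

# Marginalize over y
p_x = {x: sum(p_x_given_y[y][x] * p_y[y] for y in p_y) for x in (0, 1)}

# p(0) = 0.9*0.5 + 0.4*0.3 + 0.5*0.2 = 0.67
print(p_x)
assert abs(sum(p_x.values()) - 1.0) < 1e-9    # a valid distribution
```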

SLIDE 8

Bayes’ Rule

◮ p(x, y) = p(x|y) p(y) = p(y|x) p(x)
◮ p(hypothesis|data) = p(data|hypothesis) p(hypothesis) / p(data)

p(x|y) = p(y|x) p(x) / p(y) = p(y|x) p(x) / ∑_{x′∈X} p(y|x′) p(x′)

◮ Posterior = Likelihood × Prior / Evidence (Marginal Likelihood)

SLIDE 9

Bayes’ Rule

Example
An autonomous car is approaching a traffic light that can be green, yellow, or red. The car is programmed to be conservative: it will stop if it detects a yellow or red light, and otherwise it will continue driving. Previous tests have shown that, due to sensor imperfections, the car will drive through (without stopping) 10% of yellow lights, 95% of green lights, and 1% of red lights. The traffic light runs on a continuous cycle (30 seconds green, 5 seconds yellow, 25 seconds red). You are riding in the car and are busy working on your Mobile Robotics project (i.e., not watching the road, the light, etc.). You feel the car stop as it approaches the traffic light described above. What is the probability that the traffic light was yellow when the vehicle sensed it?

SLIDE 10

Bayes’ Rule

Answer
Let S be the event that the vehicle stopped, G the event that the light was green, Y that it was yellow, and R that it was red.
◮ Given: P(S|Y) = 0.90, P(S|G) = 0.05, P(S|R) = 0.99, P(Y) = 5/60, P(R) = 25/60, P(G) = 30/60
◮ Find: P(Y|S)

P(Y|S) = P(S|Y) P(Y) / P(S)
       = P(S|Y) P(Y) / (P(S|Y) P(Y) + P(S|R) P(R) + P(S|G) P(G))
       = 0.90 (5/60) / (0.90 (5/60) + 0.99 (25/60) + 0.05 (30/60))
       ≈ 14.63%
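The same calculation, done numerically (a minimal sketch using the numbers from the example):

```python
# Traffic-light example: P(Y | S) via Bayes' rule.
p_stop_given = {"G": 0.05, "Y": 0.90, "R": 0.99}   # P(S | light color)
prior = {"G": 30 / 60, "Y": 5 / 60, "R": 25 / 60}  # from the light's duty cycle

# Evidence P(S) by the law of total probability
evidence = sum(p_stop_given[c] * prior[c] for c in prior)

p_yellow_given_stop = p_stop_given["Y"] * prior["Y"] / evidence
print(f"P(Y | S) = {p_yellow_given_stop:.4f}")  # 0.1463
```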

SLIDE 11

Bayes’ Rule with Prior Knowledge

◮ Given three random variables X, Y, and Z, Bayes' rule relates the prior probability distribution p(x|z) and the likelihood function p(y|x, z) as follows:

p(x|y, z) = p(y|x, z) p(x|z) / p(y|z).

◮ Given Z, if X and Y are conditionally independent, then p(x, y|z) = p(x|z) p(y|z).

Example

Height and vocabulary are not independent, but they are conditionally independent given age.

https://en.wikipedia.org/wiki/Conditional_independence#Examples

SLIDE 12

Univariate Normal Distribution

The univariate (one-dimensional) Gaussian (or normal) distribution with mean µ and variance σ2 has the following Probability Density Function (PDF):

p(x) = (1/√(2πσ2)) exp(−(x − µ)2 / (2σ2)).

We often write x ∼ N(µ, σ2) or N(x; µ, σ2) to indicate that x follows a Gaussian distribution with mean µ = E[x] and variance σ2 = V[x].
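The PDF above translates directly to code; a minimal sketch using only the standard library:

```python
import math

def normal_pdf(x, mu, sigma2):
    """PDF of N(mu, sigma2) evaluated at x."""
    return math.exp(-0.5 * (x - mu) ** 2 / sigma2) / math.sqrt(2 * math.pi * sigma2)

# At the mean, the density equals 1 / sqrt(2*pi*sigma^2)
print(normal_pdf(0.0, 0.0, 1.0))  # 0.3989422804014327
```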

SLIDE 13

Multivariate Normal Distribution

The multivariate Gaussian (normal) distribution of an n-dimensional random vector x ∼ N(µ, Σ), with mean µ = E[x] and covariance Σ = Cov[x] = E[(x − µ)(x − µ)T], is

p(x) = (2π)−n/2 |Σ|−1/2 exp(−(1/2)(x − µ)T Σ−1 (x − µ)).
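The density above can be sketched with NumPy; for n = 1 it should reduce to the univariate formula, which makes a handy sanity check:

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """PDF of an n-dimensional N(mu, Sigma) evaluated at x."""
    n = len(mu)
    diff = x - mu
    norm = (2 * np.pi) ** (-n / 2) * np.linalg.det(Sigma) ** (-0.5)
    # solve(Sigma, diff) avoids forming Sigma^{-1} explicitly
    return norm * np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff))

# Sanity check: n = 1, mu = 0, sigma^2 = 2, evaluated at x = 0.3
print(mvn_pdf(np.array([0.3]), np.array([0.0]), np.array([[2.0]])))  # ≈ 0.2758
```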

SLIDE 14

Visualizing multivariate Gaussian

Let x = vec(x1, x2) and x ∼ N(µ, Σ), where

µ = [0.0, 0.5]T,   Σ = [0.8 0.3; 0.3 1.0].

Figure: Left, the two-dimensional PDF; right, a top view of the first plot.

SLIDE 15

Marginalization and Conditioning of Normal Distribution

Let x and y be jointly Gaussian random vectors:

[x; y] ∼ N([µx; µy], [A C; CT B]).

Then the marginal distribution of x is

x ∼ N(µx, A),

and the conditional distribution of x given y is

x|y ∼ N(µx + C B−1 (y − µy), A − C B−1 CT).
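The conditioning formulas can be evaluated numerically. The numbers below are taken from the visualization slides (µ = (0, 0.5), with the 2×2 covariance partitioned as A = 0.8, B = 1.0, C = 0.3), conditioning on x2 = 0.9 as in the contour figure:

```python
import numpy as np

# Partitioned joint Gaussian from the visualization slides
mu_x, mu_y = np.array([0.0]), np.array([0.5])
A, B, C = np.array([[0.8]]), np.array([[1.0]]), np.array([[0.3]])

y = np.array([0.9])  # observed value of x2

# x | y ~ N(mu_x + C B^{-1}(y - mu_y), A - C B^{-1} C^T)
cond_mean = mu_x + C @ np.linalg.solve(B, y - mu_y)
cond_cov = A - C @ np.linalg.solve(B, C.T)

print(cond_mean, cond_cov)  # [0.12] [[0.71]]
```

Note that conditioning shrinks the variance (0.8 → 0.71): observing x2 is informative about x1 because the two are correlated.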

SLIDE 16

Visualizing multivariate Gaussian

Let x = vec(x1, x2) and x ∼ N(µ, Σ), where

µ = [0.0, 0.5]T,   Σ = [0.8 0.3; 0.3 1.0].

Figure: Left, the contour plot of the PDF; right, the marginals and the conditional distribution p(x1|x2 = 0.9).

SLIDE 17

Affine Transformation of a Multivariate Gaussian

Suppose x ∼ N(µ, Σ) and y = Ax + b. Then y ∼ N(Aµ + b, AΣAT):

E[y] = E[Ax + b] = A E[x] + b = Aµ + b

Cov[y] = E[(y − E[y])(y − E[y])T]
       = E[(Ax − Aµ)(Ax − Aµ)T]
       = A E[(x − µ)(x − µ)T] AT
       = AΣAT
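The result can be verified by Monte Carlo; the matrices A and b below are illustrative choices, and µ, Σ reuse the visualization slides' values:

```python
import numpy as np

# If x ~ N(mu, Sigma) and y = A x + b, then y ~ N(A mu + b, A Sigma A^T).
rng = np.random.default_rng(0)
mu = np.array([0.0, 0.5])
Sigma = np.array([[0.8, 0.3], [0.3, 1.0]])
A = np.array([[1.0, 2.0], [0.0, 1.0]])   # illustrative affine map
b = np.array([1.0, -1.0])

samples = rng.multivariate_normal(mu, Sigma, size=200_000)
y = samples @ A.T + b

# Empirical moments should match the closed-form ones (loose tolerances
# because these are sample estimates)
assert np.allclose(y.mean(axis=0), A @ mu + b, atol=0.03)
assert np.allclose(np.cov(y.T), A @ Sigma @ A.T, atol=0.15)
print("empirical moments match A mu + b and A Sigma A^T")
```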

SLIDE 18

Bayes Filters: Framework

◮ Given:
  ◮ a stream of observations z1:t and action data u1:t;
  ◮ the sensor/measurement model p(zt|xt);
  ◮ the action/motion/transition model p(xt|xt−1, ut).
◮ Wanted:
  ◮ the state xt of the dynamical system;
  ◮ the posterior of the state, called the belief: bel(xt) = p(xt|z1:t, u1:t).

SLIDE 19

Bayes Filter

Algorithm 1 Bayes-filter
Require: belief bel(xt−1) = p(xt−1|z1:t−1, u1:t−1), action ut, measurement zt
1: for all state variables xt do
2:   bel̄(xt) = ∫ p(xt|xt−1, ut) bel(xt−1) dxt−1   // predict using action/control input ut
3:   bel(xt) = η p(zt|xt) bel̄(xt)   // update using perceptual data zt
4: return bel(xt)
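For a finite state space the integral becomes a sum, and the algorithm fits in a few lines. Below is a minimal sketch over the two-state door world used later in the lecture; the sensor model (0.6 / 0.3) is from the slides, while the "push" transition probabilities are illustrative assumptions:

```python
# Discrete Bayes filter over a two-state door world (open / closed).
STATES = ("open", "closed")

# p(x_t | x_{t-1}, u_t); the "push" numbers are assumed for illustration
TRANS = {
    "push":    {("open", "open"): 1.0, ("open", "closed"): 0.0,
                ("closed", "open"): 0.8, ("closed", "closed"): 0.2},
    "nothing": {("open", "open"): 1.0, ("open", "closed"): 0.0,
                ("closed", "open"): 0.0, ("closed", "closed"): 1.0},
}

# p(z = sense_open | x), from the slides
MEAS = {"open": 0.6, "closed": 0.3}

def bayes_filter(bel, u, z_is_sense_open):
    # Predict: bel_bar(x_t) = sum over x_{t-1} of p(x_t | x_{t-1}, u) bel(x_{t-1})
    bel_bar = {x: sum(TRANS[u][(xp, x)] * bel[xp] for xp in STATES) for x in STATES}
    # Update: bel(x_t) = eta * p(z | x_t) * bel_bar(x_t)
    lik = {x: (MEAS[x] if z_is_sense_open else 1 - MEAS[x]) for x in STATES}
    unnorm = {x: lik[x] * bel_bar[x] for x in STATES}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * v for x, v in unnorm.items()}

bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, "nothing", True)  # no motion, one sense_open reading
print(round(bel["open"], 4))  # 0.6667, matching the door example
```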

SLIDE 20

Bayes Filters: Implementation Examples

◮ Kalman Filter: unimodal linear filter
◮ Information Filter: unimodal linear filter
◮ Extended Kalman Filter: unimodal nonlinear filter with a Gaussian noise assumption
◮ Extended Information Filter: unimodal nonlinear filter with a Gaussian noise assumption
◮ Particle Filter: multimodal nonlinear filter

SLIDE 21

Simple Example of State Estimation

◮ Suppose a robot obtains a measurement z, e.g., using its camera.
◮ What is p(open|z)?

SLIDE 22

Causal vs. Diagnostic Reasoning

◮ p(open|z) is diagnostic.
◮ p(z|open) is causal.
◮ Often, causal knowledge is easier to obtain.
◮ Bayes' rule allows us to use causal knowledge:

p(open|z) = p(z|open) p(open) / p(z)

SLIDE 23

Example

Sensor model (likelihood):
◮ p(z = sense_open|open) = 0.6
◮ p(z = sense_open|¬open) = 0.3

Prior knowledge (non-informative in this case):
◮ p(open) = p(¬open) = 0.5

Update/Correction:

p(open|z) = p(z|open) p(open) / (p(z|open) p(open) + p(z|¬open) p(¬open))
p(open|z = sense_open) = (0.6 × 0.5) / (0.6 × 0.5 + 0.3 × 0.5) ≈ 0.6667

Remark
z raises the probability that the door is open.
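A quick numeric check of the door update (a minimal sketch using the slide's numbers):

```python
# Single Bayes update for the door example.
p_z_open, p_z_notopen = 0.6, 0.3   # sensor model from the slide
prior_open = 0.5                   # non-informative prior

posterior_open = (p_z_open * prior_open) / (
    p_z_open * prior_open + p_z_notopen * (1 - prior_open))
print(round(posterior_open, 4))  # 0.6667
```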

SLIDE 26

Combining Evidence

◮ Suppose our robot obtains another observation z2.
◮ How can we integrate this new information?
◮ More generally, how can we estimate p(x|z1, . . . , zn)?

SLIDE 27

Recursive Bayesian Updating

p(x|z1, . . . , zn) = p(zn|x, z1, . . . , zn−1) p(x|z1, . . . , zn−1) / p(zn|z1, . . . , zn−1)

Assumption (Markov Assumption)
zn is independent of z1, . . . , zn−1 if we know x.

Or equivalently:

Assumption (Markov Property)
The Markov property states that "the future is independent of the past if the present is known." A stochastic process that has this property is called a Markov process.

SLIDE 29

Recursive Bayesian Updating

Under the Markov assumption (zn is independent of z1, . . . , zn−1 given x), the update simplifies to

p(x|z1, . . . , zn) = p(zn|x) p(x|z1, . . . , zn−1) / p(zn|z1, . . . , zn−1)
                   = ηn p(zn|x) p(x|z1, . . . , zn−1)
                   = η1:n [∏_{i=1}^{n} p(zi|x)] p(x),

where η1:n := η1 η2 · · · ηn.
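This recursion is easy to exercise on the door example: each update reuses the previous posterior as the prior. A minimal sketch fusing two independent sense_open measurements, using the sensor model from the slides:

```python
# Recursive Bayesian updating for the door example.
p_z = {"open": 0.6, "not_open": 0.3}  # p(z = sense_open | x), from the slides

def update(prior_open, sense_open=True):
    """One recursive Bayes update; the posterior becomes the next prior."""
    l_open = p_z["open"] if sense_open else 1 - p_z["open"]
    l_not = p_z["not_open"] if sense_open else 1 - p_z["not_open"]
    num = l_open * prior_open
    return num / (num + l_not * (1 - prior_open))

p = 0.5
p = update(p)  # first measurement:  posterior 2/3
p = update(p)  # second measurement: posterior 0.8
print(round(p, 4))  # 0.8
```

Each consistent sense_open reading pushes the belief further toward "open", exactly as the product form η1:n ∏ p(zi|x) p(x) suggests.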

SLIDE 30

Readings

◮ Probabilistic Robotics: Ch. 1 and 2; understand Example 2.4.2
◮ State Estimation for Robotics: Ch. 2
◮ Lecture note 1
◮ Bar-Shalom: Ch. 1.3–1.6

SLIDE 31

Next Time

◮ Kalman Filtering
◮ Readings:
  ◮ Probabilistic Robotics: Ch. 3
  ◮ State Estimation for Robotics: Ch. 3
  ◮ Lecture note 2
  ◮ Bar-Shalom: Ch. 2 and 5