  1. Temporal Inference 16-385 Computer Vision (Kris Kitani) Carnegie Mellon University

  2. Basic Inference Tasks

     Filtering, $P(X_t \mid e_{1:t})$: posterior probability over the current state, given all evidence up to the present.

     Prediction, $P(X_{t+k} \mid e_{1:t})$: posterior probability over a future state, given all evidence up to the present.

     Smoothing, $P(X_k \mid e_{1:t})$: posterior probability over a past state, given all evidence up to the present.

     Best sequence, $\arg\max_{X_{1:t}} P(X_{1:t} \mid e_{1:t})$: the best state sequence, given all evidence up to the present.

  3. Filtering: $P(X_t \mid e_{1:t})$, the posterior probability over the current state, given all evidence up to the present. "Where am I now?"

  4. Filtering can be computed with recursion (dynamic programming):

     $$\underbrace{P(X_{t+1} \mid e_{1:t+1})}_{\text{posterior}} \;\propto\; \underbrace{P(e_{t+1} \mid X_{t+1})}_{\text{observation model}} \, \sum_{X_t} \underbrace{P(X_{t+1} \mid X_t)}_{\text{motion model}} \, \underbrace{P(X_t \mid e_{1:t})}_{\text{prior}}$$

  5. Filtering can be computed with recursion (dynamic programming):

     $$P(X_{t+1} \mid e_{1:t+1}) \;\propto\; \underbrace{P(e_{t+1} \mid X_{t+1})}_{\text{observation model}} \, \sum_{X_t} \underbrace{P(X_{t+1} \mid X_t)}_{\text{motion model}} \, P(X_t \mid e_{1:t})$$

     What is this last factor, $P(X_t \mid e_{1:t})$?

  6. Filtering can be computed with recursion (dynamic programming):

     $$P(X_{t+1} \mid e_{1:t+1}) \;\propto\; P(e_{t+1} \mid X_{t+1}) \, \sum_{X_t} P(X_{t+1} \mid X_t) \, P(X_t \mid e_{1:t})$$

     The prior $P(X_t \mid e_{1:t})$ is the same type of 'message' as the posterior $P(X_{t+1} \mid e_{1:t+1})$.

  7. Filtering can be computed with recursion (dynamic programming):

     $$P(X_{t+1} \mid e_{1:t+1}) \;\propto\; P(e_{t+1} \mid X_{t+1}) \, \sum_{X_t} P(X_{t+1} \mid X_t) \, P(X_t \mid e_{1:t})$$

     This same type of 'message' is called a belief distribution. Sometimes people use this annoying notation instead: $Bel(x_t)$. A belief is a reflection of the system's (robot, tracker) knowledge about the state $X$.
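     In that notation, the recursion reads (just a restatement of the equation above in belief form):

     $$Bel(x_{t+1}) = \alpha \, P(e_{t+1} \mid x_{t+1}) \sum_{x_t} P(x_{t+1} \mid x_t) \, Bel(x_t)$$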

  8. Filtering can be computed with recursion (dynamic programming):

     $$P(X_{t+1} \mid e_{1:t+1}) \;\propto\; P(e_{t+1} \mid X_{t+1}) \, \sum_{X_t} P(X_{t+1} \mid X_t) \, P(X_t \mid e_{1:t})$$

     Where does this equation come from? (scary math to follow…)

  9. Just splitting up the notation here:

     $$P(X_{t+1} \mid e_{1:t+1}) = P(X_{t+1} \mid e_{t+1}, e_{1:t})$$

  10. $$P(X_{t+1} \mid e_{1:t+1}) = P(X_{t+1} \mid e_{t+1}, e_{1:t})$$

      Next, apply Bayes' rule (with evidence).

  11. Applying Bayes' rule (with evidence):

      $$P(X_{t+1} \mid e_{1:t+1}) = \frac{P(e_{t+1} \mid X_{t+1}, e_{1:t}) \, P(X_{t+1} \mid e_{1:t})}{P(e_{t+1} \mid e_{1:t})}$$

      Next, apply the Markov assumption to the observation model.

  12. Applying the Markov assumption to the observation model (and folding the denominator into the normalizer $\alpha$):

      $$P(X_{t+1} \mid e_{1:t+1}) = \alpha \, P(e_{t+1} \mid X_{t+1}) \, P(X_{t+1} \mid e_{1:t})$$

      Next, condition on the previous state $X_t$.

  13. Conditioning on the previous state $X_t$:

      $$P(X_{t+1} \mid e_{1:t+1}) = \alpha \, P(e_{t+1} \mid X_{t+1}) \sum_{X_t} P(X_{t+1} \mid X_t, e_{1:t}) \, P(X_t \mid e_{1:t})$$

      Next, apply the Markov assumption to the motion model.

  14. Putting it all together:

      $$\begin{aligned}
      P(X_{t+1} \mid e_{1:t+1}) &= P(X_{t+1} \mid e_{t+1}, e_{1:t}) \\
      &= \frac{P(e_{t+1} \mid X_{t+1}, e_{1:t}) \, P(X_{t+1} \mid e_{1:t})}{P(e_{t+1} \mid e_{1:t})} \\
      &= \alpha \, P(e_{t+1} \mid X_{t+1}) \, P(X_{t+1} \mid e_{1:t}) \\
      &= \alpha \, P(e_{t+1} \mid X_{t+1}) \sum_{X_t} P(X_{t+1} \mid X_t, e_{1:t}) \, P(X_t \mid e_{1:t}) \\
      &= \alpha \, P(e_{t+1} \mid X_{t+1}) \sum_{X_t} P(X_{t+1} \mid X_t) \, P(X_t \mid e_{1:t})
      \end{aligned}$$
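The recursion maps directly to code for a discrete state space. A minimal sketch, assuming NumPy; the function and variable names are illustrative, not from the course materials:

```python
import numpy as np

def filter_step(belief, transition, likelihood):
    """One step of the filtering recursion over a discrete state space.

    belief     : P(X_t | e_{1:t}), shape (S,)
    transition : transition[i, j] = P(X_{t+1} = i | X_t = j), shape (S, S)
    likelihood : P(e_{t+1} | X_{t+1} = i) for the observation just seen, shape (S,)
    """
    predicted = transition @ belief    # sum over X_t: P(X_{t+1} | X_t) P(X_t | e_{1:t})
    updated = likelihood * predicted   # multiply in the observation model
    return updated / updated.sum()     # normalization plays the role of alpha
```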

  15. Hidden Markov Model example: the 'in the trunk of a car of a sleepy driver' model. The state is a binary random variable (left lane or right lane), $x \in \{x_{left}, x_{right}\}$, forming a chain $x_0, x_1, x_2, x_3, x_4$.

  16. From a hole in the car you can see the ground. The evidence is a binary random variable (the center lane is yellow or the road is gray), $e \in \{e_{gray}, e_{yellow}\}$, with one observation $e_1, e_2, e_3, e_4$ attached to each state $x_1, x_2, x_3, x_4$.

  17. The model, as conditional probability tables:

      P(x_0):                x_left   x_right
                             0.5      0.5

      P(x_t | x_{t-1}):      x_t = x_left   x_t = x_right
        x_{t-1} = x_left     0.7            0.3
        x_{t-1} = x_right    0.3            0.7

      P(e_t | x_t):          x_t = x_left   x_t = x_right
        e_yellow             0.9            0.2
        e_gray               0.1            0.8

      What needs to sum to one? What's the probability of being in the left lane at t=4? This is filtering!
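The same tables, written as arrays for the sketches that follow (a hypothetical encoding; the state order [x_left, x_right] is an assumption of this snippet):

```python
import numpy as np

prior = np.array([0.5, 0.5])                 # P(x_0) over [x_left, x_right]
transition = np.array([[0.7, 0.3],           # transition[i, j] = P(x_t = i | x_{t-1} = j)
                       [0.3, 0.7]])
emission = {"yellow": np.array([0.9, 0.2]),  # P(e_yellow | x_left), P(e_yellow | x_right)
            "gray":   np.array([0.1, 0.8])}  # P(e_gray | x_left),   P(e_gray | x_right)
```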

  18. Filtering:

      $$P(X_{t+1} \mid e_{1:t+1}) \;\propto\; P(e_{t+1} \mid X_{t+1}) \, \sum_{X_t} P(X_{t+1} \mid X_t) \, P(X_t \mid e_{1:t})$$

      What is the belief distribution if I see yellow at t=1, i.e. $p(x_1 \mid e_1 = e_{yellow}) = \,?$

      Prediction step: $p(x_1) = \sum_{x_0} p(x_1 \mid x_0) \, p(x_0)$

      Update step: $p(x_1 \mid e_1) = \alpha \, p(e_1 \mid x_1) \, p(x_1)$

  19. Prediction step:

      $$p(x_1) = \sum_{x_0} p(x_1 \mid x_0) \, p(x_0) = \begin{bmatrix} 0.7 \\ 0.3 \end{bmatrix}(0.5) + \begin{bmatrix} 0.3 \\ 0.7 \end{bmatrix}(0.5) = \begin{bmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}$$

  20. What is the belief distribution if I see yellow at t=1?

      Update step: $p(x_1 \mid e_1) = \alpha \, p(e_1 \mid x_1) \, p(x_1)$

  21. Update step (observed yellow, so the likelihood is multiplied in elementwise):

      $$p(x_1 \mid e_1) = \alpha \begin{bmatrix} 0.9 \\ 0.2 \end{bmatrix} \odot \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} = \alpha \begin{bmatrix} 0.9 & 0 \\ 0 & 0.2 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} = \alpha \begin{bmatrix} 0.45 \\ 0.10 \end{bmatrix} \approx \begin{bmatrix} 0.818 \\ 0.182 \end{bmatrix}$$

      More likely to be in which lane?

  22. Summary: what is the belief distribution if I see yellow at t=1?

      Prediction step: $p(x_1) = \sum_{x_0} p(x_1 \mid x_0) \, p(x_0) = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}$

      Update step: $p(x_1 \mid e_1) = \alpha \, p(e_1 \mid x_1) \, p(x_1) \approx \begin{bmatrix} 0.818 \\ 0.182 \end{bmatrix}$
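A quick numerical check of the t=1 result, as a self-contained sketch (array names are hypothetical, matching the encoding above):

```python
import numpy as np

prior = np.array([0.5, 0.5])                     # P(x_0) over [x_left, x_right]
transition = np.array([[0.7, 0.3], [0.3, 0.7]])  # P(x_t | x_{t-1})
p_yellow = np.array([0.9, 0.2])                  # P(e_yellow | x_t)

predicted = transition @ prior                   # prediction step -> [0.5, 0.5]
updated = p_yellow * predicted                   # update step (saw yellow) -> [0.45, 0.1]
belief_t1 = updated / updated.sum()              # normalize -> approx [0.818, 0.182]
print(belief_t1)
```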

  23. What if you see yellow again at t=2, i.e. $p(x_2 \mid e_1, e_2) = \,?$

  24. What if you see yellow again at t=2? $p(x_2 \mid e_1, e_2) = \,?$

      Prediction step: $p(x_2 \mid e_1) = \sum_{x_1} p(x_2 \mid x_1) \, p(x_1 \mid e_1)$

      Update step: $p(x_2 \mid e_1, e_2) = \alpha \, p(e_2 \mid x_2) \, p(x_2 \mid e_1)$

  25. Prediction step:

      $$p(x_2 \mid e_1) = \sum_{x_1} p(x_2 \mid x_1) \, p(x_1 \mid e_1) = \begin{bmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{bmatrix}\begin{bmatrix} 0.818 \\ 0.182 \end{bmatrix} \approx \begin{bmatrix} 0.627 \\ 0.373 \end{bmatrix}$$

      Why does the probability of being in the left lane go down?

  26. Update step:

      $$p(x_2 \mid e_1, e_2) = \alpha \, p(e_2 \mid x_2) \, p(x_2 \mid e_1) = \alpha \begin{bmatrix} 0.9 & 0 \\ 0 & 0.2 \end{bmatrix}\begin{bmatrix} 0.627 \\ 0.373 \end{bmatrix} \approx \begin{bmatrix} 0.883 \\ 0.117 \end{bmatrix}$$

      Why does the probability of being in the left lane go up?
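The full two-observation run, again as a self-contained sketch under the same hypothetical encoding:

```python
import numpy as np

transition = np.array([[0.7, 0.3], [0.3, 0.7]])  # P(x_t | x_{t-1})
p_yellow = np.array([0.9, 0.2])                  # P(e_yellow | x_t)

belief = np.array([0.5, 0.5])                    # P(x_0)
for _ in range(2):                               # yellow seen at t=1 and t=2
    predicted = transition @ belief              # prediction step
    belief = p_yellow * predicted                # update step
    belief /= belief.sum()                       # normalize
print(belief)                                    # approx [0.883, 0.117]
```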
