SLIDE 1

Topics in Brain Computer Interfaces
CS295-7
Professor: Michael Black
TA: Frank Wood
Spring 2005

Bayesian Inference through Particle Filtering

SLIDE 2

Homework Review

  • Results?
  • Questions?
  • Causal vs. Generative?
SLIDE 3

Decoding Methods

Direct decoding methods:

$\vec{x}_k = f(\vec{z}_k, \vec{z}_{k-1}, \ldots)$

Simple linear regression method:

$x_k = \vec{f}_1^T Z_{k-d:k}, \qquad y_k = \vec{f}_2^T Z_{k-d:k}$
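To make the regression concrete, here is a minimal Python sketch that fits the filter coefficients by least squares. The shapes (42 cells, lag d = 10) echo the course's data, but the synthetic rates and kinematics are random stand-ins, so the fit itself is meaningless; substitute real recordings.

```python
import numpy as np

# Hypothetical sizes: n = 42 cells, T time steps, d past bins per prediction.
rng = np.random.default_rng(0)
n, T, d = 42, 500, 10
rates = rng.poisson(5.0, size=(T, n)).astype(float)  # stand-in firing rates
hand_x = rng.standard_normal(T).cumsum()             # stand-in x-kinematics

# Stack the d most recent rate vectors into one regressor per time step,
# then solve x_k ~= f^T Z_{k-d:k} by least squares.
Z = np.stack([rates[k - d:k].ravel() for k in range(d, T)])
x = hand_x[d:T]
f, *_ = np.linalg.lstsq(Z, x, rcond=None)

x_hat = Z @ f  # decoded x-position on the training data
print("training correlation:", np.corrcoef(x, x_hat)[0, 1])
```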

SLIDE 4

Decoding Methods

Direct decoding methods:

$\vec{x}_k = f(\vec{z}_k, \vec{z}_{k-1}, \ldots)$

In contrast to generative encoding models:

$\vec{z}_k = f(\vec{x}_k)$

We need a sound way to exploit generative models for decoding.

SLIDE 5

Today’s Strategy

  • More mathematical than the previous classes.
  • Group exploration and discovery.
  • One topic with deeper level of understanding.

– Particle Filtering

  • Review and explore recursive Bayesian estimation
  • Introduce SIS algorithm
  • Explore Monte Carlo integration
  • Examine SIS algorithm (if time permits)
SLIDE 6

Accounting for Uncertainty

Every real process has process and measurement noise:

observation:  $\vec{z}_k = f(\vec{x}_k) + \text{noise}$

process:  $\vec{x}_k = f(\vec{x}_{k-1}) + \text{noise}$

A probabilistic process model accounts for process and measurement noise probabilistically. Noise appears as modeling uncertainty:

observation:  $\vec{z}_k \,|\, \vec{x}_k \sim \hat{f}(\vec{x}_k) + \text{uncertainty}$

process:  $\vec{x}_k \,|\, \vec{x}_{k-1} \sim \hat{f}(\vec{x}_{k-1}) + \text{uncertainty}$

Example: missile interceptor system. The missile propulsion system is noisy and radar observations are noisy. Even if we are given exact process and observation models, our estimate of the missile's position may diverge if we don't account for uncertainty.

SLIDE 7

Recursive Bayesian Estimation

  • Optimally integrates subsequent observations into the process and observation models.
  • Example:

– Biased coin.
– Fair Bernoulli prior.

measurement:  $\vec{z}_k \,|\, \vec{x}_k \sim \hat{f}(\vec{x}_k) + \text{uncertainty}$

model:  $\vec{x}_k \,|\, \vec{x}_{k-1} \sim \hat{f}(\vec{x}_{k-1}) + \text{uncertainty}$
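A minimal sketch of the coin example. Reading "fair Bernoulli prior" as a prior centered on a fair coin is my assumption; here I use the conjugate Beta(1, 1) (uniform) prior over the coin's bias, which keeps every recursive update in closed form.

```python
import numpy as np

# Recursive Bayesian estimation of a coin's bias p(heads). With a Beta
# prior and Bernoulli observations the posterior stays Beta, so each
# update is just a count increment.
rng = np.random.default_rng(1)
true_bias = 0.7
a, b = 1.0, 1.0  # Beta(1, 1) prior: uniform over the bias

for k in range(1, 101):
    heads = rng.random() < true_bias     # one Bernoulli observation
    a, b = a + heads, b + (not heads)    # posterior update
    if k in (1, 10, 100):
        print(f"after {k:3d} flips: E[bias] = {a / (a + b):.3f}")
```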

SLIDE 8

Bayesian Inference

$p(x \,|\, z) = \frac{p(z \,|\, x)\, p(x)}{p(z)}$

Posterior: a posteriori probability (after the evidence). Likelihood: the evidence. Prior: a priori, before the evidence. $p(z)$: normalization constant (independent of $x$).

We infer system state from uncertain observations and our prior knowledge (model) of system state.

SLIDE 9

Notation and BCI Example

Observations:

$\vec{z}_k = [z_{k,1}, z_{k,2}, \ldots, z_{k,n}]^T$   e.g. firing rates of all n cells at time k

$Z_{1:k} = (\vec{z}_1, \vec{z}_2, \ldots, \vec{z}_{k-1}, \vec{z}_k)$

System state:

$\vec{x}_k = [x_k, y_k, v_{x,k}, v_{y,k}, a_{x,k}, a_{y,k}]^T$   e.g. hand kinematics at time k

$X_{1:k} = (\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_k)$

SLIDE 10

Generative Model

Encoding:  $\vec{z}_k = f_{obs}(\vec{x}_{1:k}) + \vec{q}_k$

$\vec{x}_k = f_p(\vec{x}_{1:k-1}) + \vec{w}_k$

where $\vec{z}_k$ is the neural firing rate, $\vec{x}_k$ is the state (e.g. hand position, velocity, acceleration), and $\vec{q}_k$, $\vec{w}_k$ are noise (e.g. Normal or Poisson).

Are the f()'s linear or non-linear? Markov?

SLIDE 11

Today's Goal

Build a probabilistic model of this real process and with it estimate the posterior distribution

$p(\vec{x}_k \,|\, \vec{z}_{1:k})$

so that we can infer the most likely state

$\arg\max_{x_k}\, p(\vec{x}_k \,|\, \vec{z}_{1:k})$

or the expected state

$\mathrm{E}_{p(\vec{x}_k | \vec{z}_{1:k})}[\vec{x}_k]$

How??? Recursion!

$p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1}) \Rightarrow p(\vec{x}_k \,|\, \vec{z}_{1:k})$

  • How can we formulate this recursion?
  • How can we compute this recursion?
  • What assumptions must we make?

SLIDE 12

Modeling

  • A useful aside: Graphical models

Graphical models are a way of systematically diagramming the dependencies amongst groups of random variables. Graphical models can help elucidate assumptions and modeling choices that would otherwise be hard to visualize and understand.

Using a graphical model will help us design our model!

SLIDE 13

Graphical Model

Generative model: a single node $\vec{x}_k$ with an arrow to $\vec{z}_k$, i.e. $p(\vec{z}_k \,|\, \vec{x}_k)$.

SLIDE 14

Graphical Model

The chain $\vec{x}_{k-1} \to \vec{x}_k \to \vec{x}_{k+1}$, with each state $\vec{x}_k$ emitting an observation $\vec{z}_k$.

SLIDE 15

Graphical Model

The same chain, now with the conditional independence assumptions it encodes:

$p(\vec{x}_k \,|\, \vec{x}_{k-1}, \vec{x}_{k-2}, \ldots, \vec{x}_1) = p(\vec{x}_k \,|\, \vec{x}_{k-1})$

$p(\vec{z}_k \,|\, \vec{x}_k, \vec{z}_{1:k-1}) = p(\vec{z}_k \,|\, \vec{x}_k)$

$p(\vec{x}_k \,|\, \vec{x}_{k-1}, \vec{z}_{1:k-1}) = p(\vec{x}_k \,|\, \vec{x}_{k-1})$

SLIDE 16

Summary

From these modeling choices all we have to choose is:

Likelihood (encoding) model:  $p(\vec{z}_k \,|\, \vec{x}_k)$

Temporal prior model:  $p(\vec{x}_k \,|\, \vec{x}_{k-1})$

How to compute the posterior (decoding):

$p(\vec{x}_k \,|\, \vec{z}_{1:k}) = \kappa\, p(\vec{z}_k \,|\, \vec{x}_k) \int p(\vec{x}_k \,|\, \vec{x}_{k-1})\, p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})\, d\vec{x}_{k-1}$

Initial distributions:  $p(\vec{x}_0)$, $p(\vec{z}_0)$
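A minimal sketch of this recursion on a discretized 1-D state space, where the integral becomes a matrix-vector product. The random-walk temporal prior and Gaussian likelihood are stand-ins I chose for illustration, not the course's encoding model.

```python
import numpy as np

# Grid-based recursive Bayesian estimation in 1-D.
grid = np.linspace(-5.0, 5.0, 201)  # discretized states
dx = grid[1] - grid[0]

# Temporal prior p(x_k | x_{k-1}): Gaussian random walk, std 0.3.
trans = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 0.3) ** 2)
trans /= trans.sum(axis=0, keepdims=True) * dx  # columns integrate to 1

posterior = np.full_like(grid, 1.0 / (grid[-1] - grid[0]))  # flat p(x_0)
for z in [0.2, 0.5, 0.9, 1.4]:                      # fake observations of x
    prior = trans @ posterior * dx                  # prediction integral
    like = np.exp(-0.5 * ((z - grid) / 0.5) ** 2)   # p(z_k | x_k)
    posterior = like * prior
    posterior /= posterior.sum() * dx               # kappa: normalize
    print(f"z = {z:+.1f} -> E[x_k | z_1:k] = {grid @ posterior * dx:+.3f}")
```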

SLIDE 17

Linear Gaussian Generative Model

Observation Equation:

$\vec{z}_k = H \vec{x}_k + \vec{q}_k, \qquad \vec{q}_k \sim N(0, Q), \quad k = 1, 2, \ldots$

where $\vec{z}_k = [z_{k,1}, z_{k,2}, \ldots, z_{k,42}]^T$ is the firing rate vector (zero mean, sqrt-transformed), $H$ is a 42 x 6 matrix, and $Q$ is a 42 x 42 covariance matrix.

System Equation:

$\vec{x}_{k+1} = A \vec{x}_k + \vec{w}_k, \qquad \vec{w}_k \sim N(0, W), \quad k = 1, 2, \ldots$

where $\vec{x}_k = [x_k, y_k, v_{x,k}, v_{y,k}, a_{x,k}, a_{y,k}]^T$ is the system state vector (zero mean), $A$ is a 6 x 6 matrix, and $W$ is a 6 x 6 covariance matrix.
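A sketch that simulates this generative model forward. The particular $A$, $H$, $W$, $Q$ values are placeholders (a constant-acceleration-flavored $A$, random $H$), not the parameters fit in the course.

```python
import numpy as np

# Simulate the linear Gaussian generative model: system eq. then obs. eq.
rng = np.random.default_rng(2)
dim_x, dim_z, T = 6, 42, 100

A = np.eye(dim_x)                             # placeholder system matrix:
A[0, 2] = A[1, 3] = A[2, 4] = A[3, 5] = 1.0   # pos += vel, vel += acc
W = 0.01 * np.eye(dim_x)                      # process noise covariance
H = rng.standard_normal((dim_z, dim_x))       # placeholder encoding matrix
Q = np.eye(dim_z)                             # observation noise covariance

x = np.zeros(dim_x)
xs, zs = [], []
for k in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(dim_x), W)  # system eq.
    z = H @ x + rng.multivariate_normal(np.zeros(dim_z), Q)  # observation eq.
    xs.append(x)
    zs.append(z)
print("states:", np.array(xs).shape, "firing rates:", np.array(zs).shape)
```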

SLIDE 18

Gaussian Assumption Clarified

Gaussian distribution:

$\vec{z}_k = H\vec{x}_k + \vec{q}_k,\ \vec{q}_k \sim N(0, Q) \iff \vec{z}_k \sim N(H\vec{x}_k, Q)$, i.e. $\vec{z}_k - H\vec{x}_k \sim N(0, Q)$

Recall the one-dimensional case:

$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{1}{2}\,\frac{(x - \mu)^2}{\sigma^2}\right)$

SLIDE 19

Graphical Model

The full joint over the chain factors as:

$p(X_M, Z_M) = p(X_M)\, p(Z_M \,|\, X_M) = \left[ p(\vec{x}_1) \prod_{k=2}^{M} p(\vec{x}_k \,|\, \vec{x}_{k-1}) \right] \left[ \prod_{k=1}^{M} p(\vec{z}_k \,|\, \vec{x}_k) \right]$

with $\vec{x}_k = A\vec{x}_{k-1} + \vec{w}_k$ along the chain and $\vec{z}_k = H\vec{x}_k + \vec{q}_k$ at each observation.

SLIDE 20

Break!

  • When we come back, quickly arrange yourselves in groups of 4 or 5. Uniformly distribute the applied math people and people who have taken CS143 (Vision) into these groups.
  • Instead of straight-up lecture we are going to work through some derivations together, to improve retention and facilitate understanding.
  • If you don't have pencil and paper, please get some.
SLIDE 21

Next step.

  • Now we have a model – how do we do recursive Bayesian inference?
  • I will present several relatively easy problems, which I expect each group to solve in 5-10 minutes. When every group is finished, I will select one group and ask the person in that group who understood the problem the least to explain the solution to the class. The group is responsible for nominating this person and for his or her ability to explain the solution.

SLIDE 22

Recursive Bayesian Inference

$p(\text{kinematics}_t \,|\, \text{firing}_{1:t}) = \frac{p(\text{firing}_t \,|\, \text{kinematics}_t)\; p(\text{kinematics}_t)}{p(\text{firing}_t)}$

Posterior: a posteriori probability (after the evidence). Likelihood: the evidence. Prior: a priori, before the evidence. The denominator is a normalization constant (independent of kinematics).

We sequentially infer hand kinematics from uncertain evidence and our prior knowledge of how hands move.

SLIDE 23

Recursive Bayesian Estimation

  • Update Stage

– From the prediction stage you have a prior distribution over the system state at the current time k. After observing the process at time k you can update the posterior to reflect that new information.

  • Prediction Stage

– Given the posterior from a previous update stage and your system model you produce the next prior distribution.

SLIDE 24

Update Stage

$p(\vec{x}_k \,|\, \vec{z}_{1:k}) = \frac{p(\vec{x}_k, \vec{z}_{1:k})}{p(\vec{z}_{1:k})}$   (Bayes rule)

$= \frac{p(\vec{x}_k, \vec{z}_k \,|\, \vec{z}_{1:k-1})\, p(\vec{z}_{1:k-1})}{p(\vec{z}_k \,|\, \vec{z}_{1:k-1})\, p(\vec{z}_{1:k-1})}$   (Bayes rule again; the $p(\vec{z}_{1:k-1})$ terms cancel)

$= \frac{p(\vec{z}_k \,|\, \vec{x}_k, \vec{z}_{1:k-1})\, p(\vec{x}_k \,|\, \vec{z}_{1:k-1})}{p(\vec{z}_k \,|\, \vec{z}_{1:k-1})}$

$= \frac{p(\vec{z}_k \,|\, \vec{x}_k)\, p(\vec{x}_k \,|\, \vec{z}_{1:k-1})}{p(\vec{z}_k \,|\, \vec{z}_{1:k-1})}$   (independence)

New observation: $p(\vec{z}_k \,|\, \vec{x}_k)$. Prior: $p(\vec{x}_k \,|\, \vec{z}_{1:k-1})$. Posterior: $p(\vec{x}_k \,|\, \vec{z}_{1:k})$.

SLIDE 25

Prediction Stage

Law of Total Probability: $\Pr(B \,|\, C) = \int \Pr(B \,|\, A, C)\, \Pr(A \,|\, C)\, \partial A$

$p(\vec{x}_k \,|\, \vec{z}_{1:k-1}) = \int p(\vec{x}_k \,|\, \vec{x}_{k-1}, \vec{z}_{1:k-1})\, p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})\, \partial \vec{x}_{k-1}$

$= \int p(\vec{x}_k \,|\, \vec{x}_{k-1})\, p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})\, \partial \vec{x}_{k-1}$   (independence)

Posterior: $p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})$. Prior: $p(\vec{x}_k \,|\, \vec{z}_{1:k-1})$.

SLIDE 26

Phew!

  • Let's drill this into our heads and actually run the Bayesian recursion, to see how it starts and behaves.

SLIDE 27

The Bayesian Recursion

Given the model $P(\vec{z}_k \,|\, \vec{x}_k)$ and $P(\vec{x}_k \,|\, \vec{x}_{k-1})$, and the initial distributions:

$P(\vec{x}_1 \,|\, \vec{z}_1) = \frac{P(\vec{z}_1 \,|\, \vec{x}_1)\, P(\vec{x}_1)}{P(\vec{z}_1)}$

$P(\vec{x}_2 \,|\, \vec{z}_1) = \int P(\vec{x}_2 \,|\, \vec{x}_1)\, P(\vec{x}_1 \,|\, \vec{z}_1)\, \partial \vec{x}_1$

$P(\vec{x}_2 \,|\, \vec{z}_1, \vec{z}_2) = \frac{P(\vec{z}_2 \,|\, \vec{x}_2)\, P(\vec{x}_2 \,|\, \vec{z}_1)}{P(\vec{z}_2 \,|\, \vec{z}_1)}$

⋮

Exercise: run the Bayesian recursion to depth 2.

SLIDE 28

Highlighting the Assumptions

$p(\vec{x}_k \,|\, \vec{z}_{1:k}) = p(\vec{x}_k \,|\, \vec{z}_k, \vec{z}_{1:k-1}) = \frac{p(\vec{z}_k \,|\, \vec{x}_k) \int p(\vec{x}_k \,|\, \vec{x}_{k-1})\, p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})\, d\vec{x}_{k-1}}{p(\vec{z}_k \,|\, \vec{z}_{1:k-1})}$

Bayes rule: $p(a \,|\, b) = p(b \,|\, a)\, p(a) / p(b)$

Independence assumption: $p(\vec{z}_k \,|\, \vec{x}_k, Z_{1:k-1}) = p(\vec{z}_k \,|\, \vec{x}_k)$

Independence assumption: $p(\vec{x}_k \,|\, \vec{x}_{k-1}, \vec{z}_{1:k-1}) = p(\vec{x}_k \,|\, \vec{x}_{k-1})$

Law of Total Probability: $p(a \,|\, c) = \int p(a \,|\, b, c)\, p(b \,|\, c)\, db$

What's missing?

SLIDE 29

Bayesian Formulation

$p(\vec{x}_k \,|\, \vec{z}_{1:k}) = \kappa\, p(\vec{z}_k \,|\, \vec{x}_k) \int p(\vec{x}_k \,|\, \vec{x}_{k-1})\, p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})\, d\vec{x}_{k-1}$

$p(\vec{z}_k \,|\, \vec{x}_k)$: likelihood
$p(\vec{x}_k \,|\, \vec{x}_{k-1})$: temporal prior
$p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})$: posterior probability at the previous time step
$\kappa$: normalizing term

SLIDE 30

General Model

  • $p(\vec{x}_k \,|\, \vec{z}_{0:k})$ can be an arbitrary, non-Gaussian, multi-modal distribution.
  • The recursive equation may have no explicit solution, but can usually be approximated numerically using Monte Carlo techniques such as particle filtering.
  • However, if both the likelihood and prior are linear Gaussian, then the recursive equation has a closed form solution. This model, which we'll see next week, is known as the Kalman filter (Kalman, 1960).

SLIDE 31

Particle Filtering

  • "A technique for implementing a recursive Bayesian filter by Monte Carlo simulations" (Arulampalam et al.)
  • A set of samples (particles) and weights that represent the posterior distribution (a random measure) is maintained throughout the algorithm.
  • It boils down to sampling, density representation by samples, and Monte Carlo integration.

SLIDE 32

Sampling

  • Uniform
    – rand(): Linear Congruential Generator
      x(n) = (a * x(n-1) + b) mod M
      e.g. 0.2311 0.6068 0.4860 0.8913 0.7621 0.4565 0.0185
  • Normal
    – randn(): Box-Muller (see the sketch after this list)
      x1, x2 ~ U(0,1)  ->  y1, y2 ~ N(0,1)
      y1 = sqrt(-2 ln(x1)) cos(2 pi x2)
      y2 = sqrt(-2 ln(x1)) sin(2 pi x2)
  • Binomial(p)
    – if (rand() < p)
  • Higher Dimensional Distributions
    – Metropolis-Hastings / Gibbs
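A sketch of the first two generators above in Python. The LCG constants are illustrative (glibc's), not whatever rand() used at the time.

```python
import numpy as np

def lcg(seed, n, a=1103515245, b=12345, M=2**31):
    """Linear congruential generator -> n uniforms in [0, 1)."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + b) % M
        out.append(x / M)
    return np.array(out)

def box_muller(u1, u2):
    """Map pairs of U(0,1) draws to independent N(0,1) draws."""
    r = np.sqrt(-2.0 * np.log(u1))
    return r * np.cos(2 * np.pi * u2), r * np.sin(2 * np.pi * u2)

u = lcg(seed=42, n=20_000)
y1, y2 = box_muller(u[::2], u[1::2])
print("mean ~0, var ~1:", y1.mean().round(3), y1.var().round(3))
```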

SLIDE 33

Distribution Representation by Samples

[Figure: the standard normal density $(2\pi)^{-1/2} e^{-x^2/2}$ alongside histograms of 10, 100, and 1000 samples. Sample statistics: 10 samples, $\mu = .0013$, var $= .8162$; 100 samples, $\mu = .0012$, var $= .7805$; 1000 samples, $\mu = -.043$, var $= .9547$.]

SLIDE 34

Law of Large Numbers

  • If we have n fair samples drawn from a distribution P,
  • then one version of the Law of Large Numbers says that the empirical average of a function over the samples converges to the expected value of the function as the number of samples grows:

$x_{1:n} \sim P(X)$

$\frac{1}{n} \sum_{i=1}^{n} f(x_i) \xrightarrow{\,n \to \infty\,} \mathrm{E}_P[f(X)]$
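A quick empirical check of the statement above, with $f(x) = x^2$ and $P = N(0, 1)$, so $\mathrm{E}_P[f(X)] = 1$ exactly.

```python
import numpy as np

# Empirical averages of f(x) = x**2 over growing sample sizes.
rng = np.random.default_rng(3)
for n in (10, 1_000, 100_000):
    x = rng.standard_normal(n)
    print(f"n = {n:>6}: (1/n) sum f(x_i) = {np.mean(x**2):.4f}  (true value 1)")
```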

SLIDE 35

Why Does this Matter?

  • Particle filtering represents distributions by samples.
  • We need to either maximize or take an expected value of the posterior (both operations on functions), with the posterior represented by samples.
  • We need to sample from distributions to simulate trajectories, etc.

SLIDE 36

Monte Carlo Integration

Given the sample representation of a posterior,

$P_N(\vec{x}_k \,|\, \vec{z}_{1:k})\, \partial \vec{x}_k = \frac{1}{N} \sum_{i=1}^{N} w_k^i\, \delta_{x_k^i}(\partial \vec{x}_k)$

evaluate the prediction integral:

$p(\vec{x}_k \,|\, \vec{z}_{1:k-1}) = \int p(\vec{x}_k \,|\, \vec{x}_{k-1})\, p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})\, \partial \vec{x}_{k-1}$

$= \int p(\vec{x}_k \,|\, \vec{x}_{k-1})\, \frac{1}{N} \sum_{i=1}^{N} w_{k-1}^i\, \delta_{x_{k-1}^i}(\partial \vec{x}_{k-1})$

$= \frac{1}{N} \sum_{i=1}^{N} w_{k-1}^i\, p(\vec{x}_k \,|\, x_{k-1}^i)$

with weights $w_{k-1}^i$ and particles/samples $x_{k-1}^i$.
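A sketch of evaluating the prediction density as the weighted mixture above. The 1-D Gaussian random-walk transition is a stand-in, not the course model.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1_000
particles = rng.standard_normal(N)  # x_{k-1}^i ~ posterior at k-1
weights = np.full(N, 1.0)           # w_{k-1}^i (here: uniform)

def predict_density(x_k, sigma=0.3):
    """(1/N) sum_i w_i p(x_k | x_{k-1}^i) for a Gaussian transition."""
    kern = np.exp(-0.5 * ((x_k - particles) / sigma) ** 2)
    kern /= np.sqrt(2 * np.pi) * sigma
    return np.mean(weights * kern)

print("p(x_k = 0 | z_1:k-1) ~", round(predict_density(0.0), 4))
```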

SLIDE 37

The Particle Filtering Story

  • A bit hand-wavy:

– Simplified from Arulampalam et al. and de Freitas et al.
– Largely overlooks how the importance weights are maintained and updated.
– Doesn't touch particle degeneration and replacement.

  • Posterior Representation by Samples
  • Importance Sampling
  • Weights in Importance Sampling
  • Sampling from the Prediction Distribution
  • Simple Particle Filter
SLIDE 38

Posterior Representation by Samples

We use a set of random samples from the posterior distribution to represent the posterior. Then we can use sample statistics to approximate expectations over the posterior.

Problem: we need to update the samples such that they still accurately represent the posterior after the next observation.

Let $\{x^{(i)}\}_{i=1}^{N}$ be a set of fair samples from distribution $p$; then for functions $f$, $\frac{1}{N}\sum_{i=1}^{N} f(x^{(i)}) \approx \mathrm{E}_p[f]$.

SLIDE 39

Importance Sampling

Assume we have a weighted sample set (from Monte Carlo integration over the posterior at time k-1, with importance weights $w_{k-1}^{(i)}$):

$S_{k-1} = \{ (x_{k-1}^{(i)}, w_{k-1}^{(i)}),\ 1 \le i \le N \}$

Then the prediction distribution becomes a linear mixture model:

$p(\vec{x}_k \,|\, \vec{z}_{1:k-1}) = \frac{1}{N} \sum_{i=1}^{N} w_{k-1}^i\, p(\vec{x}_k \,|\, x_{k-1}^i)$

where the components $p(\vec{x}_k \,|\, x_{k-1}^i)$ are specified as part of the model. We can sample from this mixture model by treating the weights as mixing probabilities: pick a component by sampling from the cumulative distribution of the weights, then sample from the model proposal pdf $p(\vec{x}_k \,|\, x_{k-1}^i)$.
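A sketch of drawing from this weighted mixture. Component choice inverts the cumulative distribution of the weights; the per-component proposal is a stand-in Gaussian transition.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 500
particles = rng.standard_normal(N)  # x_{k-1}^i
weights = rng.random(N)
weights /= weights.sum()            # mixing probabilities

def sample_prediction(n_draws, sigma=0.3):
    cdf = np.cumsum(weights)
    cdf /= cdf[-1]                                     # guard roundoff
    idx = np.searchsorted(cdf, rng.random(n_draws))    # pick components
    return particles[idx] + sigma * rng.standard_normal(n_draws)

draws = sample_prediction(10_000)
print("mixture mean:", draws.mean().round(3),
      "vs weighted particle mean:", (weights @ particles).round(3))
```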

SLIDE 40

Weights in Importance Sampling

Draw samples from a proposal distribution, producing weighted samples. That is: find weights so that the linearly weighted sample statistics approximate expectations under the desired distribution.
SLIDE 41

Updating the Weights

The samples from the prediction distribution need to be re-weighted such that they still represent the posterior distribution well after a new observation:

$p(\vec{x}_k \,|\, \vec{z}_{1:k}) = \frac{p(\vec{z}_k \,|\, \vec{x}_k)\, p(\vec{x}_k \,|\, \vec{z}_{1:k-1})}{p(\vec{z}_k \,|\, \vec{z}_{1:k-1})}$

We have a sample representation for the prior $p(\vec{x}_k \,|\, \vec{z}_{1:k-1})$; $p(\vec{z}_k \,|\, \vec{x}_k)$ is our model likelihood $l(\vec{z}_k \,|\, \vec{x}_k)$; and the denominator, $p(\vec{z}_k \,|\, \vec{z}_{1:k-1}) \approx \frac{1}{N} \sum_{j=1}^{N} l(\vec{z}_k \,|\, x_k^j)\, w_{k-1}^j$, goes away after re-normalization. Major hand waving here! Inaccuracies abound!

$\Rightarrow w_k^j = l(\vec{z}_k \,|\, x_k^j)\, w_{k-1}^j$

SLIDE 42

An algorithmic run-through

Simple particle filter: draw samples from the prediction distribution; weights are proportional to the ratio of posterior and prediction distributions, i.e. the normalized likelihood. [Gordon et al. '93; Isard & Blake '98; Liu & Chen '98, ...]

Pipeline: posterior at k-1 -> sample -> temporal dynamics -> sample -> likelihood -> normalize -> posterior at k.
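A minimal sketch of this simple (bootstrap) particle filter for a 1-D random-walk state with Gaussian observations. The model (sigmas, random walk) is illustrative, not the course's BCI model.

```python
import numpy as np

rng = np.random.default_rng(6)
N, T = 1_000, 50
sigma_proc, sigma_obs = 0.3, 0.5

# Simulate a true trajectory and noisy observations z_1:T.
x_true = np.cumsum(sigma_proc * rng.standard_normal(T))
z = x_true + sigma_obs * rng.standard_normal(T)

particles = rng.standard_normal(N)  # samples from p(x_0)
est = []
for k in range(T):
    # Prediction: sample from the temporal dynamics p(x_k | x_{k-1}^i).
    particles = particles + sigma_proc * rng.standard_normal(N)
    # Update: weight by the likelihood l(z_k | x_k^i), then normalize.
    w = np.exp(-0.5 * ((z[k] - particles) / sigma_obs) ** 2)
    w /= w.sum()
    est.append(w @ particles)  # posterior mean estimate
    # Resample (multinomial) to combat degeneracy; weights reset to uniform.
    particles = particles[rng.choice(N, size=N, p=w)]

print("RMSE:", np.sqrt(np.mean((np.array(est) - x_true) ** 2)).round(3))
```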

SLIDE 43

Particle Filter (Isard & Blake '96)

Posterior: $p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})$

SLIDE 44

Particle Filter (Isard & Blake '96)

Posterior $p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})$ -> sample

SLIDE 45

Particle Filter (Isard & Blake '96)

Posterior $p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})$ -> sample -> temporal dynamics $p(\vec{x}_k \,|\, \vec{x}_{k-1})$ -> sample

SLIDE 46

Particle Filter (Isard & Blake '96)

Posterior $p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})$ -> sample -> temporal dynamics $p(\vec{x}_k \,|\, \vec{x}_{k-1})$ -> sample -> likelihood $p(\vec{z}_k \,|\, \vec{x}_k)$

SLIDE 47

Particle Filter (Isard & Blake '96)

Posterior $p(\vec{x}_{k-1} \,|\, \vec{z}_{1:k-1})$ -> sample -> temporal dynamics $p(\vec{x}_k \,|\, \vec{x}_{k-1})$ -> sample -> likelihood $p(\vec{z}_k \,|\, \vec{x}_k)$ -> normalize -> posterior $p(\vec{x}_k \,|\, \vec{z}_{1:k})$

SLIDE 48

Sampling

  • Many sampling steps in particle filtering:

– Sampling from a Gaussian.
– Sampling from a multinomial.
– Sampling from a weighted mixture model.

  • More general sampling techniques that we may get to later:

– Metropolis-Hastings
– Gibbs

SLIDE 49

Sampling from a Gaussian

Given a Gaussian distribution $N(\vec{\mu}, Q)$, a Cholesky decomposition of the covariance matrix $Q = LL^T$, and a random vector $\vec{\tau} = [\tau_1, \tau_2, \ldots, \tau_n]^T$ where each element is normal with zero mean and unit variance, show that

$\vec{\mu} + L\vec{\tau} \sim N(\vec{\mu}, Q)$

and explain how this fact can be used to sample from a Gaussian.

$\mathrm{E}[L\vec{\tau}] = L\, \mathrm{E}[\vec{\tau}] = 0$

$\mathrm{Var}[L\vec{\tau}] = \mathrm{E}[L\vec{\tau}\,\vec{\tau}^T L^T] = L\, \mathrm{E}[\vec{\tau}\,\vec{\tau}^T]\, L^T = L L^T = Q$
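A sketch of this recipe in Python; the particular $\vec{\mu}$ and $Q$ are arbitrary test values.

```python
import numpy as np

# Sample from N(mu, Q) via the Cholesky factor of Q.
rng = np.random.default_rng(7)
mu = np.array([1.0, -2.0])
Q = np.array([[2.0, 0.8],
              [0.8, 1.0]])

L = np.linalg.cholesky(Q)                # Q = L @ L.T
tau = rng.standard_normal((2, 100_000))  # elementwise N(0, 1)
samples = mu[:, None] + L @ tau          # each column ~ N(mu, Q)

print("sample mean:", samples.mean(axis=1).round(2))
print("sample covariance:\n", np.cov(samples).round(2))
```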

SLIDE 50

Sampling from a Multinomial

Given a weighted sample set

$S = \{ (x^{(i)}, w^{(i)});\ i = 1 \ldots N \}$

sample an index by inverting the cumulative distribution of the weights.
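A sketch of the inversion in Python; the weights here are arbitrary example values.

```python
import numpy as np

# Invert the cumulative distribution of the weights with uniform draws.
rng = np.random.default_rng(8)
w = np.array([0.1, 0.5, 0.2, 0.2])  # example weights, sum to 1
cdf = np.cumsum(w)
cdf /= cdf[-1]                      # guard against float roundoff

idx = np.searchsorted(cdf, rng.random(10_000))
print("empirical frequencies:", np.bincount(idx, minlength=len(w)) / 10_000)
```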

SLIDE 51

BAYESIAN INFERENCE

Infer (decode) behavior from firing: p(behavior at k | firing up to k). With the linear Gaussian model this is the Kalman filter.

observation model (likelihood):  $\vec{z}_k \sim N(H\vec{x}_k, Q)$

system model:  $\vec{x}_k \sim N(A\vec{x}_{k-1}, W)$

prior:  $p(\vec{x}_k \,|\, \vec{Z}_{k-1}) = \int p(\vec{x}_k \,|\, \vec{x}_{k-1})\, p(\vec{x}_{k-1} \,|\, \vec{Z}_{k-1})\, d\vec{x}_{k-1}$, with $p(\vec{x}_{k-1} \,|\, \vec{Z}_{k-1}) = N(\hat{x}_{k-1}, P_{k-1})$

posterior:  $p(\vec{x}_k \,|\, \vec{Z}_k) = \kappa\, p(\vec{z}_k \,|\, \vec{x}_k)\, p(\vec{x}_k \,|\, \vec{Z}_{k-1})$

SLIDE 52

Kalman Filter

observation model:  $\vec{z}_t \sim N(H_t \vec{x}_t, Q_t)$

system model:  $\vec{x}_t \sim N(A_t \vec{x}_{t-1}, W_t)$

Likelihood: $p(\vec{z}_t \,|\, \vec{x}_t)$. Temporal prior: $p(\vec{x}_t \,|\, \vec{x}_{t-1})$. The posterior

$p(\vec{x}_t \,|\, \vec{Z}_t) = \kappa\, p(\vec{z}_t \,|\, \vec{x}_t) \int p(\vec{x}_t \,|\, \vec{x}_{t-1})\, p(\vec{x}_{t-1} \,|\, \vec{Z}_{t-1})\, d\vec{x}_{t-1}$

is also Gaussian. Kalman filter: real-time, recursive decoding.
SLIDE 53

KALMAN FILTER ALGORITHM (Welch and Bishop 2002)

Initial estimates of $\hat{x}_{k-1}$ and $P_{k-1}$.

Time Update (prior estimate and its error covariance):

$\hat{x}_k^- = A \hat{x}_{k-1}$

$P_k^- = A P_{k-1} A^T + W$

Measurement Update (Kalman gain, posterior estimate, and its error covariance):

$K_k = P_k^- H^T (H P_k^- H^T + Q)^{-1}$

$\hat{x}_k = \hat{x}_k^- + K_k (\vec{z}_k - H \hat{x}_k^-)$

$P_k = (I - K_k H) P_k^-$
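A sketch of these time/measurement updates for the 1-D random-walk example used in the particle filter sketch; $A$, $H$, $W$, $Q$ are scalars here for brevity, not the course's fitted matrices.

```python
import numpy as np

rng = np.random.default_rng(9)
A, H, W, Q = 1.0, 1.0, 0.3**2, 0.5**2

x_true = np.cumsum(np.sqrt(W) * rng.standard_normal(50))
z = H * x_true + np.sqrt(Q) * rng.standard_normal(50)

x_hat, P, est = 0.0, 1.0, []
for zk in z:
    # Time update: prior estimate and error covariance.
    x_minus = A * x_hat
    P_minus = A * P * A + W
    # Measurement update: gain, posterior estimate, error covariance.
    K = P_minus * H / (H * P_minus * H + Q)
    x_hat = x_minus + K * (zk - H * x_minus)
    P = (1.0 - K * H) * P_minus
    est.append(x_hat)

print("RMSE:", np.sqrt(np.mean((np.array(est) - x_true) ** 2)).round(3))
```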

SLIDE 54

Factored Sampling

Weighted samples

$S = \{ (x^{(i)}, w^{(i)});\ i = 1 \ldots N \}$

with normalized likelihood weights:

$w_t^{(n)} = \frac{p(\vec{I}_t \,|\, \vec{x}_t^{(n)})}{\sum_{i=1}^{N} p(\vec{I}_t \,|\, \vec{x}_t^{(i)})}$