Overview of State Space Models

Standard State Space Model Matrices

x_{n+1} = F_n x_n + G_n u_n,  n ≥ 0
y_n = H_n x_n + v_n

  • Conceptually, y_n usually contains all the signals that are observable
    – May come from multiple sensors
  • x_n contains all of the parameters of the process that you are interested in estimating
  • Note that the process is nonstationary in general, since all of the matrices are allowed to change with time

J. McNames, Portland State University, ECE 539/639 State Space Models, Ver. 1.03
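The standard model above can be simulated directly. The following NumPy sketch uses small illustrative matrices (the specific values of F, G, H, Q, and R are assumptions for the example, not from the slides); for simplicity the matrices are held constant, though the model allows them to vary with n.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: l = 2 states, m = 1 process noise, p = 1 output
l, m, p, N = 2, 1, 1, 200

F = np.array([[0.9, 0.1], [0.0, 0.8]])   # F_n (held constant here)
G = np.array([[1.0], [0.5]])             # G_n
H = np.array([[1.0, 0.0]])               # H_n
Q = np.array([[0.1]])                    # Cov(u_n), process noise
R = np.array([[0.2]])                    # Cov(v_n), measurement noise

x = np.zeros((N + 1, l))
y = np.zeros((N, p))
for n in range(N):
    u = rng.multivariate_normal(np.zeros(m), Q)   # white process noise u_n
    v = rng.multivariate_normal(np.zeros(p), R)   # white measurement noise v_n
    x[n + 1] = F @ x[n] + G @ u                   # state update
    y[n] = H @ x[n] + v                           # observation
```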

Overview of State Space Models

  • Standard state space model
  • Straight-forward extensions
  • Properties
  • Solving the normal equations
  • Covariance matrices
  • Wide-sense Markov processes
  • Designing state space models

– Common choices for the state dynamics

  • Continuous-time to discrete-time conversion

– Coarse estimates
– Exact conversions

  • Nonlinear state space models

State Space Model With Known Input

x_{n+1} = F_n x_n + G_n(u_n + ū_n),  n ≥ 0
y_n = H_n x_n + v_n

  • The state space model can be extended when there are known input signals (collected in the vector ū_n ∈ C^{m×1}) that affect the state of the system
  • This changes little in the development of the Kalman filter algorithm
  • The apparent constraint that both u_n and ū_n are multiplied by G_n does not really impair the flexibility of the model


Standard State Space Model

x_{n+1} = F_n x_n + G_n u_n,  n ≥ 0
y_n = H_n x_n + v_n

where

F_n ∈ C^{ℓ×ℓ}   G_n ∈ C^{ℓ×m}   H_n ∈ C^{p×ℓ}
x_n ∈ C^{ℓ×1}   u_n ∈ C^{m×1}   y_n ∈ C^{p×1}   v_n ∈ C^{p×1}

  • u_n and v_n are multivariate white noise processes
  • In many cases v_n is interpreted as an output disturbance or measurement noise
  • u_n is usually called process noise
  • This is a more general form than is used in many texts

Covariance Matrices

x_{n+1} = F_n x_n + G_n u_n
y_n = H_n x_n + v_n

⟨ col{x_0, u_n, v_n}, col{x_0, u_k, v_k, 1} ⟩ =
  ⎡ Π_0   0              0              0 ⎤
  ⎢ 0     Q_n δ_{nk}     S_n δ_{nk}     0 ⎥
  ⎣ 0     S_n^* δ_{nk}   R_n δ_{nk}     0 ⎦

  • The random variable x_0 and the two white noise processes drive the state space model
    – This means that ⟨u_n, u_k⟩ = 0 for n ≠ k
    – Does not mean Q_n or R_n are diagonal matrices
  • The model is causal in that x_n and y_n are completely determined by x_0 and past and present values of v_n and u_n
  • Clever use of the inner product notation has been used here to indicate these RVs have zero mean


State Space Model With Known Input Continued

x_{n+1} = F_n x_n + G_n(u_n + ū_n),  n ≥ 0
y_n = H_n x_n + v_n

  • We can specify the covariance matrix of u_n to be anything we like, so that the additive noise term has whatever covariance matrix we like
  • Suppose we wish for the scaled state noise G_n u_n to have a covariance matrix T_n. Set
    Q_n ≜ G_n^{-1} T_n G_n^{-*}
    so that
    ⟨u_n, u_n⟩ = Q_n,   ⟨G_n u_n, G_n u_n⟩ = G_n Q_n G_n^* = T_n
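A quick numerical check of this construction (assuming a square, invertible G_n; the particular matrices below are illustrative):

```python
import numpy as np

G = np.array([[2.0, 0.0], [1.0, 1.0]])   # G_n, assumed square and invertible
T = np.array([[3.0, 1.0], [1.0, 2.0]])   # desired covariance T_n of G_n u_n

Ginv = np.linalg.inv(G)
Q = Ginv @ T @ Ginv.conj().T             # Q_n = G_n^{-1} T_n G_n^{-*}

# Then Cov(G_n u_n) = G_n Q_n G_n^* recovers T_n exactly
achieved = G @ Q @ G.conj().T
```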


Covariance Matrices

x_{n+1} = F_n x_n + G_n u_n
y_n = H_n x_n + v_n

⟨ col{x_0, u_n, v_n}, col{x_0, u_k, v_k, 1} ⟩ =
  ⎡ Π_0   0              0              0 ⎤
  ⎢ 0     Q_n δ_{nk}     S_n δ_{nk}     0 ⎥
  ⎣ 0     S_n^* δ_{nk}   R_n δ_{nk}     0 ⎦

  • The matrices Π_0, Q_n, R_n, and S_n are assumed to be known
  • By definition, the matrices Q_n and R_n must be Hermitian and non-negative definite
  • The matrix S_n doesn't have to be Hermitian, but it must be such that the joint covariance matrix is non-negative definite:
    ⎡ Q     S ⎤
    ⎣ S^*   R ⎦ ≥ 0
  • This characterization ensures the process is wide-sense Markov (to be discussed later)


State Space Model With Known Input Continued

x_{n+1} = F_n x_n + G_n u_n
y_n = H_n x_n + v_n

  • An additional known input signal w_n could be added to the output equation in a similar fashion:
    y_n = H_n x_n + w_n + v_n
  • This is most easily handled by defining a different observed signal z_n ≜ y_n − w_n, so that
    z_n = H_n x_n + v_n
  • Thus, it easily reduces to the form of our standard state space model output equation


Example 1: Orthogonality Properties

x_{n+1} = F_n x_n + G_n u_n
y_n = H_n x_n + v_n

Prove some of the orthogonality properties on the previous slide.


Correlation of Process and Measurement Noise

x_{n+1} = F_n x_n + G_n u_n
y_n = H_n x_n + v_n

⟨ col{x_0, u_n, v_n}, col{x_0, u_k, v_k, 1} ⟩ =
  ⎡ Π_0   0              0              0 ⎤
  ⎢ 0     Q_n δ_{nk}     S_n δ_{nk}     0 ⎥
  ⎣ 0     S_n^* δ_{nk}   R_n δ_{nk}     0 ⎦

  • It is common to assume the process and measurement noise processes are completely uncorrelated, S_n = 0 for all n
    – Due to domain knowledge that they have different origins
  • When feedback is used and the output modifies the state equation, the noises may become correlated, S_n ≠ 0 for some n
  • There are many forms, but the most useful is to assume that they are correlated only at the same time instant


Relationship to R_y

x̂_{n|n} = ⟨x_n, y⟩ ‖y‖^{-2} y,   y ≜ col{y_0, . . . , y_n}

  • We know that the optimal causal estimator of x_n is given by the normal equations
  • Requires knowledge of two terms
  • The first is easy to solve for:
    ⟨x_n, y_n⟩ = ⟨x_n, H_n x_n + v_n⟩ = ⟨x_n, x_n⟩ H_n^*
  • In order to solve the normal equations, we need to know how R_y = ‖y‖^2 is related to all of the state space model parameters and covariance matrices
  • However, we need to solve for ⟨x_n, x_k⟩ in order to obtain expressions for both ⟨x_n, y⟩ and R_y


Orthogonality Properties

You should be able to show that the following properties are true:

⟨u_n, x_k⟩ = 0,    ⟨v_n, x_k⟩ = 0    for n ≥ k
⟨u_n, y_k⟩ = 0,    ⟨v_n, y_k⟩ = 0    for n > k
⟨u_n, y_k⟩ = S_n,  ⟨v_n, y_k⟩ = R_n  for n = k
⟨u_n, x_0⟩ = 0,    ⟨u_n, x_n⟩ = 0    for n ≥ 0

  • The key idea is to leverage the orthogonality of the initial state and the noise covariance matrices


Solving for the State Covariance Matrix Continued

⟨x_n, x_k⟩ = Φ(n, k) Π_k   for n ≥ k

Since ⟨x, y⟩ = ⟨y, x⟩^* for any random vectors x and y, it follows immediately that

⟨x_n, x_k⟩ = ⟨x_k, x_n⟩^* = (Φ(k, n) Π_n)^* = Π_n Φ^*(k, n)   for n ≤ k

These results can be summarized as

⟨x_n, x_k⟩ = ⎧ Φ(n, k) Π_k      n ≥ k
             ⎨ Π_n              n = k
             ⎩ Π_n Φ^*(k, n)    n ≤ k


Solving for the State Covariance Matrix

Given the state space model

x_{n+1} = F_n x_n + G_n u_n
y_n = H_n x_n + v_n

we can easily show that

Π_{n+1} ≜ ⟨x_{n+1}, x_{n+1}⟩ = ⟨F_n x_n + G_n u_n, F_n x_n + G_n u_n⟩
        = F_n ⟨x_n, x_n⟩ F_n^* + G_n ⟨u_n, u_n⟩ G_n^*
        = F_n Π_n F_n^* + G_n Q_n G_n^*

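The recursion Π_{n+1} = F_n Π_n F_n^* + G_n Q_n G_n^* is easy to iterate numerically. A minimal sketch with illustrative constant matrices (for a stable F, the iteration converges to the steady-state solution of the discrete Lyapunov equation):

```python
import numpy as np

F = np.array([[0.7, 0.2], [0.0, 0.5]])   # stable: eigenvalues inside unit circle
G = np.array([[1.0], [1.0]])
Q = np.array([[0.3]])

Pi = np.eye(2)                           # Pi_0, illustrative initial covariance
for n in range(500):
    # Pi_{n+1} = F Pi_n F^* + G Q G^*
    Pi = F @ Pi @ F.conj().T + G @ Q @ G.conj().T
```

After convergence, Pi is (numerically) a fixed point of the recursion.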

Solving for the Output Covariance Matrix R_y

Given the state space model

x_{n+1} = F_n x_n + G_n u_n
y_n = H_n x_n + v_n

the output covariance matrix is given by

⟨y_n, y_k⟩ = ⟨H_n x_n + v_n, H_k x_k + v_k⟩
           = H_n ⟨x_n, x_k⟩ H_k^* + H_n ⟨x_n, v_k⟩ + ⟨v_n, x_k⟩ H_k^* + ⟨v_n, v_k⟩

The cross-terms ⟨x_n, v_k⟩ and ⟨v_n, x_k⟩ don't necessarily cancel in this case since u_n and v_n are correlated. However, they do cancel if n = k, since then x_n is only a function of past values of u_n. If ⟨v_n, x_k⟩ = 0 and ⟨x_n, v_k⟩ = 0, then

⟨y_n, y_n⟩ = H_n Π_n H_n^* + R_n


Solving for the State Covariance Matrix Continued

Now let us define the state transition matrix

Φ(n, k) ≜ ⎧ F_{n−1} F_{n−2} · · · F_k   n > k
          ⎩ I                          n = k

Then

x_n = Φ(n, k) x_k + A col{u_k, u_{k+1}, . . . , u_{n−1}},   n ≥ k

for some matrix A. In other words, x_n consists of Φ(n, k) x_k plus some linear combination of {u_k, u_{k+1}, . . . , u_{n−1}}, which is uncorrelated with x_k. Thus

⟨x_n, x_k⟩ = ⟨Φ(n, k) x_k + A col{u_k, u_{k+1}, . . . , u_{n−1}}, x_k⟩
           = Φ(n, k) ⟨x_k, x_k⟩ + 0
           = Φ(n, k) Π_k   for n ≥ k
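The state transition matrix is just an ordered product of the F matrices. A small sketch (the time-varying F sequence below is illustrative) that also checks the semigroup property Φ(n, j)Φ(j, k) = Φ(n, k):

```python
import numpy as np

def Phi(F_seq, n, k):
    """State transition matrix Phi(n, k) = F_{n-1} F_{n-2} ... F_k, Phi(k, k) = I."""
    l = F_seq[0].shape[0]
    P = np.eye(l)
    for j in range(k, n):        # left-multiply F_k, then F_{k+1}, ..., F_{n-1}
        P = F_seq[j] @ P
    return P

# Illustrative time-varying sequence F_0, ..., F_4
F_seq = [np.array([[1.0, 0.1 * j], [0.0, 0.9]]) for j in range(5)]
```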


Symmetry in R_y

Consider three observation vectors {y_0, y_1, y_2} produced by the standard state space model:

R_y = ⎡ R_0 + H_0 Π_0 H_0^*   N_0^* H_1^*           N_0^* F_1^* H_2^*    ⎤
      ⎢ H_1 N_0               R_1 + H_1 Π_1 H_1^*   N_1^* H_2^*          ⎥
      ⎣ H_2 F_1 N_0           H_2 N_1               R_2 + H_2 Π_2 H_2^*  ⎦

  • However, it is not yet clear how we might be able to invert R_y efficiently
  • It would be much easier to invert if it were block diagonal
  • The correlation matrix of the innovations is block diagonal
  • But then we need a way of obtaining the innovations and its covariance matrix
  • Stay tuned . . .

Solving for the Output Covariance Matrix R_y Continued

⟨y_n, y_k⟩ = H_n ⟨x_n, x_k⟩ H_k^* + H_n ⟨x_n, v_k⟩ + ⟨v_n, x_k⟩ H_k^* + ⟨v_n, v_k⟩

Now if n > k, ⟨v_n, x_k⟩ = 0 and ⟨v_n, v_k⟩ = 0. So for n > k,

⟨y_n, y_k⟩ = H_n ⟨x_n, x_k⟩ H_k^* + H_n ⟨x_n, v_k⟩
           = H_n Φ(n, k) Π_k H_k^* + H_n Φ(n, k + 1) ⟨F_k x_k + G_k u_k, v_k⟩
           = H_n Φ(n, k) Π_k H_k^* + H_n Φ(n, k + 1) G_k ⟨u_k, v_k⟩
           = H_n Φ(n, k + 1) F_k Π_k H_k^* + H_n Φ(n, k + 1) G_k S_k
           = H_n Φ(n, k + 1) (F_k Π_k H_k^* + G_k S_k)
           = H_n Φ(n, k + 1) N_k

where N_k ≜ F_k Π_k H_k^* + G_k S_k


Wide-Sense Markov (WSM) Processes

  • Markov stochastic processes have the following relationship for their conditional probability density functions (pdfs):
    f_{y_n | y_j, y_k}(y_n | y_j, y_k) = f_{y_n | y_j}(y_n | y_j)   if n > j > k
  • For linear MMSE estimation we only use first- and second-order statistics
  • A random process is wide-sense Markov (WSM) if the linear MMSE estimator of x_n given L{x_{n−1}, . . . , x_0} equals the linear MMSE estimator of x_n given L{x_{n−1}}


Solving for the Output Covariance Matrix R_y Finished

Again we use ⟨x, y⟩ = ⟨y, x⟩^* for any random vectors x and y to complete the solution:

⟨y_n, y_k⟩ = ⎧ H_n Φ(n, k + 1) N_k          n > k
             ⎨ H_n Π_n H_n^* + R_n          n = k
             ⎩ N_n^* Φ^*(k, n + 1) H_k^*    n < k

where N_n ≜ F_n Π_n H_n^* + G_n S_n

It may not be apparent, but this has introduced exactly the type of symmetry in R_y that we needed.


WSM Processes Have State Space Model Representations Continued

Thus, given a wide-sense Markov process x_n, we can define a state space model

x_{n+1} = F_n x_n + G_n u_n

⟨ col{x_0, u_n}, col{x_0, u_k} ⟩ = ⎡ Π_0   0          ⎤
                                   ⎣ 0     Q_n δ_{nk} ⎦

  • Thus a process {x_n, n ≥ 0} is WSM if and only if it has a forwards Markovian state space representation of the form above
  • However, it turns out that y_n is not WSM
  • In general, a sum of WSM processes is not WSM
  • Processes such as y_n = H_n x_n + v_n are called projections of Markov processes
  • However, the aggregate process obtained by considering col{x_n, y_n} is WSM


State Space Models Are WSM Processes

x_{n+1} = F_n x_n + G_n u_n

⟨ col{x_0, u_n}, col{x_0, u_k} ⟩ = ⎡ Π_0   0          ⎤
                                   ⎣ 0     Q_n δ_{nk} ⎦

  • Since x_n ∈ L{x_0, u_0, . . . , u_{n−1}}, we know that u_n ⊥ x_k for n ≥ k ≥ 0
  • Thus the orthogonal projection of x_{n+1} onto L{x_0, . . . , x_n} is given by
    x̂_{n+1|{x_0,...,x_n}} = F_n x̂_{n|{x_0,...,x_n}} + G_n û_{n|{x_0,...,x_n}}
                          = F_n x_n + 0
                          = F_n x̂_{n|x_n}
                          = x̂_{n+1|x_n}


Examples of State Space Models

  • Autoregressive (AR) processes
  • ARMA processes
  • AR processes revisited
  • Physical models
  • Nonlinear models

WSM Processes Have State Space Models

Suppose we have a wide-sense Markov process x_n. Consider its innovations process

e_0 = x_0
e_{n+1} = x_{n+1} − K_{0,n} x_n

Now define u_n ≜ e_{n+1}, F_n ≜ K_{0,n}, and Π_n ≜ ‖x_n‖^2, so that

x_{n+1} = F_n x_n + u_n

Now since the innovations are orthogonal,

⟨u_n, u_k⟩ = ⟨e_{n+1}, e_{k+1}⟩ = 0   for n ≠ k
‖u_n‖^2 = ‖e_{n+1}‖^2 = Π_{n+1} − F_n Π_n F_n^* ≜ Q_n

Since e_{n+1} ⊥ e_0, u_n = e_{n+1}, and e_0 = x_0, we have u_n ⊥ x_0.


Autoregressive Processes Continued

y_n = [ 1  0  · · ·  0 ] col{y_n, y_{n−1}, y_{n−2}, . . . , y_{n−ℓ+1}}

that is,

y_n = H x_n   with   H ≜ [ 1  0  · · ·  0 ]


Autoregressive Processes

y_{n+1} = u_n + Σ_{k=0}^{ℓ−1} a_k y_{n−k}

Note that this differs from the usual model of AR processes,

x(n) = w(n) + Σ_{k=1}^{p} a_k x(n − k),

in several ways:

  • The white noise process is denoted as u_n
  • The AR process is denoted as y_n
  • y_{n+1} is a function of u_n, instead of u_{n+1} (non-minimum phase)
  • The order of the model is denoted as ℓ
  • y_n can be a vector-valued process, though I will still denote the coefficients with lower-case letters a_k


Autoregressive Processes Continued

Thus with the definition of a state vector

x_n ≜ col{y_n, y_{n−1}, . . . , y_{n−ℓ+1}}

we can write an autoregressive process in state space form:

x_{n+1} = F x_n + G u_n
y_n = H x_n

  • The companion matrix F is stable if and only if all its eigenvalues are less than one in magnitude
  • This is equivalent to requiring that
    a(z) ≜ z^ℓ − a_0 z^{ℓ−1} − · · · − a_{ℓ−1}
    has all of its roots inside the unit circle
  • Note that this could easily be generalized to the case where any of the model parameters are time-varying


AR Process State Space Model

Let us define x_n ≜ col{y_n, y_{n−1}, . . . , y_{n−ℓ+1}}. Then

⎡ y_{n+1}   ⎤   ⎡ a_0   a_1   · · ·   a_{ℓ−2}   a_{ℓ−1} ⎤ ⎡ y_n       ⎤   ⎡ 1 ⎤
⎢ y_n       ⎥   ⎢ 1     0     · · ·   0         0       ⎥ ⎢ y_{n−1}   ⎥   ⎢ 0 ⎥
⎢ y_{n−1}   ⎥ = ⎢ 0     1     · · ·   0         0       ⎥ ⎢ y_{n−2}   ⎥ + ⎢ 0 ⎥ u_n
⎢ ⋮         ⎥   ⎢ ⋮     ⋱     ⋱       ⋮         ⋮       ⎥ ⎢ ⋮         ⎥   ⎢ ⋮ ⎥
⎣ y_{n−ℓ+2} ⎦   ⎣ 0     0     · · ·   1         0       ⎦ ⎣ y_{n−ℓ+1} ⎦   ⎣ 0 ⎦

x_{n+1} = F x_n + G u_n

Note that in the vector case, y_n ∈ C^{p×1}, F and G are block matrices.
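The companion form above is straightforward to build and test numerically. The sketch below uses illustrative coefficients (an assumption; here Σ|a_k| < 1, which guarantees all eigenvalues lie inside the unit circle) and checks the stability condition via the eigenvalues of F:

```python
import numpy as np

a = np.array([0.5, -0.3, 0.1])     # AR coefficients a_0 .. a_{l-1} (illustrative)
l = len(a)

# Companion matrix: first row holds the coefficients, identity on the subdiagonal
F = np.zeros((l, l))
F[0, :] = a
F[1:, :-1] = np.eye(l - 1)

G = np.zeros((l, 1)); G[0, 0] = 1.0   # noise enters only the first component
H = np.zeros((1, l)); H[0, 0] = 1.0   # output is the first state component

# Stable iff all eigenvalues of F are inside the unit circle
stable = bool(np.all(np.abs(np.linalg.eigvals(F)) < 1))
```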


AR Processes Revisited

x_n ≜ col{y_n, y_{n−1}, . . . , y_{n−ℓ+1}}

  • The AR model we set up earlier is of little practical utility
  • Just like the earlier case, all of the model parameters must be known
  • The state of the system is trivial to estimate once ℓ observations of the output process have been made
  • Recall that the state of the system usually consists of the parameters that you want (or have) to estimate
  • For AR processes, the parameters we're interested in estimating are the model coefficients {a_0, . . . , a_{ℓ−1}}


ARMA Processes

y_{n+1} = a_0 y_n + · · · + a_{ℓ−1} y_{n−ℓ+1} + b_0 u_n + · · · + b_{ℓ−1} u_{n−ℓ+1}

x_{n+1} ≜ a_0 x_n + · · · + a_{ℓ−1} x_{n−ℓ+1} + u_n
y_{n+1} = b_0 x_n + · · · + b_{ℓ−1} x_{n−ℓ+1}

x_{n+1} = F x_n + G u_n
y_n = H x_n

  • An ARMA process can be expressed as white noise filtered by a cascade of an all-pole (AP) system followed by an all-zero (AZ) system
  • We already have an expression for the output of the AP system
  • The output of the AZ system is then just a linear combination of outputs of the AP system

AR Process Alternative Formulation

Let us define the state as the parameters, x_n ≜ [ a_1  a_2  · · ·  a_ℓ ]^T. Then the output of the AR process is given by

y_n = Σ_{k=1}^{ℓ} a_k y_{n−k} + v_n = H_n x_n + v_n

where H_n ≜ [ y_{n−1}  y_{n−2}  · · ·  y_{n−ℓ} ]

  • Now the state represents the parameters we're interested in estimating
  • This is a time-varying model
  • But how do we write an expression for the state update equation?

ARMA in State Space Form

x_{n+1} = ⎡ a_0   a_1   · · ·   a_{ℓ−2}   a_{ℓ−1} ⎤       ⎡ 1 ⎤
          ⎢ 1     0     · · ·   0         0       ⎥       ⎢ 0 ⎥
          ⎢ ⋮     ⋱     ⋱       ⋮         ⋮       ⎥ x_n + ⎢ ⋮ ⎥ u_n
          ⎣ 0     0     · · ·   1         0       ⎦       ⎣ 0 ⎦

y_n = [ b_0  · · ·  b_{ℓ−1} ] x_n

x_{n+1} = F x_n + G u_n
y_n = H x_n


Random Walk Models

General model:                 Random walk model:
x_{n+1} = F_n x_n + G_n u_n    x_{n+1} = x_n + u_n
y_n = H_n x_n + v_n            y_n = H_n x_n + v_n

  • F_n = I, as for constant parameter models
  • The state at time n + 1 is the same as x_n plus a random perturbation
  • This is a good model of processes in which the model parameters slowly drift over time
  • The degree of the drift is controlled by the process noise covariance Q_n
  • Often Q_n = αI, where the scalar α is chosen by the user
  • The model is unstable, but the estimates may (usually) still converge!
  • Perhaps the most widely used model
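A short simulation illustrates why a small α gives slowly drifting parameters (the dimensions, initial state, and α below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N, l = 300, 2
alpha = 1e-3                    # Q_n = alpha * I; small alpha -> slow drift

x = np.zeros((N + 1, l))
x[0] = [1.0, -0.5]              # illustrative initial parameter values
for n in range(N):
    # x_{n+1} = x_n + u_n with u_n ~ N(0, alpha * I)
    x[n + 1] = x[n] + rng.normal(0.0, np.sqrt(alpha), size=l)
```

With α = 1e-3, the parameters wander only slightly over 300 steps, which is the "slow drift" regime the model is meant to capture.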

Designing State Space Models

x_{n+1} = F_n x_n + G_n u_n
y_n = H_n x_n + v_n

  • In many signal processing applications you can write a mathematical expression that relates the observed signal to the parameters of interest
  • However, there is often no obvious state space model that can be obtained from first principles or domain knowledge of the problem
  • The Kalman filter approach to linear estimation is still often useful in these circumstances
  • There are several models for the state space equations that are frequently used in these cases


Unstable Deterministic Models

General model:                 Unstable deterministic model:
x_{n+1} = F_n x_n + G_n u_n    x_{n+1} = λ^{−1/2} x_n
y_n = H_n x_n + v_n            y_n = H_n x_n + v_n

  • Oddly, the adaptive filter algorithm recursive least squares is equivalent to using the Kalman filter with an unstable state space model
  • The user-specified scalar parameter λ controls the bias-variance tradeoff
    – λ = 1: Equivalent to regularized least squares
    – 0 < λ < 1: Equivalent to recursive least squares
    – λ = 0: ?
  • The scalar measurement noise in this case is set to R_n = 1
  • It turns out that if R_n = αI for any scalar, positive parameter α, the same solution is obtained! (prove this)


Constant Parameter Models

General model:                 Constant parameter model:
x_{n+1} = F_n x_n + G_n u_n    x_{n+1} = x_n
y_n = H_n x_n + v_n            y_n = H_n x_n + v_n

  • F_n = I and G_n = 0 (or Q_n = 0)
  • The state at time n + 1 is the same as x_n
  • The estimate will be time-varying because we can improve the estimate of x as we gain more observations y_n
  • Example: Suppose we are trying to estimate the mean of a WN process
  • Note that there will be a transition from the a priori estimate x̂_{0|−1} (provided by the user) to the unbiased estimate x as n → ∞
  • In this sense, the KF can be expressed as a type of linear Bayesian estimation
  • Could be used for a quasi-stationary AR parameter estimate

Continuous-Time to Discrete-Time

Suppose our sample interval is given by T_s = 1/f_s. Then

ẋ(t) = F(t)x(t) + G(t)u(t)

x(t + T_s) = x(t) + ∫_t^{t+T_s} ẋ(τ) dτ
           = x(t) + ∫_t^{t+T_s} [F(τ)x(τ) + G(τ)u(τ)] dτ
           = x(t) + ∫_t^{t+T_s} F(τ)x(τ) dτ + ∫_t^{t+T_s} G(τ)u(τ) dτ


White Noise Models

General model:                 White noise model:
x_{n+1} = F_n x_n + G_n u_n    x_{n+1} = G_n u_n
y_n = H_n x_n + v_n            y_n = H_n x_n + v_n

  • The state is just a white noise process
  • In this case the state space model contains no dynamics or memory that can be used to help estimate x_n
  • No advantage over the more general linear estimation problem: given y_n = H_n x_n + v_n, find the best linear estimate of x_n:
    x̂_n = ⟨x_n, y_n⟩ ‖y_n‖^{−2} y_n
    ⟨x_n, y_n⟩ = Π_n H_n^*
    ‖y_n‖^2 = H_n Π_n H_n^* + R_n
    x̂_n = Π_n H_n^* (H_n Π_n H_n^* + R_n)^{−1} y_n
  • Not used in practice
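The one-shot estimate x̂_n = Π_n H_n^* (H_n Π_n H_n^* + R_n)^{−1} y_n is a single linear solve. A minimal numerical sketch (Π, H, R, and the observation y below are illustrative values, not from the slides):

```python
import numpy as np

H = np.array([[1.0, 0.5]])                  # H_n, 1x2 observation matrix
Pi = np.array([[2.0, 0.3], [0.3, 1.0]])     # Pi_n = Cov(x_n)
R = np.array([[0.4]])                       # measurement noise covariance
y = np.array([1.2])                         # observed y_n

# Gain K = Pi H* (H Pi H* + R)^{-1}, then x_hat = K y
S = H @ Pi @ H.conj().T + R                 # innovation covariance, ||y_n||^2
K = Pi @ H.conj().T @ np.linalg.inv(S)
x_hat = K @ y
```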

Continuous-Time to Discrete-Time Coarse Approximations

x(t + T_s) = x(t) + ∫_t^{t+T_s} F(τ)x(τ) dτ + ∫_t^{t+T_s} G(τ)u(τ) dτ

Now if T_s is small enough, a coarse approximation of the CT state space equations can be obtained by assuming x(τ), F(τ), and G(τ) are constant over the short interval t ≤ τ < t + T_s:

x(τ) ≈ x_n   F(τ) ≈ F_n   G(τ) ≈ G_n   for t = nT_s ≤ τ < t + T_s = (n + 1)T_s

Let us also define

u_n ≜ ∫_t^{t+T_s} u(τ) dτ


Continuous-Time State Space Models

The continuous-time state space model is given by

ẋ(t) = F(t)x(t) + G(t)u(t)
y(t) = H(t)x(t) + v(t)

⟨ col{x(0), u(t), v(t)}, col{x(0), u(τ), v(τ), 1} ⟩ =
  ⎡ Π_0   0                 0                 0 ⎤
  ⎢ 0     Q(t)δ(t − τ)      S(t)δ(t − τ)      0 ⎥
  ⎣ 0     S(t)^*δ(t − τ)    R(t)δ(t − τ)      0 ⎦

  • ẋ(t) is the derivative with respect to time of each element of x(t)
  • As before, all RVs have zero mean
  • Treatment of the noise processes is tricky (they have infinite power and bandwidth)
  • Will not discuss here
  • Our only interest is to determine how the model terms relate to the discrete-time model


CT to DT Coarse Approximations: Measurement Noise

Let us relax the requirement that ⟨v(t), v(τ)⟩ = R(t)δ(t − τ), so that v(t) is bandlimited white noise with finite power, ‖v(t)‖^2 = R(t) < ∞, such that

⟨v(nT_s), v(kT_s)⟩ = R(nT_s)δ_{nk} = R_n δ_{nk}   for any integers n, k

This is a reasonable model if

  1. The signal component H(t)x(t) of y(t) is bandlimited
  2. An anti-aliasing filter (AAF) is applied to y(t) prior to sampling

As usual, the AAF should eliminate high-frequency components of v(t) without causing aliasing of the signal component.


CT to DT Coarse Approximations: State Updates

Our approximations for nT_s ≤ t < (n + 1)T_s are

x(t) ≈ x(nT_s)   F(t) ≈ F(nT_s)   G(t) ≈ G(nT_s)

Then we have

x_{n+1} ≜ x((n + 1)T_s)
        = x(nT_s) + ∫_{nT_s}^{(n+1)T_s} F(τ)x(τ) dτ + ∫_{nT_s}^{(n+1)T_s} G(τ)u(τ) dτ
        ≈ x(nT_s) + T_s F(nT_s)x(nT_s) + G(nT_s)u_n
        = (I + T_s F(nT_s)) x(nT_s) + G(nT_s)u_n
        = F_n x_n + G_n u_n

where we have defined F_n ≜ I + T_s F(nT_s) and G_n ≜ G(nT_s)
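The coarse (Euler-style) discretization of the deterministic part is a one-line computation. A sketch with an illustrative continuous-time F and G (a lightly damped oscillator, an assumption for the example):

```python
import numpy as np

Ts = 0.01                                      # sample interval
F_ct = np.array([[0.0, 1.0], [-4.0, -0.4]])    # continuous-time F (illustrative)
G_ct = np.array([[0.0], [1.0]])                # continuous-time G (illustrative)

F_d = np.eye(2) + Ts * F_ct   # F_n = I + Ts F(nTs), coarse approximation
G_d = G_ct                    # G_n = G(nTs); u_n absorbs the integral of u(tau)
```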


CT to DT Coarse Approximations: Noise Covariance Matrices

⟨x_0, x_0⟩ = ⟨x(0), x(0)⟩ = Π_0
⟨v_n, v_k⟩ = ⟨v(nT_s), v(kT_s)⟩ = R_n δ_{nk}

The process noise is a little more tricky:

⟨u_n, u_k⟩ = ⟨ ∫_{nT_s}^{(n+1)T_s} u(t) dt, ∫_{kT_s}^{(k+1)T_s} u(τ) dτ ⟩
           = ∫_{nT_s}^{(n+1)T_s} ∫_{kT_s}^{(k+1)T_s} ⟨u(t), u(τ)⟩ dτ dt
           = ∫_{nT_s}^{(n+1)T_s} ∫_{kT_s}^{(k+1)T_s} Q(t)δ(t − τ) dτ dt
           = δ_{nk} ∫_{nT_s}^{(n+1)T_s} Q(t) dt


CT to DT Coarse Approximations: Measurement Equation

y(t) = H(t)x(t) + v(t)   →   y_n = H_n x_n + v_n

Let us define H_n ≜ H(nT_s) and v_n ≜ v(nT_s).


CT to DT Better Approximations

ẋ(t) = F(t)x(t) + G(t)u(t)
y(t) = H(t)x(t) + v(t)

  • A better approximation can be obtained using a more exact model of how the state evolves over a sampling interval T_s
  • Requires knowledge of matrix exponentials
  • This material is from [1]

CT to DT Coarse Approximations: Process Noise

Now if we make the approximation Q(τ) = Q(t) for t ≤ τ < t + T_s, we have

⟨u_n, u_k⟩ = δ_{nk} ∫_{nT_s}^{(n+1)T_s} Q(t) dt ≈ T_s Q(nT_s) δ_{nk} = Q_n δ_{nk}

where we defined Q_n ≜ T_s Q(nT_s). Notice that the covariance Q_n scales linearly with the sampling interval T_s.


Matrix Exponentials

Consider the power series for an exponential, generalized to a matrix exponential:

e^{At} ≜ I + At + (1/2!)A^2 t^2 + · · · + (1/k!)A^k t^k + · · · = Σ_{k=0}^{∞} A^k t^k / k!

If the series converges, we can differentiate it term by term to give

d/dt e^{At} = A + A^2 t + (1/2!)A^3 t^2 + · · · + (1/(k−1)!)A^k t^{k−1} + · · ·
            = A [ I + At + (1/2!)A^2 t^2 + · · · + (1/k!)A^k t^k + · · · ] = A e^{At}
            = [ I + At + (1/2!)A^2 t^2 + · · · + (1/k!)A^k t^k + · · · ] A = e^{At} A


CT to DT Coarse Approximations: Summary

ẋ(t) = F(t)x(t) + G(t)u(t)   →   x_{n+1} = F_n x_n + G_n u_n
y(t) = H(t)x(t) + v(t)       →   y_n = H_n x_n + v_n

x_n ≜ x(nT_s)                y_n ≜ y(nT_s)
F_n ≜ I + T_s F(nT_s)        H_n ≜ H(nT_s)
G_n ≜ G(nT_s)                u_n ≜ ∫_{nT_s}^{(n+1)T_s} u(τ) dτ
v_n ≜ v(nT_s)                Q_n ≜ T_s Q(nT_s)
R_n ≜ R(nT_s)

  • This DT approximation of CT state space systems is only accurate if the noise covariance matrices, model matrices, and state change very little over the period of a sampling interval T_s
  • May require large sample rates and excessive computation

CT State Space Approximations Continued

e^{−F(nT_s)t} [ẋ(t) − F(nT_s)x(t)] ≈ e^{−F(nT_s)t} G(nT_s)u(t)

d/dt [ e^{−F(nT_s)t} x(t) ] = e^{−F(nT_s)t} G(nT_s)u(t)

∫_t^{t+T_s} d/dτ [ e^{−F(nT_s)τ} x(τ) ] dτ = ∫_t^{t+T_s} e^{−F(nT_s)τ} G(nT_s)u(τ) dτ

e^{−F(nT_s)(t+T_s)} x(t + T_s) − e^{−F(nT_s)t} x(t) = ∫_t^{t+T_s} e^{−F(nT_s)τ} G(nT_s)u(τ) dτ

Solving for x(t + T_s) then gives us

e^{−F(nT_s)(t+T_s)} x(t + T_s) = e^{−F(nT_s)t} x(t) + ∫_t^{t+T_s} e^{−F(nT_s)τ} G(nT_s)u(τ) dτ

x(t + T_s) = e^{F(nT_s)T_s} x(t) + e^{F(nT_s)(t+T_s)} ∫_t^{t+T_s} e^{−F(nT_s)τ} G(nT_s)u(τ) dτ


Matrix Exponential Properties

d/dt e^{At} = A e^{At} = e^{At} A

Now consider

e^{At} e^{Aτ} = ( Σ_{k=0}^{∞} A^k t^k / k! ) ( Σ_{k=0}^{∞} A^k τ^k / k! )
             = Σ_{k=0}^{∞} A^k Σ_{i=0}^{k} t^i τ^{k−i} / (i!(k − i)!)
             = Σ_{k=0}^{∞} A^k (t + τ)^k / k!
             = e^{A(t+τ)}

Thus, if τ = −t we have

e^{At} e^{−At} = e^{−At} e^{At} = e^{A(t−t)} = I

Note, however, that e^{(A+B)t} = e^{At} e^{Bt} if and only if AB = BA


CT State Space Approximations Continued

If we evaluate

x(t + T_s) = e^{F(nT_s)T_s} x(t) + e^{F(nT_s)(t+T_s)} ∫_t^{t+T_s} e^{−F(nT_s)τ} G(nT_s)u(τ) dτ

at t = nT_s and use the definition x_n ≜ x(nT_s), we get

x_{n+1} = e^{F(nT_s)T_s} x_n + e^{F(nT_s)(n+1)T_s} ∫_{nT_s}^{(n+1)T_s} e^{−F(nT_s)τ} G(nT_s)u(τ) dτ

Suppose for a moment that we define

G_n ≜ I
u_n ≜ e^{F(nT_s)(n+1)T_s} ∫_{nT_s}^{(n+1)T_s} e^{−F(nT_s)τ} G(nT_s)u(τ) dτ

What is the covariance of u_n?


CT State Space Approximations

Let us assume F(t) and G(t) are constant over the interval nT_s ≤ t < (n + 1)T_s, as before:

F(t) ≈ F(nT_s)   G(t) ≈ G(nT_s)

ẋ(t) = F(t)x(t) + G(t)u(t)
ẋ(t) − F(t)x(t) = G(t)u(t)
e^{−F(t)t} [ẋ(t) − F(t)x(t)] = e^{−F(t)t} G(t)u(t)

Now over the interval nT_s ≤ t < (n + 1)T_s we have the approximation

e^{−F(nT_s)t} [ẋ(t) − F(nT_s)x(t)] ≈ e^{−F(nT_s)t} G(nT_s)u(t)


CT State Space Approximations Continued

Then we have

x_{n+1} ≈ e^{F(nT_s)T_s} x_n + e^{F(nT_s)(n+1)T_s} ∫_{nT_s}^{(n+1)T_s} e^{−F(nT_s)τ} G(nT_s)u(τ) dτ
        = e^{F(nT_s)T_s} x_n + u_n
        = F_n x_n + G_n u_n

where we have defined F_n ≜ e^{F(nT_s)T_s} and G_n ≜ I
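For the deterministic part, F_n = e^{F(nTs)Ts} can be computed with the matrix exponential series from the earlier slide. The sketch below again uses the illustrative rotation generator (an assumption chosen because its exponential has a known closed form), so the exact discretization can be compared against the coarse I + Ts F one:

```python
import numpy as np

Ts = 0.1
F_ct = np.array([[0.0, 1.0], [-1.0, 0.0]])   # illustrative constant F(t)

# F_n = e^{F Ts} via a truncated power series (F assumed constant over the interval)
F_d = np.zeros((2, 2))
P = np.eye(2)
for k in range(1, 31):
    F_d = F_d + P                 # accumulate (F Ts)^{k-1} / (k-1)!
    P = P @ (F_ct * Ts) / k

F_euler = np.eye(2) + Ts * F_ct   # coarse approximation, for comparison
```

For this F, the exact F_d is a rotation (determinant exactly 1), while the Euler version inflates the state slightly (determinant 1 + Ts^2), which is one concrete way the coarse approximation degrades.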


CT State Space Approximations: Process Noise Covariance

⟨u_n, u_k⟩ = ⟨ e^{F(nT_s)(n+1)T_s} ∫_{nT_s}^{(n+1)T_s} e^{−F(nT_s)τ} G(nT_s)u(τ) dτ,
               e^{F(kT_s)(k+1)T_s} ∫_{kT_s}^{(k+1)T_s} e^{−F(kT_s)s} G(kT_s)u(s) ds ⟩

  = e^{F(nT_s)(n+1)T_s} ∫_{nT_s}^{(n+1)T_s} ∫_{kT_s}^{(k+1)T_s} e^{−F(nT_s)τ} G(nT_s) ⟨u(τ), u(s)⟩ G(kT_s)^* (e^{−F(kT_s)s})^* ds dτ (e^{F(kT_s)(k+1)T_s})^*

  = e^{F(nT_s)(n+1)T_s} ∫_{nT_s}^{(n+1)T_s} ∫_{kT_s}^{(k+1)T_s} e^{−F(nT_s)τ} G(nT_s) Q(τ)δ(τ − s) G(kT_s)^* (e^{−F(kT_s)s})^* ds dτ (e^{F(kT_s)(k+1)T_s})^*


CT State Space Approximations Continued

ẋ(t) = F(t)x(t) + G(t)u(t)   →   x_{n+1} = F_n x_n + G_n u_n
y(t) = H(t)x(t) + v(t)       →   y_n = H_n x_n + v_n

x_n ≜ x(nT_s)          y_n ≜ y(nT_s)
F_n ≜ e^{F(nT_s)T_s}   H_n ≜ H(nT_s)
G_n ≜ I                v_n ≜ v(nT_s)
u_n ≜ e^{F(nT_s)(n+1)T_s} ∫_{nT_s}^{(n+1)T_s} e^{−F(nT_s)τ} G(nT_s)u(τ) dτ
Q_n ≜ e^{F(nT_s)(n+1)T_s} [ ∫_{nT_s}^{(n+1)T_s} e^{−F(nT_s)τ} G(nT_s) Q(τ) G(nT_s)^* (e^{−F(nT_s)τ})^* dτ ] (e^{F(nT_s)(n+1)T_s})^*
R_n ≜ R(nT_s)


CT State Space Approximations: Process Noise Covariance

⟨u_n, u_k⟩ = δ_{nk} e^{F(nT_s)(n+1)T_s} [ ∫_{nT_s}^{(n+1)T_s} e^{−F(nT_s)τ} G(nT_s) Q(τ) G(nT_s)^* (e^{−F(nT_s)τ})^* dτ ] (e^{F(nT_s)(n+1)T_s})^* ≜ Q_n δ_{nk}

  • Unlike the measurement noise, we cannot filter the process noise
  • However, even if the process noise is continuous-time white noise, the discrete-time noise has finite variance!
  • It is not clear how to partition G(t)u(t) into G_n u_n

The Nonlinear Functions

The functions f_n(x_n), g_n(x_n), and h_n(x_n) are multivariate, nonlinear, time-varying functions:

f_n(x_n) ∈ C^{ℓ×1}:   f_n(x_n) = col{f_{n,1}(x_n), f_{n,2}(x_n), . . . , f_{n,ℓ}(x_n)}

g_n(x_n) ∈ C^{ℓ×m}:   g_n(x_n) = ⎡ g_{n,11}(x_n)   · · ·   g_{n,1m}(x_n) ⎤
                                 ⎢ g_{n,21}(x_n)   · · ·   g_{n,2m}(x_n) ⎥
                                 ⎢ ⋮               ⋱       ⋮             ⎥
                                 ⎣ g_{n,ℓ1}(x_n)   · · ·   g_{n,ℓm}(x_n) ⎦

h_n(x_n) ∈ C^{p×1}:   h_n(x_n) = col{h_{n,1}(x_n), h_{n,2}(x_n), . . . , h_{n,p}(x_n)}


  • This approximation is a more accurate representation of the CT model
  • Both approximations can be generalized to the nonlinear case easily


Nonlinear State Space Models Comments

x_{n+1} = f_n(x_n) + g_n(x_n)u_n
y_n = h_n(x_n) + v_n

  • There are other types of nonlinear state space models; this is one of the most general
  • Note that the noise is still additive for both the state update and measurement equations
  • We will see that the optimal estimator uses Taylor series approximations
  • This only works if the nonlinear functions are smooth
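Simulating a nonlinear model of this form follows the same pattern as the linear case, with F_n x_n and H_n x_n replaced by function evaluations. A minimal scalar sketch; the particular f, g, h, noise levels, and initial state below are illustrative assumptions, chosen only to show the additive-noise structure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative scalar nonlinear model (not from the slides); note both
# functions are smooth, as the Taylor-series-based estimators require
def f(n, x): return 0.9 * x + 0.1 * np.tanh(x)   # f_n(x_n), state map
def g(n, x): return 1.0                          # g_n(x_n), noise gain
def h(n, x): return x ** 2                       # h_n(x_n), measurement map

N = 100
x = np.zeros(N + 1); x[0] = 0.5
y = np.zeros(N)
for n in range(N):
    u = rng.normal(0.0, 0.1)                     # process noise u_n
    v = rng.normal(0.0, 0.05)                    # measurement noise v_n
    x[n + 1] = f(n, x[n]) + g(n, x[n]) * u       # additive process noise
    y[n] = h(n, x[n]) + v                        # additive measurement noise
```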

Nonlinear State Space Models

x_{n+1} = f_n(x_n) + g_n(x_n)u_n
y_n = h_n(x_n) + v_n

where

⟨ col{u_n, v_n, x_0 − x̄_0}, col{u_m, v_m, x_0 − x̄_0, 1} ⟩ =
  ⎡ Q_n δ_{nm}   0            0     0 ⎤
  ⎢ 0            R_n δ_{nm}   0     0 ⎥
  ⎣ 0            0            Π_0   0 ⎦

  • Here the matrices F_n, G_n, and H_n have been replaced with nonlinear functions of the state
  • Does not assume zero mean
  • Does assume no interaction between process and measurement noise, ⟨u_n, v_k⟩ = 0, but could probably be generalized to the earlier case of ⟨u_n, v_k⟩ = δ_{nk} S_n


To Do

  • Give examples of processes modeled in MATLAB
  • Give complete summary of the more exact CT to DT conversion
  • Work out what the process noise power is in the exact CT to DT conversion


Example 2: Random Walk Model of Position Suppose we have an accelerometer.

  • To be completed

References

[1] Katsuhiko Ogata. Discrete-time Control Systems. Prentice-Hall, Inc., 1995.


Summary

  • State space models can often be obtained directly from the physical system (by Newton's laws, Kirchhoff's laws, etc.)
  • State space models are wide-sense Markov (WSM)
  • The nonstationary case is nearly as easy to handle as the stationary case: a straightforward generalization
