SLIDE 1

Rational Minimax Filtering

Arthur J. Krener (ajkrener@nps.edu) and Wei Kang (wkang@nps.edu)

Research supported in part by AFOSR and NSF.

Dedicated to our esteemed colleague

Eduardo Sontag

on the occasion of his 60th birthday.
SLIDES 2-4

Kalman Filtering

We assume that the dynamics and measurement processes can be modeled by a linear system
$$\dot{x} = Ax, \qquad y = Cx$$
The state is $x \in \mathbb{R}^n$, the measurement is $y \in \mathbb{R}^p$, and $p \le n$. The model is said to be observable (more precisely, reconstructable) if the past measurements $y(s)$, $s \le t$, uniquely determine the current state $x(t)$. One might try to reconstruct the state by differentiating the measurements:
$$y(t) = Cx(t), \qquad \dot{y}(t) = CAx(t), \qquad \ddot{y}(t) = CA^2 x(t), \qquad \ldots$$

SLIDES 5-10

Kalman Filtering

If the matrix
$$\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}$$
has full column rank $n$ then the system is observable. But the model is probably not completely accurate:

  • The process is not linear.
  • There are unmodeled dynamics.
  • There are unknown exogenous inputs affecting the dynamics.
  • There are unknown exogenous noises affecting the measurements.
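As a quick numerical illustration (an added sketch, not part of the talk), the rank test is easy to run; the double-integrator matrices below are just an example.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1)."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Example: double integrator with a position measurement.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
O = observability_matrix(A, C)
print(np.linalg.matrix_rank(O) == A.shape[0])   # True -> observable
```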

SLIDES 11-15

Kalman Filtering

To cope with these inaccuracies Kalman added driving and observation noises to the model:
$$\dot{x} = Ax + Bv, \qquad y = Cx + Dw$$
He assumed that $v, w$ are standard white Gaussian noises (WGN) of dimensions $m, p$.

What is standard white Gaussian noise? It is the formal derivative of a standard Wiener process and is mathematically characterized by the following property: if $f \in L^2([t_1, t_2], \mathbb{R}^m)$ then the random variable
$$X = \int_{t_1}^{t_2} f'(t)\, w(t)\, dt$$
is Gaussian with zero mean and variance
$$E(X^2) = \int_{t_1}^{t_2} |f(t)|^2\, dt$$
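A quick simulation (a sketch added here, not from the talk) illustrates this property: approximating the stochastic integral by sums against Wiener increments reproduces the predicted variance.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, trials = 1e-2, 20000
t = np.arange(0.0, 1.0, dt)
f = np.sin(2 * np.pi * t)            # any scalar L^2 test function
# X = int f(t) w(t) dt is approximated by sum_k f(t_k) dW_k,
# where dW_k ~ N(0, dt) are increments of a standard Wiener process.
dW = rng.normal(0.0, np.sqrt(dt), size=(trials, t.size))
X = dW @ f
print(X.mean(), X.var())             # ~0 and ~0.5
print((f**2).sum() * dt)             # predicted variance: int f^2 dt = 0.5
```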

SLIDES 16-24

Kalman Filtering

Why white Gaussian noise? There are several possible answers.

  • Because it is "real".
  • To keep us from doing something dumb like differentiating the output to reconstruct the state.
  • This requires that there is noise in every measurement, so we assume that D is invertible.
  • Because it has a constant power spectral density at all frequencies.
  • Unfortunately this means that it has infinite power.
  • Since we don't know the errors in the dynamics and measurements, modeling them as white is appropriate.
  • This overlooks the fact that we have to choose B, D, which fixes the covariances of the errors.
  • Because standard white Gaussian noise is relatively easy to handle mathematically in a linear setting.

SLIDES 25-36

Kalman Filtering

For simplicity of exposition we are restricting the discussion to continuous time Kalman filtering of a time invariant linear system where the measurements are available over the infinite past. There are generalizations and extensions to handle the following.

  • Discrete time systems, $x(t+1) = Ax(t), \ldots$
  • Time varying linear systems, $A = A(t)$, $C = C(t), \ldots$
  • A finite interval of measurements $y(s)$, $t_0 \le s \le t$
  • Partial knowledge of the initial state, $x(t_0) \approx N(\hat{x}^0, P^0)$
  • Known bias in the noises, $Ev(t) \ne 0$, $Ew(t) \ne 0$
  • Correlation between the noises.
  • An additional known input.
  • Extended Kalman filters for nonlinear systems.
  • Unscented Kalman filters for nonlinear systems.
  • Particle filters for nonlinear systems.
SLIDES 37-40

Derivation of the Kalman Filter

$$\dot{x} = Ax + Bv, \qquad y = Cx + Dw$$
We assume that the filter for $x_i(t)$ is a weighted sum of the past observations. The estimate is
$$\hat{x}_i(t) = \int_0^\infty k(s)\, y(t-s)\, ds$$
We wish to choose the weighting pattern $k(s) \in \mathbb{R}^{1 \times p}$ to minimize $E(\tilde{x}_i(t))^2$ where $\tilde{x}_i(t) = x_i(t) - \hat{x}_i(t)$. Given a $k(s)$, define $h(s) \in \mathbb{R}^{1 \times n}$ by
$$\dot{h} = hA + kC, \qquad h(0) = -e_i$$
where $e_i$ is the $i$th unit row vector.

SLIDE 41

Derivation of the Kalman Filter

$$\begin{aligned}
\hat{x}_i(t) &= \int_0^\infty k(s)\, y(t-s)\, ds = \int_0^\infty k(s)Cx(t-s) + k(s)Dw(t-s)\, ds \\
&= \int_0^\infty \big(\dot{h}(s) - h(s)A\big)\, x(t-s) + k(s)Dw(t-s)\, ds \\
&= \big[\, h(s)x(t-s)\, \big]_0^\infty + \int_0^\infty h(s)Bv(t-s) + k(s)Dw(t-s)\, ds
\end{aligned}$$
We assume that $h(\infty) = 0$, so
$$\tilde{x}_i(t) = -\int_0^\infty h(s)Bv(t-s) + k(s)Dw(t-s)\, ds$$
$$E(\tilde{x}_i(t))^2 = \int_0^\infty h(s)BB'h'(s) + k(s)DD'k'(s)\, ds$$

SLIDES 42-43

Linear Quadratic Regulator

So we have the optimal control problem of minimizing, by choice of $k(s)$,
$$\int_0^\infty h(s)BB'h'(s) + k(s)DD'k'(s)\, ds$$
subject to
$$\dot{h} = hA + kC, \qquad h(0) = h^0$$
We assume that the minimum is a quadratic form in $h^0$:
$$h^0 P (h^0)' = \min_k \int_0^\infty h(s)BB'h'(s) + k(s)DD'k'(s)\, ds$$

SLIDES 44-46

Completing the Square

$$h^0 P (h^0)' = \min_k \int_0^\infty hBB'h' + kDD'k'\, ds$$
Since $h(\infty) = 0$,
$$\big[\, h(s)Ph'(s)\, \big]_0^\infty = \int_0^\infty \frac{d}{ds}\, h(s)Ph'(s)\, ds$$
$$h^0 P (h^0)' = -\int_0^\infty (hA + kC)\, Ph' + hP\, (hA + kC)'\, ds$$
Subtracting,
$$0 = \min_k \int_0^\infty [h, k] \begin{bmatrix} AP + PA' + BB' & PC' \\ CP & DD' \end{bmatrix} [h, k]'\, ds$$
SLIDE 47

Completing the Square

If
$$0 = AP + PA' + BB' - PC'(DD')^{-1}CP \qquad \text{and} \qquad G = PC'(DD')^{-1}$$
then the above reduces to a perfect square,
$$0 = \min_k \int_0^\infty (k + hG)\, DD'\, (k + hG)'\, ds$$
so the optimal $k = -hG$.
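This identity is easy to check numerically (an added sketch): once $P$ solves the Riccati equation, the block matrix above equals $[G, I]'\, DD'\, [G, I]$, so the integrand is exactly $(k + hG)DD'(k + hG)'$. The random test matrices below are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(1)
n, p = 3, 1
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((p, n))
DDt = np.eye(p)                      # DD'

# Solve 0 = AP + PA' + BB' - PC'(DD')^{-1}CP (dual of the control ARE).
P = solve_continuous_are(A.T, C.T, B @ B.T, DDt)
G = P @ C.T @ np.linalg.inv(DDt)

M = np.block([[A @ P + P @ A.T + B @ B.T, P @ C.T],
              [C @ P, DDt]])
F = np.block([[G @ DDt @ G.T, G @ DDt],
              [DDt @ G.T, DDt]])
print(np.allclose(M, F))             # True: a perfect square in (k + hG)
```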

SLIDES 48-50

Kalman Filtering

To filter all the states at once, let $H(s) \in \mathbb{R}^{n \times n}$ satisfy
$$\dot{H} = H(A - GC), \qquad H(0) = -I$$
and let $K(s) = -H(s)G$. Since $H(s) = -e^{(A-GC)s}$, we also have $\dot{H} = (A - GC)H$. Then
$$\hat{x}(t) = \int_0^\infty K(s)\, y(t-s)\, ds = -\int_{-\infty}^{t} H(t-s)\, G\, y(s)\, ds$$
$$\frac{d}{dt}\, \hat{x}(t) = (A - GC)\, \hat{x}(t) + G\, y(t)$$

SLIDES 51-54

Kalman Filtering

Kalman filter:
$$\frac{d}{dt}\, \hat{x}(t) = (A - GC)\, \hat{x}(t) + G\, y(t)$$
Riccati equation:
$$0 = AP + PA' + BB' - PC'(DD')^{-1}CP$$
Filter gain:
$$G = PC'(DD')^{-1}$$
This derivation is easily extended to discrete time, time varying and/or finite horizon linear systems.
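As a numerical sketch (added here; the double-integrator numbers are just an example), the steady-state filter gain follows from SciPy's Riccati solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Example plant: double integrator driven at the second state,
# with a noisy position measurement.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[1.0]])

DDt = D @ D.T
# The filter Riccati equation 0 = AP + PA' + BB' - PC'(DD')^{-1}CP
# is the dual of the control ARE, so pass the transposed matrices.
P = solve_continuous_are(A.T, C.T, B @ B.T, DDt)
G = P @ C.T @ np.linalg.inv(DDt)     # filter gain G = PC'(DD')^{-1}
print(G)                             # use in  d/dt xhat = (A - GC) xhat + G y
```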

SLIDES 55-57

Johansen and Berkovitz-Pollard Problem

Independently, Johansen (1966) and Berkovitz-Pollard (1967) considered the following filtering problem:
$$\ddot{x} = u, \quad |u| \le 1, \qquad y = x + w, \quad w \text{ WGN}$$
They assumed a linear filter
$$\hat{x}(t) = \int_0^\infty k(s)\, y(t-s)\, ds$$
where the weighting pattern $k(s)$ is chosen to
$$\min_k\, \max_{|u| \le 1}\, E_w\big(\tilde{x}(t)\big)^2$$

SLIDES 58-59

Johansen and Berkovitz-Pollard Problem

Given a $k(s)$, define $h(s)$ by
$$\ddot{h} = k, \qquad h(0) = 0, \qquad \dot{h}(0) = -1$$
Then
$$\hat{x}(t) = \int_0^\infty k(s)\, y(t-s)\, ds = \int_0^\infty \ddot{h}(s)\, x(t-s) + k(s)\, w(t-s)\, ds = x(t) + \int_0^\infty h(s)\, u(t-s) + k(s)\, w(t-s)\, ds$$
$$\tilde{x}(t) = -\int_0^\infty h(s)\, u(t-s) + k(s)\, w(t-s)\, ds$$

SLIDES 60-63

Johansen and Berkovitz-Pollard Problem

Then
$$E_w\big(\tilde{x}(t)\big)^2 = \left( \int_0^\infty h(s)\, u(s)\, ds \right)^2 + \int_0^\infty k(s)^2\, ds$$
and we have a differential game. Our adversary wishes to choose $u(s)$ to maximize this quantity subject to $|u(s)| \le 1$. We wish to choose $k(s), h(s)$ to minimize this maximum subject to
$$\ddot{h} = k, \qquad h(0) = 0, \qquad \dot{h}(0) = -1$$
Clearly, for a given $k(s), h(s)$, the maximizing $u(s)$ are
$$u(s) = \pm\, \operatorname{sign}(h(s))$$

SLIDES 64-66

Johansen and Berkovitz-Pollard Problem

So
$$\max_{|u| \le 1} E_w\big(\tilde{x}(t)\big)^2 = \left( \int_0^\infty |h(s)|\, ds \right)^2 + \int_0^\infty k(s)^2\, ds$$
The differential game reduces to a nonstandard optimal control problem of choosing $k(s), h(s)$ to minimize this quantity subject to
$$\ddot{h} = k, \qquad h(0) = 0, \qquad \dot{h}(0) = -1$$
The Euler-Lagrange equation for this problem is
$$h^{(4)} = -\gamma\, \operatorname{sign}(h) \qquad \text{where} \quad \gamma = \int_0^\infty |h(s)|\, ds$$

SLIDES 67-70

Johansen and Berkovitz-Pollard Problem

Consider the related differential equation
$$\phi^{(4)} = -\operatorname{sign}(\phi)$$
Two one-parameter groups act on the space of solutions of this equation:
$$\phi(s) \to \phi(s + \sigma), \quad \sigma \in \mathbb{R}, \qquad\qquad \phi(s) \to \alpha^4 \phi(s/\alpha), \quad \alpha \in \mathbb{R}_{>0}$$
We look for a self-similar solution that has consecutive simple zeros at $s = 0$ and $s = 1$ and satisfies, for $s \in [0, \alpha]$,
$$\phi(s + 1) = -\alpha^4 \phi(s/\alpha)$$
On $s \in [0, 1]$,
$$\phi(s) = c_1 s + c_2 s^2/2 + c_3 s^3/6 + c_4 s^4/24 \qquad \text{where} \quad c_4 = -\operatorname{sign}(c_1) \ne 0$$

SLIDES 71-73

Johansen and Berkovitz-Pollard Problem

Matching $\phi(s)$ and its first three derivatives at $s = 1^\pm$ we obtain
$$0 = \begin{bmatrix} 1 & 1/2! & 1/3! & 1/4! \\ 1 + \alpha^3 & 1 & 1/2! & 1/3! \\ 0 & 1 + \alpha^2 & 1 & 1/2! \\ 0 & 0 & 1 + \alpha & 1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix}$$
so the determinant of this matrix must be zero. The determinant is
$$p(\alpha) = \big( -\alpha^6 + 3\alpha^5 + 5\alpha^4 - 5\alpha^2 - 3\alpha + 1 \big)/24$$
and it has three positive roots,
$$\alpha = 0.2421, \qquad 1, \qquad 1/0.2421$$
The first and third roots yield self-similar solutions of $\phi^{(4)} = -\operatorname{sign}(\phi)$ while the second root yields a periodic solution of $\phi^{(4)} = \operatorname{sign}(\phi)$.

SLIDES 74-75

Johansen and Berkovitz-Pollard Problem

We choose the first root because that solution chatters to zero at $s = 1/(1 - \alpha) = 1.3194$. Then
$$h(s) = \gamma\, \beta^4 \phi(s/\beta)$$
where $\beta$ is chosen so that
$$1 = \int_0^\infty \big| \beta^4 \phi(s/\beta) \big|\, ds$$
(this makes $\gamma = \int_0^\infty |h(s)|\, ds$), and then $\gamma$ is chosen so that $\dot{h}(0) = -1$.

SLIDES 76-79

Johansen and Berkovitz-Pollard Problem

For $s \in [0, \beta]$,
$$h(s) = -s + 0.872575492926169\, s^2 - 0.253795996951782\, s^3 + 0.024616157365051\, s^4$$
$$k(s) = 1.745150985852338 - 1.522775981710693\, s + 0.295393888380611\, s^2$$
and it chatters to zero at $\beta/(1 - \alpha) = 4.2244$. Integration by parts yields the minimax expected error variance
$$\ddot{h}(0) = k(0) = 1.745150985852338$$
The problem is that the resulting filter is infinite dimensional, as it requires storing the values of $y(t - s)$ for $s \in [0, 4.2244]$. And what about a general linear system?
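Before moving on, a small consistency check (an added sketch): on $[0, \beta]$ the weighting pattern must satisfy $k = \ddot{h}$, which NumPy's polynomial tools confirm.

```python
import numpy as np

# h(s) on [0, beta]; coefficients from highest degree to lowest.
h = np.poly1d([0.024616157365051, -0.253795996951782,
               0.872575492926169, -1.0, 0.0])
k = np.polyder(h, 2)                 # k = h''
print(k.coeffs)                      # ~ [0.295394, -1.522776, 1.745151]
print(k(0.0))                        # ~ 1.745151, the minimax error variance
```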

SLIDES 80-81

Linear Time Invariant Minimax Filtering

Plant:
$$\dot{x} = Ax + Bu, \quad \|u\|_\infty \le 1, \qquad y = Cx + Dw, \quad w \text{ WGN}, \qquad z = Lx, \quad z \in \mathbb{R}$$
Linear filter:
$$\hat{z}(t) = \int_0^\infty k(s)\, y(t-s)\, ds$$
Goal:
$$\min_k\, \max_{\|u\|_\infty \le 1}\, E_w\big(\tilde{z}(t)\big)^2$$

SLIDES 82-83

Linear Time Invariant Minimax Filtering

Given a $k(s)$, define $h(s)$ as before:
$$\dot{h} = hA + kC, \qquad h(0) = -L$$
After integration by parts,
$$\tilde{z}(t) = \int_0^\infty h(s)\, B\, u(t-s) + k(s)\, D\, w(t-s)\, ds$$
$$E_w\big(\tilde{z}(t)\big)^2 = \left( \int_0^\infty h(s)\, B\, u(t-s)\, ds \right)^2 + \int_0^\infty k(s)\, DD'\, k'(s)\, ds$$
$$\max_{\|u\|_\infty \le 1} E_w\big(\tilde{z}(t)\big)^2 = \left( \int_0^\infty \| h(s)B \|_1\, ds \right)^2 + \int_0^\infty k(s)\, DD'\, k'(s)\, ds$$

SLIDES 84-85

Non Standard Optimal Control Problem

Minimize
$$\left( \int_0^\infty \| h(s)B \|_1\, ds \right)^2 + \int_0^\infty k(s)\, DD'\, k'(s)\, ds$$
subject to
$$\dot{h} = hA + kC, \qquad h(0) = -L$$
with state $h(s) \in \mathbb{R}^{1 \times n}$ and control $k(s) \in \mathbb{R}^{1 \times p}$. This optimization problem is too complicated for the Euler-Lagrange approach, so we apply the Pontryagin Maximum Principle instead.

SLIDES 86-89

Pontryagin Maximum Principle

Add an extra state coordinate
$$\dot{h}_{n+1} = \| hB \|_1$$
and adjoint variables $\xi \in \mathbb{R}^{n \times 1}$, $\zeta \in \mathbb{R}$.

Control Hamiltonian:
$$H = hA\xi + kC\xi + \| hB \|_1\, \zeta + k\, DD'\, k'$$
Adjoint dynamics:
$$\dot{\xi} = -\left( \frac{\partial H}{\partial h} \right)' = -A\xi - B\, \big(\operatorname{sign}(hB)\big)'\, \zeta, \qquad \dot{\zeta} = -\frac{\partial H}{\partial h_{n+1}} = 0$$
SLIDE 90

Pontryagin Maximum Principle

Maximize the Hamiltonian with respect to the control:
$$0 = \frac{\partial H}{\partial k} = C\xi + 2\, DD'\, k', \qquad k = -\frac{\xi' C' (DD')^{-1}}{2}$$
and plug into the dynamics.

SLIDES 91-92

Pontryagin Maximum Principle

Hamiltonian dynamics and transversality conditions:
$$\dot{h} = hA - \frac{\xi' C' (DD')^{-1} C}{2}, \qquad \dot{h}_{n+1} = \| hB \|_1$$
$$\dot{\xi} = -A\xi - B\, \big(\operatorname{sign}(hB)\big)'\, \zeta, \qquad \dot{\zeta} = 0$$
$$h(0) = -L, \qquad h_{n+1}(0) = 0, \qquad \xi(\infty) = 0, \qquad \zeta(\infty) = 2\, h_{n+1}(\infty)$$
This is usually too complicated to solve explicitly, and even if we could, the resulting filter would probably be infinite dimensional.

SLIDES 93-95

Rational Minimax Filtering

Therefore we restrict the optimization to weighting patterns $k(s)$ that are the impulse responses of finite dimensional linear systems. In other words, we restrict to $k(s)$ whose Laplace transforms are rational, e.g.
$$k(s) = \sum_{i=1}^{N} \gamma_i\, e^{\lambda_i s}$$
This guarantees that the resulting filter is finite dimensional: it can be realized by a finite dimensional time invariant linear system.

SLIDE 96

Rational Minimax Filtering

$$\hat{z}(t) = \int_0^\infty k(s)\, y(t-s)\, ds, \qquad k(s) = \sum_{i=1}^{N} \gamma_i\, e^{\lambda_i s}$$
is realized by
$$\dot{\xi} = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_N \end{bmatrix} \xi + \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix} y, \qquad \hat{z}(t) = \begin{bmatrix} \gamma_1 & \cdots & \gamma_N \end{bmatrix} \xi$$
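To illustrate (an added sketch; the residues $\gamma_i$ are made up, though the poles are the best second-order poles quoted later for the single integrator), the diagonal realization can be simulated with a simple Euler step:

```python
import numpy as np

lam = np.array([-1.9572 + 1.0372j, -1.9572 - 1.0372j])   # filter poles
gam = np.array([0.5 - 0.3j, 0.5 + 0.3j])                 # hypothetical residues

def run_filter(y, dt):
    """Simulate xi' = diag(lam) xi + [1,...,1]' y, zhat = [gam] xi."""
    xi = np.zeros(lam.size, dtype=complex)
    zhat = np.empty(len(y))
    for j, yj in enumerate(y):
        xi += dt * (lam * xi + yj)       # forward Euler step
        zhat[j] = (gam @ xi).real        # conjugate pair -> real output
    return zhat

print(run_filter(np.ones(2000), dt=1e-2)[-1])   # step response settles
```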
SLIDES 97-98

Rational Minimax Filtering

If we look for a filter the same size as the original system ($N = n$), $(A, B)$ is a controllable pair, and all the eigenvalues of $A$ are in the closed right half plane, then the filter takes the form $k(s) = -h(s)G$ for some $G$. In other words, we are finding the linear feedback that achieves
$$\min_G\, \left( \int_0^\infty \| h(s)B \|_1\, ds \right)^2 + \int_0^\infty k(s)\, DD'\, k'(s)\, ds$$
subject to
$$\dot{h} = hA + kC, \qquad h(0) = -L, \qquad k(s) = -h(s)G$$

SLIDES 99-101

Rational Minimax Filtering

One virtue of this approach is that the resulting filter is realized by the linear system
$$\dot{\xi} = (A - GC)\xi + Gy = A\xi + G(y - C\xi), \qquad \hat{z} = L\xi$$
and it looks like a Kalman filter or linear observer. Notice that there may be a different gain $G$ and a different filter for each linear functional $z = Lx$ of the state.

This suggests the following approach. Use numerical routines to minimize the optimal control problem with and without the restriction that $k(s) = -h(s)G$. If the optimal cost of the former is close enough to that of the latter, accept the filter. If not, expand the class of rational filters that are considered. (A sketch of the restricted minimization follows.)
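Here is a minimal sketch of the gain-restricted minimization (my code, not the authors'; it uses generic ODE integration and a derivative-free optimizer). For the single-integrator example on the next slide it recovers $G \approx 4^{1/3}$ and the suboptimal cost 1.1906; swapping in other $(A, B, C, D, L)$ should approximately reproduce the other tables.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Single-integrator example from the talk: A = 0, B = C = D = L = 1.
A = np.array([[0.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[1.0]]); L = np.array([[1.0]])
n, p = A.shape[0], C.shape[0]

def cost(Gflat, T=40.0, N=4000):
    """Minimax cost (int ||hB||_1 ds)^2 + int k DD' k' ds with k = -hG.
    T must be long enough for h(s) to have decayed to ~0."""
    G = Gflat.reshape(n, p)
    rhs = lambda s, h: h @ (A - G @ C)       # h' = h(A - GC), h is 1 x n
    s = np.linspace(0.0, T, N)
    sol = solve_ivp(rhs, [0.0, T], -L.ravel(), t_eval=s, rtol=1e-8)
    h = sol.y.T                              # rows are h(s)
    k = -h @ G
    ds = s[1] - s[0]
    term1 = (np.abs(h @ B).sum(axis=1).sum() * ds) ** 2
    term2 = np.einsum('ij,jk,ik->i', k, D @ D.T, k).sum() * ds
    return term1 + term2

res = minimize(cost, x0=np.ones(n * p), method='Nelder-Mead')
print(res.x, res.fun)    # G ~ 1.5874 = 4^(1/3), cost ~ 1.1906
```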

SLIDES 102-103

Single Integrator

We tried this approach on some model problems.

$$A = 0, \quad B = 1, \quad C = 1, \quad D = 1, \quad z = x$$

Optimal Cost    Suboptimal Rational Cost    Ratio
1.1006          1.1906                      1.0818

We were able to compute the optimal infinite dimensional filter explicitly. The suboptimal filter was computed using a numerical optimization routine.
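For this single-integrator example the gain-restricted cost can be worked out in closed form (a check added here): with $k(s) = -h(s)G$, the constraint $\dot{h} = hA + kC$, $h(0) = -1$ gives $h(s) = -e^{-Gs}$ and $k(s) = Ge^{-Gs}$, so
$$J(G) = \left( \int_0^\infty e^{-Gs}\, ds \right)^2 + \int_0^\infty G^2 e^{-2Gs}\, ds = \frac{1}{G^2} + \frac{G}{2}$$
which is minimized at $G = 4^{1/3} \approx 1.5874$, giving $J = 1.1906$ as in the table. Note $G^2 = 4^{2/3} \approx 2.5198$, which appears to be the driving noise covariance quoted in the Remarks below.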
SLIDES 104-105

Double Integrator

The JBP problem:
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad D = 1$$

$z = x_1$:

Optimal Cost    Suboptimal Rational Cost    Ratio
1.7452          1.7880                      1.0245

$z = x_2$:

Optimal Cost    Suboptimal Rational Cost    Ratio
2.1269          2.2733                      1.0688

Again we were able to compute the optimal infinite dimensional filters explicitly. The suboptimal filters were computed using a numerical optimization routine.
SLIDE 106

Triple Integrator

Estimate $x_1$:
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}, \quad D = 1, \quad z = x_1$$

Approx. Optimal Cost    Suboptimal Rational Cost    Ratio
2.4074                  2.4282                      1.009

We computed the optimal filter and the suboptimal filter using numerical optimization routines.

SLIDE 107

Quadruple Integrator

Estimate $x_1$:
$$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}, \quad D = 1, \quad z = x_1$$

Approx. Optimal Cost    Suboptimal Rational Cost    Ratio
3.0722                  3.0901                      1.006

We computed the optimal filter and the suboptimal filter using numerical optimization routines.

SLIDE 108

Harmonic Oscillator

Estimate $x_1$:
$$A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad D = 1, \quad z = x_1$$

Approx. Optimal Cost    Suboptimal Cost    Ratio
1.26                    1.3536             1.07

We computed the optimal filter and the suboptimal filter using numerical optimization routines.

SLIDES 109-112

Remarks

For the single integrator:

  • The best first order filter that we found was 8.2% above optimal.
  • It is the Kalman filter we would have constructed if we had assumed that the driving noise covariance was 2.5198.
  • The Kalman filter with driving noise covariance 1 was 36% above optimal.

SLIDES 113-118

Remarks

For the double integrator:

  • The best second order filter for estimating $x_1$ that we found was 2.4% above optimal.
  • The best Kalman filter for estimating $x_1$ that we found was 2.6% above optimal. The driving noise covariance was 3.4.
  • The Kalman filter for estimating $x_1$ with driving noise covariance 1 was 6.4% above optimal.
  • The best gain for estimating $x_2$ was 7% above optimal.
  • If we used the best gain for estimating $x_1$ to estimate $x_2$, the performance was 9% above optimal.

SLIDES 119-126

Remarks

From this we might conclude that a Kalman filter can be a nearly optimal rational filter provided that we choose the driving noise covariance correctly. This conclusion is wrong!

For the single integrator:

  • The best first order filter that we found was 8.2% above optimal. It is the best Kalman filter.
  • The best second order filter that we found was 1.4% above optimal. The poles were complex, at $-1.9572 \pm 1.0372i$.

For the double integrator:

  • The best second order filter for estimating $x_1$ that we found was 2.4% above optimal. The best Kalman filter was 2.6% above optimal.
  • The best fourth order filter for estimating $x_1$ that we found was 0.7% above optimal. The poles were at $-1.4442 \pm 0.9460i$ and $-1.7142 \pm 1.8055i$.

SLIDES 127-133

Conclusions

  • Minimax filters focus on worst case rather than average case performance.
  • Minimax filters do not require knowledge of the driving noise covariance, only a bound on its magnitude.
  • Rational minimax filtering is a computationally feasible alternative to Kalman filtering for low dimensional systems.
  • It is possible to compute how close to optimal a rational filter is.
  • Increasing the dimension of the filter over that of the plant can significantly improve performance.
  • More research is needed to understand how to choose a good suboptimal rational filter, particularly when the dimension of the filter is greater than that of the original system.

Happy Birthday, Eduardo!