Lecture 4: State Space Models and Realization Theory

SLIDE 1

State space models

Convenient description of dynamical systems that are causal, linear, finite-dimensional and time invariant

x_{k+1} = A x_k + B u_k
y_k = C x_k + D u_k

denoted by [A, B, C, D], where A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{l×n}, D ∈ R^{l×m}

[Block diagram: u_k enters through B (and directly through D); a delay maps x_{k+1} to x_k, which feeds back through A and out through C to y_k]

SLIDE 2

Transfer function

Apply the z-transform:

z X(z) − z x_0 = A X(z) + B U(z)
Y(z) = C X(z) + D U(z)

To investigate input-output relations, we eliminate X(z):

Y(z) = (C(zI − A)^{-1}B + D) U(z) + C(zI − A)^{-1} z x_0

Assume x_0 = 0; then Y(z) = G(z) U(z) with

G(z) = C(zI − A)^{-1}B + D

We call G(z) the transfer function matrix.
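As a quick numerical illustration (a minimal MATLAB sketch; the matrices are arbitrary example data, not from the slides), the transfer function can be formed with the Control System Toolbox and compared against the formula directly:

% Sketch: transfer function matrix of a discrete-time state space model
A = [0.5 0.1; 0 0.3]; B = [1; 2]; C = [1 0]; D = 0;   % arbitrary example data
sys = ss(A, B, C, D, 1);          % sample time 1
G = tf(sys)                       % G(z) = C(zI - A)^{-1}B + D
z0 = 2;                           % evaluate at a test point
G0 = C*((z0*eye(2) - A)\B) + D    % matches evalfr(sys, z0)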

SLIDE 3

Inverse transfer function

Assume D is invertible (possible only if the number of inputs equals the number of outputs!). For G = [A, B, C, D]:

G^{-1} = [A − BD^{-1}C, BD^{-1}, −D^{-1}C, D^{-1}]

G: transformed outputs can be obtained as a linear transformation of the inputs
G^{-1}: inputs can be obtained as a linear transformation of the transformed outputs
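A small sketch (arbitrary example data with D invertible) that builds the inverse realization and checks that the cascade reduces to the identity:

% Sketch: G^{-1} = [A - BD^{-1}C, BD^{-1}, -D^{-1}C, D^{-1}]
A = [0.5 0.1; 0 0.3]; B = eye(2); C = eye(2); D = [2 0; 1 1];
G    = ss(A, B, C, D, 1);
Ginv = ss(A - B/D*C, B/D, -(D\C), inv(D), 1);
tf(minreal(G*Ginv))               % should reduce to the 2x2 identity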

SLIDE 4

Similarity transform

The transfer function is unique, but the state space model is not! Let x_k = T w_k, where w_k ∈ R^n and T ∈ R^{n×n} is invertible:

w_{k+1} = (T^{-1}AT) w_k + T^{-1}B u_k
y_k = (CT) w_k + D u_k

Same transfer function, but [A, B, C, D] ≠ [T^{-1}AT, T^{-1}B, CT, D]. Note that under similarity transforms Λ(A) = Λ(T^{-1}AT): the eigenvalues are preserved.
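A minimal sketch (arbitrary data) confirming that a similarity transform leaves the transfer function and the poles untouched:

% Sketch: similarity transform x_k = T w_k
A = [0.5 0.1; 0 0.3]; B = [1; 2]; C = [1 0]; D = 0;
T = [1 1; 0 2];                            % any invertible T
tf(ss(A, B, C, D, 1))
tf(ss(T\A*T, T\B, C*T, D, 1))              % identical transfer function
[eig(A), eig(T\A*T)]                       % identical eigenvalues (poles)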

SLIDE 5

Poles

The eigenvalues λ_i, i = 1, ..., n of the system matrix A are called the poles of a state space description [A, B, C, D] of the system. The pole polynomial is defined as

π(z) = ∏_{i=1}^{n} (z − λ_i)

Poles of a transfer matrix G(z) of a system are the values of z where at least one of the entries of G(z) is infinite. Since G(z) = C(zI − A)^{-1}B + D, the poles of the transfer matrix are poles of the state space description!

SLIDE 6

Poles

Why are poles interesting?

1. They are system invariants, like the transfer function matrix
2. Stability
3. Observability
4. Controllability

SLIDE 7

Zeros

The normal rank of G(z) is the 'generic' rank of G(z), i.e. the rank of G(z) for almost all complex numbers z ∈ C, except for a finite number of points, called the zeros of the transfer matrix. Zeros ξ ∈ C lower the rank of G(z):

rank(G(ξ)) < normal rank

Zeros can be computed as the eigenvalues of A − BD^{-1}C (when D is invertible), and are therefore finite in number. Note that when D is invertible, the zeros of G are the poles of G^{-1}.
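For square systems with invertible D, the eigenvalue characterization of the zeros is easy to check numerically (a sketch with arbitrary data; tzero is the toolbox zero computation):

% Sketch: zeros as eigenvalues of A - BD^{-1}C
A = [0.5 0.1; 0 0.3]; B = [1; 2]; C = [1 0]; D = 2;
z_formula = eig(A - B/D*C)
z_toolbox = tzero(ss(A, B, C, D, 1))       % should agree with z_formula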

SLIDE 8

Zero structure at infinity

Map the complex plane onto itself, moving the point s = ∞ to the point λ = −δ/γ, using

s = (αλ + β)(γλ + δ)^{-1},   Ḡ(λ) = G((αλ + β)/(γλ + δ))

The zero structure at infinity can then be deduced from the zero structure at λ_0 = −δ/γ, provided γ ≠ 0 and λ_0 does not coincide with a pole of G(z). The zero polynomial is defined as

ξ(z) = ∏_i (z − ξ_i)

where the product is over all zeros.

SLIDE 9

Zeros

Poles and zeros can coincide in a multivariable system without cancelling in the transfer matrix.

Example:
G(z) = [ (z−1)/z       0
          0        (z+1)/(z−1) ]
with zeros 1 and −1, poles 0 and 1.

Example:
G(z) = [ 1   1/z
         0    1  ]
with a pole and a zero at 0.

When G(z) ∈ R^{l×l} with rank l, the poles of G are the zeros of G^{-1}.

SLIDE 10

Computing zeros

Zeros of a MIMO system¹ can be computed via a generalized eigenvalue decomposition. If ξ causes G(ξ) to drop rank, and ξ is not a pole, then

(D + C(ξI − A)^{-1}B) v = 0

Denote u = (ξI − A)^{-1}B v; then

[ A  B ] [ u ]      [ I  0 ] [ u ]
[ C  D ] [ v ]  = ξ [ 0  0 ] [ v ]

If D is invertible, v = −D^{-1}C u and

(A − BD^{-1}C) u = ξ u
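The pencil above translates directly into a generalized eigenvalue computation, in the same spirit as the eig call on the next slide (a sketch with arbitrary data):

% Sketch: zeros via the generalized eigenvalue problem
A = [0.5 0.1; 0 0.3]; B = [1; 2]; C = [1 0]; D = 2;   % n = 2, m = l = 1
n = size(A,1); m = size(B,2);
M = [A B; C D];
N = blkdiag(eye(n), zeros(m));
xi = eig(M, N)    % finite entries are zeros; infinite ones appear when D is singular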

¹ multiple input, multiple output

SLIDE 11

Computing zeros

System with two inputs, two outputs:

A = [ 1.9  −1.68  0.49
      1     0     0
      0     1     0    ]

B = [ 1 2; 0.5 3; −1 2 ],  C = [ 2 1.5 −5; −1 −2 3 ],  D = [ 1 0; 0 0 ]

with transfer matrix

G(z) = 1/((z − 0.5)(z² − 1.4z + 0.98)) ×
       [ (z + 7.6609)(z² − 1.8109z + 0.9679)   (z + 7.02664)(z − 0.5797)
         (z² − 1.6860z + 0.8310)               (z − 3.9695)(z − 0.5605)  ]

D is not invertible, so we solve the generalized eigenvalue problem

>> eig([A B; C D], diag([1 1 1 0 0]))

which gives us −7.9471, 0.9771, ∞.

SLIDE 12

Physical interpretation of poles and zeros

What about the generalized eigenvectors? For a zero ξ we find vectors x_0 and u_0 such that

[ ξI − A   −B ] [ x_0 ]
[   C       D ] [ u_0 ]  =  0

from which we can construct an input u_k = u_0 ξ^k, k ≥ 0. In combination with the initial state vector x_0, the response is y_k = 0 for all k ≥ 0.

SLIDE 13

Physical interpretation of poles and zeros

y_0 = C x_0 + D u_0 = 0
x_1 = A x_0 + B u_0 = ξ x_0
y_1 = C x_1 + D u_1 = ξ(C x_0 + D u_0) = 0
x_2 = A x_1 + B u_1 = ξ(A x_0 + B u_0) = ξ² x_0
...

SLIDE 14

Physical interpretation of poles and zeros

A = [ 1.3 −0.4; 1 0 ],  B = [ 1 1; 0 0 ],  C = [ 1 0.6; 2 1 ],  D = [ 0 0; 0 1 ]

The generalized eigenvalue problem finds a zero at ξ = −0.6.

>> sys = ss([1.3 -0.4; 1 0], [1 1; 0 0], [1 0.6; 2 1], [0 0; 0 1], 1);
>> u = u0*(xi.^t);
>> [y,t,x] = lsim(sys, u, t, x0);

[Figure: inputs In(1), In(2), states State(1), State(2) and outputs Out(1), Out(2) over 20 samples; the outputs remain identically zero]

SLIDE 15

Physical interpretation of poles and zeros

If m ≥ l, there are m − l + 1 linearly independent vectors u_0, u_1, ..., u_{m−l} orthogonal to G(ξ):

G(ξ) [ u_0  u_1  ...  u_{m−l} ] = 0

The set of all rational m × 1 vectors f(z) such that G(z) f(z) = 0 is called the right null-space of G(z). Every element of the right null-space is orthogonal to the rows of G(z), and the dimension of the null-space is m − r, where r is the normal rank of G(z).

SLIDE 16

Physical interpretation of poles and zeros

Example: poles and zeros in the z-domain for the transfer function

G(z) = (z² + 1.8z + 0.85) / (z³ − 0.9z² + 0.51z + 0.061)

with zeros at −0.9 ± 0.2i and poles at 0.5 ± 0.6i and −0.1.

[Figure: magnitude of G(z) in dB over the complex plane, with peaks at the poles and dips at the zeros]

SLIDE 17

Impulse response of a system

Assume initial state x_0 = 0 for the state space equations

x_{k+1} = A x_k + B u_k
y_k = C x_k + D u_k

An impulse input sequence at the i-th input component gives the output

y_0 = d_i,  y_1 = C b_i,  y_2 = C A b_i,  ...,  y_k = C A^{k−1} b_i

where b_i and d_i are the i-th columns of B and D.

SLIDE 18

Impulse response of a system

Impulse response matrices / Markov parameters:

H_0 = D,  H_1 = CB,  H_2 = CAB,  ...,  H_k = C A^{k−1} B

Under a similarity transformation x_k = T w_k:

H_k = C A^{k−1} B = (CT)(T^{-1} A^{k−1} T)(T^{-1} B),  k = 1, 2, ...

Impulse response matrices are invariants of a system!
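A short sketch (arbitrary data) computing the Markov parameters from the definition and checking them against impulse:

% Sketch: Markov parameters H_0 = D, H_k = C A^{k-1} B
A = [0.5 0.1; 0 0.3]; B = [1; 2]; C = [1 0]; D = 0;
K = 6; Hk = zeros(K+1, 1); Hk(1) = D;
for k = 1:K
    Hk(k+1) = C*A^(k-1)*B;
end
h = impulse(ss(A, B, C, D, 1), 0:K);   % identical sequence
[Hk h]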

SLIDE 19

Impulse response of a system

Example:

>> A = [2 -1.83 0.794 -0.185; 1 0 0 0; 0 1 0 0; 0 0 1 0];
>> B = [1 0 -1; 2 -1 3; 0 -0.5 1; 3 -2 0];
>> C = [1 0 0 2; -5 1 -4 0; 0 7 6 -3; 3 -2 4 5];
>> D = [0 1 0; 1 0 0; 0 0 -1; 1 2 0];
>> sys = ss(A,B,C,D,1);
>> impulse(sys)

[Figure: 4×3 grid of impulse responses, From In(1..3) To Out(1..4)]

SLIDE 20

Modal decomposition

Let A have an eigenvalue decomposition A = XJX^{-1}. A similarity transform with T = X gives

[A, B, C, D] → [J, X^{-1}B, CX, D]

The matrix J reveals the system poles and their multiplicity.

SLIDE 21

Modal decomposition

Case 1: A is diagonalizable. Let p_i and q_i be the right and left eigenvectors of A:

A p_i = p_i λ_i,  A^t q_i = q_i λ_i

normalized such that q_i^t p_i = 1. It can be shown that

X(z) = (zI − A)^{-1} z x_0 = ∑_{i=1}^{n} (R_i z)/(z − λ_i) x_0,  with R_i = p_i q_i^t

By rearranging and taking the inverse z-transform

x_k = ∑_{i=1}^{n} α_i λ_i^k p_i,  where α_i = q_i^t x_0

This is the modal decomposition.

SLIDE 22

Modal decomposition

Properties of the modal decomposition:

◮ There are n modes in total, one for each eigenvalue of A
◮ An arbitrary initial condition will in general excite all the modes, but the amount of excitation α_i = q_i^t x_0 of any mode is independent of that of any other mode
◮ By making x_0 proportional to the i-th right eigenvector p_i, we excite only the i-th mode
◮ The decomposition is unique because all eigenvectors are linearly independent
◮ If, in addition to distinct eigenvalues, A is symmetric, then the eigenvectors are orthogonal to one another. The left and right eigenvectors coincide.

SLIDE 23

Modal decomposition

Example:

A = [ 60.44  −97.84   55.14
      21     −32      18
     −26.44   46.2844 −26.14 ]

B = [ 2 1; −1 0; 3 −2 ],  C = [ 2 1 0.5; −2 −3 0 ],  D = [ 0 0; 0 1 ]

The matrix T is equal to

T = [ −0.1473 + 0.3427i   −0.1473 − 0.3427i   −0.3059
      −0.0339 + 0.6028i   −0.0339 − 0.6028i   −0.6023
       0.0968 + 0.6979i    0.0968 − 0.6979i   −0.7373 ]

The diagonalized system is

A_T = diag(0.8 + 0.4i, 0.8 − 0.4i, 0.7)

B_T = [ −802.5 + 2821.4i    104.1 − 384.1i
        −802.5 − 2821.4i    104.1 + 384.1i
       −5555.3              757.1           ]

C_T = [ −0.2801 + 1.6371i   −0.2801 − 1.6371i   −1.5827
        −0.3962 − 2.4937i    0.3962 + 2.4937i    2.4187 ]

D_T = D

The poles of the system are 0.8 ± 0.4i and 0.7.

SLIDE 24

Modal decomposition

What if A is not diagonalizable? Use the Jordan decomposition A = XJX^{-1}. We know that A^k = X J^k X^{-1}. Replace x_k by X z_k; then z_k = J^k z_0.

SLIDE 25

Modal decomposition

Example:

A = [ 2.5  −0.8  −0.4
      9    −3.5  −2
     −9     4.4   2.7 ]

B = [ 2 1; −1 0; 3 −2 ],  C = [ 2 1 0.5; −2 −3 0 ],  D = [ 0 0; 0 1 ]

The matrix T is equal to

T = [ 0  1  2
     −1  2  0
      2  1  9 ]

The transformed system is

A_T = [ 0.5  1    0
        0    0.5  0
        0    0    0.7 ]

B_T = [ −31  −26
        −16  −13
          9    7 ]

C_T = [ 0  4.5  8.5
        3  −8   −4  ]

D_T = D

The poles of the system are 0.5 (double pole) and 0.7.

SLIDE 26

Convolution

By repeated application of the state equations and elimination of the state x_k, the output in terms of the input and initial state is given by

y_k = C A^k x_0 + C A^{k−1} B u_0 + C A^{k−2} B u_1 + ... + C B u_{k−1} + D u_k

If x_0 = 0, we can express the output using the Markov parameters:

y_k = H_0 u_k + H_1 u_{k−1} + ... + H_{k−1} u_1 + H_k u_0

This amounts to a convolution of the impulse response sequence H_k with the input sequence u_k. Assuming zero initial state, the impulse response sequence suffices to uniquely determine the outputs from the inputs.
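A sketch (SISO, zero initial state, arbitrary data) showing that convolving the impulse response with the input reproduces lsim:

% Sketch: y_k = sum_i H_i u_{k-i}
A = [0.5 0.1; 0 0.3]; B = [1; 2]; C = [1 0]; D = 0;
sys = ss(A, B, C, D, 1);
N = 20; t = (0:N-1)'; u = randn(N, 1);
h = impulse(sys, t);                    % H_0, H_1, ..., H_{N-1}
y_conv = conv(h, u); y_conv = y_conv(1:N);
y_lsim = lsim(sys, u, t);               % matches y_conv
norm(y_conv - y_lsim)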

SLIDE 27

Impulse response is a sum of exponentials

For a system matrix with a decomposition A = XJX^{-1}, the impulse response matrices can be factorized as

H_k = C A^{k−1} B = C (XJX^{-1})^{k−1} B = (CX) J^{k−1} (X^{-1}B)

The structure of J^k as a function of k shows that every impulse response can be written as a linear combination of real exponentials and sinusoids.

SLIDE 28

Impulse response is a sum of exponentials

Assume A is diagonalizable.

Case 1: λ is real; then we find a real exponential in H_k of the form σλ^k.

Case 2: λ = α + jβ = ρe^{jω} has a complex conjugate λ̄ = α − jβ = ρe^{−jω}. The coefficient μ = γ + jδ = σe^{jψ} of the powers of λ in H_k will be the complex conjugate of the coefficient of λ̄; together they combine into a sinusoid:

μλ^k + μ̄λ̄^k = (σe^{jψ})(ρ^k e^{jkω}) + (σe^{−jψ})(ρ^k e^{−jkω})
            = 2σρ^k [cos ψ cos(kω) − sin ψ sin(kω)]
            = 2σρ^k cos(kω + ψ)

SLIDE 29

Impulse response is a sum of exponentials

Example:

A = [ 2.7  −2.92  1.466  −0.2664
      1     0     0       0
      0     1     0       0
      0     0     1       0      ]

B = [ 1; 0; 0; 0 ],  C = [ 1  4.06  −4.552  0.74 ],  D = 0

possesses eigenvalues 0.7 ± 0.5i, 0.9 and 0.4. The impulse response is thus²

h_k = 12.9262 (0.8602)^{k−1} cos(0.6202(k − 1) − 2.029) + 4.5572 (0.9)^{k−1} + 2.16 (0.4)^{k−1},  k ≥ 1

[Figure: impulse response h_k for k = 1, ..., 60]

² The coefficients follow from the matrices CX and X^{-1}B.

SLIDE 30

A series expansion for G(z)

A series expansion for G(z) = D + C(zI − A)^{-1}B can be obtained using

(zI − A)^{-1} = (1/z)(I − A/z)^{-1} = (1/z)(I + A/z + A²/z² + ...)

which converges for all |z| > 1 provided |λ(A)| < 1, resulting in

G(z) = D + CBz^{-1} + CABz^{-2} + CA²Bz^{-3} + ... = H_0 + H_1 z^{-1} + H_2 z^{-2} + ...

SLIDE 31

Stability

A causal system is externally stable if every bounded input ‖u_k‖ < μ_1, ∀k produces a bounded output ‖y_k‖ < μ_2, ∀k.

A system with state space matrices A, B, C, D is internally stable iff |λ_i(A)| < 1 for all eigenvalues of A.

internally stable → externally stable

Internal stability is often called asymptotic stability; realizations in which x_k merely remains bounded for k → ∞ are called just stable (e.g. simple roots on the unit circle in the z-domain).

SLIDE 32

Part I: Controllability and observability

SLIDE 33

Controllability

A state x_k of a system is controllable if any initial condition x_0 can be transferred to the final state x_k by some control input sequence u_k(x_0). A system is called controllable if we can reach an arbitrary state x from any given initial state x_0 in finite time.

x_{k+1} − A^{k+1} x_0 = [B  AB  ...  A^k B] [u_k; u_{k−1}; ...; u_0]

From the Cayley-Hamilton theorem,

R([B  AB  ...  A^k B]) = R([B  AB  ...  A^{n−1} B]),  k ≥ n − 1

where R(·) denotes the column space (range).

SLIDE 34

Controllability

The system x_{k+1} = A x_k + B u_k with A ∈ R^{n×n} and B ∈ R^{n×m} is controllable iff

rank([B  AB  ...  A^{n−1}B]) = n

We define the controllability matrix

∆_n = [B  AB  ...  A^{n−1}B]

Note that for the multiple-input case, it suffices if the states are controllable by a combination of inputs, not by one input alone!
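In MATLAB the test is a one-liner (ctrb builds ∆_n; arbitrary data):

% Sketch: controllability test
A = [0.5 0.1; 0 0.3]; B = [1; 2];
Dn = ctrb(A, B);                        % [B AB ... A^{n-1}B]
is_controllable = (rank(Dn) == size(A, 1))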

SLIDE 35

Controllability to and from the origin

Recall, for k = n − 1:

x_n − A^n x_0 = ∆_n [u_{n−1}; ...; u_0]

From the origin: x_0 = 0 demands that x_n ∈ R(∆_n), or rank(∆_n) = n for arbitrary x_n. The required input to achieve this state is

[u_{n−1}; ...; u_0] = ∆_n^† x_n

SLIDE 36

Controllability to and from the origin

To the origin: x_n = 0 requires that A^n x_0 ∈ R(∆_n), and because x_0 is arbitrary,

R(A^n) ⊆ R(∆_n)

◮ A nonsingular: rank(∆_n) = n
◮ A singular: R(A^n) ⊂ R(∆_n) is possible

[u_{n−1}; ...; u_0] = −∆_n^† A^n x_0 + ∆_n^⊥ z

with z an arbitrary vector.

SLIDE 37

Observability

A state x_k of a system is observable if knowledge of the input u_k and output y_k over a finite time interval suffices to determine x_k. It is necessary and sufficient that we can determine x_0 from the output, because x_{k+1} = A x_k + B u_k; consider therefore the autonomous system

x_{k+1} = A x_k
y_k = C x_k

Then y_k = C A^k x_0, or over all observed outputs:

[y_0; y_1; ...; y_k] = [C; CA; CA²; ...; CA^k] x_0
SLIDE 38

Observability

We define the observability matrix

Γ_k = [C; CA; CA²; ...; CA^k]

and r_Γ = rank(Γ_n). By the Cayley-Hamilton theorem,

rank([C; CA; CA²; ...; CA^k]) = rank([C; CA; CA²; ...; CA^{n−1}]),  ∀k ≥ n − 1

SLIDE 39

Observability

Beyond the first n observations the rank of Γ_n will not increase, and

x_0 = Γ_n^† [y_0; ...; y_{n−1}] + Γ_n^⊥ z

The solution is unique iff rank(Γ_n) = n. The system

x_{k+1} = A x_k
y_k = C x_k

is observable iff rank(Γ_n) = n.
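A sketch (arbitrary observable data) recovering x_0 from the first n outputs of the autonomous system:

% Sketch: x_0 from outputs via the observability matrix
A = [0.5 0.1; 0 0.3]; C = [1 0]; n = size(A, 1);
x0 = [1; -2]; x = x0; y = zeros(n, 1);
for k = 1:n
    y(k) = C*x; x = A*x;                % y_k = C A^k x_0
end
Gn = obsv(A, C);                        % [C; CA; ...; CA^{n-1}]
x0_hat = pinv(Gn)*y                     % equals x0 since rank(Gn) = n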

SLIDE 40

Observability

Assume we decompose x_0 as x_0 = x_0^1 + x_0^2, where x_0^1 ∈ R(Γ_n^t) and x_0^2 ∈ N(Γ_n); then

Γ_n x_0 = Γ_n x_0^1

Only the part of x_0 belonging to the row space of Γ_n 'gets through' to the output sequence!

The row space of Γ_n is called the observable subspace of the state space; if r_Γ < n, then the null space of Γ_n is the unobservable subspace.

SLIDE 41

Popov-Belevitch-Hautus tests

Theorem 1 (PBH test for controllability).

A pair (A, B) is not controllable iff there exists a left eigenvector q ≠ 0 of A such that

A^t q = qλ,  B^t q = 0

The pair (A, B) is controllable iff no left eigenvector of A is orthogonal to B. The PBH theorem offers a practical test to detect controllable and non-controllable modes of a system.

SLIDE 42

Popov-Belevitch-Hautus tests

Theorem 2 (PBH test for observability).

A pair (A, C) is not observable iff there exists an eigenvector p ≠ 0 of A such that

A p = pλ,  C p = 0

Note that controllability and observability of every eigenvalue of a realization are assessed independently! The PBH theorem offers a practical test to detect observable and non-observable modes of a system.

SLIDE 43

Popov-Belevitch-Hautus tests

Example:

A = [ −1.9  −2.4  −1.6
       1.2   1.7   0.8
       2.4   2     2.3 ]

B = [ 6; −2; −5 ],  C = [ 3  5  1 ]

The system eigenvalues are λ_1 = 0.5, λ_2 = 0.7 and λ_3 = 0.9. A modal decomposition gives

A_T = diag(0.5, 0.7, 0.9),  B_T = [ 2; −1; 0 ],  C_T = [ 0  −3  1 ]

Thus, mode λ_3 is uncontrollable and mode λ_1 is unobservable.

SLIDE 44

Popov-Belevitch-Hautus tests

Theorem 3 (PBH rank tests).

A pair (A, B) is controllable iff

rank([zI − A,  B]) = n,  ∀z ∈ C

A pair (A, C) is observable iff

rank([C; zI − A]) = n,  ∀z ∈ C
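Since the rank can only drop when z is an eigenvalue of A, it suffices to evaluate the tests at Λ(A). A sketch (arbitrary data):

% Sketch: PBH rank tests, evaluated at the eigenvalues of A
A = [0.5 0.1; 0 0.3]; B = [1; 2]; C = [1 0]; n = size(A, 1);
for lam = eig(A).'
    rank_c = rank([lam*eye(n) - A, B]);    % < n  => mode uncontrollable
    rank_o = rank([C; lam*eye(n) - A]);    % < n  => mode unobservable
    fprintf('lambda = %g: %d, %d\n', lam, rank_c, rank_o);
end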

SLIDE 45

Stabilizability and detectability

Lemma 4 (Stabilizability).

A pair of matrices (A, B) is stabilizable if

rank([λI − A,  B]) = n  for all λ ∈ C with |λ| ≥ 1

This implies all unstable modes can be controlled.

Definition 5 (Detectability).

The pair (A, C) is detectable if all unstable modes of (A, B, C, D) are observable.

SLIDE 46

Kalman decomposition

For every state space system [A, B, C, D] there exists a similarity transformation T such that, with block row/column dimensions r1, r2, r3, r4,

TAT^{-1} = [ A_co   0     A_13   0
             A_21   A_cō  A_23   A_24
             0      0     A_c̄o   0
             0      0     A_43   A_c̄ō ]

TB = [ B_co; B_cō; 0; 0 ],  CT^{-1} = [ C_co  0  C_c̄o  0 ]

(subscripts: c / c̄ = controllable / uncontrollable, o / ō = observable / unobservable)

SLIDE 47

Kalman decomposition

The dimensions are related to the rank properties of the controllability and observability matrices ∆_n and Γ_n:

r1 = rank(Γ_n ∆_n)
r2 = rank(∆_n) − r1
r3 = rank(Γ_n) − r1
r4 = n − r1 − r2 − r3

More importantly, the subsystem [A_co, B_co, C_co, D] is controllable and observable.

SLIDE 48

Kalman decomposition

The subsystem [ [A_co 0; A_21 A_cō], [B_co; B_cō], [C_co 0], D ] is controllable.

The subsystem [ [A_co A_13; 0 A_c̄o], [B_co; 0], [C_co C_c̄o], D ] is observable.

The subsystem [A_c̄ō, 0, 0, D] is neither controllable nor observable.

SLIDE 49

Algorithm for the Kalman decomposition

◮ S_c = R(∆_n) is the controllable subspace
◮ S_ō = N(Γ_n) is the unobservable subspace

Data: ∆_n and Γ_n
Result: Q
begin
    Compute a numerical basis Q_c for S_c
    Compute a numerical basis Q_ō for S_ō
    Q_cō = basis of S_c ∩ S_ō
    Q_co = orthogonal complement of Q_cō in S_c
    Q_c̄ō = orthogonal complement of Q_cō in S_ō
    Q_c̄o = orthogonal complement of Q_co ⊕ Q_cō ⊕ Q_c̄ō
    Q = [ Q_co  Q_cō  Q_c̄o  Q_c̄ō ]
end

SLIDE 50

Algorithm for the Kalman decomposition

System transformation with Q: [A, B, C, D] → [Q^t A Q, Q^t B, CQ, D] and x → Qx. Being orthogonal, Q has the lowest possible condition number among maps computing a Kalman decomposition.

>> A = [-1.9 -2.4 -1.6; 1.2 1.7 0.8; 2.4 2 2.3]; B = [6; -2; -5]; C = [3 5 1]; D = 1;
>> sys = ss(A,B,C,D,1);
>> tf(sys)

    z^3 + 0.9 z^2 - 2.77 z + 1.035
    ------------------------------
    z^3 - 2.1 z^2 + 1.43 z - 0.315

>> [sysr,Q] = minreal(sys);
2 states removed.
>> tf(sysr)

    z + 2.3
    -------
    z - 0.7

SLIDE 51

Minimal realizations

A minimal realization is one that has the smallest-size A matrix among all triples [A, B, C] satisfying

G(z) = D + C(zI − A)^{-1}B

Theorem 6.

A realization [A, B, C, D] is minimal iff it is controllable and observable.

Theorem 7.

Let [A_i, B_i, C_i, D], i = 1, 2, be two minimal realizations of a transfer matrix. Then there exists a unique invertible matrix T such that

A_2 = T^{-1} A_1 T,  B_2 = T^{-1} B_1,  C_2 = C_1 T

Furthermore, T can be specified as

T = ∆_1 ∆_2^t (∆_2 ∆_2^t)^{-1} = [(Γ_2^t Γ_2)^{-1} Γ_2^t Γ_1]^{-1}
SLIDE 52

Part II: Realization theory

SLIDE 53

Block Hankel matrices and Markov parameters

Definition 8 (Block Hankel matrix).

Given a set of l × m matrices H_k, k = 1, ..., K, the il × jm block Hankel matrix H, with block dimensions i and j and (i + j − 1) ≤ K, is defined as

H = [ H_1      H_2      H_3      ...  H_j
      H_2      H_3      H_4      ...  H_{j+1}
      H_3      H_4      H_5      ...  H_{j+2}
      ...                             ...
      H_{i−1}  H_i      H_{i+1}  ...  H_{i+j−2}
      H_i      H_{i+1}  H_{i+2}  ...  H_{i+j−1} ]

The block Hankel matrix is constructed from the Markov parameters:

H_{p+q−1} = C A^{p+q−2} B,  p = 1, ..., i,  q = 1, ..., j

SLIDE 54

Block Hankel matrices and Markov parameters

Lemma 9 (Factorization property).

Let H_k = C A^{k−1} B be the Markov parameters of a linear system and H be an il × jm block Hankel matrix. Then H = Γ_i ∆_j.

Lemma 10 (Rank property).

Let H_k = C A^{k−1} B be the Markov parameters of a linear system and H be an il × jm block Hankel matrix with i ≥ n and j ≥ n. Then rank(H) = n is the dimension of the minimal state space representation.

The rank of a block Hankel matrix is always equal to the dimension of the observable and controllable part of the state space, if i and j are chosen large enough.
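A sketch (SISO, arbitrary data) that fills the block Hankel matrix from the Markov parameters and checks both lemmas:

% Sketch: rank and factorization of the block Hankel matrix
A = [0.5 0.1; 0 0.3]; B = [1; 2]; C = [1 0]; n = size(A, 1);
i = 4; j = 4; H = zeros(i, j); Gi = []; Dj = [];
for p = 1:i
    for q = 1:j
        H(p, q) = C*A^(p+q-2)*B;       % H_{p+q-1} = C A^{p+q-2} B
    end
end
for p = 1:i, Gi = [Gi; C*A^(p-1)]; end   % Gamma_i
for q = 1:j, Dj = [Dj, A^(q-1)*B]; end   % Delta_j
rank(H)                                  % = n (Lemma 10)
norm(H - Gi*Dj)                          % = 0 (Lemma 9)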

SLIDE 55

Block Hankel matrices and Markov parameters

What block dimensions are 'large enough'? G(z) = H_0 + H_1 z^{-1} + H_2 z^{-2} + ... has an infinite number of Markov parameters, which suggests infinite block dimensions for H. Can a finite-dimensional block Hankel matrix H suffice?

Define H1 and H2 as the submatrices consisting of the first i − 1 block rows and the first j − 1 block columns of H, respectively.

Lemma 11 (Partial realizability criterion).

If rank(H) = rank(H1) = rank(H2) = n, then the blocks of H are Markov parameters of a linear system of minimal state space dimension n.

SLIDE 56

Block Hankel matrices and Markov parameters

Example: Consider the SISO process

y_k = (1/2) y_{k−1} − (1/4) y_{k−2} + u_k

The impulse response is stable and dies out fast; the Hankel singular values indicate that the system order is 2.

>> sys = tf([1 0 0], [1 -1/2 1/4], 1);
>> statesys = ss(sys);            % state space representation
>> [h,t] = impulse(sys);
>> H = hankel(h(2:4), h(4:6))
>> h(1:5)'
    1.0000    0.5000    0    -0.1250    -0.0625

[Figure: impulse response h_k and singular values of H versus its dimension; only two are nonzero]

SLIDE 57

Interface between past and future

The block Hankel matrix can be considered a linear transformation between past inputs and future outputs of a system. Let x_{−j} = 0 and

u_− = [u_{−1}; u_{−2}; ...; u_{−j}]

The present state equals x_0 = ∆_j u_−; if u_k = 0 for k ≥ 0,

y_+ = [y_0; y_1; ...; y_{i−1}] = Γ_i x_0 = Γ_i ∆_j u_− = H u_−

SLIDE 58

Interface between past and future

The state acts as an interface between the past and the future:

[Figure: timeline from k = −j to k = i − 1; the past inputs u_{−j}, ..., u_{−1} are compressed into the state x_0 = ∆_j u_−, which generates the future outputs y_0, y_1, ..., y_{i−1}]

The input-output relation implies that the singular values of the block Hankel matrix characterize, in a quantitative way, the input-output properties of the system, and are important with respect to

◮ controllability
◮ observability
◮ model reduction

SLIDE 59

Energy interpretation of Controllability

The trajectory from a state x_0 to x_k can be expressed as

x_k − A^k x_0 = [B  AB  A²B  ...  A^{k−1}B] [u_{k−1}; u_{k−2}; ...; u_0] = ∆_k u

with a guaranteed solution if (x_k − A^k x_0) ∈ R(∆_k). From here on we assume k ≥ n and rank(∆_k) = n.

SLIDE 60

Energy interpretation of Controllability

Energy of the input:

‖u‖² = ∑_{i=0}^{k−1} u_i^t u_i

Find the input of minimum energy that brings the system from initial state x_0 to end state x_k at time k. This is an underdetermined set of equations in the unknowns u_i, i = 0, ..., k − 1:

u = ∆_k^† (x_k − A^k x_0) + ∆_k^⊥ z

with z an arbitrary vector of size (mk − n).

SLIDE 61

Energy interpretation of Controllability

The minimum energy is simply the first term. Using the SVD

∆_k = U_k Σ_k V_k^t,   U_k ∈ R^{n×n}, Σ_k ∈ R^{n×n}, V_k^t ∈ R^{n×mk}

the minimum norm solution is

u = V_k Σ_k^{-1} U_k^t (x_k − A^k x_0)

with minimum energy

‖u‖²_min = (x_k − A^k x_0)^t U_k diag(1/σ_1², 1/σ_2², ..., 1/σ_n²) U_k^t (x_k − A^k x_0)

SLIDE 62

Energy interpretation of controllability

◮ If (x_k − A^k x_0) lies in the direction of the first column vector of U_k, the minimal input energy ‖u‖² = (1/σ_1²)‖x_k − A^k x_0‖² is relatively small: little energy is needed to go from x_0 to x_k
◮ If (x_k − A^k x_0) lies in the direction of the last column of U_k, the required energy ‖u‖² = (1/σ_n²)‖x_k − A^k x_0‖² is relatively large

The singular values dictate the energy efficiency of a state transition x_0 → x_k. Note that a similarity transformation of a state space system does not preserve the singular values of the controllability matrix; for example, with T = Σ_k^{-1} U_k^t,

T ∆_k = Σ_k^{-1} U_k^t U_k Σ_k V_k^t = V_k^t

SLIDE 63

Energy interpretation of observability

Consider the autonomous linear system

x_{k+1} = A x_k
y_k = C x_k

For a sequence of outputs:

[y_0; y_1; ...; y_{k−1}] = [C; CA; ...; CA^{k−1}] x_0

or, in short, y = Γ_k x_0. From here on we assume k ≥ n and rank(Γ_n) = n.

SLIDE 64

Energy interpretation of observability

Energy of the output:

J_y = ∑_{i=0}^{k−1} y_i^t y_i = ∑_{i=0}^{k−1} x_0^t ((A^i)^t C^t C A^i) x_0 = x_0^t Γ_k^t Γ_k x_0

Take the SVD of the observability matrix:

Γ_k = U_k Σ_k V_k^t,   U_k ∈ R^{kl×n}, Σ_k ∈ R^{n×n}, V_k^t ∈ R^{n×n}
SLIDE 65

Energy interpretation of observability

J_y = x_0^t V_k Σ_k² V_k^t x_0 = x_0^t V_k diag(σ_1², σ_2², ..., σ_n²) V_k^t x_0

◮ x_0 proportional to the i-th column vector of V_k ⇒ J_y = σ_i² ‖x_0‖²
◮ the 'most observable' direction is the first column, with J_y = σ_1² ‖x_0‖²
◮ the 'least observable' direction is the last column, with J_y = σ_n² ‖x_0‖²
◮ the singular values of Γ_k are not preserved under a similarity transformation T: Γ_k → Γ_k T^{-1}

SLIDE 66

A numerical example

(MATLAB demo)

Consider the system

A = [ −1.5  −4.4
       1     2.7 ],  B = [ 1; −1 ],  C = [ 2  −1 ],  D = 2

For k = 4, the controllability matrix is equal to

∆_4 = [ 1   2.9   3.13   2.741
       −1  −1.7  −1.69  −1.433 ]

with rank 2 (controllable). Use the SVD ∆_4 = U_4 Σ_4 V_4^t, zero initial state, and x_4 = [0.8687; −0.4954]:

U_4 = [ 0.8687   0.4954
       −0.4954   0.8687 ],  Σ_4 = diag(5.9461, 0.4007)

State x_4 is reachable with minimum energy

‖u‖²_min = (1/5.9461²) ‖[0.8687; −0.4954]‖² = 1/5.9461² = 0.0283
SLIDE 67

A numerical example

The required input over these 4 time steps is

u_min = V_4 Σ_4^{-1} U_4^t [0.8687; −0.4954] = [0.0386; 0.0951; 0.1006; 0.0874]

If x_4 = [0.4954; 0.8687], then the minimum energy is

‖u‖²_min = (1/0.4007²) ‖[0.4954; 0.8687]‖² = 1/0.4007² = 6.229

Applying the transformation T = Σ_4^{-1} U_4^t makes the energy price the same for any state vector of the same norm, assuming x_0 = 0. What happens to the required energy when you vary k in x_k?

SLIDE 68

A numerical example

[Figure: square root energy plot (full line) for the non-transformed and transformed system (dotted unit circle)]

SLIDE 69

A numerical example

The observability matrix for k = 4 is

Γ_4 = [ 2     −1
       −4    −11.5
       −5.5  −13.45
       −5.2  −12.115 ]

The rank equals 2 (observable). Using the SVD Γ_4 = U_4 Σ_4 V_4^t:

V_4 = [ 0.3692  −0.9293
        0.9293   0.3692 ],  Σ_4 = diag(23.0829, 2.3224)

and x_0 = [0.3692; 0.9293] produces the maximum amount of energy:

J_y = 23.0829² ‖[0.3692; 0.9293]‖² = 23.0829² = 532.8223

SLIDE 70

A numerical example

The output resulting from the initial state x_0 is

y = Γ_4 [0.3692; 0.9293] = [−0.1909; −12.1643; −14.5304; −13.1789]

On the other hand, the initial state x_0 = [−0.9293; 0.3692] gives

J_y = 2.3224² ‖[−0.9293; 0.3692]‖² = 2.3224² = 5.3934

SLIDE 71

A numerical example

[Figure: square root energy plot (full line) for the non-transformed and transformed system (dotted unit circle)]

SLIDE 72

Realization algorithms

Given a sequence of impulse response matrices H_k, k = 0, 1, ..., K, how do we find a state space model [A, B, C, D] such that

H_0 = D,  H_k = C A^{k−1} B ?

Two steps:

1. Estimate the minimal system order n
2. Determine the matrices A, B and C

Use a block Hankel matrix H with block dimensions satisfying i + j − 1 ≤ K and large enough to satisfy the partial realization criterion! In that case rank(H) = n.

SLIDE 73

Realization algorithms

In the second step, the factorization property H = Γ_i ∆_j is exploited.

Property 1 (Algorithm of Kung).

Under the assumption that k ≥ n and n = rank(Γ_n), the following holds for all k:

[C; CA; ...; CA^{k−2}] A = [CA; CA²; ...; CA^{k−1}]

Denote this result by Γ̄_k A = Γ̲_k, where Γ̄_k and Γ̲_k are Γ_k without its last and first block row, respectively. If the partial realization criterion is fulfilled,

A = Γ̄_k^† Γ̲_k

In a similar manner, using the controllability matrix (∆_k| and |∆_k denote ∆_k without its last and first block column):

A = (|∆_k)(∆_k|)^†
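The shift property is easy to verify numerically (a sketch; arbitrary observable data with l = 1):

% Sketch: A recovered from the shifted observability matrix
A = [0.5 0.1; 0 0.3]; C = [1 0]; k = 5;
Gk = []; for p = 1:k, Gk = [Gk; C*A^(p-1)]; end
A_hat = pinv(Gk(1:end-1, :)) * Gk(2:end, :)    % equals A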

SLIDE 74

Realization algorithms

Property 2 (Algorithm of Zeiger-McEwen).

Another property exploits the structure of the matrix G, a block Hankel matrix of block dimensions i and j built from the Markov parameters H_2, H_3, ..., H_{i+j}. Using the factorization H = Γ_i ∆_j, it is easy to show that

G = [ H_2      H_3      H_4      ...  H_{j+1}
      H_3      H_4      H_5      ...  H_{j+2}
      ...                             ...
      H_{i+1}  H_{i+2}  H_{i+3}  ...  H_{i+j} ]  =  Γ_i A ∆_j

By this observation, A can be obtained as

A = Γ_i^† G ∆_j^†

SLIDE 75

Realization algorithms

The two steps in finding a state space model [A, B, C, D] can be tackled with the SVD

H = [U_1 U_2] [S_1 0; 0 0] [V_1^t; V_2^t]

Thus n = rank(H) = rank(S_1), and

Γ_i = U_1 S_1^α T^{-1},  ∆_j = T S_1^{1−α} V_1^t

where α is an arbitrary real number and T an arbitrary non-singular matrix performing a similarity transformation³. The matrix C is then formed from the first l rows of U_1 S_1^{1/2}, and B from the first m columns of S_1^{1/2} V_1^t.

³ Often one chooses α = 0.5 (balanced realization) and T = I_n.
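Putting the pieces together, here is a sketch of the full SVD-based realization (α = 0.5, T = I), rebuilding the slide 56 system from its impulse response:

% Sketch: Kung-style realization from impulse response data
h = impulse(tf([1 0 0], [1 -0.5 0.25], 1), 0:10);  % y_k = 0.5 y_{k-1} - 0.25 y_{k-2} + u_k
Hk = h(2:end);                         % Markov parameters H_1, H_2, ... (H_0 = D = h(1))
i = 5; j = 5;
H = hankel(Hk(1:i), Hk(i:i+j-1));      % block Hankel (scalar blocks: SISO)
[U, S, V] = svd(H);
n = rank(S);                           % minimal order, here 2
U1 = U(:, 1:n); S1 = S(1:n, 1:n); V1 = V(:, 1:n);
Gi = U1*sqrtm(S1); Dj = sqrtm(S1)*V1'; % Gamma_i and Delta_j
C_est = Gi(1, :); B_est = Dj(:, 1); D_est = h(1);
A_est = pinv(Gi(1:end-1, :))*Gi(2:end, :);   % shift property
tf(ss(A_est, B_est, C_est, D_est, 1))  % reproduces 1/(1 - 0.5 z^-1 + 0.25 z^-2)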

SLIDE 76

Realization algorithms

The matrix A satisfies (Ū_1 S_1^{1/2}) A = U̲_1 S_1^{1/2}, where Ū_1 and U̲_1 are U_1 without its last and first block row. Therefore

A = S_1^{-1/2} (Ū_1^t Ū_1)^{-1} Ū_1^t U̲_1 S_1^{1/2}

The partial realization criterion ensures that (Ū_1^t Ū_1) is invertible; the orthonormality of U_1 can be exploited. With u the last block row of U_1,

Ū_1^t Ū_1 = I_n − u^t u

such that

(Ū_1^t Ū_1)^{-1} = I_n + u^t (I_l − u u^t)^{-1} u

Only a matrix of dimensions l × l must be inverted. The conditioning of this inverse depends on the singular values of u: for values close to 1, the inverse becomes ill-conditioned. The partial realization criterion guarantees that the maximal singular value of u is smaller than 1.

SLIDE 77

An example: algorithm of Kung

Consider the SISO system from earlier (D = 1):

y_k = (1/2) y_{k−1} − (1/4) y_{k−2} + u_k

The block Hankel matrix of dimension 5 is

H_5 = [ 0.5      0       −0.125   −0.0625   0
        0       −0.125   −0.0625   0        0.0156
       −0.125   −0.0625   0        0.0156   0.0078
       −0.0625   0        0.0156   0.0078   0
        0        0.0156   0.0078   0       −0.002  ]

with singular values 0.5377, 0.1568, 0, 0, 0. Using Kung's approach,

A = [ 0.0440  0.4795
     −0.4795  0.4560 ],  B = [ 0.7078; −0.0318 ],  C = [ 0.7078  0.0318 ]

with transfer function

G(z) = 1 / (1 − 0.5 z^{-1} + 0.25 z^{-2})

SLIDE 78

An example: algorithm of Zeiger-McEwen

The matrix G is

G_5 = [ 0       −0.125   −0.0625   0        0.0156
       −0.125   −0.0625   0        0.0156   0.0078
       −0.0625   0        0.0156   0.0078   0
        0        0.0156   0.0078   0       −0.002
        0.0156   0.0078   0       −0.002   −0.001  ]

The only difference compared to Kung's algorithm lies in determining A (take T = I and α = 0.5):

A = Γ_i^† G ∆_j^†

Same results, but more computation and memory:

◮ an extra Hankel matrix G
◮ calculation of two pseudo-inverses instead of one

SLIDE 79

Part III: Model Reduction

SLIDE 80

Controllability and observability Grammians

Assume a system [A, B, C, D] that is controllable and observable. For an 'extended' controllability matrix ∆_k, the controllability Grammian P_k is defined as

P_k = ∆_k ∆_k^t = ∑_{i=0}^{k−1} A^i B B^t (A^i)^t

Because the system is controllable and k ≥ n, P_k is positive definite. The observability Grammian Q_k is defined as

Q_k = Γ_k^t Γ_k = ∑_{i=0}^{k−1} (A^i)^t C^t C A^i

If the system is observable and k ≥ n, Q_k is positive definite. The following relations hold for all k:

A P_k A^t − P_k = −B B^t + A^k B B^t (A^k)^t
A^t Q_k A − Q_k = −C^t C + (A^k)^t C^t C A^k
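For a stable A the infinite-horizon Grammians come out of dlyap; a sketch using the stable example system of slide 66:

% Sketch: P_inf and Q_inf from the discrete Lyapunov equations
A = [-1.5 -4.4; 1 2.7]; B = [1; -1]; C = [2 -1];   % poles 0.5 and 0.7
P = dlyap(A, B*B');                  % solves A P A' - P = -B B'
Q = dlyap(A', C'*C);                 % solves A' Q A - Q = -C' C
% For large k these coincide with the finite products:
k = 50; Dk = B; for p = 1:k-1, Dk = [Dk, A^p*B]; end
norm(P - Dk*Dk')                     % small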

SLIDE 81

Controllability and observability Grammians

Theorem 12.

If A is stable, P_∞ and Q_∞ are the unique solutions of the Lyapunov equations

A P_∞ A^t − P_∞ = −B B^t
A^t Q_∞ A − Q_∞ = −C^t C

If the system is controllable, P_∞ is positive definite (no zero eigenvalues). If it is observable, Q_∞ is positive definite (no zero eigenvalues).

SLIDE 82

A contragradient transformation

We can modify the Lyapunov equations as

(TAT^{-1})(T P_∞ T^t)(T^{-t} A^t T^t) − (T P_∞ T^t) = −(TB)(B^t T^t)
(T^{-t} A^t T^t)(T^{-t} Q_∞ T^{-1})(TAT^{-1}) − (T^{-t} Q_∞ T^{-1}) = −(T^{-t} C^t)(C T^{-1})

The model [A, B, C, D] is replaced by [TAT^{-1}, TB, CT^{-1}, D], and the Grammians are modified as

P_∞ → T P_∞ T^t,  Q_∞ → T^{-t} Q_∞ T^{-1}

This contragradient transformation preserves the eigenvalues of the product of the two matrices involved:

λ(P_∞ Q_∞) = λ((T P_∞ T^t)(T^{-t} Q_∞ T^{-1})) = λ(T P_∞ Q_∞ T^{-1})

The eigenvalues of P_∞ Q_∞ are input-output invariants of the linear system [A, B, C, D].

SLIDE 83

Hankel singular values and Hankel norm

The eigenvalues of P_∞ Q_∞ are the squared singular values of the block Hankel matrix H(∞, ∞), if the system matrix A is stable:

σ²(H(∞, ∞)) = λ(H(∞, ∞) H(∞, ∞)^t) = λ(Γ_∞ ∆_∞ ∆_∞^t Γ_∞^t) = λ(Γ_∞^t Γ_∞ ∆_∞ ∆_∞^t) = λ(Q_∞ P_∞)

The Hankel norm is the maximal gain of the future output sequence with respect to a past input sequence of finite L2 energy⁴:

‖G(z)‖_H = max_{u_−} ‖y_+‖/‖u_−‖ = max_{u_−} ‖H u_−‖/‖u_−‖ = σ_max(H(∞, ∞))

⁴ See: Interface between past and future.

SLIDE 84

Balanced realization

Assume we have obtained the solutions of the Lyapunov equations⁵. Then there exists a non-singular matrix T which transforms P_∞ and Q_∞ in a contragradient way such that they are diagonal and equal. Use the eigenvalue decomposition

P_∞ Q_∞ = XΛX^{-1} = X D′ Λ^{1/2} D′^t · D′^{-t} Λ^{1/2} D′^{-1} X^{-1}

where X D′ Λ^{1/2} D′^t X^t = P_∞ and X^{-t} D′^{-t} Λ^{1/2} D′^{-1} X^{-1} = Q_∞. Here D′ is an arbitrary non-singular diagonal matrix. We exploit this freedom by taking

X^{-1} P_∞ X^{-t} = D′ Λ^{1/2} D′^t = D_p
X^t Q_∞ X = D′^{-t} Λ^{1/2} D′^{-1} = D_q

with D_p and D_q both diagonal matrices. Hence

D′⁴ = D_p D_q^{-1} = D′ Λ^{1/2} D′^t D′ Λ^{-1/2} D′^t

⁵ Solving the Lyapunov equations amounts to solving linear matrix equations.

SLIDE 85

Balanced realization

We can now formulate a transformation T = D′^{-1} X^{-1} that diagonalizes P_∞ and Q_∞:

D′^{-1} X^{-1} P_∞ X^{-t} D′^{-t} = Λ^{1/2} = D′^t X^t Q_∞ X D′

The transformed model [TAT^{-1}, TB, CT^{-1}, D] is said to be balanced.

Definition 13 (Balanced state space model).

A state space model [A, B, C, D] is balanced if the controllability and observability Grammians (which are the solutions of the Lyapunov equations) are diagonal and equal.

SLIDE 86

Balancing algorithm

Data: minimal state space matrices A, B, C, D
Result: similarity transformation T
begin
    Solve the Lyapunov equations for P_∞ and Q_∞:
        A P_∞ A^t − P_∞ = −B B^t
        A^t Q_∞ A − Q_∞ = −C^t C
    Compute the eigenvalue decomposition P_∞ Q_∞ = XΛX^{-1}
    Compute D_p and D_q:
        D_p = X^{-1} P_∞ X^{-t}
        D_q = X^t Q_∞ X
    Compute the diagonal matrix D_d = (D_p D_q^{-1})^{1/4}
    Compute the similarity transform T = D_d^{-1} X^{-1}
end
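A sketch of this algorithm on the slide 66 system, checked against the toolbox function balreal:

% Sketch: balancing transformation via the contragradient decomposition
A = [-1.5 -4.4; 1 2.7]; B = [1; -1]; C = [2 -1]; D = 2;
P = dlyap(A, B*B'); Q = dlyap(A', C'*C);
[X, L] = eig(P*Q);
Dp = X\P/X';                         % X^{-1} P X^{-t}, diagonal
Dq = X'*Q*X;                         % X^{t} Q X, diagonal
Dd = (Dp/Dq)^(1/4);                  % D' = (Dp Dq^{-1})^{1/4}
T  = Dd\inv(X);                      % T = D'^{-1} X^{-1}
T*P*T'                               % = Lambda^{1/2}, diagonal
T'\Q/T                               % = Lambda^{1/2} as well
[sysb, g] = balreal(ss(A, B, C, D, 1));   % g: the same Hankel singular values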

SLIDE 87

Model reduction and controllability

Start from a balanced realization and partition the singular values of the Hankel matrix into strong and weak ones:

S = [ S_1  0
      0    S_2 ]

with S_1 containing the r largest and S_2 the n − r smallest singular values. For a zero initial state,

x_k = ∆_k [u_{k−1}; u_{k−2}; ...; u_0] = S^{1/2} V^t u

With the partitioning applied to x_k and the right singular matrix:

x_k = [ x_k^1 ]  =  [ S_1^{1/2} V_1^t ] u
      [ x_k^2 ]     [ S_2^{1/2} V_2^t ]

SLIDE 88

Model reduction and controllability

Let u^1(·) be the input sequence that brings the system from x_0 = 0 to [x_k^1; 0], and u^2(·) the input sequence that brings the system from x_0 = 0 to [0; x_k^2]. Then, over inputs of equal energy,

max over all u^1, u^2 of ‖x_k^2‖²/‖x_k^1‖² = (max_{u^2} ‖S_2^{1/2} V_2^t u^2‖²) / (min_{u^1} ‖S_1^{1/2} V_1^t u^1‖²) = σ_{r+1}/σ_r

Hence

‖x_k‖² = ‖x_k^1‖² + ‖x_k^2‖² = ‖x_k^1‖² (1 + ‖x_k^2‖²/‖x_k^1‖²) ≤ ‖x_k^1‖² (1 + σ_{r+1}/σ_r)

If the gap between the singular values σ_r and σ_{r+1} is large, very little information about the state is lost.

SLIDE 89

Model reduction and observability

Assume an initial state x_0, and let the system operate autonomously. The output follows from

y = [y_0; y_1; ...; y_k] = Γ_k x_0 = U S^{1/2} x_0

Applying the same partitioning to x_0 and the left singular matrix U, we get

y = y^1 + y^2 = U_1 S_1^{1/2} x_0^1 + U_2 S_2^{1/2} x_0^2

It can be verified that

max over all x_0^1, x_0^2 of ‖y^2‖²/‖y^1‖² = σ_{r+1}/σ_r

Hence

‖y‖² = ‖y^1‖² + ‖y^2‖² = ‖y^1‖² (1 + ‖y^2‖²/‖y^1‖²) ≤ ‖y^1‖² (1 + σ_{r+1}/σ_r)

Whenever σ_r ≫ σ_{r+1}, the part of the state corresponding to the largest singular values, when expressed in a balanced realization, contributes most of the energy to the output.

SLIDE 90

Balanced model reduction

Assume a balanced model [A, B, C, D]:

A Σ A^t − Σ = −B B^t
A^t Σ A − Σ = −C^t C

where

Σ = [ Σ_1  0
      0    Σ_2 ]

is a partition into the r largest and n − r smallest singular values. The state space model can then be truncated, with minimal loss in terms of controllability and observability, by partitioning

[ A  B ]     [ A_11  A_12  B_1 ]  r
[ C  D ]  =  [ A_21  A_22  B_2 ]  n − r
             [ C_1   C_2   D   ]  l
               r     n − r  m

The reduced model is then [A_11, B_1, C_1, D]⁶.

⁶ For discrete-time systems, the reduced model is also balanced.
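A sketch of balanced truncation together with the error bound of Theorem 14 below (an arbitrary stable third-order SISO example, truncated to r = 1):

% Sketch: balanced truncation and the Theorem 14 bound
sys = ss(tf(1, [1 -1.2 0.47 -0.06], 1));   % poles 0.3, 0.4, 0.5
[sysb, g] = balreal(sys);                  % g: Hankel singular values
r = 1; [Ab, Bb, Cb, Db] = ssdata(sysb);
sysr = ss(Ab(1:r, 1:r), Bb(1:r, :), Cb(:, 1:r), Db, 1);   % [A11, B1, C1, D]
err   = norm(sys - sysr, inf)              % H-infinity error
bound = 2*sum(g(r+1:end))                  % Theorem 14: err <= bound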

SLIDE 91

Stability of balanced reduced models

Let [A, B, C, D] be a balanced realization with Hankel singular values σ_1 ≥ ... ≥ σ_r ≥ σ_{r+1} ≥ ... ≥ σ_n > 0, and compute a reduced model [A_11, B_1, C_1, D] retaining the r largest singular values of the Hankel matrix.

Theorem 14.

If σ_r > σ_{r+1}, then [A_11, B_1, C_1, D] is a minimal stable realization. Let G_r(z) be the corresponding transfer matrix. Then

‖G(z) − G_r(z)‖_∞ ≤ 2(σ_{r+1} + ... + σ_n)

with strict inequality if σ_k = σ_n for some k < n.

Lecture 4: State Space Models and Realization Theory 91 / 91