The Behavioral Approach to Systems Theory. Paolo Rapisarda, Un. of Southampton (PowerPoint PPT Presentation)

SLIDE 1

The Behavioral Approach to Systems Theory

Paolo Rapisarda, Un. of Southampton, U.K. & Jan C. Willems, K.U.Leuven, Belgium MTNS 2006 Kyoto, Japan, July 24–28, 2006

SLIDE 2

Lecture 4: Bilinear and quadratic differential forms Lecturer: Paolo Rapisarda

SLIDE 3

Part I: Basics

SLIDE 4

Outline

  • Motivation and aim
  • Definition
  • Two-variable polynomial matrices
  • The calculus of B/QDFs

SLIDE 5

Dynamics and functionals in systems and control

Instances: Lyapunov theory, performance criteria, etc.
Linear case ⟹ quadratic and bilinear functionals.

SLIDE 6

Dynamics and functionals in systems and control

Instances: Lyapunov theory, performance criteria, etc.
Linear case ⟹ quadratic and bilinear functionals.
Usually: state-space equations, constant functionals.
However, tearing and zooming ⇏ state-space eq.s

SLIDE 7

Dynamics and functionals in systems and control

Instances: Lyapunov theory, performance criteria, etc.
Linear case ⟹ quadratic and bilinear functionals.
Usually: state-space equations, constant functionals.
However, tearing and zooming ⇏ state-space eq.s
High-order differential equations! ...involving also latent variables...

SLIDE 8

Example : a mechanical system

m1 d²w1/dt² + k1 w1 − k1 w2 = 0
−k1 w1 + m2 d²w2/dt² + (k1 + k2) w2 = 0

SLIDE 9

Example : a mechanical system

m1 d²w1/dt² + k1 w1 − k1 w2 = 0
−k1 w1 + m2 d²w2/dt² + (k1 + k2) w2 = 0

m1 m2 d⁴w/dt⁴ + (k1 m1 + k2 m1 + k1 m2) d²w/dt² + k1 k2 w = 0

SLIDE 10

Example : a mechanical system

m1 d²w1/dt² + k1 w1 − k1 w2 = 0
−k1 w1 + m2 d²w2/dt² + (k1 + k2) w2 = 0

m1 m2 d⁴w/dt⁴ + (k1 m1 + k2 m1 + k1 m2) d²w/dt² + k1 k2 w = 0

Stability, stored energy, conservation laws?
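The elimination of w2 behind this fourth-order equation can be replayed symbolically. A minimal sympy sketch (our own variable names, not the slides'):

```python
import sympy as sp

t = sp.symbols('t')
m1, m2, k1, k2 = sp.symbols('m1 m2 k1 k2', positive=True)
w1 = sp.Function('w1')(t)

# First equation, m1 w1'' + k1 w1 - k1 w2 = 0, solved for w2:
w2 = (m1 * w1.diff(t, 2) + k1 * w1) / k1

# Substitute into the second equation: -k1 w1 + m2 w2'' + (k1 + k2) w2 = 0
eq2 = -k1 * w1 + m2 * w2.diff(t, 2) + (k1 + k2) * w2

# Clearing the 1/k1 factor reproduces the fourth-order equation in w1
target = (m1 * m2 * w1.diff(t, 4)
          + (k1 * m1 + k2 * m1 + k1 * m2) * w1.diff(t, 2)
          + k1 * k2 * w1)
assert sp.simplify(sp.expand(k1 * eq2) - target) == 0
```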

SLIDE 11

Aim

An effective algebraic representation

  • of bilinear and quadratic functionals
  • of the system variables and their derivatives

Operations/properties of functionals

  • algebraic operations/properties of representation

...a calculus of these functionals!

SLIDE 12

Outline

  • Motivation and aim
  • Definition
  • Two-variable polynomial matrices
  • The calculus of B/QDFs

SLIDE 13

Bilinear differential forms (BDFs)

Φ := (Φ_{k,ℓ})_{k,ℓ=0,…,L},  Φ_{k,ℓ} ∈ R^{w1×w2}

LΦ : C∞(R, R^{w1}) × C∞(R, R^{w2}) → C∞(R, R)

LΦ(w1, w2) := [w1ᵀ (dw1/dt)ᵀ ⋯] · [Φ_{0,0} Φ_{0,1} ⋯ ; Φ_{1,0} Φ_{1,1} ⋯ ; ⋮ ⋮ ⋱] · col(w2, dw2/dt, ⋯)
            = Σ_{k,ℓ} (d^k w1/dt^k)ᵀ Φ_{k,ℓ} (d^ℓ w2/dt^ℓ)

SLIDE 14

Quadratic differential forms (QDFs)

Φ := (Φ_{k,ℓ})_{k,ℓ=0,…,L},  Φ_{k,ℓ} ∈ R^{w×w}, symmetric, i.e. Φ_{k,ℓ} = Φ_{ℓ,k}ᵀ

QΦ : C∞(R, R^w) → C∞(R, R)

QΦ(w) := [wᵀ (dw/dt)ᵀ ⋯] · [Φ_{0,0} Φ_{0,1} ⋯ ; Φ_{1,0} Φ_{1,1} ⋯ ; ⋮ ⋮ ⋱] · col(w, dw/dt, ⋯)
       = Σ_{k,ℓ=0}^{L} (d^k w/dt^k)ᵀ Φ_{k,ℓ} (d^ℓ w/dt^ℓ)
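A QDF is mechanical to evaluate from its coefficient matrices. A minimal sympy sketch for the scalar case (the trajectory and coefficients below are illustrative choices, not from the slides):

```python
import sympy as sp

t = sp.symbols('t')
w = sp.exp(-t) * sp.sin(t)   # sample scalar C-infinity trajectory

# Coefficients Phi_{k,l} of a scalar QDF; here Phi(zeta, eta) = 1 + zeta*eta,
# so Q_Phi(w) = w^2 + (dw/dt)^2
Phi = {(0, 0): 1, (1, 1): 1}

def qdf(Phi, w, t):
    """Evaluate Q_Phi(w) = sum_{k,l} (d^k w/dt^k) Phi_{k,l} (d^l w/dt^l)."""
    return sp.expand(sum(c * sp.diff(w, t, k) * sp.diff(w, t, l)
                         for (k, l), c in Phi.items()))

assert sp.simplify(qdf(Phi, w, t) - (w**2 + sp.diff(w, t)**2)) == 0
```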

SLIDE 15

Example: total energy in mechanical system

QE(w) = (1/2)[(dw1/dt)² + (dw2/dt)²] + (1/2)[k1 w1² + k2 w2²]

= [w1 w2 dw1/dt dw2/dt] · diag(k1/2, k2/2, 1/2, 1/2) · col(w1, w2, dw1/dt, dw2/dt)

(masses normalized to m1 = m2 = 1)
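The identity between the scalar energy expression and its quadratic-form representation can be checked symbolically. A small sympy sketch (assuming, as the diagonal matrix suggests, masses normalized to 1):

```python
import sympy as sp

t, k1, k2 = sp.symbols('t k1 k2', positive=True)
w1, w2 = sp.Function('w1')(t), sp.Function('w2')(t)

# Quadratic-form representation: v^T diag(k1/2, k2/2, 1/2, 1/2) v
v = sp.Matrix([w1, w2, w1.diff(t), w2.diff(t)])
M = sp.diag(k1/2, k2/2, sp.Rational(1, 2), sp.Rational(1, 2))
quadratic_form = (v.T * M * v)[0, 0]

# Scalar form of the slide's total energy
energy = (w1.diff(t)**2 + w2.diff(t)**2) / 2 + (k1*w1**2 + k2*w2**2) / 2
assert sp.simplify(quadratic_form - energy) == 0
```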

SLIDE 16

Outline

  • Motivation and aim
  • Definition
  • Two-variable polynomial matrices
  • The calculus of B/QDFs

SLIDE 17

Two-variable polynomial matrices for BDFs

Φ := (Φ_{k,ℓ})_{k,ℓ=0,…,L},  Φ_{k,ℓ} ∈ R^{w1×w2}

LΦ(w1, w2) = Σ_{k,ℓ=0}^{L} (d^k w1/dt^k)ᵀ Φ_{k,ℓ} (d^ℓ w2/dt^ℓ)  ⟷  Φ(ζ, η) = Σ_{k,ℓ=0}^{L} Φ_{k,ℓ} ζ^k η^ℓ

SLIDE 18

Two-variable polynomial matrices for BDFs

Φ := (Φ_{k,ℓ})_{k,ℓ=0,…,L},  Φ_{k,ℓ} ∈ R^{w1×w2}

LΦ(w1, w2) = Σ_{k,ℓ=0}^{L} (d^k w1/dt^k)ᵀ Φ_{k,ℓ} (d^ℓ w2/dt^ℓ)  ⟷  Φ(ζ, η) = Σ_{k,ℓ=0}^{L} Φ_{k,ℓ} ζ^k η^ℓ

SLIDE 19

Two-variable polynomial matrices for BDFs

Φ := (Φ_{k,ℓ})_{k,ℓ=0,…,L},  Φ_{k,ℓ} ∈ R^{w1×w2}

LΦ(w1, w2) = Σ_{k,ℓ=0}^{L} (d^k w1/dt^k)ᵀ Φ_{k,ℓ} (d^ℓ w2/dt^ℓ)  ⟷  Φ(ζ, η) = Σ_{k,ℓ=0}^{L} Φ_{k,ℓ} ζ^k η^ℓ

2-variable polynomial matrix associated with LΦ

SLIDE 20

Two-variable polynomial matrices for QDFs

Φ := (Φ_{k,ℓ})_{k,ℓ=0,…,L},  Φ_{k,ℓ} ∈ R^{w×w} symmetric (Φ_{k,ℓ} = Φ_{ℓ,k}ᵀ)

QΦ(w) = Σ_{k,ℓ=0}^{L} (d^k w/dt^k)ᵀ Φ_{k,ℓ} (d^ℓ w/dt^ℓ)  ⟷  Φ(ζ, η) = Σ_{k,ℓ=0}^{L} Φ_{k,ℓ} ζ^k η^ℓ

symmetric: Φ(ζ, η) = Φ(η, ζ)⊤
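The association Φ_{k,ℓ} ↔ Φ(ζ, η) and the symmetry condition can be sketched in a few lines of sympy (the coefficient blocks below are an illustrative example of our own):

```python
import sympy as sp

zeta, eta = sp.symbols('zeta eta')

def two_variable_poly(Phi, m):
    """Assemble Phi(zeta, eta) = sum_{k,l} Phi_{k,l} zeta^k eta^l from m x m blocks."""
    return sum((sp.Matrix(B) * zeta**k * eta**l for (k, l), B in Phi.items()),
               sp.zeros(m, m))

def is_symmetric(P):
    """Check the QDF property Phi(zeta, eta) = Phi(eta, zeta)^T."""
    tmp = sp.Symbol('tmp')
    swapped = P.subs(zeta, tmp).subs(eta, zeta).subs(tmp, eta).T
    return sp.simplify(P - swapped) == sp.zeros(*P.shape)

# Phi_{0,1} = Phi_{1,0}^T, so the two-variable polynomial matrix is symmetric
Phi = {(0, 0): [[1, 0], [0, 2]], (0, 1): [[0, 1], [0, 0]], (1, 0): [[0, 0], [1, 0]]}
P = two_variable_poly(Phi, 2)
assert is_symmetric(P)
```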

SLIDE 21

Example: total energy in mechanical system

QE(w1, w2) = [w1 w2 dw1/dt dw2/dt] · diag(k1/2, k2/2, 1/2, 1/2) · col(w1, w2, dw1/dt, dw2/dt)

E(ζ, η) = [k1/2 0 ; 0 k2/2] + (ζη/2)·[1 0 ; 0 1] = diag(k1/2 + ζη/2, k2/2 + ζη/2)

SLIDE 22

Historical intermezzo

SLIDE 23

Historical intermezzo

stability tests (‘60s)

SLIDE 24

Historical intermezzo

stability tests (‘60s) path integrals (‘60s)

SLIDE 25

Historical intermezzo

stability tests (‘60s) path integrals (‘60s) Lyapunov functionals (‘80s)

SLIDE 26

Historical intermezzo

stability tests (‘60s) path integrals (‘60s) Lyapunov functionals (‘80s) QDFs (1998)

SLIDE 27

Outline

  • Motivation and aim
  • Definition
  • Two-variable polynomial matrices
  • The calculus of B/QDFs

SLIDE 28

The calculus of B/QDFs

Using powers of ζ and η as placeholders: B/QDF ⟷ two-variable polynomial matrix

SLIDE 29

The calculus of B/QDFs

Using powers of ζ and η as placeholders: B/QDF ⟷ two-variable polynomial matrix

Operations and properties of B/QDF ⟷ algebraic operations/properties on the two-variable matrix
SLIDE 30

Differentiation

Φ ∈ R_s^{w×w}[ζ, η].  Φ̇, the derivative of QΦ:

Q_{Φ̇} : C∞(R, R^w) → C∞(R, R),  Q_{Φ̇}(w) := (d/dt)(QΦ(w))

Φ̇(ζ, η) = (ζ + η) Φ(ζ, η)

Two-variable version of Leibniz's rule
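The rule Φ̇(ζ, η) = (ζ + η)Φ(ζ, η) can be spot-checked symbolically. A minimal sympy sketch for a scalar Φ of our choosing:

```python
import sympy as sp

t, zeta, eta = sp.symbols('t zeta eta')
w = sp.Function('w')(t)

def qdf(P, w, t):
    """Evaluate the scalar QDF induced by a two-variable polynomial P(zeta, eta)."""
    return sum(c * sp.diff(w, t, k) * sp.diff(w, t, l)
               for (k, l), c in sp.Poly(P, zeta, eta).terms())

Phi = zeta * eta + 1                      # Q_Phi(w) = (dw/dt)^2 + w^2
Phi_dot = sp.expand((zeta + eta) * Phi)   # the claimed two-variable derivative

lhs = sp.diff(qdf(Phi, w, t), t)          # d/dt Q_Phi(w)
rhs = qdf(Phi_dot, w, t)                  # Q_{Phi_dot}(w)
assert sp.simplify(lhs - rhs) == 0
```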

SLIDE 31

Integration

D(R, R^•): the C∞ compact-support trajectories

LΦ : D(R, R^{w1}) × D(R, R^{w2}) → D(R, R)

∫LΦ : D(R, R^{w1}) × D(R, R^{w2}) → R,  ∫LΦ(w1, w2) := ∫_{−∞}^{+∞} LΦ(w1, w2) dt

Analogous for QDFs
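One consequence worth spot-checking: on trajectories that decay at ±∞, a QDF of the form (d/dt)QΦ integrates to zero. A sympy sketch (using a rapidly decaying Gaussian as a stand-in for a compact-support trajectory):

```python
import sympy as sp

t = sp.symbols('t')
w = sp.exp(-t**2)   # rapidly decaying stand-in for a compact-support trajectory

# Q_Phi(w) = (dw/dt)^2; its time derivative integrates to zero over the whole
# line, so d/dt Q_Phi induces the zero functional on decaying trajectories
integrand = sp.diff(sp.diff(w, t)**2, t)
assert sp.integrate(integrand, (t, -sp.oo, sp.oo)) == 0
```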

SLIDE 32

Part II: Applications

SLIDE 33

Outline

  • Lyapunov theory
  • Dissipativity theory
  • Balancing and model reduction

SLIDE 34

Nonnegativity and positivity along a behavior

QΦ ≥_B 0 if QΦ(w) ≥ 0 ∀ w ∈ B

SLIDE 35

Nonnegativity and positivity along a behavior

QΦ ≥_B 0 if QΦ(w) ≥ 0 ∀ w ∈ B

QΦ >_B 0 if QΦ ≥_B 0, and [QΦ(w) = 0] ⟹ [w = 0]

SLIDE 36

Nonnegativity and positivity along a behavior

QΦ ≥_B 0 if QΦ(w) ≥ 0 ∀ w ∈ B

QΦ >_B 0 if QΦ ≥_B 0, and [QΦ(w) = 0] ⟹ [w = 0]

Prop.: Let B = ker R(d/dt). Then QΦ ≥_B 0 iff there exist D ∈ R^{•×w}[ξ], X ∈ R^{•×w}[ζ, η] such that

Φ(ζ, η) = D(ζ)ᵀD(η) + R(ζ)ᵀX(ζ, η) + X(η, ζ)ᵀR(η)

where D(ζ)ᵀD(η) induces a term ≥ 0 for all w, and the remaining terms = 0 when evaluated on B
SLIDE 37

Lyapunov theory

B autonomous is asymptotically stable if lim_{t→∞} w(t) = 0 ∀ w ∈ B

B = ker R(d/dt) stable ⟺ det(R) Hurwitz

SLIDE 38

Lyapunov theory

B autonomous is asymptotically stable if lim_{t→∞} w(t) = 0 ∀ w ∈ B

B = ker R(d/dt) stable ⟺ det(R) Hurwitz

Theorem: B asymptotically stable iff there exists QΦ such that QΦ ≥_B 0 and Q_{Φ̇} <_B 0

SLIDE 39

Example

B = ker(d²/dt² + 3 d/dt + 2)  ⇝  r(ξ) = ξ² + 3ξ + 2
SLIDE 40

Example

B = ker(d²/dt² + 3 d/dt + 2)  ⇝  r(ξ) = ξ² + 3ξ + 2

Choose Ψ(ζ, η) s.t. QΨ <_B 0, e.g. Ψ(ζ, η) = −ζη;

SLIDE 41

Example

B = ker(d²/dt² + 3 d/dt + 2)  ⇝  r(ξ) = ξ² + 3ξ + 2

Choose Ψ(ζ, η) s.t. QΨ <_B 0, e.g. Ψ(ζ, η) = −ζη;
Find Φ(ζ, η) s.t. (d/dt)QΦ(w) = QΨ(w) for all w ∈ B:

(ζ + η)Φ(ζ, η) = Ψ(ζ, η) + r(ζ)x(η) + x(ζ)r(η)

(the terms r(ζ)x(η) + x(ζ)r(η) are = 0 on B)
SLIDE 42

Example

B = ker(d²/dt² + 3 d/dt + 2)  ⇝  r(ξ) = ξ² + 3ξ + 2

Choose Ψ(ζ, η) s.t. QΨ <_B 0, e.g. Ψ(ζ, η) = −ζη;
Find Φ(ζ, η) s.t. (d/dt)QΦ(w) = QΨ(w) for all w ∈ B:

(ζ + η)Φ(ζ, η) = Ψ(ζ, η) + r(ζ)x(η) + x(ζ)r(η)

(the terms r(ζ)x(η) + x(ζ)r(η) are = 0 on B)

SLIDE 43

Example

B = ker(d²/dt² + 3 d/dt + 2)  ⇝  r(ξ) = ξ² + 3ξ + 2

Choose Ψ(ζ, η) s.t. QΨ <_B 0, e.g. Ψ(ζ, η) = −ζη;
Find Φ(ζ, η) s.t. (d/dt)QΦ(w) = QΨ(w) for all w ∈ B:

(ζ + η)Φ(ζ, η) = Ψ(ζ, η) + r(ζ)x(η) + x(ζ)r(η)

(the terms r(ζ)x(η) + x(ζ)r(η) are = 0 on B)

Equivalent to solving the polynomial Lyapunov equation

0 = Ψ(−ξ, ξ) + r(−ξ) x(ξ) + x(−ξ) r(ξ)

with Ψ(−ξ, ξ) = ξ², r(−ξ) = ξ² − 3ξ + 2, r(ξ) = ξ² + 3ξ + 2  ⇝  x(ξ) = (1/6)ξ

SLIDE 44

Example

B = ker(d²/dt² + 3 d/dt + 2)  ⇝  r(ξ) = ξ² + 3ξ + 2

Choose Ψ(ζ, η) s.t. QΨ <_B 0, e.g. Ψ(ζ, η) = −ζη;
Find Φ(ζ, η) s.t. (d/dt)QΦ(w) = QΨ(w) for all w ∈ B:

(ζ + η)Φ(ζ, η) = Ψ(ζ, η) + r(ζ)x(η) + x(ζ)r(η)

(the terms r(ζ)x(η) + x(ζ)r(η) are = 0 on B)

Φ(ζ, η) = [−ζη + (ζ² + 3ζ + 2)(η/6) + (ζ/6)(η² + 3η + 2)] / (ζ + η) = ζη/6 + 1/3 > 0
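The computation in this example can be replayed in sympy; a minimal sketch verifying both the polynomial Lyapunov equation and the resulting Φ:

```python
import sympy as sp

zeta, eta, xi = sp.symbols('zeta eta xi')
r = lambda s: s**2 + 3*s + 2          # r(xi) from the example
x = lambda s: s / 6                   # solution of the polynomial Lyapunov equation
Psi = -zeta * eta                     # chosen Psi with Q_Psi < 0 along B

# Polynomial Lyapunov equation: 0 = Psi(-xi, xi) + r(-xi) x(xi) + x(-xi) r(xi)
assert sp.expand(Psi.subs({zeta: -xi, eta: xi}) + r(-xi)*x(xi) + x(-xi)*r(xi)) == 0

# Resulting QDF: Phi(zeta, eta) = (Psi + r(zeta)x(eta) + x(zeta)r(eta)) / (zeta + eta)
Phi = sp.cancel((Psi + r(zeta)*x(eta) + x(zeta)*r(eta)) / (zeta + eta))
assert sp.expand(Phi - (zeta*eta/6 + sp.Rational(1, 3))) == 0
```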

SLIDE 45

State-space case

(d/dt I_x − A) x = 0  ⇝  R(ξ) = ξ I_x − A

  • Choose Q < 0;
  • Solve the polynomial Lyapunov equation
    R(−ξ)ᵀP + P R(ξ) = −AᵀP − PA = Q
    equivalent with the matrix Lyapunov equation!
  • The Lyapunov functional is xᵀ(−P)x
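The matrix version is exactly what standard solvers handle. A minimal numerical sketch with scipy (A below is the companion matrix of the r(ξ) from the earlier example, and Q = −I is an arbitrary choice of ours):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Companion matrix of r(xi) = xi^2 + 3 xi + 2, eigenvalues -1 and -2 (Hurwitz)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = -np.eye(2)                           # any Q < 0

# Matrix form of the polynomial equation: -A^T P - P A = Q
P = solve_continuous_lyapunov(A.T, -Q)   # solves A^T P + P A = -Q
assert np.allclose(-A.T @ P - P @ A, Q)

# Lyapunov functional x^T (-P) x is positive definite
assert np.all(np.linalg.eigvalsh(-P) > 0)
```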

SLIDE 46

Outline

  • Lyapunov theory
  • Dissipativity theory
  • Balancing and model reduction

SLIDE 47

Dissipativity theory

[diagram: supply → SYSTEM]

Power is supplied ⇝ energy is stored
RLC circuits: power VᵀI; storage in capacitors and inductors
Mechanical system: power Fᵀv + (dϑ/dt)ᵀT; storage: potential + kinetic

SLIDE 48

Setting the stage

Controllable system: w = M(d/dt) ℓ  ⇝  M(ξ)

Power ('supply rate'): QΦ(w)  ⇝  Φ(ζ, η)

SLIDE 49

Setting the stage

Controllable system: w = M(d/dt) ℓ  ⇝  M(ξ)

Power ('supply rate'): QΦ(w)  ⇝  Φ(ζ, η)

QΦ(w) = QΦ(M(d/dt) ℓ) = Q_{Φ′}(ℓ),  Φ′(ζ, η) := M(ζ)ᵀΦ(ζ, η)M(η)

Q_{Φ′} acts on the free variable ℓ, i.e. on C∞ trajectories

SLIDE 50

Setting the stage

Controllable system: w = M(d/dt) ℓ  ⇝  M(ξ)

Power ('supply rate'): QΦ(w)  ⇝  Φ(ζ, η)

QΦ(w) = QΦ(M(d/dt) ℓ) = Q_{Φ′}(ℓ),  Φ′(ζ, η) := M(ζ)ᵀΦ(ζ, η)M(η)

Q_{Φ′} acts on the free variable ℓ, i.e. on C∞ trajectories

SLIDE 51

Dissipation inequality

[diagram: supply = storage + dissipation]

SLIDE 52

Dissipation inequality

QΨ is a storage function for the supply QΦ if

(d/dt)QΨ ≤ QΦ

Rate of storage increase ≤ supply

[diagram: supply = storage + dissipation]

SLIDE 53

Dissipation inequality

QΨ is a storage function for the supply QΦ if

(d/dt)QΨ ≤ QΦ

Rate of storage increase ≤ supply

Q∆ is a dissipation function for QΦ if Q∆ ≥ 0 and ∫Q∆ dt = ∫QΦ dt

[diagram: supply = storage + dissipation]

SLIDE 54

Characterizations of dissipativity

Theorem: The following conditions are equivalent:

  • ∫_{−∞}^{+∞} QΦ(ℓ) dt ≥ 0 for all C∞ compact-support ℓ;
  • QΦ admits a storage function;
  • QΦ admits a dissipation function.

Moreover, storage and dissipation functions are in one-one correspondence:

(d/dt)QΨ = QΦ − Q∆  ⟺  (ζ + η)Ψ(ζ, η) = Φ(ζ, η) − ∆(ζ, η)

SLIDE 55

Example: mechanical systems

M d²q/dt² + D dq/dt + K q = F

[F ; q] = [M d²/dt² + D d/dt + K ; I₃] ℓ

SLIDE 56

Example: mechanical systems

M d²q/dt² + D dq/dt + K q = F

[F ; q] = [M d²/dt² + D d/dt + K ; I₃] ℓ

Supply rate: power Fᵀ(dq/dt) = (M d²ℓ/dt² + D dℓ/dt + Kℓ)ᵀ(dℓ/dt), corresponding to

Φ(ζ, η) = (1/2)(Mζ² + Dζ + K)ᵀη + (1/2)ζ(Mη² + Dη + K)

SLIDE 57

Example: mechanical systems

M d²q/dt² + D dq/dt + K q = F

[F ; q] = [M d²/dt² + D d/dt + K ; I₃] ℓ

Φ(ζ, η) = (1/2)(Mζ² + Dζ + K)ᵀη + (1/2)ζ(Mη² + Dη + K)

SLIDE 58

Example: mechanical systems

M d²q/dt² + D dq/dt + K q = F

[F ; q] = [M d²/dt² + D d/dt + K ; I₃] ℓ

Φ(ζ, η) = (1/2)(Mζ² + Dζ + K)ᵀη + (1/2)ζ(Mη² + Dη + K)

If the dissipation equality Φ(ζ, η) = (ζ + η)Ψ(ζ, η) + ∆(ζ, η) holds, then

Φ(−ξ, ξ) = −(1/2)ξ²(Dᵀ + D) = ∆(−ξ, ξ)  ⟹  ∆(ζ, η) = (1/2)(Dᵀ + D)ζη

Spectral factorization of Φ(−ξ, ξ) is key
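The evaluation Φ(−ξ, ξ) and the resulting ∆ can be checked symbolically. A sympy sketch for the scalar (single-mass) case, where (1/2)(Dᵀ + D) reduces to D:

```python
import sympy as sp

zeta, eta, xi = sp.symbols('zeta eta xi')
M, D, K = sp.symbols('M D K', positive=True)   # scalar mass, damping, stiffness

# Supply rate of the mechanical system in two-variable form
Phi = sp.Rational(1, 2) * ((M*zeta**2 + D*zeta + K) * eta
                           + zeta * (M*eta**2 + D*eta + K))

# On the diagonal zeta = -xi, eta = xi only the damping survives
assert sp.expand(Phi.subs({zeta: -xi, eta: xi}) - (-D * xi**2)) == 0

# Hence the dissipation function Delta(zeta, eta) = D zeta eta
Delta = D * zeta * eta
assert sp.expand((Delta - Phi).subs({zeta: -xi, eta: xi})) == 0
```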

SLIDE 59

Example: mechanical systems

M d2

dt2q + D d dt q + Kq = F

  • F

q

  • =
  • M d2

dt2 + D d dt + K

I3

Φ(ζ, η) = 1

2(Mζ2 + Dζ + K)⊤η + 1 2ζ(Mη2 + Dη + K)

∆(ζ, η) = 1

2(D⊤ + D)ζη

SLIDE 60

Example: mechanical systems

M d2

dt2q + D d dt q + Kq = F

  • F

q

  • =
  • M d2

dt2 + D d dt + K

I3

Φ(ζ, η) = 1

2(Mζ2 + Dζ + K)⊤η + 1 2ζ(Mη2 + Dη + K)

∆(ζ, η) = 1

2(D⊤ + D)ζη

Storage function Ψ(ζ, η) = Φ(ζ, η) − ∆(ζ, η) ζ + η = 1 2Mζη + 1 2K Total energy
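The division producing the storage function is exact. A scalar-case sympy sketch (symmetric scalar M, D, K):

```python
import sympy as sp

zeta, eta = sp.symbols('zeta eta')
M, D, K = sp.symbols('M D K', positive=True)   # scalar case

Phi = sp.Rational(1, 2) * ((M*zeta**2 + D*zeta + K) * eta
                           + zeta * (M*eta**2 + D*eta + K))
Delta = D * zeta * eta   # scalar case of (1/2)(D^T + D) zeta eta

# Storage function Psi = (Phi - Delta)/(zeta + eta) = (1/2) M zeta eta + (1/2) K
Psi = sp.cancel((Phi - Delta) / (zeta + eta))
assert sp.expand(Psi - (M*zeta*eta/2 + K/2)) == 0
```

Evaluated on a trajectory, (1/2)Mζη + (1/2)K induces (1/2)M(dq/dt)² + (1/2)Kq², i.e. kinetic plus potential energy.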

SLIDE 61

Outline

  • Lyapunov theory
  • Dissipativity theory
  • Balancing and model reduction

SLIDE 62

Balancing

A minimal and stable realization (A, B, C, D) is balanced if there exist σᵢ ∈ R such that σ₁ ≥ σ₂ ≥ ⋯ ≥ σₙ > 0 and moreover AΣ + ΣAᵀ + BBᵀ = 0 and AᵀΣ + ΣA + CᵀC = 0, where Σ := diag(σ₁, σ₂, …, σₙ)

[image: first page of B. C. Moore, "Principal Component Analysis in Linear Systems: Controllability, Observability, and Model Reduction", IEEE Transactions on Automatic Control, vol. AC-26, no. 1, February 1981, p. 17]

SLIDE 63

Balancing

A minimal and stable realization (A, B, C, D) is balanced if there exist σᵢ ∈ R such that σ₁ ≥ σ₂ ≥ ⋯ ≥ σₙ > 0 and moreover AΣ + ΣAᵀ + BBᵀ = 0 and AᵀΣ + ΣA + CᵀC = 0, where Σ := diag(σ₁, σ₂, …, σₙ)

[image: first page of B. C. Moore, "Principal Component Analysis in Linear Systems", IEEE Trans. Automatic Control, AC-26(1), February 1981]

Balancing ≡ choice of basis of state space diagonalizing the Gramians

SLIDE 64

Balancing

A minimal and stable realization (A, B, C, D) is balanced if there exist σᵢ ∈ R such that σ₁ ≥ σ₂ ≥ ⋯ ≥ σₙ > 0 and moreover AΣ + ΣAᵀ + BBᵀ = 0 and AᵀΣ + ΣA + CᵀC = 0, where Σ := diag(σ₁, σ₂, …, σₙ)

[image: first page of B. C. Moore, "Principal Component Analysis in Linear Systems", IEEE Trans. Automatic Control, AC-26(1), February 1981]

Balancing ≡ choice of basis of state space diagonalizing the Gramians ≡ choice of state map!

slide-65
SLIDE 65

The controllability Gramian K

p(d/dt) y = q(d/dt) u,  i.e.  y/u = q(d/dt) / p(d/dt),

where GCD(p, q) = 1, p stable, deg(q) ≤ deg(p) =: n

slide-66
SLIDE 66

The controllability Gramian K

p(d/dt) y = q(d/dt) u,  i.e.  y/u = q(d/dt) / p(d/dt),

where GCD(p, q) = 1, p stable, deg(q) ≤ deg(p) =: n

In the state-space framework, K is defined by

inf_u ∫_{−∞}^{0} u(t)² dt =: x0⊤ K x0

where u is such that x(−∞) ❀ x(0) = x0

slide-67
SLIDE 67

The controllability Gramian K

p(d/dt) y = q(d/dt) u,  i.e.  y/u = q(d/dt) / p(d/dt),

where GCD(p, q) = 1, p stable, deg(q) ≤ deg(p) =: n

In our framework: let ℓ ∈ C∞(R, R). Then Q_K is the QDF such that

inf_{ℓ′} ∫_{−∞}^{0} ( p(d/dt) ℓ′ )² dt =: Q_K(ℓ)(0)

where ℓ′ ∈ C∞(R, R) is such that ℓ′|_[0,+∞) = ℓ|_[0,+∞)

slide-68
SLIDE 68

The controllability Gramian K

p(d/dt) y = q(d/dt) u,  i.e.  y/u = q(d/dt) / p(d/dt),

where GCD(p, q) = 1, p stable, deg(q) ≤ deg(p) =: n

In our framework: let ℓ ∈ C∞(R, R). Then Q_K is the QDF such that

inf_{ℓ′} ∫_{−∞}^{0} ( p(d/dt) ℓ′ )² dt =: Q_K(ℓ)(0)

where ℓ′ ∈ C∞(R, R) is such that ℓ′|_[0,+∞) = ℓ|_[0,+∞)

¿How to compute K(ζ, η)?

slide-69
SLIDE 69

Computation of K(ζ, η)

inf_{ℓ′} ∫_{−∞}^{0} ( p(d/dt) ℓ′ )² dt =: Q_K(ℓ)(0)
slide-70
SLIDE 70

Computation of K(ζ, η)

inf_{ℓ′} ∫_{−∞}^{0} ( p(d/dt) ℓ′ )² dt =: Q_K(ℓ)(0)

Since p(−ξ)p(ξ) = p(ξ)p(−ξ), there exists K ∈ R[ζ, η] such that

p(ζ)p(η) − p(−ζ)p(−η) = (ζ + η) K(ζ, η)

slide-71
SLIDE 71

Computation of K(ζ, η)

inf_{ℓ′} ∫_{−∞}^{0} ( p(d/dt) ℓ′ )² dt =: Q_K(ℓ)(0)

Since p(−ξ)p(ξ) = p(ξ)p(−ξ), there exists K ∈ R[ζ, η] such that

p(ζ)p(η) − p(−ζ)p(−η) = (ζ + η) K(ζ, η)

Consequently,

∫_{−∞}^{0} ( p(d/dt) ℓ′ )² dt = ∫_{−∞}^{0} ( p(−d/dt) ℓ′ )² dt + Q_K(ℓ′)(0),

minimized for the ℓ′ in ker p(−d/dt) with the given initial conditions.
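The two-variable division above is easy to mechanise: writing Φ(ζ, η) = p(ζ)p(η) − p(−ζ)p(−η) with coefficient matrix Φ_jk, the identity Φ(ζ, η) = (ζ + η)K(ζ, η) reads Φ_jk = K_{j−1,k} + K_{j,k−1} and can be solved recursively. A minimal numpy sketch — the function name and the ascending-coefficient convention are mine, not from the lecture:

```python
import numpy as np

def controllability_qdf(p):
    """Coefficient matrix of K(zeta, eta) defined by
        p(zeta)p(eta) - p(-zeta)p(-eta) = (zeta + eta) K(zeta, eta).
    `p` holds the coefficients of p in ascending degree
    (p[0] + p[1]*xi + ... + p[n]*xi^n); returns the n x n matrix K
    with K(zeta, eta) = sum_{j,k} K[j, k] zeta^j eta^k."""
    p = np.asarray(p, dtype=float)
    n = len(p) - 1
    # Phi[j, k] = p_j p_k (1 - (-1)^(j+k)): zero whenever j + k is even.
    Phi = 2.0 * np.outer(p, p)
    Phi[(np.add.outer(np.arange(n + 1), np.arange(n + 1)) % 2) == 0] = 0.0
    # Divide by (zeta + eta): Phi[j, k] = K[j-1, k] + K[j, k-1].
    K = np.zeros((n, n))
    for j in range(n):
        for k in range(1, n + 1):
            prev = K[j - 1, k] if (j >= 1 and k <= n - 1) else 0.0
            K[j, k - 1] = Phi[j, k] - prev
    # The remaining constraints hold automatically since (zeta+eta) | Phi.
    assert np.allclose(Phi[n, :n], K[n - 1, :])
    return K
```

For p(ξ) = ξ + a this yields K = [[2a]], i.e. Q_K(ℓ)(0) = 2a ℓ(0)², and for p(ξ) = ξ² + 3ξ + 2 it yields K(ζ, η) = 12 + 6ζη.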

slide-72
SLIDE 72

Computation of K(ζ, η)

inf_{ℓ′} ∫_{−∞}^{0} ( p(d/dt) ℓ′ )² dt =: Q_K(ℓ)(0)

Since p(−ξ)p(ξ) = p(ξ)p(−ξ), there exists K ∈ R[ζ, η] such that

p(ζ)p(η) − p(−ζ)p(−η) = (ζ + η) K(ζ, η)

slide-73
SLIDE 73

Computation of K(ζ, η)

inf_{ℓ′} ∫_{−∞}^{0} ( p(d/dt) ℓ′ )² dt =: Q_K(ℓ)(0)

Since p(−ξ)p(ξ) = p(ξ)p(−ξ), there exists K ∈ R[ζ, η] such that

p(ζ)p(η) − p(−ζ)p(−η) = (ζ + η) K(ζ, η)

Highest power of ζ and η in K is n − 1 ⟹ Q_K is a quadratic function of d^j ℓ / dt^j, j = 0, …, n − 1

slide-74
SLIDE 74

Computation of K(ζ, η)

inf_{ℓ′} ∫_{−∞}^{0} ( p(d/dt) ℓ′ )² dt =: Q_K(ℓ)(0)

Since p(−ξ)p(ξ) = p(ξ)p(−ξ), there exists K ∈ R[ζ, η] such that

p(ζ)p(η) − p(−ζ)p(−η) = (ζ + η) K(ζ, η)

Highest power of ζ and η in K is n − 1 ⟹ Q_K is a quadratic function of d^j ℓ / dt^j, j = 0, …, n − 1

Q_K is a quadratic function of the state: for every state map X(d/dt) there exists K_X such that

Q_K(ℓ) = ( X(d/dt) ℓ )⊤ K_X ( X(d/dt) ℓ )
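A quick numeric sanity check of the minimum-energy definition of Q_K above, for the first-order case: p(ξ) = ξ + a gives p(ζ)p(η) − p(−ζ)p(−η) = 2a(ζ + η), so K(ζ, η) = 2a and Q_K(ℓ)(0) = 2a ℓ(0)². The optimal past continuation lies in ker p(−d/dt), i.e. ℓ′(t) = ℓ(0)e^{at} for t ≤ 0, and its input energy should equal Q_K(ℓ)(0). A sketch, with a and ℓ(0) arbitrary test values of mine:

```python
import numpy as np

# First-order check: p(xi) = xi + a (stable for a > 0) has
# K(zeta, eta) = 2a, hence Q_K(l)(0) = 2*a*l(0)^2.
a, l0 = 1.5, 2.0                       # arbitrary test values

# Optimal past continuation: l'(t) = l0*exp(a*t) lies in ker p(-d/dt).
t = np.linspace(-40.0, 0.0, 400001)
lpast = l0 * np.exp(a * t)
dlpast = a * lpast                     # d/dt of l0*exp(a*t)
u = dlpast + a * lpast                 # u = p(d/dt) l' = 2a * l'

# Truncated trapezoid approximation of the input energy on (-inf, 0].
energy = float(np.sum(0.5 * (u[:-1] ** 2 + u[1:] ** 2) * np.diff(t)))
qk = 2.0 * a * l0 ** 2                 # Q_K(l)(0)
print(energy, qk)                      # agree up to quadrature error
```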

slide-75
SLIDE 75

The observability Gramian W

p(d/dt) y = q(d/dt) u,  i.e.  y/u = q(d/dt) / p(d/dt),

where GCD(p, q) = 1, p stable, deg(q) ≤ deg(p)

slide-76
SLIDE 76

The observability Gramian W

p(d/dt) y = q(d/dt) u,  i.e.  y/u = q(d/dt) / p(d/dt),

where GCD(p, q) = 1, p stable, deg(q) ≤ deg(p)

In the state-space framework, W is defined by

∫_{0}^{+∞} y(t)² dt =: x0⊤ W x0

where y is the free response emanating from x(0) = x0

slide-77
SLIDE 77

The observability Gramian W

p(d/dt) y = q(d/dt) u,  i.e.  y/u = q(d/dt) / p(d/dt),

where GCD(p, q) = 1, p stable, deg(q) ≤ deg(p)

In our framework: let ℓ ∈ C∞(R, R). Then Q_W is the QDF defined by

Q_W(ℓ)(0) := ∫_{0}^{+∞} ( q(d/dt) ℓ′ )² dt

where ℓ′ ∈ C∞(R, R) is such that

• ℓ′|_(−∞,0] = ℓ|_(−∞,0]

• p(d/dt) ℓ′ = 0 on R+

• col( q(d/dt) ℓ′, p(d/dt) ℓ′ ) ∈ B

slide-78
SLIDE 78

The observability Gramian W

p(d/dt) y = q(d/dt) u,  i.e.  y/u = q(d/dt) / p(d/dt),

where GCD(p, q) = 1, p stable, deg(q) ≤ deg(p)

In our framework: let ℓ ∈ C∞(R, R). Then Q_W is the QDF defined by

Q_W(ℓ)(0) := ∫_{0}^{+∞} ( q(d/dt) ℓ′ )² dt

where ℓ′ ∈ C∞(R, R) is such that

• ℓ′|_(−∞,0] = ℓ|_(−∞,0]

• p(d/dt) ℓ′ = 0 on R+

• col( q(d/dt) ℓ′, p(d/dt) ℓ′ ) ∈ B

¿How to compute W(ζ, η)?

slide-79
SLIDE 79

Computation of W(ζ, η)

Q_W(ℓ)(0) := ∫_{0}^{+∞} ( q(d/dt) ℓ′ )² dt
slide-80
SLIDE 80

Computation of W(ζ, η)

Q_W(ℓ)(0) := ∫_{0}^{+∞} ( q(d/dt) ℓ′ )² dt

Since p is Hurwitz, there exists a solution f ∈ R[ξ] to

p(−ξ) f(ξ) + f(−ξ) p(ξ) = q(−ξ) q(ξ)

slide-81
SLIDE 81

Computation of W(ζ, η)

Q_W(ℓ)(0) := ∫_{0}^{+∞} ( q(d/dt) ℓ′ )² dt

Since p is Hurwitz, there exists a solution f ∈ R[ξ] to

p(−ξ) f(ξ) + f(−ξ) p(ξ) = q(−ξ) q(ξ)

Define W from

(ζ + η) W(ζ, η) = p(ζ) f(η) + f(ζ) p(η) − q(ζ) q(η)
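This construction can be sketched numerically: the polynomial Lyapunov equation for f is linear in the coefficients of f (both sides are even polynomials of degree at most 2n), and the division by (ζ + η) is the same recursion used for K(ζ, η). A tentative numpy sketch with ascending coefficients; the function names are mine, and the sign of the two-variable identity is fixed so that W comes out positive definite:

```python
import numpy as np

def divide_by_zeta_plus_eta(Phi):
    """Given Phi with Phi(-xi, xi) = 0, return M such that
    Phi(zeta, eta) = (zeta + eta) * M(zeta, eta)."""
    n = Phi.shape[0] - 1
    M = np.zeros((n, n))
    for j in range(n):
        for k in range(1, n + 1):
            prev = M[j - 1, k] if (j >= 1 and k <= n - 1) else 0.0
            M[j, k - 1] = Phi[j, k] - prev
    return M

def observability_qdf(p, q):
    """Coefficient matrix of the QDF W for p(d/dt)y = q(d/dt)u,
    coefficients ascending, p Hurwitz, deg q <= deg p =: n."""
    p = np.asarray(p, dtype=float)
    n = len(p) - 1
    q = np.pad(np.asarray(q, dtype=float), (0, n + 1 - len(q)))
    sgn = (-1.0) ** np.arange(n + 1)
    # Solve p(-xi)f(xi) + f(-xi)p(xi) = q(-xi)q(xi) for f, deg f <= n:
    # match the even-degree coefficients 0, 2, ..., 2n.
    A = np.zeros((n + 1, n + 1))
    for j in range(n + 1):                # column j: f = xi^j
        col = np.zeros(2 * n + 1)
        col[j:j + n + 1] = sgn * p + sgn[j] * p
        A[:, j] = col[::2]
    rhs = np.convolve(sgn * q, q)[::2]    # even coefficients of q(-xi)q(xi)
    f = np.linalg.solve(A, rhs)
    # (zeta+eta) W(zeta,eta) = p(zeta)f(eta) + f(zeta)p(eta) - q(zeta)q(eta),
    # the sign for which W > 0 when p is Hurwitz.
    Phi = np.outer(p, f) + np.outer(f, p) - np.outer(q, q)
    return divide_by_zeta_plus_eta(Phi)
```

For p(ξ) = ξ + 1, q = 1 this gives W = [[1/2]], matching ∫_0^∞ e^{−2t} dt; for p(ξ) = ξ² + 3ξ + 2, q = 1 it reproduces the classical observability Gramian of the companion realization with state (ℓ, dℓ/dt).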

slide-82
SLIDE 82

Computation of W(ζ, η)

Q_W(ℓ)(0) := ∫_{0}^{+∞} ( q(d/dt) ℓ′ )² dt

Since p is Hurwitz, there exists a solution f ∈ R[ξ] to

p(−ξ) f(ξ) + f(−ξ) p(ξ) = q(−ξ) q(ξ)

Define W from

(ζ + η) W(ζ, η) = p(ζ) f(η) + f(ζ) p(η) − q(ζ) q(η)

slide-83
SLIDE 83

Computation of W(ζ, η)

Q_W(ℓ)(0) := ∫_{0}^{+∞} ( q(d/dt) ℓ′ )² dt

Since p is Hurwitz, there exists a solution f ∈ R[ξ] to

p(−ξ) f(ξ) + f(−ξ) p(ξ) = q(−ξ) q(ξ)

Define W from

(ζ + η) W(ζ, η) = p(ζ) f(η) + f(ζ) p(η) − q(ζ) q(η)

Then

Q_W(ℓ)(0) = ∫_{0}^{+∞} ( q(d/dt) ℓ )² dt  for all ℓ ∈ ker p(d/dt)

slide-84
SLIDE 84

Computation of W(ζ, η)

Q_W(ℓ)(0) := ∫_{0}^{+∞} ( q(d/dt) ℓ′ )² dt

Since p is Hurwitz, there exists a solution f ∈ R[ξ] to

p(−ξ) f(ξ) + f(−ξ) p(ξ) = q(−ξ) q(ξ)

Define W from

(ζ + η) W(ζ, η) = p(ζ) f(η) + f(ζ) p(η) − q(ζ) q(η)

Q_W is a quadratic function of the state: for every state map X(d/dt) there exists W_X such that

Q_W(ℓ) = ( X(d/dt) ℓ )⊤ W_X ( X(d/dt) ℓ )

slide-85
SLIDE 85

Balanced state maps

State map X(d/dt) is balanced if …

slide-86
SLIDE 86

Balanced state maps

State map X(d/dt) is balanced if

• if ℓk is such that ( X(d/dt) ℓk )(0) is the k-th canonical basis vector, then Q_K(ℓk)(0) = 1 / Q_W(ℓk)(0)

  ‘difficult to reach ⟺ difficult to observe’

slide-87
SLIDE 87

Balanced state maps

State map X(d/dt) is balanced if

• if ℓk is such that ( X(d/dt) ℓk )(0) is the k-th canonical basis vector, then Q_K(ℓk)(0) = 1 / Q_W(ℓk)(0)

  ‘difficult to reach ⟺ difficult to observe’

• Q_W(ℓ1)(0) ≥ Q_W(ℓ2)(0) ≥ … ≥ Q_W(ℓn)(0) > 0, or equivalently 0 < Q_K(ℓ1)(0) ≤ Q_K(ℓ2)(0) ≤ … ≤ Q_K(ℓn)(0)

  ‘first who contributes most’

slide-88
SLIDE 88

Balancing with QDFs

Linear algebra ⟹ there is a basis { x_i^b ∈ R_{n−1}[ξ] }_{i=1,…,n} and σ_i ∈ R with σ1 ≥ σ2 ≥ … ≥ σn such that

W(ζ, η) = Σ_{i=1}^{n} σ_i x_i^b(ζ) x_i^b(η)

K(ζ, η) = Σ_{i=1}^{n} (1/σ_i) x_i^b(ζ) x_i^b(η)
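Finding the common basis { x_i^b } is a simultaneous-diagonalisation problem: with K and W the (positive definite) coefficient matrices of the two QDFs in some state basis, factor K = LL⊤, eigendecompose L⁻¹WL⁻⊤ = U diag(σ²) U⊤, and take the columns of V = LU diag(σ)^{1/2} as the coefficient vectors of the x_i^b. A sketch under those assumptions; the function name is mine:

```python
import numpy as np

def balance(K, W):
    """Simultaneously diagonalise the QDF coefficient matrices K > 0, W > 0:
    returns (sigma, V) with
        W = V diag(sigma) V^T   and   K = V diag(1/sigma) V^T,
    sigma sorted descending.  Column i of V holds the (ascending)
    coefficients of the balanced basis polynomial x_i^b, and the sigma_i
    are the Hankel singular values."""
    K = np.asarray(K, dtype=float)
    W = np.asarray(W, dtype=float)
    L = np.linalg.cholesky(K)                 # K = L L^T
    B = np.linalg.solve(L, W)                 # L^{-1} W
    C = np.linalg.solve(L, B.T).T             # L^{-1} W L^{-T}, symmetric
    s2, U = np.linalg.eigh(C)                 # eigenvalues ascending
    s2, U = s2[::-1], U[:, ::-1]              # reorder descending
    sigma = np.sqrt(s2)
    V = L @ U @ np.diag(np.sqrt(sigma))       # scale so both forms diagonalise
    return sigma, V
```

As a check, the matrices K = [[12, 0], [0, 6]] and W = [[11/12, 1/4], [1/4, 1/12]] (the QDF coefficient matrices for p(ξ) = ξ² + 3ξ + 2, q = 1 in the basis 1, ξ) are reconstructed exactly from the returned (sigma, V).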

slide-89
SLIDE 89

Balancing with QDFs

Linear algebra ⟹ there is a basis { x_i^b ∈ R_{n−1}[ξ] }_{i=1,…,n} and σ_i ∈ R with σ1 ≥ σ2 ≥ … ≥ σn such that

W(ζ, η) = Σ_{i=1}^{n} σ_i x_i^b(ζ) x_i^b(η)

K(ζ, η) = Σ_{i=1}^{n} (1/σ_i) x_i^b(ζ) x_i^b(η)

σ_i ❀ (classical) Hankel singular values

slide-90
SLIDE 90

Balancing with QDFs

Linear algebra ⟹ there is a basis { x_i^b ∈ R_{n−1}[ξ] }_{i=1,…,n} and σ_i ∈ R with σ1 ≥ σ2 ≥ … ≥ σn such that

W(ζ, η) = Σ_{i=1}^{n} σ_i x_i^b(ζ) x_i^b(η)

K(ζ, η) = Σ_{i=1}^{n} (1/σ_i) x_i^b(ζ) x_i^b(η)

Then X^b(ξ) := col( x_i^b(ξ) )_{i=1,…,n} is a balanced state map.

slide-91
SLIDE 91

Balancing with QDFs

Linear algebra ⟹ there is a basis { x_i^b ∈ R_{n−1}[ξ] }_{i=1,…,n} and σ_i ∈ R with σ1 ≥ σ2 ≥ … ≥ σn such that

W(ζ, η) = Σ_{i=1}^{n} σ_i x_i^b(ζ) x_i^b(η)

K(ζ, η) = Σ_{i=1}^{n} (1/σ_i) x_i^b(ζ) x_i^b(η)

Then X^b(ξ) := col( x_i^b(ξ) )_{i=1,…,n} is a balanced state map.

(Classical) balanced state-space representation: solve

col( ξ X^b(ξ), q(ξ) ) = [ A^b B^b ; C^b D^b ] col( X^b(ξ), p(ξ) )

slide-92
SLIDE 92

Balancing with QDFs

Linear algebra ⟹ there is a basis { x_i^b ∈ R_{n−1}[ξ] }_{i=1,…,n} and σ_i ∈ R with σ1 ≥ σ2 ≥ … ≥ σn such that

W(ζ, η) = Σ_{i=1}^{n} σ_i x_i^b(ζ) x_i^b(η)

K(ζ, η) = Σ_{i=1}^{n} (1/σ_i) x_i^b(ζ) x_i^b(η)

Then X^b(ξ) := col( x_i^b(ξ) )_{i=1,…,n} is a balanced state map.

(Classical) balanced state-space representation: solve

col( ξ X^b(ξ), q(ξ) ) = [ A^b B^b ; C^b D^b ] col( X^b(ξ), p(ξ) )

• Model reduction by balancing follows.
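The final linear-equation step is plain coefficient matching: the rows of X^b together with p form a basis of R_n[ξ], so [A B; C D] is obtained by expressing ξ x_i^b and q in that basis. A hypothetical helper sketching this (names mine); it accepts any state map whose rows, together with p, are linearly independent, and feeding it a balanced X^b yields the balanced realization:

```python
import numpy as np

def realization_from_state_map(Xb, p, q):
    """Solve  col(xi*Xb(xi), q(xi)) = [[A, B], [C, D]] col(Xb(xi), p(xi))
    by coefficient matching.  Row i of Xb holds the ascending coefficients
    of x_i (deg <= n-1); p has deg n, q has deg <= n, both ascending.
    Returns (A, B, C, D)."""
    Xb = np.atleast_2d(np.asarray(Xb, dtype=float))
    n = Xb.shape[0]
    p = np.asarray(p, dtype=float)
    qp = np.pad(np.asarray(q, dtype=float), (0, n + 1 - len(q)))
    # Rows of R span R_n[xi]: the x_i (padded to degree n) plus p itself.
    R = np.vstack([np.hstack([Xb, np.zeros((n, 1))]), p])
    # Left-hand sides: xi*x_i (coefficients shifted up by one) and q.
    Lhs = np.vstack([np.hstack([np.zeros((n, 1)), Xb]), qp])
    M = Lhs @ np.linalg.inv(R)
    return M[:n, :n], M[:n, n:], M[n:, :n], M[n:, n:]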
slide-93
SLIDE 93

Summary

  • Working with functionals at the most natural level;
slide-94
SLIDE 94

Summary

  • Working with functionals at the most natural level;
  • Two-variable polynomial representation;
slide-95
SLIDE 95

Summary

  • Working with functionals at the most natural level;
  • Two-variable polynomial representation;
  • Operations/properties in time domain

❀ algebraic operations;

slide-96
SLIDE 96

Summary

  • Working with functionals at the most natural level;
  • Two-variable polynomial representation;
  • Operations/properties in time domain

❀ algebraic operations;

  • Differentiation, integration, positivity;
slide-97
SLIDE 97

Summary

  • Working with functionals at the most natural level;
  • Two-variable polynomial representation;
  • Operations/properties in time domain

❀ algebraic operations;

  • Differentiation, integration, positivity;
  • Lyapunov theory, dissipativity, model reduction

by balancing.