Plan of the Lecture: Review of observability; the Luenberger observer and state estimation (PowerPoint PPT Presentation)




SLIDE 3

Plan of the Lecture

◮ Review: observability; Luenberger observer and state estimation error.

◮ Today's topic: joint observer and controller design: dynamic output feedback.

Goal: learn how to design an observer and a controller to achieve accurate closed-loop pole placement.

Reading: FPE, Chapter 7.


SLIDE 5

Is Full State Feedback Always Available?

In a typical system, measurements are provided by sensors:

[block diagram: controller → u → plant → y]

Full state feedback u = −Kx is not implementable! In that case, an observer is used to estimate the state x:

[block diagram: the plant output y drives an observer, which produces the state estimate x̂]


SLIDE 7

State Estimation Using an Observer

If the system is observable, the state estimate x̂ is asymptotically accurate:

\[ \|\hat{x}(t) - x(t)\| = \Big( \sum_{i=1}^{n} (\hat{x}_i(t) - x_i(t))^2 \Big)^{1/2} \xrightarrow{t \to \infty} 0 \]

If we are successful, then we can try estimated state feedback u = −Kx̂:

[block diagram: plant → y → observer → x̂ → gain K → u = −Kx̂]


SLIDE 11

Observability

Consider a single-output system (y ∈ ℝ):

\[ \dot{x} = Ax + Bu, \qquad y = Cx, \qquad x \in \mathbb{R}^n \]

The Observability Matrix is defined as

\[ \mathcal{O}(A, C) = \begin{pmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{pmatrix} \]

We say that the above system is observable if its observability matrix O(A, C) is invertible.

(This definition is valid only for the single-output case; the multiple-output case involves the rank of O(A, C).)
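The observability matrix and its invertibility test are easy to check numerically. A minimal NumPy sketch (the double-integrator A and position-only C below are illustrative assumptions, not from the lecture):

```python
import numpy as np

def obsv_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1) into the observability matrix."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Illustrative system (an assumption, not from the lecture): a double
# integrator whose output y measures position only.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

O = obsv_matrix(A, C)
print(O)                                       # the 2x2 identity here
print(np.linalg.matrix_rank(O) == A.shape[0])  # True -> observable
```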

SLIDE 14

Observer Canonical Form

A single-output state-space model ẋ = Ax + Bu, y = Cx is said to be in Observer Canonical Form (OCF) if the matrices A, C are of the form

\[ A = \begin{pmatrix} 0 & \cdots & \cdots & 0 & * \\ 1 & 0 & & & * \\ & \ddots & \ddots & & \vdots \\ & & 1 & 0 & * \\ 0 & & & 1 & * \end{pmatrix}, \qquad C = \begin{pmatrix} 0 & \cdots & 0 & 1 \end{pmatrix} \]

(ones on the subdiagonal, arbitrary entries ∗ in the last column).

◮ Fact: A system in OCF is always observable!

(The proof of this for n > 2 uses the Jordan canonical form; we will not worry about this.)
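To make the fact concrete, here is a 3×3 instance of the assumed OCF structure (subdiagonal of ones, free last column, C selecting the last state); the entries a1, a2, a3 are arbitrary illustrative values, and the rank comes out full regardless of how they are chosen:

```python
import numpy as np

# A 3x3 example in the assumed observer canonical form.
a1, a2, a3 = -6.0, -11.0, -6.0   # arbitrary illustrative last-column entries
A = np.array([[0.0, 0.0, a1],
              [1.0, 0.0, a2],
              [0.0, 1.0, a3]])
C = np.array([[0.0, 0.0, 1.0]])

# Observability matrix: rows C, CA, CA^2
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(3)])
print(np.linalg.matrix_rank(O))  # 3: observable for ANY a1, a2, a3
```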


SLIDE 18

The Luenberger Observer

System:
\[ \dot{x} = Ax, \qquad y = Cx \]
Observer:
\[ \dot{\hat{x}} = (A - LC)\hat{x} + Ly \]

What happens to the state estimation error e = x − x̂ as t → ∞?
\[ \dot{e} = (A - LC)e \]
Does e(t) converge to zero in some sense?


SLIDE 21

The Luenberger Observer

System:
\[ \dot{x} = Ax, \qquad y = Cx \]
Observer:
\[ \dot{\hat{x}} = (A - LC)\hat{x} + Ly \]
Error:
\[ \dot{e} = (A - LC)e \]

Recall our assumption that A − LC is Hurwitz (all eigenvalues are in the LHP). This implies that

\[ \|x(t) - \hat{x}(t)\|^2 = \|e(t)\|^2 = \sum_{i=1}^{n} |e_i(t)|^2 \xrightarrow{t \to \infty} 0 \]

at an exponential rate determined by the eigenvalues of A − LC. For fast convergence, we want the eigenvalues of A − LC far into the LHP!
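The exponential decay of the error can be checked numerically via e(t) = exp((A − LC)t) e(0). A SciPy sketch with assumed illustrative matrices (not from the lecture):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative double-integrator example (all numbers are assumptions):
# y measures position only.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0],
              [2.0]])            # chosen so that A - LC is Hurwitz

Aerr = A - L @ C                 # error dynamics: e' = (A - LC) e
print(np.linalg.eigvals(Aerr))   # eigenvalues -1 and -2, both in the LHP

e0 = np.array([1.0, -1.0])       # initial estimation error
for t in (0.0, 1.0, 5.0):
    e_t = expm(Aerr * t) @ e0    # e(t) = exp((A - LC) t) e(0)
    print(t, np.linalg.norm(e_t))   # norm decays exponentially
```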


SLIDE 24

Observability and Estimation Error

Fact: If the system ẋ = Ax, y = Cx is observable, then we can arbitrarily assign the eigenvalues of A − LC by a suitable choice of the output injection matrix L.

This is similar to the fact that controllability implies arbitrary closed-loop pole placement by state feedback. In fact, these two facts are closely related, because CCF is dual to OCF.
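Because eig(A − LC) = eig(Aᵀ − CᵀLᵀ), the observer gain can be computed with the same pole-placement routine used for state feedback, applied to the dual pair (Aᵀ, Cᵀ). A sketch using scipy.signal.place_poles; the system matrices and pole locations are illustrative assumptions:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative example (numbers are assumptions, not from the lecture).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

# Duality: eig(A - LC) = eig(A.T - C.T L.T), so place poles for the
# dual pair (A.T, C.T) and transpose the resulting gain.
desired_observer_poles = [-4.0, -5.0]
L = place_poles(A.T, C.T, desired_observer_poles).gain_matrix.T

print(np.sort(np.linalg.eigvals(A - L @ C).real))  # [-5., -4.]
```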


SLIDE 30

Combining Full-State Feedback with an Observer

◮ So far, we have focused on autonomous systems (u = 0).

◮ What about nonzero inputs?
\[ \dot{x} = Ax + Bu, \qquad y = Cx \]
Assume (A, B) is controllable and (A, C) is observable.

◮ Today, we will learn how to use an observer together with estimated state feedback to (approximately) place closed-loop poles.

[block diagram: the plant output y drives an observer; the estimate x̂ is fed through −K to produce u = −Kx̂, which enters both the plant and the observer through B]


SLIDE 33

Combining Full-State Feedback with an Observer

◮ Consider
\[ \dot{x} = Ax + Bu, \qquad y = Cx \]
where (A, B) is controllable and (A, C) is observable.

◮ We know how to find K such that A − BK has desired eigenvalues (controller poles).

◮ Since we do not have access to x, we must design an observer. But this time, we need a slight modification because of the Bu term.
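For instance, the gain K can be computed with a standard pole-placement routine; the double-integrator numbers below are illustrative assumptions, not from the lecture:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative example (assumed numbers): find K so that A - BK has
# the desired controller poles.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

K = place_poles(A, B, [-1.0, -2.0]).gain_matrix
print(K)                                            # [[2. 3.]] here
print(np.sort(np.linalg.eigvals(A - B @ K).real))   # [-2., -1.]
```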


SLIDE 46

Observer in the Presence of Control Input

◮ Let's see what goes wrong when we use the old approach:
\[ \dot{\hat{x}} = (A - LC)\hat{x} + Ly \]

◮ For the estimation error e = x − x̂, we have
\begin{align*}
\dot{e} &= \dot{x} - \dot{\hat{x}} \\
&= Ax + Bu - [(A - LC)\hat{x} + LCx] \\
&= (A - LC)e + Bu \quad \text{(not good)}
\end{align*}

◮ Idea: since u is a signal we can access, let's use it as an input to the observer to cancel the Bu term from ẋ.

◮ Modified observer:
\[ \dot{\hat{x}} = (A - LC)\hat{x} + Ly + Bu \]
\begin{align*}
\dot{e} &= \dot{x} - \dot{\hat{x}} \\
&= Ax + Bu - [(A - LC)\hat{x} + LCx + Bu] \\
&= (A - LC)e \quad \text{regardless of } u
\end{align*}
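A quick simulation confirms that once the Bu term is fed into the observer, the estimation error decays no matter what the input does. A sketch with assumed illustrative matrices and an arbitrary input signal:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative numbers (assumptions, not from the lecture).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0], [2.0]])            # makes A - LC Hurwitz

def u(t):
    return np.array([np.sin(2.0 * t)])  # some nonzero input signal

def rhs(t, z):
    x, xhat = z[:2], z[2:]
    y = C @ x
    dx = A @ x + B @ u(t)
    # modified observer: feed u into the observer to cancel the Bu term
    dxhat = (A - L @ C) @ xhat + L @ y + B @ u(t)
    return np.concatenate([dx, dxhat])

z0 = np.array([1.0, 0.0, 0.0, 0.0])     # x(0) = (1, 0), xhat(0) = 0
sol = solve_ivp(rhs, (0.0, 10.0), z0, rtol=1e-8)
e_final = sol.y[:2, -1] - sol.y[2:, -1]
print(np.linalg.norm(e_final))          # ~0: error decays despite u != 0
```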


SLIDE 50

Observer and Controller

System:
\[ \dot{x} = Ax + Bu, \qquad y = Cx \]
Observer:
\[ \dot{\hat{x}} = (A - LC)\hat{x} + Ly + Bu \]
Error:
\[ \dot{e} = (A - LC)e \]

◮ By observability, we can arbitrarily assign eig(A − LC); these should be farther into the LHP than the desired controller poles.

Controller: u = −Kx̂ (estimated state feedback)

◮ By controllability, we can arbitrarily assign eig(A − BK).

SLIDE 51

Observer and Controller

System:
\[ \dot{x} = Ax + Bu, \qquad y = Cx \]
Observer:
\[ \dot{\hat{x}} = (A - LC)\hat{x} + Ly + Bu \]
Controller: u = −Kx̂

The overall observer-controller system is
\[ \dot{\hat{x}} = (A - LC)\hat{x} + Ly + B\underbrace{(-K\hat{x})}_{=\,u} = (A - LC - BK)\hat{x} + Ly, \qquad u = -K\hat{x} \]
(dynamic output feedback): this is a dynamical system with input y and output u.


SLIDE 54

Dynamic Output Feedback

\[ \dot{x} = Ax + Bu, \qquad y = Cx \]
\[ \dot{\hat{x}} = (A - LC - BK)\hat{x} + Ly, \qquad u = -K\hat{x} \]

[block diagram: the plant output y drives the observer-based controller, which returns u = −Kx̂ to the plant]

Controller transfer function (from y to u):
\[ s\hat{X} = (A - LC - BK)\hat{X} + LY, \qquad U = -K\hat{X} \]
\[ U = \underbrace{-K(Is - A + LC + BK)^{-1}L}_{=\,D(s)}\,Y \]
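The controller transfer function D(s) can be extracted from this state-space realization (input y, state x̂, output u). A sketch with assumed illustrative gains:

```python
import numpy as np
from scipy.signal import ss2tf

# Illustrative double-integrator design (all numbers are assumptions).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[2.0, 3.0]])     # controller poles at -1, -2
L = np.array([[9.0], [20.0]])  # observer poles at -4, -5

# Controller realization: state xhat, input y, output u = -K xhat
Ac = A - L @ C - B @ K
num, den = ss2tf(Ac, L, -K, np.zeros((1, 1)))
print(num, den)  # coefficients of D(s) = -K (sI - A + LC + BK)^{-1} L
```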

SLIDE 61

Dynamic Output Feedback: Does It Work?

Summarizing:

◮ When y = x, full state feedback u = −Kx achieves desired pole placement.

◮ How do we know that u = −Kx̂ achieves similar objectives?

Here is our overall closed-loop system:
\[ \dot{x} = Ax - BK\hat{x}, \qquad \dot{\hat{x}} = (A - LC - BK)\hat{x} + LCx \]

We can write it in block matrix form:
\[ \begin{pmatrix} \dot{x} \\ \dot{\hat{x}} \end{pmatrix} = \begin{pmatrix} A & -BK \\ LC & A - LC - BK \end{pmatrix} \begin{pmatrix} x \\ \hat{x} \end{pmatrix} \]

How do we relate this to the "nominal" behavior, A − BK?

SLIDE 70

Dynamic Output Feedback

\[ \begin{pmatrix} \dot{x} \\ \dot{\hat{x}} \end{pmatrix} = \begin{pmatrix} A & -BK \\ LC & A - LC - BK \end{pmatrix} \begin{pmatrix} x \\ \hat{x} \end{pmatrix} \]

Let us transform to new coordinates:
\[ \begin{pmatrix} x \\ \hat{x} \end{pmatrix} \to \begin{pmatrix} x \\ e \end{pmatrix} = \begin{pmatrix} x \\ x - \hat{x} \end{pmatrix} = \underbrace{\begin{pmatrix} I & 0 \\ I & -I \end{pmatrix}}_{T} \begin{pmatrix} x \\ \hat{x} \end{pmatrix} \]

Two key observations:

◮ T is invertible, so the new representation is equivalent to the old one;

◮ in the new coordinates, we have
\[ \dot{x} = Ax - BK\hat{x} = (A - BK)x + BK(x - \hat{x}) = (A - BK)x + BKe \]
\[ \dot{e} = (A - LC)e \]
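The coordinate change can be verified numerically: with T = [[I, 0], [I, −I]] (which is its own inverse), the matrix T A_cl T⁻¹ should come out block upper triangular, with A − BK and A − LC on the diagonal. A sketch with assumed illustrative matrices:

```python
import numpy as np

# Illustrative numbers (assumptions, not from the lecture).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[2.0, 3.0]])
L = np.array([[9.0], [20.0]])

I2 = np.eye(2)
T = np.block([[I2, np.zeros((2, 2))],
              [I2, -I2]])                 # note: T is an involution, T^-1 = T
Acl = np.block([[A, -B @ K],
                [L @ C, A - L @ C - B @ K]])

# Dynamics matrix in the (x, e) coordinates
Anew = T @ Acl @ np.linalg.inv(T)
print(np.round(Anew, 10))
# Expected block structure: [[A - BK, BK], [0, A - LC]]
```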

SLIDE 73

The Main Result: Separation Principle

So now we can write
\[ \begin{pmatrix} \dot{x} \\ \dot{e} \end{pmatrix} = \underbrace{\begin{pmatrix} A - BK & BK \\ 0 & A - LC \end{pmatrix}}_{\text{upper block-triangular matrix}} \begin{pmatrix} x \\ e \end{pmatrix} \]

The closed-loop characteristic polynomial is
\[ \det \begin{pmatrix} Is - A + BK & -BK \\ 0 & Is - A + LC \end{pmatrix} = \det(Is - A + BK) \cdot \det(Is - A + LC) \]

Separation principle. The closed-loop eigenvalues are
{controller poles (roots of det(Is − A + BK))} ∪ {observer poles (roots of det(Is − A + LC))}.
(This holds only for linear systems!)
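The separation principle can be verified numerically: the eigenvalues of the 2n × 2n closed-loop matrix should be exactly the union of eig(A − BK) and eig(A − LC). A sketch with assumed illustrative gains:

```python
import numpy as np

# Illustrative numbers (assumptions, not from the lecture).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[2.0, 3.0]])     # controller poles at -1, -2
L = np.array([[9.0], [20.0]])  # observer poles at -4, -5

# Closed-loop matrix in the original (x, xhat) coordinates
Acl = np.block([[A, -B @ K],
                [L @ C, A - L @ C - B @ K]])

closed_loop = np.sort(np.linalg.eigvals(Acl).real)
separate = np.sort(np.concatenate([np.linalg.eigvals(A - B @ K),
                                   np.linalg.eigvals(A - L @ C)]).real)
print(closed_loop)  # [-5, -4, -2, -1]: union of controller and observer poles
print(np.allclose(closed_loop, separate))
```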


SLIDE 78

Separation Principle

Separation principle. The closed-loop eigenvalues are
{controller poles (roots of det(Is − A + BK))} ∪ {observer poles (roots of det(Is − A + LC))}.
(This holds only for linear systems!)

Moral of the story:

◮ If we choose the observer poles to be several times faster than the controller poles (e.g., 2–5 times), then the controller poles will be dominant.

◮ Dynamic output feedback gives essentially the same performance as (nonimplementable) full-state feedback, provided the observer poles are far enough into the LHP.

◮ Remember: the system must be controllable and observable!
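Putting the whole recipe together, a minimal end-to-end sketch (all numbers are assumptions; place_poles and the rank tests stand in for the by-hand designs in the lecture):

```python
import numpy as np
from scipy.signal import place_poles

# 1) check controllability and observability, 2) place controller poles,
# 3) place observer poles a few times faster, 4) form the output feedback.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
assert np.linalg.matrix_rank(ctrb) == n and np.linalg.matrix_rank(obsv) == n

controller_poles = [-1.0, -2.0]
observer_poles = [-4.0, -8.0]   # roughly 4x faster than the controller poles
K = place_poles(A, B, controller_poles).gain_matrix
L = place_poles(A.T, C.T, observer_poles).gain_matrix.T

# Dynamic output feedback: xhat' = (A - LC - BK) xhat + L y, u = -K xhat
Acl = np.block([[A, -B @ K],
                [L @ C, A - L @ C - B @ K]])
print(np.sort(np.linalg.eigvals(Acl).real))  # union of the two pole sets
```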