SLIDE 1
Plan of the Lecture
◮ Review: observability; Luenberger observer and state estimation error.
◮ Today's topic: joint observer and controller design: dynamic output feedback.
Goal: learn how to design an observer and a controller to achieve accurate closed-loop pole placement.
Reading: FPE, Chapter 7
SLIDE 5
Is Full State Feedback Always Available?
In a typical system, measurements are provided by sensors:
[Block diagram: controller produces u, which drives the plant; the plant output y is fed back to the controller]
Full state feedback u = −Kx is not implementable!! In that case, an observer is used to estimate the state x:
[Block diagram: plant with input u and output y; an observer driven by y produces the estimate x̂]
SLIDE 7
State Estimation Using an Observer
If the system is observable, the state estimate x̂ is asymptotically accurate:
‖x̂(t) − x(t)‖ = [ Σ_{i=1}^n (x̂_i(t) − x_i(t))² ]^{1/2} → 0 as t → ∞
If we are successful, then we can try estimated state feedback:
[Block diagram: plant output y drives the observer; the estimate x̂ is fed through the gain −K to give u = −Kx̂]
SLIDE 11
Observability
Consider a single-output system (y ∈ R):
ẋ = Ax + Bu,  y = Cx,  x ∈ R^n
The Observability Matrix is defined as
O(A, C) = [ C ; CA ; … ; CA^{n−1} ]  (the n × n matrix whose rows are C, CA, …, CA^{n−1})
We say that the above system is observable if its observability matrix O(A, C) is invertible.
(This definition is only true for the single-output case; the multiple-output case involves the rank of O(A, C).)
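To make this concrete, here is a small NumPy sketch (the system matrices are illustrative, not from the lecture) that stacks C, CA, …, CA^{n−1} and tests observability via the rank of O(A, C):

```python
import numpy as np

# Illustrative 3-state single-output system (values made up for the example)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
C = np.array([[1.0, 0.0, 0.0]])

n = A.shape[0]
# Stack the rows C, CA, ..., CA^{n-1} to form the observability matrix
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Single-output case: observable iff O is invertible, i.e. has rank n
observable = np.linalg.matrix_rank(O) == n
print(observable)  # True for this example
```

For a multi-output C the same stack is used, but the test becomes rank(O) = n rather than invertibility, as noted above.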
SLIDE 14
Observer Canonical Form
A single-output state-space model ẋ = Ax + Bu, y = Cx is said to be in Observer Canonical Form (OCF) if the matrices A, C are of the form

A = [ 0 0 ⋯ 0 ∗
      1 0 ⋯ 0 ∗
      0 1 ⋯ 0 ∗
      ⋮     ⋱ ⋮
      0 0 ⋯ 1 ∗ ] ,   C = [ 0 0 ⋯ 0 1 ]

where the ∗ entries are arbitrary.
Fact: A system in OCF is always observable!!
(The proof of this for n > 2 uses the Jordan canonical form; we will not worry about this.)
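As a quick numerical sanity check of this fact, the sketch below builds an OCF pair following the pattern above (∗ entries in the last column, ones on the subdiagonal; conventions vary between textbooks) and confirms that its observability matrix has full rank. The helper name and the specific ∗ values are arbitrary choices for illustration:

```python
import numpy as np

def ocf_matrices(star_column):
    """Build (A, C) in Observer Canonical Form: the free parameters
    (the * entries) fill the last column of A, ones sit on the
    subdiagonal, and C reads off the last state."""
    n = len(star_column)
    A = np.zeros((n, n))
    A[:, -1] = star_column                        # the * column
    A[np.arange(1, n), np.arange(n - 1)] = 1.0    # subdiagonal ones
    C = np.zeros((1, n))
    C[0, -1] = 1.0
    return A, C

# Any choice of * entries gives an observable pair
A, C = ocf_matrices([-6.0, -11.0, -6.0])
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(3)])
print(np.linalg.matrix_rank(O))  # 3: full rank, so observable
```

The rank is full for any ∗ values because O comes out triangular with ones on its anti-diagonal, which is the structural reason behind the fact above.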
SLIDE 18
The Luenberger Observer
System: ẋ = Ax,  y = Cx
Observer: x̂̇ = (A − LC)x̂ + Ly
What happens to the state estimation error e = x − x̂ as t → ∞?
ė = (A − LC)e
Does e(t) converge to zero in some sense?
SLIDE 21
The Luenberger Observer
System: ẋ = Ax,  y = Cx
Observer: x̂̇ = (A − LC)x̂ + Ly
Error: ė = (A − LC)e
Recall our assumption that A − LC is Hurwitz (all eigenvalues are in the LHP). This implies that
‖x(t) − x̂(t)‖² = ‖e(t)‖² = Σ_{i=1}^n |e_i(t)|² → 0 as t → ∞
at an exponential rate, determined by the eigenvalues of A − LC.
For fast convergence, we want the eigenvalues of A − LC far into the LHP!!
SLIDE 24
Observability and Estimation Error
Fact: If the system ẋ = Ax, y = Cx is observable, then we can arbitrarily assign the eigenvalues of A − LC by a suitable choice of the output injection matrix L.
This is similar to the fact that controllability implies arbitrary closed-loop pole placement by state feedback.
In fact, these two facts are closely related, because CCF is dual to OCF.
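The duality can be exploited computationally: since eig(A − LC) = eig(Aᵀ − CᵀLᵀ), choosing L is pole placement for the pair (Aᵀ, Cᵀ). A sketch using SciPy's place_poles (the matrices and desired pole locations are illustrative):

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative observable pair (values made up for the example)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Duality: eig(A - LC) = eig(A^T - C^T L^T), so run state-feedback
# pole placement on the transposed pair (A^T, C^T)
desired_observer_poles = [-10.0, -12.0]
res = place_poles(A.T, C.T, desired_observer_poles)
L = res.gain_matrix.T  # output injection matrix, shape (2, 1)

print(np.sort(np.linalg.eigvals(A - L @ C).real))  # ≈ [-12., -10.]
```

The same call with (A, B) in place of (Aᵀ, Cᵀ) yields the state feedback gain K for eig(A − BK).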
SLIDE 30
Combining Full-State Feedback with an Observer
◮ So far, we have focused on autonomous systems (u = 0).
◮ What about nonzero inputs?
ẋ = Ax + Bu,  y = Cx
(assume (A, B) is controllable and (A, C) is observable)
◮ Today, we will learn how to use an observer together with estimated state feedback to (approximately) place closed-loop poles.
[Block diagram: plant output y drives the observer; the estimate x̂ is fed through the gain −K to give u = −Kx̂, which drives the plant]
SLIDE 33
Combining Full-State Feedback with an Observer
◮ Consider
ẋ = Ax + Bu,  y = Cx
where (A, B) is controllable and (A, C) is observable.
◮ We know how to find K such that A − BK has desired eigenvalues (controller poles).
◮ Since we do not have access to x, we must design an observer. But this time, we need a slight modification because of the Bu term.
SLIDE 46
Observer in the Presence of Control Input
◮ Let's see what goes wrong when we use the old approach:
x̂̇ = (A − LC)x̂ + Ly
◮ For the estimation error e = x − x̂, we have
ė = ẋ − x̂̇
  = Ax + Bu − [(A − LC)x̂ + LCx]
  = (A − LC)e + Bu  (not good!)
◮ Idea: since u is a signal we can access, let's use it as an input to the observer to cancel the Bu term from ẋ.
◮ Modified observer:
x̂̇ = (A − LC)x̂ + Ly + Bu
ė = ẋ − x̂̇
  = Ax + Bu − [(A − LC)x̂ + LCx + Bu]
  = (A − LC)e,  regardless of u
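The difference between the two observers shows up clearly in a simple forward-Euler simulation (a sketch: the matrices are illustrative, and L is chosen by hand so that A − LC is Hurwitz). The modified observer's error decays even though u ≠ 0, while the old observer's error is persistently excited by the Bu term:

```python
import numpy as np

# Illustrative system (values made up for the example)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[7.0], [10.0]])   # chosen so A - LC is Hurwitz

dt, T = 1e-3, 5.0
x = np.array([[1.0], [0.0]])
xh_old = np.zeros((2, 1))   # old observer:      x'  = (A-LC)x + Ly
xh_new = np.zeros((2, 1))   # modified observer: x'  = (A-LC)x + Ly + Bu

for k in range(int(T / dt)):
    u = np.array([[np.sin(0.01 * k)]])   # some nonzero input (sin(10t))
    y = C @ x                            # measurement at time k
    x = x + dt * (A @ x + B @ u)
    xh_old = xh_old + dt * ((A - L @ C) @ xh_old + L @ y)
    xh_new = xh_new + dt * ((A - L @ C) @ xh_new + L @ y + B @ u)

err_old = np.linalg.norm(x - xh_old)
err_new = np.linalg.norm(x - xh_new)
print(err_new < 1e-3 < err_old)  # True: only the modified observer tracks
```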
SLIDE 50
Observer and Controller
System: ẋ = Ax + Bu,  y = Cx
Observer: x̂̇ = (A − LC)x̂ + Ly + Bu
Error: ė = (A − LC)e
◮ By observability, we can arbitrarily assign eig(A − LC); these should be farther into the LHP than the desired controller poles.
Controller: u = −Kx̂ (estimated state feedback)
◮ By controllability, we can arbitrarily assign eig(A − BK).
SLIDE 51
Observer and Controller
System: ẋ = Ax + Bu,  y = Cx
Observer: x̂̇ = (A − LC)x̂ + Ly + Bu
Controller: u = −Kx̂
The overall observer-controller system is
x̂̇ = (A − LC)x̂ + Ly + B(−Kx̂) = (A − LC − BK)x̂ + Ly
u = −Kx̂ (dynamic output feedback)
This is a dynamical system with input y and output u.
SLIDE 54
Dynamic Output Feedback
x̂̇ = (A − LC − BK)x̂ + Ly,  u = −Kx̂
[Block diagram: the observer and the gain −K together form the controller, which takes the plant output y and produces u = −Kx̂]
Controller transfer function (from y to u): taking Laplace transforms,
sX̂ = (A − LC − BK)X̂ + LY,  U = −KX̂
⇒ U = D(s) Y,  where D(s) = −K(sI − A + LC + BK)⁻¹L
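Numerically, D(s) can be obtained by converting the controller's state-space realization (A − LC − BK, L, −K, 0) to a transfer function. A sketch using SciPy's ss2tf (the plant matrices and the gains K, L are illustrative, assumed already designed):

```python
import numpy as np
from scipy.signal import ss2tf

# Illustrative plant and gains (made up; K, L assumed already designed)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[2.0, 3.0]])     # state feedback gain
L = np.array([[7.0], [10.0]])  # output injection gain

# Controller as a state-space system with input y and output u:
#   xhat' = (A - LC - BK) xhat + L y,   u = -K xhat
Ac = A - L @ C - B @ K
num, den = ss2tf(Ac, L, -K, np.zeros((1, 1)))
print(np.poly1d(num[0]))  # numerator of D(s)
print(np.poly1d(den))     # denominator: det(sI - A + LC + BK)
```

The denominator is the characteristic polynomial of A − LC − BK, i.e. the poles of the dynamic compensator itself.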
SLIDE 61
Dynamic Output Feedback: Does It Work?
Summarizing:
◮ When y = x, full state feedback u = −Kx achieves desired pole placement.
◮ How do we know that u = −Kx̂ achieves similar objectives?
Here is our overall closed-loop system:
ẋ = Ax − BKx̂
x̂̇ = (A − LC − BK)x̂ + LCx
We can write it in block matrix form:
d/dt [x; x̂] = [A, −BK; LC, A − LC − BK] [x; x̂]
How do we relate this to the "nominal" behavior A − BK?
SLIDE 70
Dynamic Output Feedback
d/dt [x; x̂] = [A, −BK; LC, A − LC − BK] [x; x̂]
Let us transform to new coordinates:
[x; x̂] → [x; e] = [x; x − x̂] = T [x; x̂],  where T = [I, 0; I, −I]
Two key observations:
◮ T is invertible, so the new representation is equivalent to the old one
◮ in the new coordinates, we have
ẋ = Ax − BKx̂ = (A − BK)x + BK(x − x̂) = (A − BK)x + BKe
ė = (A − LC)e
SLIDE 73
The Main Result: Separation Principle
So now we can write
d/dt [x; e] = [A − BK, BK; 0, A − LC] [x; e]
where the matrix is block upper triangular. The closed-loop characteristic polynomial is therefore
det [sI − A + BK, −BK; 0, sI − A + LC] = det(sI − A + BK) · det(sI − A + LC)
Separation principle. The closed-loop eigenvalues are:
{controller poles (roots of det(sI − A + BK))} ∪ {observer poles (roots of det(sI − A + LC))}
(This holds only for linear systems!!)
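The separation principle is easy to check numerically: the eigenvalues of the 2n × 2n closed-loop matrix in the original (x, x̂) coordinates should equal the union of eig(A − BK) and eig(A − LC). A sketch with illustrative matrices and gains:

```python
import numpy as np

# Illustrative system and gains (made up for the example)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[2.0, 3.0]])
L = np.array([[7.0], [10.0]])

# Closed-loop matrix in the original (x, xhat) coordinates
M = np.block([[A, -B @ K],
              [L @ C, A - L @ C - B @ K]])

closed_loop = np.sort_complex(np.linalg.eigvals(M))
separate = np.sort_complex(np.concatenate([
    np.linalg.eigvals(A - B @ K),   # controller poles
    np.linalg.eigvals(A - L @ C),   # observer poles
]))
print(np.allclose(closed_loop, separate))  # True: separation principle
```

Note the check is done on M, not on the block-triangular transformed matrix, so it really is verifying the coordinate-change argument above rather than restating it.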
SLIDE 78
Separation Principle
Separation principle. The closed-loop eigenvalues are:
{controller poles (roots of det(sI − A + BK))} ∪ {observer poles (roots of det(sI − A + LC))}
(This holds only for linear systems!!)
Moral of the story:
◮ If we choose the observer poles to be several times faster than the controller poles (e.g., 2–5 times), then the controller poles will be dominant.
◮ Dynamic output feedback gives essentially the same performance as (nonimplementable) full-state feedback, provided the observer poles are far enough into the LHP.
◮ Remember: the system must be controllable and observable!!