SLIDE 1

Introduction to Constrained Control

Graham C. Goodwin September 2004

Centre for Complex Dynamic Systems and Control

slide-2
SLIDE 2

1.1 Background

Most of the literature on Control Theory deals with Linear Unconstrained Systems. However, to get the most out of a system, we usually need to deal with nonlinearities. The most common nonlinearities met in practice are Actuator Limits.

SLIDE 3

To get the most out of a system you need to push up against limits.

SLIDE 4

Other examples?

Playing sport at international level
Excelling in business or academia
Aerospace, chemical process control, . . .
Control is a key enabling technology in many (all?) areas
Getting the most out of control means pushing against boundaries

SLIDE 5

1.2 Approaches to Constrained Control

Cautious (back off performance demands so constraints are not met)
Serendipitous (allow occasional constraint violation)
Evolutionary (begin with a linear design and add embellishments, for example, anti-windup)
Tactical (include constraints from the beginning, for example, MPC)

SLIDE 6

1.3 Example: Rudder Roll Stabilisation of Ships (See lecture 3.5)

It has been observed that, unless appropriate actions are taken to deal with constraints, the performance of rudder roll stabilisation systems can be worse than if nothing is done, due to the effect of actuator amplitude and slew-rate constraints.

SLIDE 7

1.4 Model Predictive Control

Model Predictive Control (MPC) is a prime example of a tactical method.

Long history in the petrochemical industry. Many thousands of applications. Several commercial products. Industrial “credibility”.

SLIDE 8

Background

A survey by Mayne et al. (2000) divides the literature on MPC into three categories:
Theoretical foundations: the optimal control literature, including Dynamic Programming (Bellman 1957) and the Maximum Principle (for example, Lee & Markus 1967).
The “process control” literature, responsible for MPC’s adoption by industry, with evolving generations of MPC technology; an example of practice leading theory.
The “modern” literature, dealing with theoretical advances such as stability and robustness.

SLIDE 9

General Description

MPC is a control strategy which, for a model of the system, optimises performance (measured through a cost function) subject to constraints on the inputs, outputs and/or internal states. Due to the presence of constraints it is difficult, in general, to obtain closed formulae that solve the above control problem. Hence, MPC has traditionally solved the optimisation problem on line over a finite horizon using the receding horizon technique. This has also restricted the applicability of MPC to processes with slow time constants that allow the optimisation to be solved on line. Recent results allow faster systems to be handled.

SLIDE 10

An Illustrative Example

We will base our design on linear quadratic regulator [LQR] theory. Thus, consider an objective function of the form:

VN({xk}, {uk}) = (1/2) xNᵀ P xN + (1/2) Σ_{k=0}^{N−1} (xkᵀ Q xk + ukᵀ R uk),   (1)

where {uk} denotes the control sequence {u0, u1, . . . , uN−1}, and {xk} denotes the corresponding state sequence {x0, x1, . . . , xN}.

In (1), {uk} and {xk} are related by the linear state equation:

xk+1 = A xk + B uk,   k = 0, 1, . . . , N − 1,

where x0, the initial state, is assumed to be known.

SLIDE 11

The following parameters allow one to influence performance:
the optimisation horizon N
the state weighting matrix Q
the control weighting matrix R
the terminal state weighting matrix P
For example, reducing R gives less weight on control effort, hence a faster response. R → 0 is called “cheap control”.

SLIDE 12

Details of Example

Consider the specific linear system:

xk+1 = A xk + B uk,   (2)
yk = C xk,

with

A = [1 1; 0 1],   B = [0.5; 1],   C = [1 0],

which is the zero-order hold discretisation with sampling period 1 of the double integrator

d²y(t)/dt² = u(t).
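As a quick sanity check (not part of the original slides), the zero-order-hold discretisation can be reproduced numerically. Since the continuous-time double integrator is nilpotent, the block-matrix exponential truncates exactly after the quadratic term:

```python
import numpy as np

# Continuous-time double integrator d^2y/dt^2 = u, with states
# x1 = y (position) and x2 = dy/dt (velocity).
Ac = np.array([[0.0, 1.0],
               [0.0, 0.0]])
Bc = np.array([[0.0],
               [1.0]])

# Zero-order-hold discretisation with sampling period T = 1 via the
# block-matrix exponential exp([Ac Bc; 0 0] T). Because the block
# matrix M is nilpotent (M^3 = 0), the exponential series ends at M^2.
T = 1.0
M = np.block([[Ac, Bc], [np.zeros((1, 3))]])
Md = np.eye(3) + M * T + (M @ M) * T**2 / 2.0

A = Md[:2, :2]   # -> [[1, 1], [0, 1]]
B = Md[:2, 2:]   # -> [[0.5], [1]]
print(A, B)
```

This recovers exactly the A, B pair quoted in (2).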

SLIDE 13

Example


Figure: Feedback control loop for Example

sat(u) =  1   if u > 1,
          u   if |u| ≤ 1,
         −1   if u < −1.   (3)
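Definition (3) is simply a clip of the input to [−1, 1]; a one-line sketch:

```python
import numpy as np

def sat(u, limit=1.0):
    """Saturation (3): returns 1 for u > 1, u for |u| <= 1, -1 for u < -1."""
    return float(np.clip(u, -limit, limit))

# Examples: sat(2.0) -> 1.0, sat(0.3) -> 0.3, sat(-5.0) -> -1.0
```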

SLIDE 14

(i) Cautious Design

Choosing (N = ∞, P = 0) and weighting matrices Q = CᵀC = [1 0; 0 0] and R = 20 gives the linear state feedback law:

uk = −K xk = −[0.1603  0.5662] xk.
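The gain quoted above can be reproduced by iterating the Riccati recursion to convergence. This is a minimal sketch; any discrete-time algebraic Riccati solver would do equally well:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])
Q = C.T @ C                    # state weighting Q = C'C
R = np.array([[20.0]])         # heavy control weighting: cautious design

# Iterate P <- A'PA + Q - A'PB (R + B'PB)^{-1} B'PA to the
# infinite-horizon solution, then form the LQR gain K.
P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = A.T @ P @ A + Q - A.T @ P @ B @ K

print(K)   # approximately [[0.1603, 0.5662]]
```

The resulting closed loop A − BK is stable, with slow, conservative poles reflecting the heavy control penalty.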

SLIDE 15

Cautious Design

Figure: uk and yk for the cautious design uk = −Kxk with weights Q = CᵀC and R = 20.

SLIDE 16

(ii) Serendipitous Design

Using the same Q = CC in the infinite horizon objective function we try to obtain a faster response by reducing the control weight to R = 2. We expect that this will lead to a control law having “higher gain.”

SLIDE 17

Serendipitous Design

Figure: uk and yk for the unconstrained LQR design uk = −Kxk (dashed line), and for the serendipitous strategy uk = −sat(Kxk) (circle-solid line), with weights Q = CᵀC and R = 2.

SLIDE 18

Encouraged by the above result, we might be tempted to “push our luck” even further and aim for an even faster response by further reducing the weighting on the input signal. Accordingly, we decrease the control weighting in the LQR design even further, for example, to R = 0.1.

SLIDE 19

Figure: uk and yk for the unconstrained LQR design uk = −Kxk (dashed line), and for the serendipitous strategy uk = −sat(Kxk) (circle-solid line), with weights Q = CᵀC and R = 0.1.

SLIDE 20

The control law u = −sat(Kx) partitions the state space into three regions in accordance with the definition of the saturation function (3). Hence, the serendipitous strategy can be characterised as a switched control strategy in the following way:

u = K(x) = −Kx   if x ∈ R0,
            1    if x ∈ R1,
           −1    if x ∈ R2.   (4)

Notice that this is simply an alternative way of describing the serendipitous strategy, since for x ∈ R0 the input actually lies between the saturation limits. The partition is shown in the following figure.
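The switched form (4) is equivalent to u = −sat(Kx): region R0 is where |Kx| ≤ 1, R1 where Kx < −1 (so u = +1), and R2 where Kx > 1 (so u = −1). A sketch, using an illustrative gain that is not from the slides:

```python
import numpy as np

def switched_law(x, K):
    """Switched characterisation (4) of the serendipitous law u = -sat(Kx)."""
    v = float(K @ x)
    if abs(v) <= 1.0:     # x in R0: the linear law is unsaturated
        return -v
    elif v < -1.0:        # x in R1: u = +1
        return 1.0
    else:                 # x in R2: u = -1
        return -1.0

K = np.array([[0.4, 1.0]])            # illustrative gain, not from the slides
x = np.array([[-6.0], [0.0]])
print(switched_law(x, K))             # -> 1.0 (deep in region R1)
```

For every state, this returns exactly −sat(Kx); only the bookkeeping differs.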

SLIDE 21

Figure 5

Figure: State space trajectory and space partition for the serendipitous strategy uk = −sat(Kxk), with weights Q = CᵀC and R = 0.1.

SLIDE 22

Examination of figure 5 suggests a heuristic argument as to why the serendipitous control law may not be performing well in this case. We can think, in this example, of x2 as “velocity” and x1 as “position.” Now, in our attempt to change the position rapidly (from −6 to 0), the velocity has been allowed to grow to a relatively high level (+3). This would be fine if the braking action were unconstrained. However, our input (including braking) is limited to the range [−1, 1]. Hence, the available braking is inadequate to “pull the system up”, and overshoot occurs.

SLIDE 23

(iii) Tactical Design

Perhaps the above heuristic argument gives us some insight into how we could remedy the problem. A sensible idea would seem to be to try to “look ahead” and take account of future input constraints (that is, the limited braking authority available). To test this idea, we take the objective function (1) as a starting point.

SLIDE 24

Tactical Design

We use a prediction horizon N = 2 and minimise, at each sampling instant i and for the current state xi, the two-step objective function:

V2({xk}, {uk}) = (1/2) xi+2ᵀ P xi+2 + (1/2) Σ_{k=i}^{i+1} (xkᵀ Q xk + ukᵀ R uk),   (5)

subject to the equality and inequality constraints:

xk+1 = A xk + B uk,
|uk| ≤ 1,   (6)

for k = i and k = i + 1.

SLIDE 25

Tactical Design

In the objective function (5), we set, as before, Q = CᵀC, R = 0.1. The terminal state weighting matrix P is taken to be the solution of the Riccati equation P = AᵀPA + Q − Kᵀ(R + BᵀPB)K, where K = (R + BᵀPB)⁻¹BᵀPA is the corresponding gain.
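The terminal weighting can be checked numerically: iterate the Riccati recursion with Q = CᵀC and R = 0.1, then verify that the limit satisfies the stated algebraic Riccati equation (a sketch; any DARE solver would serve):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])
Q = C.T @ C
R = np.array([[0.1]])

# Fixed-point iteration for the discrete-time algebraic Riccati equation.
P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = A.T @ P @ A + Q - A.T @ P @ B @ K

# Verify P = A'PA + Q - K'(R + B'PB)K with K = (R + B'PB)^{-1} B'PA.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
residual = P - (A.T @ P @ A + Q - K.T @ (R + B.T @ P @ B) @ K)
print(np.max(np.abs(residual)))   # effectively zero at convergence
```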

SLIDE 26

Tactical Design

As a result of minimising (5) subject to (6), we obtain an optimal fixed-horizon control sequence {ui, ui+1}. We then apply the resulting value of ui to the system. The state evolves to xi+1. We now shift the time instant from i to i + 1 and repeat this procedure. This is called receding horizon control [RHC] or model predictive control.

SLIDE 27

Receding Horizon Technique

(1) At time i and for the current state xi, solve an open-loop (OL) optimal control problem over a prediction horizon, using a model of the system to predict future states and taking into account the present and future constraints.
(2) Apply the first step of the resulting optimal OL control sequence.
(3) “Move the horizon”: repeat the procedure at time i + 1 for the current state xi+1.

Figure: Snapshots of the prediction horizon moving forward at successive times.
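The three steps above can be sketched end to end. This is only an illustration: the box-constrained two-step problem (5)–(6) is solved here with scipy.optimize.minimize rather than the QP machinery of the lectures, and the initial state x0 = (−6, 0) is assumed from the figures:

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])
Q = C.T @ C
R = 0.1
N = 2

# Terminal weight P from the Riccati iteration, as on the previous slide.
P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(np.array([[R]]) + B.T @ P @ B, B.T @ P @ A)
    P = A.T @ P @ A + Q - A.T @ P @ B @ K

def v2(u, x0):
    """Two-step objective (5) along the trajectory predicted from x0."""
    x, cost = x0, 0.0
    for k in range(N):
        cost += 0.5 * (float(x.T @ Q @ x) + R * u[k] ** 2)
        x = A @ x + B * u[k]          # model-based prediction (6)
    return cost + 0.5 * float(x.T @ P @ x)

def rhc_step(x0):
    """Steps (1)-(2): solve the open-loop problem, apply the first input."""
    res = minimize(v2, np.zeros(N), args=(x0,), bounds=[(-1.0, 1.0)] * N)
    return res.x[0]

# Step (3): shift the horizon and repeat, closing the loop.
x = np.array([[-6.0], [0.0]])         # assumed initial state
for k in range(25):
    u = rhc_step(x)
    x = A @ x + B * u

print(float(C @ x))   # the output has settled near zero
```

For problems this small the optimiser choice is immaterial; dedicated QP solvers matter once the horizon or state dimension grows.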

SLIDE 28

Figure: uk and yk for the unconstrained LQR design uk = −Kxk (dashed line), and for the receding horizon design (circle-solid line), with weights Q = CᵀC and R = 0.1.

SLIDE 29

We will see later that the receding horizon strategy described above also leads to a partition of the state space into different regions in which affine control laws hold. The result is shown (for interest) in figure 7. The region R2 corresponds to the region R2 in figure 5 and represents the area of state space where u = −1 is applied. Comparing figure 5 and figure 7, we see that the region R2 has been “bent over” in figure 7, so that u = −1 occurs at lower values of x2 (velocity) than was the case in figure 5. This is in accordance with our heuristic argument about “needing to brake earlier.”

SLIDE 30

Figure 7

Figure: State space plot for the receding horizon tactical design.

SLIDE 31

Figure 5 and Figure 7

Figure: State space trajectory and space partition for the serendipitous strategy uk = −sat(Kxk), with weights Q = CᵀC and R = 0.1.

Figure: State space plot for the receding horizon tactical design.

SLIDE 32

Summary

Can often avoid constraints by lowering performance demands
However, this is at a cost
If we increase demands, constraints are met
Small violations are not too significant
We soon get poor performance
Rethink the problem and add constraints into the design
This leads to the idea of Receding Horizon Control
