Improving MPC performance by stabilizing feedbacks - Roberto Guglielmi (PowerPoint presentation)



SLIDE 1

Improving MPC performance by stabilizing feedbacks

Roberto Guglielmi

Gran Sasso Science Institute (GSSI) - L'Aquila, IT

based on a joint work with Eduardo Cerpa, CL, Lars Grüne, DE, and Dante Kalise, UK

VI Latin American Workshop on Optimization and Control Escuela Politécnica Nacional, Quito, Ecuador September 3-7, 2018

SLIDE 2

Introduction

Common tasks in an optimal control problem

min_{u ∈ U} J(x0, u),  J(x0, u) := ∫₀^{TE} ℓ(x(t), u(t)) dt,  TE ∈ (0, ∞],
subject to ẋ(t) = f(x(t), u(t)), t ∈ (0, TE), x(0) = x0,

include
◮ Steering the state to a desired equilibrium and keeping it there
◮ Following a reference trajectory

SLIDE 3

Introduction

Common tasks in an optimal control problem

min_{u ∈ U} J(x0, u),  J(x0, u) := ∫₀^{TE} ℓ(x(t), u(t)) dt,  TE ∈ (0, ∞],
subject to ẋ(t) = f(x(t), u(t)), t ∈ (0, TE), x(0) = x0,

include
◮ Steering the state to a desired equilibrium and keeping it there
◮ Following a reference trajectory

(Possibly) infinite-horizon optimal control problem =⇒ high computational complexity

SLIDE 4

Introduction

The MPC scheme applied to the nonlinear infinite-horizon OCP splits the problem into several iterative OCPs on a finite horizon:

min_{u ∈ UT(x0)} JT(x0, u),  JT(x0, u) := ∫₀^{T} ℓ(x(t), u(t)) dt,
subject to ẋ(t) = f(x(t), u(t)), t ∈ (0, T), x(0) = x0,

where
T > 0 is the optimization horizon, x0, x(t) ∈ X,
UT(x0) is the set of admissible controls for the initial condition x0 up to time T,
ℓ(x, u) is the running cost,
f(x, u) is the controlled nonlinear dynamics.

SLIDE 5

In a discrete-time setting, the MPC scheme constructs a feedback law µ according to the following steps:

  1. Given an initial condition zµ(0) ∈ X and a horizon N ≥ 2, set n = 0.

  2. Minimize

     JN(z0, u) := Σ_{k=0}^{N−1} ℓ(zu(k; z0), u(k))

     subject to z(k + 1) = f(z(k), u(k)), with initial value z0 = zµ(n). Apply the first value of the resulting optimal control sequence u∗ ∈ UN, that is, set µ(zµ(n)) := u∗(0).

  3. Evaluate zµ(n + 1) according to

     zµ(k + 1) = f(zµ(k), µ(zµ(k))),

     set n := n + 1 and run again from step 2.
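The three steps above can be sketched as a short closed loop in code. This is a minimal illustration on a hypothetical scalar system z(k+1) = 2 z(k) + u(k) with a quadratic stage cost; the dynamics, the weight λ = 0.1, and the use of scipy.optimize.minimize as the inner finite-horizon solver are assumptions of the sketch, not taken from the slides.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical unstable scalar dynamics and quadratic stage cost
# (illustrative choices, not from the slides).
def f(z, u):
    return 2.0 * z + u          # z(k+1) = f(z(k), u(k)), unstable for u = 0

def stage_cost(z, u, lam=0.1):
    return 0.5 * z**2 + 0.5 * lam * u**2

def J_N(u_seq, z0):
    """Finite-horizon cost J_N(z0, u) = sum_{k=0}^{N-1} l(z_u(k; z0), u(k))."""
    z, cost = z0, 0.0
    for u in u_seq:
        cost += stage_cost(z, u)
        z = f(z, u)
    return cost

def mpc_feedback(z0, N=2):
    """Step 2: minimize J_N and return the first optimal control u*(0)."""
    res = minimize(J_N, np.zeros(N), args=(z0,))
    return res.x[0]

# Steps 1-3: closed loop z_mu(n+1) = f(z_mu(n), mu(z_mu(n)))
z = 5.0
for n in range(30):
    z = f(z, mpc_feedback(z))
print(abs(z) < 1e-3)  # the loop drives z to the origin
```

For this particular scalar example even the shortest horizon N = 2 happens to stabilize; the talk is about PDE settings where that generally fails without extra structure.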

SLIDE 6

MPC by a trajectory point of view

(Figure: time axis split into past and future up to TE, showing a reference trajectory)

Consider an optimal control problem on [0, TE].

SLIDE 7

MPC by a trajectory point of view

(Figure: reference trajectory, measured state, and past control; sample rate and prediction horizon over the time steps n, n+1, n+2, ..., n+N)

Choose a horizon N ∈ ℕ and a sample rate T > 0. For each time tn := nT, n = 0, 1, 2, ... :

SLIDE 8

MPC by a trajectory point of view

(Figure: reference trajectory, measured state, and past control; sample rate and prediction horizon)

  1. Measure the current state z(n).
  2. Set z0 := z(n) and solve the optimal control problem on the current time horizon [tn, tn+N].

SLIDE 9

MPC by a trajectory point of view

(Figure: as before, now also showing the predicted state and predicted control over the prediction horizon)

  3. Denote the calculated optimal control sequence by u∗(·) and apply its first value u∗(0) on [tn, tn+1].

SLIDE 10

MPC by a trajectory point of view

(Figure: predicted state and predicted control, shifted one sampling step forward)

  4. If tn+1 < TE, set n := n + 1 and go to 1. Otherwise end.

Adaptation of http://en.wikipedia.org/wiki/File:MPC_scheme_basic.svg (CC BY-SA 3.0)

SLIDE 11

Stability of the MPC closed loop

Main theoretical question: How to guarantee stability of the MPC closed loop system zµ(k + 1) = f(zµ(k), µ(zµ(k))) ?

SLIDE 12

Stability of the MPC closed loop

Main theoretical question: How to guarantee stability of the MPC closed loop system zµ(k + 1) = f(zµ(k), µ(zµ(k))) ? Two possible approaches:

  • Add stabilizing terminal conditions and/or constraints
  • Tune the horizon length N and/or the stage cost ℓ
SLIDE 14

Stability of the MPC closed loop

Main theoretical question: How to guarantee stability of the MPC closed loop system zµ(k + 1) = f(zµ(k), µ(zµ(k))) ? Two possible approaches:

  • Add stabilizing terminal conditions and/or constraints
  • Tune the horizon length N and/or the stage cost ℓ

Main motivation: even for small optimization horizons N we can - in principle - obtain large feasible sets, i.e., sets of initial values for which the finite-horizon problem is well defined.

SLIDE 15

Objective of the talk

Describe a general framework that enables stability of the MPC closed loop via instantaneous control

SLIDE 16

Objective of the talk

Describe a general framework that enables stability of the MPC closed loop via instantaneous control

  1. The method shall be suitable for
     ◮ an unstable equilibrium z̄
     ◮ a stage cost of tracking type towards z̄
  2. The method requires the knowledge of a given stabilizing feedback towards z̄

SLIDE 17

Outline

Exponential Controllability Condition Improving MPC performance by stabilizing feedback Linear Heat Equation The Schlögl system The Burgers’ Equation Conclusions

SLIDE 18

Exponential Controllability condition

Without terminal constraints, stability is known to hold for a "sufficiently large optimization horizon N" [Alamir/Bornard ’95, Jadbabaie/Hauser ’05, Grimm/Messina/Tuna/Teel ’05]. In order to estimate how large N must be, we need quantitative information.

SLIDE 19

Exponential Controllability condition

Without terminal constraints, stability is known to hold for a "sufficiently large optimization horizon N" [Alamir/Bornard ’95, Jadbabaie/Hauser ’05, Grimm/Messina/Tuna/Teel ’05]. In order to estimate how large N must be, we need quantitative information.

Exponential Controllability w.r.t. the stage cost ℓ
The system z(k + 1) = f(z(k), u(k)) is called exponentially controllable w.r.t. the stage cost ℓ iff there exist an overshoot bound C ≥ 1 and a decay rate σ ∈ (0, 1) such that for each state z̄ = z(0) ∈ X there is a control u_{z̄} ∈ U satisfying

ℓ(z_{u_{z̄}}(k, z̄), u_{z̄}(k)) ≤ C σ^k min_{u ∈ U} ℓ(z̄, u)  for all k ∈ ℕ.
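The definition can be checked numerically on a toy system. Below, the scalar system z(k+1) = 2 z(k) + u(k), the stage cost, and the candidate control u(k) = −1.5 z(k) are all illustrative assumptions; for them the inequality holds with σ = 0.25 and C = 1 + 2.25 λ.

```python
import numpy as np

# Illustrative check of the exponential controllability condition for the
# hypothetical scalar system z(k+1) = 2 z(k) + u(k) with stage cost
# l(z, u) = 0.5 z^2 + 0.5*lam*u^2 (choices not taken from the slides).
lam = 0.1

def stage_cost(z, u):
    return 0.5 * z**2 + 0.5 * lam * u**2

# Candidate control sequence u(k) = -1.5 z(k) gives z(k+1) = 0.5 z(k), so the
# stage cost decays like sigma^k with sigma = 0.5^2 = 0.25 and overshoot
# C = (0.5 + 1.125*lam) / 0.5 = 1 + 2.25*lam >= 1.
C, sigma = 1 + 2.25 * lam, 0.25

zbar = 3.0
min_l = stage_cost(zbar, 0.0)        # min_u l(zbar, u) is attained at u = 0
z = zbar
for k in range(20):
    u = -1.5 * z
    assert stage_cost(z, u) <= C * sigma**k * min_l + 1e-12
    z = 2 * z + u
print("exponential controllability holds with C =", C, "sigma =", sigma)
```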

SLIDE 20

Stability of MPC without stabilizing terminal constraints

Theorem (Grüne, Pannek, 2011)

Let (z̄, ū) be an equilibrium, i.e., f(z̄, ū) = z̄. Consider the MPC scheme with stage cost

ℓ(z(k), u(k)) = (1/2) ‖z(k) − z̄‖² + (λ/2) ‖u(k) − ū‖²,  λ > 0.

SLIDE 21

Stability of MPC without stabilizing terminal constraints

Theorem (Grüne, Pannek, 2011)

Let (z̄, ū) be an equilibrium, i.e., f(z̄, ū) = z̄. Consider the MPC scheme with stage cost

ℓ(z(k), u(k)) = (1/2) ‖z(k) − z̄‖² + (λ/2) ‖u(k) − ū‖²,  λ > 0.

In particular, ℓ(z̄, ū) = 0 and ℓ(z, u) > 0 for (z, u) ≠ (z̄, ū).

SLIDE 22

Stability of MPC without stabilizing terminal constraints

Theorem (Grüne, Pannek, 2011)

Let (z̄, ū) be an equilibrium, i.e., f(z̄, ū) = z̄. Consider the MPC scheme with stage cost

ℓ(z(k), u(k)) = (1/2) ‖z(k) − z̄‖² + (λ/2) ‖u(k) − ū‖²,  λ > 0.

  1. Assume that exponential controllability w.r.t. ℓ holds. Then there exists N0 ≥ 2 such that the equilibrium (z̄, ū) is globally asymptotically stable for the MPC closed loop for any optimization horizon N ≥ N0.
slide-23
SLIDE 23

Stability of MPC without stabilizing terminal constraints

Theorem (Grüne, Pannek, 2011)

Let (¯ z, ¯ u) be an equilibrium, i.e., f(¯ z, ¯ u) = ¯ z . Consider the MPC scheme with stage cost ℓ(z(k), u(k)) = 1 2z(k) − ¯ z2 + λ 2u(k) − ¯ u2 , λ > 0 .

  • 1. Assume the exponential controllability w.r.t. ℓ hold.

Then there exists N0 ≥ 2 such that the equilibrium (¯ z, ¯ u) is globally asymptotically stable for the MPC closed loop for any optimization horizon N ≥ N0 .

  • 2. If, in addition, the exponential controllability property holds

with C = 1 , then N0 = 2 (instantaneous control).

SLIDE 24

Stability chart for C and σ

Exponential controllability condition

ℓ(z_{u_{z̄}}(k, z̄), u_{z̄}(k)) ≤ C σ^k min_{u ∈ U} ℓ(z̄, u),  ∀k ∈ ℕ

SLIDE 25

Stability chart for C and σ

Exponential controllability condition

ℓ(z_{u_{z̄}}(k, z̄), u_{z̄}(k)) ≤ C σ^k min_{u ∈ U} ℓ(z̄, u),  ∀k ∈ ℕ

Dependence of C and σ on N

(Figure: Harald Voit)

SLIDE 26

Outline

Exponential Controllability Condition Improving MPC performance by stabilizing feedback Linear Heat Equation The Schlögl system The Burgers’ Equation Conclusions

SLIDE 27

Setting of the problem

Let z̄ be an unstable equilibrium of z(k + 1) = f(z(k), u(k))

SLIDE 28

Setting of the problem

Let z̄ be an unstable equilibrium of z(k + 1) = f(z(k), u(k))

z̄ unstable =⇒ usually a long horizon N is needed to ensure stability of the MPC

SLIDE 29

Setting of the problem

Let z̄ be an unstable equilibrium of z(k + 1) = f(z(k), u(k)). Assume that
◮ the stage cost ℓ is of tracking type towards z̄, i.e., ℓ(z, u) = (1/2) ‖z − z̄‖² + (λ/2) ‖u‖², λ > 0;
◮ there exists a given feedback us(z0) stabilizing the system towards z̄ at an exponential rate.

SLIDE 30

Setting of the problem

Let z̄ be an unstable equilibrium of z(k + 1) = f(z(k), u(k)). Assume that
◮ the stage cost ℓ is of tracking type towards z̄, i.e., ℓ(z, u) = (1/2) ‖z − z̄‖² + (λ/2) ‖u‖², λ > 0;
◮ there exists a given feedback us(z0) stabilizing the system towards z̄ at an exponential rate.

Then, by suitably tuning the stage cost ℓ (or the dynamics f), we can achieve stability of the MPC closed loop with N = 2 (instantaneous control).

SLIDE 31

Main Result

The equilibrium z̄ is globally asymptotically stable for the MPC closed loop with dynamics z(k + 1) = f(z(k), u(k)), z(0) = z0, and stage cost

ℓs(z, u) = (1/2) ‖z − z̄‖² + (λ/2) ‖u − us(z0)‖²,  λ > 0,

for any optimization horizon N ≥ 2.

SLIDE 32

Main Result

The equilibrium z̄ is globally asymptotically stable for the MPC closed loop with dynamics z(k + 1) = f(z(k), u(k)), z(0) = z0, and stage cost

ℓs(z, u) = (1/2) ‖z − z̄‖² + (λ/2) ‖u − us(z0)‖²,  λ > 0,

or, equivalently, with dynamics and stage cost

z(k + 1) = fs(z(k), u(k)) = f(z(k), us(z0) + u(k)),  z(0) = z0,
ℓ(z, u) = (1/2) ‖z − z̄‖² + (λ/2) ‖u‖²,  λ > 0,

for any optimization horizon N ≥ 2,
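The second, pre-stabilized formulation can be sketched directly: wrap the dynamics with the known stabilizing feedback and let MPC optimize only a correction around it. The scalar system z(k+1) = 2z + u and the feedback us(z) = −1.9 z below are illustrative assumptions (and us is re-evaluated at the current state at each MPC step), not data from the slides.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the main result on a hypothetical unstable scalar system
# z(k+1) = f(z, u) = 2z + u (illustrative, not from the slides).
def f(z, u):
    return 2.0 * z + u

def us(z):
    return -1.9 * z          # assumed known stabilizing feedback: 2z + us(z) = 0.1 z

def fs(z, u):
    return f(z, us(z) + u)   # pre-stabilized dynamics fs(z, u) = f(z, us(z) + u)

def mpc_step(dyn, z0, lam=1.0, N=2):
    def JN(useq):
        z, cost = z0, 0.0
        for u in useq:
            cost += 0.5 * z**2 + 0.5 * lam * u**2
            z = dyn(z, u)
        return cost
    return minimize(JN, np.zeros(N)).x[0]

# MPC with the shortest horizon N = 2 on the pre-stabilized dynamics
z = 4.0
for _ in range(25):
    u = mpc_step(fs, z)      # optimized correction around us(z)
    z = fs(z, u)             # total applied control is us(z) + u
print(abs(z) < 1e-6)
```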

SLIDE 33

Outline

Exponential Controllability Condition Improving MPC performance by stabilizing feedback Linear Heat Equation The Schlögl system The Burgers’ Equation Conclusions

SLIDE 34

Internal control of the Heat Equation

Consider

yt − yxx − µy = χω u,  in (0, 1) × (0, +∞),
y(0, t) = 0,  t ∈ (0, +∞),
y(1, t) = 0,  t ∈ (0, +∞),
y(x, 0) = y0(x),  x ∈ (0, 1).

SLIDE 35

Internal control of the Heat Equation

Consider

yt − yxx − µy = χω u,  in (0, 1) × (0, +∞),
y(0, t) = 0, y(1, t) = 0,  t ∈ (0, +∞),
y(x, 0) = y0(x),  x ∈ (0, 1).

Let λ1 be the smallest eigenvalue of the operator (−∂xx) in H¹₀(0, 1). For µ ≥ λ1 the origin ȳ = 0 is not asymptotically stable for the uncontrolled equation (with u = 0).

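The instability threshold µ = λ1 = π² can be verified on a finite-difference discretization; the grid size below is an illustrative choice.

```python
import numpy as np

# Finite-difference check (illustrative): for the 1-D heat equation
# y_t = y_xx + mu*y on (0,1) with Dirichlet BCs, the smallest eigenvalue of
# (-d2/dx2) on H^1_0(0,1) is lambda_1 = pi^2, so the origin loses stability
# exactly when mu crosses pi^2.
n = 200
h = 1.0 / (n + 1)
off = np.ones(n - 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) / h**2  # discrete y_xx

lam1 = np.pi**2
for mu in (0.5 * lam1, 2.0 * lam1):
    growth = np.linalg.eigvalsh(A + mu * np.eye(n)).max()  # dominant rate of y' = (A + mu I) y
    print(mu < lam1, growth < 0)   # stable iff mu < lambda_1 (up to O(h^2) error)
```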

SLIDE 37

Minimal horizon from Altmüller, Grüne, 2012

Discrete-time setting: y(k) := y(·, kT) , u(k) := u(·, kT) for some T > 0 (sampled data system with sampling time T)

SLIDE 38

Minimal horizon from Altmüller, Grüne, 2012

Discrete-time setting: y(k) := y(·, kT), u(k) := u(·, kT) for some T > 0 (sampled-data system with sampling time T). Compare the cost functionals

ℓy(z(k), u(k)) := (1/2) ‖y(k)‖²_{2,Ω} + (λ/2) ‖u(k)‖²,  and
ℓyx(z(k), u(k)) := (1/2) ‖yx(k)‖²_{2,Ω} + (λ/2) ‖u(k)‖².


SLIDE 40

Minimal horizon from Altmüller, Grüne, 2012

Discrete-time setting: y(k) := y(·, kT), u(k) := u(·, kT) for some T > 0 (sampled-data system with sampling time T). Compare the cost functionals

ℓy(z(k), u(k)) := (1/2) ‖y(k)‖²_{2,Ω} + (λ/2) ‖u(k)‖²,  and
ℓyx(z(k), u(k)) := (1/2) ‖yx(k)‖²_{2,Ω} + (λ/2) ‖u(k)‖².

In particular, the minimal horizon for ℓy is N∗ = 6.

SLIDE 41

Existence of stabilizing feedback

On the other hand, it is well understood that a Riccati feedback operator ū = P y stabilizes the problem to the origin ȳ = 0 over an infinite time horizon.
slide-42
SLIDE 42

Existence of stabilizing feedback

On the other hand, it is well understood that a Riccati feedback

  • perator ¯

u = Py stabilizes the problem to the origin ¯ y = 0

  • ver an infinite time horizon.

We thus plug the stabilizing feedback ¯ u in the equation. Let ys the solution of the controlled (stabilized) equation yt − yxx − µy = χω¯ u + χωu , in (0, 1) × (0, +∞) =: Q , y(0, t) = 0 , t ∈ (0, +∞) , y(1, t) = 0 , t ∈ (0, +∞) , y(x, 0) = y0(x) , x ∈ (0, 1) .

SLIDE 43

Existence of stabilizing feedback

On the other hand, it is well understood that a Riccati feedback operator ū = P y stabilizes the problem to the origin ȳ = 0 over an infinite time horizon.

We thus plug the stabilizing feedback ū into the equation. Let ys be the solution of the controlled (stabilized) equation

yt − yxx − µy = χω ū + χω u,  in (0, 1) × (0, +∞) =: Q,
y(0, t) = 0, y(1, t) = 0,  t ∈ (0, +∞),
y(x, 0) = y0(x),  x ∈ (0, 1).

The MPC closed loop with cost functional ℓ(ys, u) := (1/2) ‖ys(t)‖²_{2,Ω} + (λ/2) ‖u(t)‖² is asymptotically stable towards ȳ = 0 for any N ≥ 2.
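Such a Riccati feedback can be computed for a finite-dimensional model of the equation. The discretization, the value µ = 1.5 π², the control window ω = (0.3, 0.7), and the weights below are illustrative assumptions; the check confirms that the open loop is unstable while A − BK is stable.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch: compute a Riccati feedback u = -K y for a finite-difference model of
# y_t = y_xx + mu*y + chi_w u (mu, the window w = (0.3, 0.7), and the weights
# are assumptions, not slide data).
n, mu, lam = 60, 1.5 * np.pi**2, 0.1
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2 + mu * np.eye(n)
B = ((x > 0.3) & (x < 0.7)).astype(float).reshape(-1, 1)  # chi_w, scalar control

assert np.linalg.eigvalsh((A + A.T) / 2).max() > 0        # open loop unstable

Q, R = h * np.eye(n), lam * np.eye(1)                     # L2-type weights
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                           # feedback u = -K y
print(np.linalg.eigvals(A - B @ K).real.max() < 0)        # Riccati feedback stabilizes
```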

SLIDE 44

Numerical simulation for the linear heat equation

The origin ȳ = 0 is unstable for the free equation (u ≡ 0). Applying MPC with N = 2 to the heat equation fails to stabilize ys towards ȳ = 0.
slide-45
SLIDE 45

Numerical simulation for the linear heat equation

Applying MPC with N = 9 to the heat equation without Ric- cati feedback stabilizes ys to- wards ¯ y = 0 Applying MPC with N = 2 to the heat equation with Riccati feedback stabilizes ys towards ¯ y = 0

SLIDE 46

Outline

Exponential Controllability Condition Improving MPC performance by stabilizing feedback Linear Heat Equation The Schlögl system The Burgers’ Equation Conclusions

SLIDE 47

Boundary Control of the Schlögl system

Given real numbers y1 ≤ y2 ≤ y3, consider the polynomial ϕ(y) = (y − y1)(y − y2)(y − y3), ∀y ∈ R.

SLIDE 48

Boundary Control of the Schlögl system

Given real numbers y1 ≤ y2 ≤ y3, consider the polynomial ϕ(y) = (y − y1)(y − y2)(y − y3), ∀y ∈ R. Then ϕ has the following properties:
i) mϕ := inf ϕ′(y) > −∞;
ii) the infimum mϕ < 0 is attained at the point (y1 + y2 + y3)/3.

SLIDE 49

Boundary Control of the Schlögl system

Given real numbers y1 ≤ y2 ≤ y3, consider the polynomial ϕ(y) = (y − y1)(y − y2)(y − y3), ∀y ∈ R. Then ϕ has the following properties:
i) mϕ := inf ϕ′(y) > −∞;
ii) the infimum mϕ < 0 is attained at the point (y1 + y2 + y3)/3.

We consider the semilinear parabolic PDE

yt = yxx − Kϕ(y),  (x, t) ∈ (0, L) × (0, ∞),   (SchlogEQ)

with positive constants K, L, and appropriate initial and boundary conditions.
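Property ii) follows because ϕ′ is an upward parabola with vertex at (y1 + y2 + y3)/3; a quick numerical check with illustrative roots:

```python
import numpy as np

# Check (with illustrative roots) that m_phi = inf phi'(y) is attained at
# (y1 + y2 + y3)/3 for the cubic phi(y) = (y - y1)(y - y2)(y - y3).
y1, y2, y3 = -1.0, 0.0, 1.0            # assumed example roots

s1 = y1 + y2 + y3
s2 = y1 * y2 + y1 * y3 + y2 * y3

def dphi(y):
    return 3 * y**2 - 2 * s1 * y + s2   # phi'(y), an upward parabola

ystar = s1 / 3.0                        # vertex of phi'
grid = np.linspace(-10, 10, 200001)
print(np.isclose(dphi(grid).min(), dphi(ystar), atol=1e-6), dphi(ystar) < 0)
```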

SLIDE 50

Explicit stabilizing feedback for the Schlögl system

Global exponential stability result from Gugat, Tröltzsch, 2015:

SLIDE 51

Explicit stabilizing feedback for the Schlögl system

Global exponential stability result from Gugat, Tröltzsch, 2015: Assume that L²K < 1/(2|mϕ|), and let yd be a desired trajectory, that is, yd ∈ H²(Q) satisfies (SchlogEQ).

SLIDE 52

Explicit stabilizing feedback for the Schlögl system

Global exponential stability result from Gugat, Tröltzsch, 2015: Assume that L²K < 1/(2|mϕ|), and let yd be a desired trajectory, that is, yd ∈ H²(Q) satisfies (SchlogEQ). Consider the solution y to (SchlogEQ) with initial state y(·, 0) = y0 ∈ L∞(0, L) and with boundary conditions

yx(0, t) = C (y(0, t) − yd(0, t)) + (yd)x(0, t),
yx(L, t) = −C (y(L, t) − yd(L, t)) + (yd)x(L, t),   (SchlogBC)

where C ≥ 1/(2L).

SLIDE 53

Explicit stabilizing feedback for the Schlögl system

Global exponential stability result from Gugat, Tröltzsch, 2015: Assume that L²K < 1/(2|mϕ|), and let yd be a desired trajectory, that is, yd ∈ H²(Q) satisfies (SchlogEQ). Consider the solution y to (SchlogEQ) with initial state y(·, 0) = y0 ∈ L∞(0, L) and with boundary conditions

yx(0, t) = C (y(0, t) − yd(0, t)) + (yd)x(0, t),
yx(L, t) = −C (y(L, t) − yd(L, t)) + (yd)x(L, t),   (SchlogBC)

where C ≥ 1/(2L).

Then, setting µ = 1/L² − 2K|mϕ|, y converges exponentially fast in the L² sense to yd, i.e.,

∫₀^L |y(x, t) − yd(x, t)|² dx ≤ ( ∫₀^L |y0(x) − yd(x, 0)|² dx ) e^{−µt}

holds for all t ≥ 0.
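The decay predicted by this result can be observed in a simple explicit finite-difference simulation. All concrete values below (roots of ϕ, K = 0.3, C = 1, grid, initial state) are illustrative choices satisfying the hypotheses, not data from the talk.

```python
import numpy as np

# Explicit finite-difference sketch of the Schlogl system
# y_t = y_xx - K*phi(y), phi(y) = y^3 - y (roots -1, 0, 1, so m_phi = -1),
# on (0, 1) with the stabilizing feedback boundary conditions
# y_x(0,t) = C*y(0,t), y_x(1,t) = -C*y(1,t) and target y_d = 0.
# L = 1 and K = 0.3 satisfy L^2*K < 1/(2|m_phi|); C = 1 >= 1/(2L).
K, C = 0.3, 1.0
n, T = 50, 5.0
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
dt = 0.4 * h**2                       # explicit Euler stability margin
y = np.cos(np.pi * x)                 # assumed initial state

def step(y):
    g_left = y[1] - 2 * h * C * y[0]        # ghost points enforcing the BCs
    g_right = y[-2] - 2 * h * C * y[-1]
    ypad = np.concatenate(([g_left], y, [g_right]))
    lap = (ypad[2:] - 2 * ypad[1:-1] + ypad[:-2]) / h**2
    return y + dt * (lap - K * (y**3 - y))

norm0 = np.sqrt(h * np.sum(y**2))
for _ in range(int(T / dt)):
    y = step(y)
normT = np.sqrt(h * np.sum(y**2))
print(normT < 0.5 * norm0)            # exponential decay towards y_d = 0
```

With µ = 1 − 2·0.3·1 = 0.4, the theorem alone guarantees the L² norm shrinks at least by the factor e^{−µT/2} ≈ 0.37 over T = 5.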

SLIDE 54

Improving the tracking w.r.t. a given functional

We now introduce the cost functional

J(ys, u) = (1/2) ‖ys − yd‖²_{L²(Q)} + (λ1/2) ‖u1‖²_{L²(0,∞)} + (λ2/2) ‖u2‖²_{L²(0,∞)},

constrained to the control system

yt = yxx − Kϕ(y),  (x, t) ∈ Q,
y(x, 0) = y0(x),  x ∈ (0, L),
yx(0, t) − C (y(0, t) − yd(0, t)) − (yd)x(0, t) = u1(t),  t ∈ (0, ∞),
yx(L, t) + C (y(L, t) − yd(L, t)) − (yd)x(L, t) = u2(t),  t ∈ (0, ∞).

SLIDE 55

Improving the tracking w.r.t. a given functional

We now introduce the cost functional

J(ys, u) = (1/2) ‖ys − yd‖²_{L²(Q)} + (λ1/2) ‖u1‖²_{L²(0,∞)} + (λ2/2) ‖u2‖²_{L²(0,∞)},

constrained to the control system

yt = yxx − Kϕ(y),  (x, t) ∈ Q,
y(x, 0) = y0(x),  x ∈ (0, L),
yx(0, t) − C (y(0, t) − yd(0, t)) − (yd)x(0, t) = u1(t),  t ∈ (0, ∞),
yx(L, t) + C (y(L, t) − yd(L, t)) − (yd)x(L, t) = u2(t),  t ∈ (0, ∞).

From Gugat, Tröltzsch: the closed-loop system with ui = 0 is exponentially stable towards yd.

SLIDE 56

Improving the tracking w.r.t. a given functional

We now introduce the cost functional

J(ys, u) = (1/2) ‖ys − yd‖²_{L²(Q)} + (λ1/2) ‖u1‖²_{L²(0,∞)} + (λ2/2) ‖u2‖²_{L²(0,∞)},

constrained to the control system

yt = yxx − Kϕ(y),  (x, t) ∈ Q,
y(x, 0) = y0(x),  x ∈ (0, L),
yx(0, t) − C (y(0, t) − yd(0, t)) − (yd)x(0, t) = u1(t),  t ∈ (0, ∞),
yx(L, t) + C (y(L, t) − yd(L, t)) − (yd)x(L, t) = u2(t),  t ∈ (0, ∞).

From Gugat, Tröltzsch: the closed-loop system with ui = 0 is exponentially stable towards yd. Thus, we look for controls u1, u2 which improve the tracking performance of the feedback (SchlogBC) with respect to the cost functional J(ys, u).

SLIDE 57

MPC closed loop

In a discrete-time setting, the cost functional is recast as

J∞(ys, u) = Σ_{k=1}^{∞} ℓ(ys(k), u(k)),

where the running cost ℓ is given by

ℓ(ys(k), u(k)) = (1/2) ‖ys(k) − yd(k)‖²_{L²(0,L)} + (λ1/2) |u1(k)|² + (λ2/2) |u2(k)|²,

for a given sampling rate T > 0.

SLIDE 58

MPC closed loop

In a discrete-time setting, the cost functional is recast as

J∞(ys, u) = Σ_{k=1}^{∞} ℓ(ys(k), u(k)),

where the running cost ℓ is given by

ℓ(ys(k), u(k)) = (1/2) ‖ys(k) − yd(k)‖²_{L²(0,L)} + (λ1/2) |u1(k)|² + (λ2/2) |u2(k)|²,

for a given sampling rate T > 0. The associated cost functional on a finite horizon N is

JN(ys, u) = Σ_{k=1}^{N} ℓ(ys(k), u(k)).

SLIDE 59

Stability of the MPC closed loop

Moreover, the stage cost ℓ satisfies

ℓ(ys(k; 0), 0) = (1/2) ‖y(k) − yd(k)‖²_{L²(0,L)} ≤ (1/2) ‖y0 − yd(0)‖²_{L²(0,L)} e^{−µkT} = e^{−µkT} min_{u∈U} ℓ(y0, u)

SLIDE 60

Stability of the MPC closed loop

Moreover, the stage cost ℓ satisfies

ℓ(ys(k; 0), 0) = (1/2) ‖y(k) − yd(k)‖²_{L²(0,L)} ≤ (1/2) ‖y0 − yd(0)‖²_{L²(0,L)} e^{−µkT} = e^{−µkT} min_{u∈U} ℓ(y0, u)

=⇒ the exponential controllability condition is satisfied with σ = e^{−µT} and C = 1
slide-61
SLIDE 61

Stability of the MPC closed loop

Moreover, the stage cost ℓ satisfies ℓ(ys(k; 0), 0) = 1 2y(k) − yd(k)2

L2(0,L)

≤ 1 2y0 − yd(0)2

L2(0,L) e−µkT = e−µkT min u∈U ℓ(y0, u)

= ⇒ the exponential controllability condition is satisfied with σ = e−µT and C = 1 = ⇒ the MPC closed loop is stable for any N ≥ 2 .

SLIDE 62

Numerical simulation for the Schlögl system

For ϕ = y³ − y, K = 15, the origin yd = 0 is unstable. Applying MPC with N = 2 to (SchlogEQ) without the feedback (SchlogBC) fails to stabilize it towards yd = 0.

SLIDE 63

Numerical simulation for the Schlögl system

Applying MPC with N = 2 to (SchlogEQ) with the feedback (SchlogBC) stabilizes ys towards yd = 0.

SLIDE 64

Outline

Exponential Controllability Condition Improving MPC performance by stabilizing feedback Linear Heat Equation The Schlögl system The Burgers’ Equation Conclusions

SLIDE 65

Boundary Control of the Burgers equation

Given a positive constant d, we consider the Burgers equation

yt − d yxx + y yx = 0,  in (0, 1) × (0, +∞) =: Q,
yx(0, t) = u0(t),  t ∈ (0, +∞),
yx(1, t) = u1(t),  t ∈ (0, +∞),
y(x, 0) = y0(x),  x ∈ (0, 1).   (BurgEQ)

SLIDE 66

Boundary Control of the Burgers equation

Given a positive constant d, we consider the Burgers equation

yt − d yxx + y yx = 0,  in (0, 1) × (0, +∞) =: Q,
yx(0, t) = u0(t),  t ∈ (0, +∞),
yx(1, t) = u1(t),  t ∈ (0, +∞),
y(x, 0) = y0(x),  x ∈ (0, 1).   (BurgEQ)

The controls ui act as Neumann boundary conditions and aim to stabilize the state y towards the equilibrium yd ≡ 0.

SLIDE 67

Boundary Control of the Burgers equation

Given a positive constant d, we consider the Burgers equation

yt − d yxx + y yx = 0,  in (0, 1) × (0, +∞) =: Q,
yx(0, t) = u0(t),  t ∈ (0, +∞),
yx(1, t) = u1(t),  t ∈ (0, +∞),
y(x, 0) = y0(x),  x ∈ (0, 1).   (BurgEQ)

The controls ui act as Neumann boundary conditions and aim to stabilize the state y towards the equilibrium yd ≡ 0. However, the origin is not asymptotically stable for the free equation (with ui = 0), since in this case any nonzero constant is also an equilibrium of the system.

SLIDE 68

Explicit stabilizing feedback for the Burgers equation

Global exponential stability result from Krstic 1999:

SLIDE 69

Explicit stabilizing feedback for the Burgers equation

Global exponential stability result from Krstic 1999: Let y0 ∈ H1(0, 1) and c0, c1 positive parameters.

SLIDE 70

Explicit stabilizing feedback for the Burgers equation

Global exponential stability result from Krstic 1999: Let y0 ∈ H¹(0, 1) and c0, c1 positive parameters. Then the solution y to equation (BurgEQ) with boundary conditions

ū0 = (1/d) ( c0 y(0, t) + (1/3) y²(0, t) ),
ū1 = −(1/d) ( c1 y(1, t) + (1/3) y²(1, t) ),   (BurgBC)

satisfies
  1. sup_{(x,t) ∈ Q̄} |y(x, t)| < ∞;
  2. lim_{t→∞} max_{x∈[0,1]} |y(x, t)| = 0;
  3. ∫₀¹ |y(x, t)|² dx decays to zero at an exponential rate given by c = min(d/2, c0, c1) > 0.

SLIDE 71

Explicit stabilizing feedback for the Burgers equation

Global exponential stability result from Krstic 1999: Let y0 ∈ H¹(0, 1) and c0, c1 positive parameters. Then the solution y to equation (BurgEQ) with boundary conditions

ū0 = (1/d) ( c0 y(0, t) + (1/3) y²(0, t) ),
ū1 = −(1/d) ( c1 y(1, t) + (1/3) y²(1, t) ),   (BurgBC)

satisfies
  1. sup_{(x,t) ∈ Q̄} |y(x, t)| < ∞;
  2. lim_{t→∞} max_{x∈[0,1]} |y(x, t)| = 0;
  3. ∫₀¹ |y(x, t)|² dx decays to zero at an exponential rate given by c = min(d/2, c0, c1) > 0.

In particular, the equilibrium yd ≡ 0 is globally exponentially stable in L²(0, 1) for the closed loop system (BurgEQ)-(BurgBC).
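The decay under (BurgBC) can be observed with a simple explicit finite-difference simulation. The values d = 1, c0 = c1 = 1, the grid, and the initial state below are illustrative assumptions, not data from the talk.

```python
import numpy as np

# Finite-difference sketch of Burgers' equation y_t = d*y_xx - y*y_x on (0,1)
# with the feedback boundary conditions (as on the slide)
#   y_x(0,t) =  (1/d)(c0*y(0,t) + y(0,t)^2/3),
#   y_x(1,t) = -(1/d)(c1*y(1,t) + y(1,t)^2/3).
d, c0, c1 = 1.0, 1.0, 1.0
n, T = 50, 4.0
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
dt = 0.25 * h**2 / d
y = np.sin(2 * np.pi * x) + 0.5        # assumed initial state

def step(y):
    u0 = (c0 * y[0] + y[0]**2 / 3) / d         # feedback value of y_x(0,t)
    u1 = -(c1 * y[-1] + y[-1]**2 / 3) / d      # feedback value of y_x(1,t)
    ypad = np.concatenate(([y[1] - 2 * h * u0], y, [y[-2] + 2 * h * u1]))
    lap = (ypad[2:] - 2 * ypad[1:-1] + ypad[:-2]) / h**2
    adv = y * (ypad[2:] - ypad[:-2]) / (2 * h)
    return y + dt * (d * lap - adv)

norm0 = np.sqrt(h * np.sum(y**2))
for _ in range(int(T / dt)):
    y = step(y)
print(np.sqrt(h * np.sum(y**2)) < 0.1 * norm0)   # decay towards y_d = 0
```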

SLIDE 72

Improving the tracking w.r.t. a given functional

We introduce the cost functional

J(ys, u) = (1/2) ‖ys‖²_{L²(Q)} + (λ/2) ( ‖u0‖²_{L²(0,∞)} + ‖u1‖²_{L²(0,∞)} ),
SLIDE 73

Improving the tracking w.r.t. a given functional

We introduce the cost functional

J(ys, u) = (1/2) ‖ys‖²_{L²(Q)} + (λ/2) ( ‖u0‖²_{L²(0,∞)} + ‖u1‖²_{L²(0,∞)} ),

where λ > 0 and ys is the solution to

yt − d yxx + y yx = 0,  in (0, 1) × (0, +∞) =: Q,
yx(0, t) − ū0 = u0,  t ∈ (0, +∞),
yx(1, t) − ū1 = u1,  t ∈ (0, +∞),
y(x, 0) = y0(x),  x ∈ (0, 1),

and ū0, ū1 are given by (BurgBC).

SLIDE 74

Improving the tracking w.r.t. a given functional

We introduce the cost functional

J(ys, u) = (1/2) ‖ys‖²_{L²(Q)} + (λ/2) ( ‖u0‖²_{L²(0,∞)} + ‖u1‖²_{L²(0,∞)} ),

where λ > 0 and ys is the solution to

yt − d yxx + y yx = 0,  in (0, 1) × (0, +∞) =: Q,
yx(0, t) − ū0 = u0,  t ∈ (0, +∞),
yx(1, t) − ū1 = u1,  t ∈ (0, +∞),
y(x, 0) = y0(x),  x ∈ (0, 1),

and ū0, ū1 are given by (BurgBC).

Rmk: we expect the optimal controls u0, u1 to approach the control references ū0, ū1 asymptotically (as t → +∞).

SLIDE 75

MPC closed loop

In a discrete-time setting, given a sampling rate T > 0, the cost functional is recast as

J∞(ys, u) = Σ_{k=1}^{∞} ℓ(ys(k), u(k)),

with

ℓ(ys(k), u(k)) = (1/2) ‖ys(k)‖²_{L²(0,1)} + (λ/2) Σ_{i=0}^{1} |ui(kT)|²,

where ys(k) = ys(·, kT) ∈ L²(0, 1) and u(k) = (u0(kT), u1(kT)).

SLIDE 76

MPC closed loop

In a discrete-time setting, given a sampling rate T > 0, the cost functional is recast as

J∞(ys, u) = Σ_{k=1}^{∞} ℓ(ys(k), u(k)),

with

ℓ(ys(k), u(k)) = (1/2) ‖ys(k)‖²_{L²(0,1)} + (λ/2) Σ_{i=0}^{1} |ui(kT)|²,

where ys(k) = ys(·, kT) ∈ L²(0, 1) and u(k) = (u0(kT), u1(kT)). The associated cost functional on a finite horizon N is

JN(ys, u) = Σ_{k=1}^{N} ℓ(ys(k), u(k)).

SLIDE 77

Stability of the MPC closed loop

Since

min_{u∈U} ℓ(ys(k), u(k)) = (1/2) ‖ys(k; 0)‖²_{L²(0,1)},

from the exponential stability result we deduce that

ℓ(ys(k; 0), 0) = (1/2) ‖y(k; ū(k))‖²_{L²(0,1)} ≤ (1/2) e^{−ckT} ‖y(0)‖²_{L²(0,1)} = σ^k min_{u∈U} ℓ(y(0), u).

SLIDE 78

Stability of the MPC closed loop

Since

min_{u∈U} ℓ(ys(k), u(k)) = (1/2) ‖ys(k; 0)‖²_{L²(0,1)},

from the exponential stability result we deduce that

ℓ(ys(k; 0), 0) = (1/2) ‖y(k; ū(k))‖²_{L²(0,1)} ≤ (1/2) e^{−ckT} ‖y(0)‖²_{L²(0,1)} = σ^k min_{u∈U} ℓ(y(0), u).

=⇒ the exponential controllability condition is satisfied with σ = e^{−cT} and C = 1

SLIDE 79

Stability of the MPC closed loop

Since

min_{u∈U} ℓ(ys(k), u(k)) = (1/2) ‖ys(k; 0)‖²_{L²(0,1)},

from the exponential stability result we deduce that

ℓ(ys(k; 0), 0) = (1/2) ‖y(k; ū(k))‖²_{L²(0,1)} ≤ (1/2) e^{−ckT} ‖y(0)‖²_{L²(0,1)} = σ^k min_{u∈U} ℓ(y(0), u).

=⇒ the exponential controllability condition is satisfied with σ = e^{−cT} and C = 1
=⇒ the MPC closed loop is stable for any N ≥ 2.

SLIDE 80

Outline

Exponential Controllability Condition Improving MPC performance by stabilizing feedback Linear Heat Equation The Schlögl system The Burgers’ Equation Conclusions

SLIDE 81

Conclusions

The knowledge of an explicit stabilizing feedback may lead to stability of the MPC with the shortest possible receding horizon.

SLIDE 82

Conclusions

The knowledge of an explicit stabilizing feedback may lead to stability of the MPC with the shortest possible receding horizon. We compare the effectiveness of two different approaches:
A) Apply MPC to an unstable equation with a sufficiently long horizon;
B) Apply MPC to a stabilized equation with the shortest possible horizon.

SLIDE 83

Conclusions

The knowledge of an explicit stabilizing feedback may lead to stability of the MPC with the shortest possible receding horizon. We compare the effectiveness of two different approaches:
A) Apply MPC to an unstable equation with a sufficiently long horizon;
B) Apply MPC to a stabilized equation with the shortest possible horizon.

Cons of A):
◮ Need to compute the best horizon N∗ ensuring stability of the MPC scheme in connection with the specific cost functional ℓ;
◮ Increased computational time and complexity at each iteration of the scheme.

SLIDE 84

Conclusions

Pros of B):
◮ Stability of the MPC with instantaneous control;
◮ Easier to adapt to different cost functionals targeting the same equilibrium state;
◮ Reduced computational complexity at each iteration of the scheme.

SLIDE 85

Conclusions

Pros of B):
◮ Stability of the MPC with instantaneous control;
◮ Easier to adapt to different cost functionals targeting the same equilibrium state;
◮ Reduced computational complexity at each iteration of the scheme.

Cons of B):
◮ It requires the knowledge of an explicit stabilizing feedback;
◮ Not suitable when the state target differs from the equilibrium.

SLIDE 86

Conclusions

Thank you for your attention!