Optimal control of parabolic equations using spectral calculus (PowerPoint PPT presentation)

SLIDE 1

Optimal control of parabolic equations using spectral calculus

Ivica Nakić
Faculty of Science, University of Zagreb

Joint work (in progress) with L. Grubišić, M. Lazar and M. Tautenhahn

Operator Theory and Krein Spaces, TU Wien, 2019

SLIDE 2

The problem

Initial-state optimal control: solve the problem

min_{u ∈ H} { J(u) : ‖y(T) − y*‖ ≤ ε },

where (in the weak sense)

y′(t) + A y(t) = f(t) for 0 ≤ t ≤ T,   y(0) = u,

J(u) = (α/2) ‖u‖² + (1/2) ∫₀ᵀ β(t) ‖y(t) − w(t)‖² dt.

Here we assume: A is a selfadjoint, lower semi-bounded operator on a Hilbert space H, and f ∈ L²((0, T); H).

Parameters: y* is the target state, ε > 0 is the tolerance, α > 0 and β ∈ L∞((0, T); [0, ∞)) are weights, and w ∈ L²((0, T); H) is the desired trajectory of the system.

SLIDE 3

The solution

The solution û of the problem is given by

S_T û = (µ̂ + B)⁻¹ (µ̂ y*_h + b) − (y* − y*_h),

where

B = αI + ∫₀ᵀ β(t) S_{2t} dt,   b = ∫₀ᵀ β(t) S_{T+t} w_h(t) dt,

y*_h = y* − ∫₀ᵀ S_τ f(τ) dτ,   w_h = w − ∫₀^· S_τ f(τ) dτ,

µ̂ is the unique solution of

G(µ) := ‖y*_h − (µ + B)⁻¹ (µ y*_h + b)‖ = ε,

and {S_t} is the semigroup generated by −A.
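For a finite-dimensional discretization, B and b can be assembled directly by spectral calculus: apply scalar time-integrals to the eigenvalues of A. A minimal sketch under stated assumptions (A a symmetric positive definite matrix standing in for the operator, S_t = exp(−tA), trapezoidal quadrature in time; all data here is hypothetical):

```python
import numpy as np

def assemble_B_b(A, T, alpha, beta, w_h, n=400):
    """Assemble B = alpha*I + int_0^T beta(t) S_{2t} dt and
    b = int_0^T beta(t) S_{T+t} w_h(t) dt, with S_t = exp(-t A),
    via the eigendecomposition of the symmetric matrix A."""
    lam, V = np.linalg.eigh(A)                 # A = V diag(lam) V^T
    t = np.linspace(0.0, T, n)
    q = np.full(n, t[1] - t[0])                # trapezoid quadrature weights
    q[0] *= 0.5; q[-1] *= 0.5
    # scalar function of each eigenvalue: alpha + int_0^T beta(t) e^{-2 t lam} dt
    gB = alpha + (q[:, None] * beta(t)[:, None] * np.exp(-2.0 * np.outer(t, lam))).sum(0)
    B = (V * gB) @ V.T                         # spectral calculus: V diag(gB) V^T
    # b: integrate beta(t) S_{T+t} w_h(t) in the eigenbasis, then map back
    wh_hat = np.array([V.T @ w_h(ti) for ti in t])
    b_hat = (q[:, None] * beta(t)[:, None] * np.exp(-np.outer(T + t, lam)) * wh_hat).sum(0)
    return B, V @ b_hat
```

For a diagonal A with β ≡ 1 and constant w_h this reproduces the explicit scalar integrals α + (1 − e^{−2Tλ})/(2λ) and e^{−Tλ}(1 − e^{−Tλ})/λ per eigenvalue.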

SLIDE 4

Remarks

→ The optimal final state ŷ is given by ŷ = (µ̂ + B)⁻¹ (µ̂ y*_h + b).

→ The solution of the unconstrained problem (ε = ∞) is given by the same formula with µ̂ = 0.

→ The function G is decreasing.

→ Let g(µ) = y*_h − (µ + B)⁻¹ (µ y*_h + b), so G(µ) = ‖g(µ)‖. Then g(µ) = y*_h − x, where x is the solution of the equation

(µ + B) x = µ y*_h + b,

hence the calculation of G(µ) reduces to solving a linear equation.

→ The optimal final state is the solution of the equation

(µ̂ + B) x = µ̂ y*_h + b,

hence we obtain it for free.
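The remarks above translate directly into an algorithm: each evaluation of G(µ) costs one linear solve, and since G is decreasing, µ̂ can be located by bisection; the final solve also yields the optimal final state. A minimal finite-dimensional sketch (hypothetical data; B a symmetric positive definite matrix, and we assume G(0) > ε so the constraint is active):

```python
import numpy as np

def G(mu, B, y_star_h, b):
    """G(mu) = ||y*_h - x||, where (mu*I + B) x = mu*y*_h + b.
    Returns the value and x (x at mu_hat is the optimal final state)."""
    x = np.linalg.solve(mu * np.eye(len(b)) + B, mu * y_star_h + b)
    return np.linalg.norm(y_star_h - x), x

def solve_mu_hat(B, y_star_h, b, eps, mu_max=1e8, iters=200):
    """Bisection for G(mu) = eps, using that G is decreasing in mu."""
    lo, hi = 0.0, mu_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        g, _ = G(mid, B, y_star_h, b)
        if g > eps:
            lo = mid          # residual too large: increase mu
        else:
            hi = mid          # residual within tolerance: decrease mu
    g, x = G(hi, B, y_star_h, b)
    return hi, x              # x is the optimal final state, obtained "for free"
```

Each bisection step halves the bracket, so the cost is one linear solve per step; no time-dependent adjoint problem appears.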

SLIDE 5

Remarks

→ In applications it is enough to find a good µ (µ ≥ µ̂, µ close to µ̂), not the optimal one. One choice is to take

µ = ‖B y*_h − b‖ / ε + α + ∫₀ᵀ β(t) e^{−2tκ} dt,   where A ≥ κ.

→ In applications we can use B = β̃₀(A), where

β̃₀(λ) = α + ∫₀ᵀ β(t) exp(−2tλ) dt,

and we can approximate b (using w_h(t) ≈ Σ_{i=1}^N w_i χ_{[t_{i−1}, t_i]}(t)) by

b ≈ Σ_{i=1}^N β̃_i(A) w_i,   where β̃_i(λ) = ∫_{t_{i−1}}^{t_i} β(t) exp(−(T+t)λ) dt.
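The two remarks above can be sketched concretely for the special case β ≡ 1, where both scalar integrals are explicit (this choice of β, the eigendecomposition route, and the positivity assumption λ > 0 are illustration assumptions, not part of the slides):

```python
import numpy as np

def spectral_apply(A, phi):
    """Apply a scalar function phi to a symmetric matrix A by spectral calculus."""
    lam, V = np.linalg.eigh(A)
    return (V * phi(lam)) @ V.T

def build_B(A, T, alpha):
    # beta ~ 1: beta0~(lmb) = alpha + int_0^T e^{-2 t lmb} dt
    #         = alpha + (1 - e^{-2 T lmb}) / (2 lmb)
    return spectral_apply(A, lambda lmb: alpha + (1 - np.exp(-2*T*lmb)) / (2*lmb))

def build_b(A, T, ts, ws):
    # b ~ sum_i betai~(A) w_i with w_h piecewise constant on [t_{i-1}, t_i], and
    # betai~(lmb) = int_{t_{i-1}}^{t_i} e^{-(T+t) lmb} dt
    #             = (e^{-(T+t_{i-1}) lmb} - e^{-(T+t_i) lmb}) / lmb
    b = np.zeros(A.shape[0])
    for t0, t1, w in zip(ts[:-1], ts[1:], ws):
        Bi = spectral_apply(A, lambda lmb: (np.exp(-(T+t0)*lmb) - np.exp(-(T+t1)*lmb)) / lmb)
        b += Bi @ w
    return b
```

Refining the partition ts only changes b through the piecewise-constant approximation of w_h; for a w_h that is already constant, any partition gives the same b.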

SLIDE 6

An example

Let A be a positive definite operator and let f = 0. We take β = χ_{[T/3, 2T/3]} and assume that w does not depend on time. Then

B = α + (1/2) A⁻¹ S_{2T/3} (I − S_{2T/3}),   b = A⁻¹ S_{4T/3} (I − S_{T/3}) w.
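The closed forms above are easy to check numerically on the spectrum. A quick sketch for a diagonal positive definite A (the eigenvalues, α, T and w below are hypothetical example data), comparing the formulas with direct quadrature of the defining integrals over [T/3, 2T/3]:

```python
import numpy as np

lam = np.array([1.0, 3.0])            # hypothetical spectrum of A (diagonal case)
alpha, T = 0.1, 1.0
w = np.array([1.0, -2.0])
S = lambda t: np.exp(-t * lam)        # the semigroup S_t = exp(-t A) on the spectrum

# closed forms from the example (beta = chi_[T/3, 2T/3], w constant in time)
B_closed = alpha + 0.5 / lam * S(2*T/3) * (1 - S(2*T/3))
b_closed = 1.0 / lam * S(4*T/3) * (1 - S(T/3)) * w

# trapezoidal quadrature of B = alpha + int beta(t) S_{2t} dt, b = int beta(t) S_{T+t} w dt
t = np.linspace(T/3, 2*T/3, 4001)
q = np.full(t.size, t[1] - t[0]); q[0] *= 0.5; q[-1] *= 0.5
B_quad = alpha + (q[:, None] * np.exp(-2 * np.outer(t, lam))).sum(0)
b_quad = (q[:, None] * np.exp(-np.outer(T + t, lam)) * w).sum(0)
```

The two computations agree to quadrature accuracy, confirming that the indicator β only shifts the integration limits, which is what produces the S_{2T/3} and S_{4T/3} factors.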

SLIDE 7

Sensitivity of the problem

Let us perturb all the parameters of the problem by perturbations of size < ν (in the respective norms), such that the perturbed problem has the same structure (A still selfadjoint, etc.). Then:

Theorem

For small enough ν > 0 the optimal solutions of the original and the perturbed problem differ by < Cν, with an explicit C.
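A quick numerical illustration of the flavor of this theorem (hypothetical finite-dimensional data, and not the explicit constant C from the proof): perturb y*_h and b by size ν and compare the optimal final states computed by bisection on G; the observed difference shrinks at the rate O(ν).

```python
import numpy as np

def optimal_final_state(B, y_star_h, b, eps, iters=200):
    """Optimal final state via bisection on G(mu) = ||y*_h - x||, G decreasing in mu."""
    n = len(b)
    def G(mu):
        x = np.linalg.solve(mu * np.eye(n) + B, mu * y_star_h + b)
        return np.linalg.norm(y_star_h - x), x
    lo, hi = 0.0, 1e8
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        g, _ = G(mid)
        if g > eps:
            lo = mid
        else:
            hi = mid
    return G(hi)[1]

B = np.diag([2.0, 3.0]); ysh = np.array([1.0, 1.0]); b = np.zeros(2); eps = 0.5
y_hat = optimal_final_state(B, ysh, b, eps)
diffs = {}
for nu in (1e-2, 1e-3, 1e-4):
    y_nu = optimal_final_state(B, ysh + nu, b + nu, eps)
    diffs[nu] = np.linalg.norm(y_nu - y_hat)
```

Each tenfold reduction of ν reduces the difference by roughly a factor of ten, as a Lipschitz-type bound Cν predicts.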

SLIDE 8

Idea of the proof

The proof is mostly geometry. We can assume f = 0. We work in H̃ = Ran S_T, with the scalar product ⟨S_T⁻¹ ·, S_T⁻¹ ·⟩, and define ω(·) = J(S_T⁻¹ ·). Then ŷ is the unique solution of

min_{y ∈ H̃} { ω(y) : ‖y − y*‖ ≤ ε }.

We define W_c = {y ∈ H̃ : ω(y) ≤ c}. Let Π_c(x) be a projection of x onto W_c. Then Π_ĉ(y*) = ŷ, where ĉ = ω(ŷ).

SLIDE 9

Idea of the proof

The second geometric ingredient is the following result: there exists γ̂ > 0 such that

y* − ŷ = γ̂ ∇ω(ŷ).

[Figure: the sublevel set W_ĉ, the target y*, the optimal final state ŷ, and the ε-ball around y*.]

SLIDE 10

Non-homogeneous boundary condition

Suppose we have

y′(t) + L y(t) = 0 for 0 ≤ t ≤ T,   G y(t) = g(t),   y(0) = u,

where G is a boundary trace operator. We assume that (L, G) forms a so-called boundary control system.

Definition

A boundary control system is a pair of operators (L, G), with L ∈ L(Z, X) and G ∈ L(Z, U), such that there exists β ∈ C for which: G is surjective, Ker G is dense in X, β − L restricted to Ker G is surjective, and Ker(β − L) ∩ Ker G = {0}.

SLIDE 11

Non-homogeneous boundary condition

We define the operator A on X by Au = Lu with D(A) = Ker G. Let X₋₁ be the extrapolation space corresponding to A and let Â be the extension of A to X₋₁. There exists a unique T ∈ L(U, X₋₁) such that the problem can be written as

ẏ(t) + Â y(t) = T g(t).

So we are back in business if A is a lower semi-bounded selfadjoint operator (Â inherits the properties of A), but X₋₁ is not a nice space to work with. Fear not: B and b can still be seen as an operator/element in X, using the fact that T g(·) = (β − Â) h(·) for some function h with values in X. If g is constant in time, B and b have nice formulas.

SLIDE 12

What's the point?

The standard (and much more general) solution is based on Lagrange multipliers, and to find the solution one needs to solve two coupled time-dependent problems, the original and the adjoint one. Here we need to solve just one stationary problem, but with a more complicated operator and vector. This should have some advantages. Constructing an efficient numerical procedure is work in progress; to learn about one possible approach, go to Luka's talk.

SLIDE 13

Look ahead

→ proper numerics
→ distributed control
→ boundary control
→ non-selfadjoint case

SLIDE 14

Thanks for your attention!