Optimal control of parabolic equations using spectral calculus
Ivica Nakić
Faculty of Science, University of Zagreb
Joint work (in progress) with L. Grubišić, M. Lazar and M. Tautenhahn
Operator Theory and Krein Spaces, TU Wien, 2019
Initial-condition (or starting) optimal control: solve the following problem

min_{u∈H} { J(u) : ∥y(T) − y∗∥ ≤ ε },

where (in the weak sense)

y′(t) + Ay(t) = f(t) for 0 ≤ t ≤ T,   y(0) = u,

J(u) = (α/2) ∥u∥² + (1/2) ∫_0^T β(t) ∥y(t) − w(t)∥² dt.

Here we assume: A is a selfadjoint, lower semi-bounded operator on a Hilbert space H.

Parameters: y∗ is the target state, ε > 0 is the tolerance, α > 0 and β ∈ L∞((0, T); [0, ∞)) are weights, and w ∈ L2((0, T); H) is the desired trajectory of the system.
The solution û of the problem is given by

S_T û = (µ̂ + B)^{-1}(µ̂ y∗_h + b) − (y∗ − y∗_h),

where

B = αI + ∫_0^T β(t) S_{2t} dt,   b = ∫_0^T β(t) S_{T+t} w_h(t) dt,

y∗_h = y∗ − ∫_0^T S_τ f(τ) dτ,   w_h = w − ∫_0^· S_τ f(τ) dτ,

µ̂ is the unique solution of

G(µ) := ∥y∗_h − (µ + B)^{-1}(µ y∗_h + b)∥ = ε,

and {S_t} is the semigroup generated by −A.
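A step worth making explicit (it is not spelled out on the slide): writing g(µ) = y∗_h − (µ + B)^{-1}(µ y∗_h + b), a resolvent identity explains why G behaves so nicely.

```latex
g(\mu) = (\mu + B)^{-1}\bigl[(\mu + B)\,y_h^{*} - \mu\,y_h^{*} - b\bigr]
       = (\mu + B)^{-1}\,\bigl(B\,y_h^{*} - b\bigr),
\qquad
G(\mu)^2 = \int_{\sigma(B)} \frac{1}{(\mu + \lambda)^2}\,
           d\bigl\|E_\lambda\,(B\,y_h^{*} - b)\bigr\|^{2},
```

where E is the spectral measure of B. Since B ≥ αI > 0, the function G is decreasing on [0, ∞) and G(µ) → 0 as µ → ∞, so G(µ) = ε has a unique solution whenever 0 < ε < G(0).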
→ The optimal final state ŷ is given by

ŷ = (µ̂ + B)^{-1}(µ̂ y∗_h + b).

→ The solution of the unconstrained problem (ε = ∞) is given by the same formula with µ̂ = 0.
→ The function G is decreasing.
→ Let g(µ) = y∗_h − (µ + B)^{-1}(µ y∗_h + b), so G(µ) = ∥g(µ)∥. Then g(µ) = y∗_h − x, where x is the solution of the equation

(µ + B) x = µ y∗_h + b,

hence the calculation of G(µ) reduces to solving a linear equation.
→ The optimal final state is the solution of the equation

(µ̂ + B) x = µ̂ y∗_h + b,

hence we obtain it for free.
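Since G is decreasing and each evaluation of G costs one linear solve, µ̂ can be found by bisection. A minimal finite-dimensional sketch with numpy; the matrix B, the vectors y_h and b, and the tolerance eps below are illustrative stand-ins, not data from the talk.

```python
import numpy as np

def G(mu, B, y_h, b):
    """Evaluate G(mu) = ||y_h - x||, where (mu*I + B) x = mu*y_h + b."""
    n = B.shape[0]
    x = np.linalg.solve(mu * np.eye(n) + B, mu * y_h + b)
    return np.linalg.norm(y_h - x)

def solve_mu_hat(B, y_h, b, eps, mu_hi=1.0, tol=1e-12):
    """Bisection for the unique root of G(mu) = eps (G is decreasing)."""
    if G(0.0, B, y_h, b) <= eps:       # unconstrained solution already feasible
        return 0.0
    while G(mu_hi, B, y_h, b) > eps:   # grow the bracket until G dips below eps
        mu_hi *= 2.0
    mu_lo = 0.0
    while mu_hi - mu_lo > tol:
        mid = 0.5 * (mu_lo + mu_hi)
        if G(mid, B, y_h, b) > eps:
            mu_lo = mid
        else:
            mu_hi = mid
    return 0.5 * (mu_lo + mu_hi)

# toy data (illustrative only)
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
B = M @ M.T + 0.1 * np.eye(4)          # selfadjoint, positive definite, B >= alpha*I
y_h = rng.standard_normal(4)
b = rng.standard_normal(4)
eps = 0.5 * G(0.0, B, y_h, b)          # a tolerance strictly between 0 and G(0)

mu_hat = solve_mu_hat(B, y_h, b, eps)
# optimal final state comes "for free" from the same linear equation
y_opt = np.linalg.solve(mu_hat * np.eye(4) + B, mu_hat * y_h + b)
```

At the root, ∥y_h − y_opt∥ = G(µ̂) = eps, i.e. the optimal state sits exactly on the boundary of the tolerance ball.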
→ In applications, it is enough to find a good µ (µ ≥ µ̂, µ close to µ̂), not the optimal one. One choice is to take

µ = ∥B y∗_h − b∥/ε + α + ∫_0^T ½ β(t) e^{−2tκ} dt,   where A ≥ κ.

→ In applications, we can use B = β̃_0(A), where

β̃_0(λ) = α + ∫_0^T β(t) exp(−2tλ) dt,

and we can approximate b (using the piecewise-constant approximation w_h(t) ≈ Σ_{i=1}^N w_i χ_{[t_{i−1},t_i]}(t)) by

Σ_{i=1}^N β̃_i(A) w_i,   where β̃_i(λ) = ∫_{t_{i−1}}^{t_i} β(t) exp(−(T+t)λ) dt.
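In finite dimensions, β̃_i(A) can be evaluated by spectral calculus: diagonalize A = V Λ V^T and apply β̃_i to the eigenvalues. A sketch under illustrative assumptions (symmetric matrix A, constant weight β, equidistant grid t_i, constant samples w_i; midpoint quadrature for the β̃_i):

```python
import numpy as np

def apply_function(A, f):
    """Spectral calculus f(A) = V diag(f(lam)) V^T for symmetric A."""
    lam, V = np.linalg.eigh(A)
    return V @ np.diag(f(lam)) @ V.T

def beta_tilde_0(lam, alpha, beta, T, n=2000):
    """beta~_0(lam) = alpha + int_0^T beta(t) exp(-2 t lam) dt (midpoint rule)."""
    dt = T / n
    t = (np.arange(n) + 0.5) * dt
    vals = beta(t)[:, None] * np.exp(-2.0 * t[:, None] * lam[None, :])
    return alpha + vals.sum(axis=0) * dt

def beta_tilde_i(lam, beta, T, t0, t1, n=2000):
    """beta~_i(lam) = int_{t0}^{t1} beta(t) exp(-(T+t) lam) dt (midpoint rule)."""
    dt = (t1 - t0) / n
    t = t0 + (np.arange(n) + 0.5) * dt
    vals = beta(t)[:, None] * np.exp(-(T + t[:, None]) * lam[None, :])
    return vals.sum(axis=0) * dt

# illustrative data
T, alpha = 1.0, 0.1
beta = lambda t: np.ones_like(t)        # constant weight function
A = np.diag([1.0, 2.0, 5.0])            # stands in for the selfadjoint operator
B = apply_function(A, lambda lam: beta_tilde_0(lam, alpha, beta, T))

# piecewise-constant approximation of w_h on the grid 0 = t_0 < ... < t_N = T
grid = np.linspace(0.0, T, 5)
w = [np.ones(3) for _ in grid[1:]]      # samples w_i of w_h (illustrative)
b = sum(apply_function(A, lambda lam, t0=t0, t1=t1: beta_tilde_i(lam, beta, T, t0, t1)) @ wi
        for t0, t1, wi in zip(grid[:-1], grid[1:], w))
```

For diagonal A the computation can be checked against the closed-form integrals, e.g. β̃_0(1) = α + (1 − e^{−2T})/2 for β ≡ 1.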
Let A be a positive definite operator and let f = 0. We take β = χ_{[T/3, 2T/3]} and assume that w does not depend on t. Then

B = α + ½ A^{-1} S_{2T/3} (I − S_{2T/3}),   b = A^{-1} S_{4T/3} (I − S_{T/3}) w.
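These closed forms follow by integrating the semigroup explicitly over [T/3, 2T/3]. A quick scalar sanity check (A = a > 0 and the other values are hypothetical) against direct quadrature of the defining integrals:

```python
import numpy as np

a, T, alpha, w = 2.0, 3.0, 0.1, 1.0   # illustrative scalar data, A = a > 0
S = lambda t: np.exp(-a * t)          # semigroup S_t = e^{-tA}

# closed forms from the slide
B_closed = alpha + 0.5 * (1 / a) * S(2 * T / 3) * (1 - S(2 * T / 3))
b_closed = (1 / a) * S(4 * T / 3) * (1 - S(T / 3)) * w

# direct midpoint quadrature of B = alpha + int_{T/3}^{2T/3} S_{2t} dt
# and b = int_{T/3}^{2T/3} S_{T+t} w dt
n = 100000
dt = (T / 3) / n
t = T / 3 + (np.arange(n) + 0.5) * dt
B_quad = alpha + np.sum(S(2 * t)) * dt
b_quad = np.sum(S(T + t)) * dt * w
```

The two computations agree to quadrature accuracy, confirming the antiderivative bookkeeping behind the closed forms.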
Let us perturb all the parameters of the problem by perturbations of size < ν, in the respective norms, in such a way that the perturbed problem has the same structure (A still selfadjoint, etc.). Then, for small enough ν > 0, the optimal solutions of the original and the perturbed problems are close, with an explicit constant C in the estimate.
Proof is mostly geometry. We can assume f = 0. We work in H̃ = Ran S_T, with the scalar product ⟨S_T^{-1}·, S_T^{-1}·⟩, and define ω(·) = J(S_T^{-1}·). Then ŷ is the unique solution of

min_{y∈H̃} { ω(y) : ∥y − y∗∥ ≤ ε }.

We define W_c = {y ∈ H̃ : ω(y) ≤ c}. Let Π_c(x) be a projection of x to W_c. Then Π_ĉ(y∗) = ŷ, where ĉ = ω(ŷ).
The second geometric ingredient is the following result: there exists γ̂ > 0 such that y∗ − ŷ = γ̂ ∇ω(ŷ).

(Figure: the sublevel set W_ĉ, the target y∗, the optimal final state ŷ, and the ε-ball around y∗.)
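Both geometric facts can be illustrated numerically in 2D for a hypothetical quadratic ω(y) = ½⟨My, y⟩ (M, y∗, ε below are made-up data): the KKT condition for minimizing ω over the ε-ball gives (M + λI)ŷ = λy∗ for some multiplier λ > 0, and then y∗ − ŷ = γ̂ ∇ω(ŷ) with γ̂ = 1/λ.

```python
import numpy as np

M = np.diag([1.0, 4.0])              # hypothetical quadratic weight: omega(y) = 0.5 * y.T @ M @ y
y_star = np.array([2.0, 1.0])        # target state (illustrative)
eps = 0.5                            # tolerance

def y_of(lam):
    """KKT stationary point: (M + lam*I) y = lam * y_star."""
    return np.linalg.solve(M + lam * np.eye(2), lam * y_star)

# ||y(lam) - y_star|| decreases from ||y_star|| (lam=0) to 0 (lam -> inf),
# so bisect for the multiplier putting y_hat on the boundary of the eps-ball
lo, hi = 0.0, 1.0
while np.linalg.norm(y_of(hi) - y_star) > eps:
    hi *= 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if np.linalg.norm(y_of(mid) - y_star) > eps:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
y_hat = y_of(lam)

gamma_hat = 1.0 / lam                # colinearity: y_star - y_hat = gamma_hat * grad omega(y_hat)
residual = (y_star - y_hat) - gamma_hat * (M @ y_hat)
```

The residual vanishes to machine precision, which is the finite-dimensional shadow of the gradient condition above.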
Suppose we have y′(t) + Ly(t) = 0 for 0 ≤ t ≤ T, Gy(t) = g(t), y(0) = u, where G is a boundary trace operator. We assume that (L, G) forms a so-called boundary control system.

A boundary control system is a pair of operators (L, G) with L ∈ L(Z, X) and G ∈ L(Z, U) such that there exists β ∈ C with: G is surjective, Ker G is dense in X, β − L restricted to Ker G is surjective, and Ker(β − L) ∩ Ker G = {0}.
We define the operator A on X by Au = Lu and D(A) = Ker G. Let X₋₁ be the extrapolation space corresponding to A and let Â be the extension of A to X₋₁. There exists a unique T ∈ L(U, X₋₁) such that the problem can be written as ẏ(t) + Ây(t) = Tg(t). So we are back in business if A is a lower semi-bounded selfadjoint operator (Â inherits the properties of A), but X₋₁ is not a nice space to work with. Fear not: B and b can still be seen as an operator/element in X, using the fact that Tg(·) = (β − Â)h(·) for h a function with values in X. If g is constant in time, B and b have nice formulas.
The standard (and much more general) approach is based on solving two coupled time-dependent problems, the original and the adjoint one. We need to solve just one stationary problem, but with a more complicated operator and vector. This should have some advantages. Constructing an efficient numerical procedure is work in progress. To learn about one possible approach, go to Luka's talk.
→ proper numerics
→ distributed control
→ boundary control
→ non-selfadjoint case