Controllability and stabilization of some Korteweg-de Vries - - PowerPoint PPT Presentation



slide-1
SLIDE 1

Controllability and stabilization of some Korteweg-de Vries equations

Jean-Michel Coron Laboratoire J.-L. Lions, Université Pierre et Marie Curie (Paris 6) Workshop New trends in modeling, control and inverse problems Workshop New trends in modeling, control and inverse problems Institut de Mathématiques de Toulouse, June 16 - 19, 2014

slide-2
SLIDE 2

The Korteweg de Vries (KdV) equation

devenue ip\

  • ou

Joseph Boussinesq, Essai sur la théorie des eaux courantes. Mémoires présentés par divers savants à l’Acad. des Sci. Inst. Nat. France, XXIII, pp. 1-680, 1877. In this equation, t is time, s is the spatial variable and H + h′ is the water surface elevation, H being the water surface elevation when the water is at rest (Korteweg and de Vries (1895)). This equation is used to describe approximately long waves in water of relatively shallow depth. After suitable scalings, the equation can be written as yt + yx + yxxx + yyx = 0. (1)

slide-3
SLIDE 3

[Figure: the solution y(t, x) plotted as a function of x.]

slide-4
SLIDE 4

A KdV control system

yt + yx + yxxx + yyx = 0, t ∈ [0, T], x ∈ [0, L], (1) y(t, 0) = y(t, L) = 0, yx(t, L) = u(t), t ∈ [0, T]. (2) where, at time t ∈ [0, T], the control is u ∈ R and the state is y(t, ·) ∈ L2(0, L).

slide-5
SLIDE 5

Definition of the local controllability

Our control system is yt + yx + yxxx + yyx = 0, t ∈ [0, T], x ∈ [0, L], (1) y(t, 0) = y(t, L) = 0, yx(t, L) = u(t), t ∈ [0, T]. (2)

Definition of the local controllability of (1)-(2)

Let T > 0. The control system (1)-(2) is locally controllable in time T if, for every ε > 0, there exists η > 0 such that, for every y0 ∈ L2(0, L) and every y1 ∈ L2(0, L) satisfying |y0|L2(0,L) < η and |y1|L2(0,L) < η, there exists u ∈ L2(0, T) satisfying |u|L2(0,T) < ε such that the solution y ∈ C0([0, T]; L2(0, L)) of (1)-(2) satisfying the initial condition y(0, x) = y0(x) is such that y(T, x) = y1(x). Question: Let T > 0. Is it true that (1)-(2) is locally controllable in time T?

slide-6
SLIDE 6

Controllability of the linearized control system

The linearized control system (around 0) is yt + yx + yxxx = 0, t ∈ [0, T], x ∈ [0, L], (1) y(t, 0) = y(t, L) = 0, yx(t, L) = u(t), t ∈ [0, T]. (2) where, at time t ∈ [0, T], the control is u ∈ R and the state is y(t, ·) ∈ L2(0, L).

Definition of the controllability of (1)-(2)

Let T > 0. The linear control system (1)-(2) is controllable in time T if, for every y0 ∈ L2(0, L) and for every y1 ∈ L2(0, L), there exists u ∈ L2(0, T) such that the solution y ∈ C0([0, T]; L2(0, L)) of (1)-(2) satisfying the initial condition y(0, x) = y0(x) is such that y(T, x) = y1(x).

slide-7
SLIDE 7

Controllability of the linearized control system

Theorem (L. Rosier (1997))

For every T > 0, the linearized control system is controllable in time T if and only if

L ∉ N := {2π√((k² + kl + l²)/3); k ∈ N∗, l ∈ N∗}.
slide-8
SLIDE 8

Application to the nonlinear system

Theorem (L. Rosier (1997))

For every T > 0, the KdV control system is locally controllable in time T if L ∉ N. Question: Does one have controllability if L ∈ N?

slide-9
SLIDE 9

Controllability when L ∈ N

Theorem (JMC and E. Crépeau (2004))

If L = 2π (which is in N: take k = l = 1), for every T > 0 the KdV control system is locally controllable in time T.

Theorem (E. Cerpa (2007), E. Cerpa and E. Crépeau (2008))

For every L ∈ N, there exists T > 0 such that the KdV control system is locally controllable in time T.
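The set N of critical lengths is explicit, so its first elements are easy to enumerate. A minimal sketch (the function name and the truncation parameter are ours, not from the slides):

```python
import math

def critical_lengths(kmax):
    """Critical lengths L = 2*pi*sqrt((k^2 + k*l + l^2)/3) for 1 <= k, l <= kmax
    (a finite truncation of Rosier's set N)."""
    vals = {2 * math.pi * math.sqrt((k * k + k * l + l * l) / 3)
            for k in range(1, kmax + 1) for l in range(1, kmax + 1)}
    return sorted(vals)

print(critical_lengths(3)[:4])
# the smallest critical length is 2*pi, obtained for k = l = 1
```

In particular L = 2π, the length used in the theorem above, is the first element of N.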

slide-10
SLIDE 10

The proof relies on a power series expansion. Let us explain the method on control systems of finite dimension ẏ = f(y, u), where the state is y ∈ Rn and the control is u ∈ Rm. We assume that (0, 0) ∈ Rn × Rm is an equilibrium of the control system ẏ = f(y, u), i.e. that f(0, 0) = 0. Let H := Span {AⁱBu; u ∈ Rm, i ∈ {0, . . . , n − 1}} with A := ∂f/∂y(0, 0), B := ∂f/∂u(0, 0). If H = Rn, the linearized control system around (0, 0) is controllable and therefore the nonlinear control system ẏ = f(y, u) is small-time locally controllable at (0, 0) ∈ Rn × Rm.
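The condition H = Rn is the classical Kalman rank condition. A small numpy sketch (the example matrices are our illustrations, not from the talk):

```python
import numpy as np

def kalman_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^(n-1)B],
    i.e. the dimension of H = Span{A^i B u}."""
    n = A.shape[0]
    K = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    return np.linalg.matrix_rank(K)

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
print(kalman_rank(A, B))                  # rank 2: the linearization is controllable

A2 = np.array([[1.0, 0.0], [0.0, 2.0]])  # decoupled system controlled in one mode only
B2 = np.array([[1.0], [0.0]])
print(kalman_rank(A2, B2))                # rank 1: dim H = n - 1, the case studied next
```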

slide-11
SLIDE 11

Let us look at the case where the dimension of H is n − 1. Let us make a (formal) power series expansion of the control system ẏ = f(y, u) in (y, u) around 0. We write y = y1 + y2 + . . . , u = v1 + v2 + . . . . The order 1 is given by (y1, v1); the order 2 is given by (y2, v2), and so on. The dynamics of these different orders are given by

ẏ1 = ∂f/∂y(0, 0)y1 + ∂f/∂u(0, 0)v1, (1)

ẏ2 = ∂f/∂y(0, 0)y2 + ∂f/∂u(0, 0)v2 + (1/2)∂²f/∂y²(0, 0)(y1, y1) + ∂²f/∂y∂u(0, 0)(y1, v1) + (1/2)∂²f/∂u²(0, 0)(v1, v1), (2)

and so on.

slide-12
SLIDE 12

Let e1 ∈ H⊥. Let T > 0. Let us assume that there are controls v1± and v2±, both in L∞((0, T); Rm), such that, if y1± and y2± are the solutions of

ẏ1± = ∂f/∂y(0, 0)y1± + ∂f/∂u(0, 0)v1±, (1)

y1±(0) = 0, (2)

ẏ2± = ∂f/∂y(0, 0)y2± + ∂f/∂u(0, 0)v2± + (1/2)∂²f/∂y²(0, 0)(y1±, y1±) + ∂²f/∂y∂u(0, 0)(y1±, v1±) + (1/2)∂²f/∂u²(0, 0)(v1±, v1±), (3)

y2±(0) = 0, (4)

then

y1±(T) = 0, (5)

y2±(T) = ±e1. (6)

slide-13
SLIDE 13

Let (ei)i∈{2,...,n} be a basis of H. By the definition of H, there are (ui)i=2,...,n, all in L∞((0, T); Rm), such that, if (yi)i=2,...,n are the solutions of

ẏi = ∂f/∂y(0, 0)yi + ∂f/∂u(0, 0)ui, (1)

yi(0) = 0, (2)

then, for every i ∈ {2, . . . , n},

yi(T) = ei. (3)

Now let

b = Σ_{i=1}^{n} bi ei (4)

be a point in Rn. Let v1 and v2, both in L∞((0, T); Rm), be defined as follows:

If b1 ≥ 0, then v1 := v1+ and v2 := v2+, (5)

If b1 < 0, then v1 := v1− and v2 := v2−. (6)

slide-14
SLIDE 14

Let u : (0, T) → Rm be defined by

u(t) := |b1|^(1/2) v1(t) + |b1| v2(t) + Σ_{i=2}^{n} bi ui(t). (1)

Let y : [0, T] → Rn be the solution of ẏ = f(y, u(t)), y(0) = 0. (2) Then one has, as b → 0, y(T) = b + o(b). (3) Hence, using the Brouwer fixed-point theorem and standard estimates on ordinary differential equations, one gets the local controllability of ẏ = f(y, u) (around (0, 0) ∈ Rn × Rm) in time T, that is: for every ε > 0, there exists η > 0 such that, for every (a, b) ∈ Rn × Rn with |a| < η and |b| < η, there exists a trajectory (y, u) : [0, T] → Rn × Rm of the control system ẏ = f(y, u) such that

y(0) = a, y(T) = b, (4)

|u(t)| ≤ ε, t ∈ (0, T). (5)

slide-15
SLIDE 15

Bad and good news for L = 2π

  • Bad news: the order 2 is not sufficient. One needs to go to the order 3.

  • Good news: the fact that the order is odd allows one to get local controllability in arbitrarily small time. The reason: if one can move in the direction ξ ∈ H⊥, one can also move in the direction −ξ. Hence it suffices to argue by contradiction (assume that it is impossible to enter H⊥ in small time, etc.).

slide-16
SLIDE 16

Lie brackets and obstruction to small-time local controllability

Theorem (H. Sussmann (1983))

Let us assume that f0 and f1 are analytic in an open neighborhood Ω of a and that

f0(a) = 0. (1)

We consider the control system

ẏ = f0(y) + uf1(y), (2)

where the state is y ∈ Ω and the control is u ∈ R. Let us also assume that the control system (2) is small-time locally controllable at (a, 0). Then

[f1, [f1, f0]](a) ∈ Span {ad^k_f0 f1(a); k ∈ N}. (3)

slide-17
SLIDE 17

How to apply it to the KdV control system?

Question 1: What is [f1, [f1, f0]](0) in the case of the KdV control system? Question 2: What is Span {ad^k_f0 f1(0); k ∈ N} in the case of the KdV control system? Note that, in the finite-dimensional case, Span {ad^k_f0 f1(a); k ∈ N} is just the controllable space of the linearized control system of ẏ = f0(y) + uf1(y) at (a, 0), i.e. of the linear control system

ẏ = ∂f0/∂y(a)y + uf1(a), (1)

where the state is y ∈ Rn and the control is u ∈ R. Hence one may expect that, for the KdV control system, Span {ad^k_f0 f1(0); k ∈ N} is the controllable part of

yt + yx + yxxx = 0, t ∈ [0, T], x ∈ [0, L], (2)

y(t, 0) = y(t, L) = 0, yx(t, L) = u(t), t ∈ [0, T], (3)

where, at time t ∈ [0, T], the control is u ∈ R and the state is y(t, ·) ∈ L2(0, L).

slide-18
SLIDE 18

Lie bracket [f0, f1](a) when f0(a) = 0

[Figure: the point a, with f0(a) = 0.]

slide-19
SLIDE 19

Lie bracket [f0, f1](a) when f0(a) = 0

[Figure: from a, apply the control u = −η during time ε, reaching y(ε).]

slide-20
SLIDE 20

Lie bracket [f0, f1](a) when f0(a) = 0

[Figure: from y(ε), apply the control u = η during another time ε, reaching y(2ε).]

slide-21
SLIDE 21

Lie bracket [f0, f1](a) when f0(a) = 0

[Figure: the resulting point satisfies y(2ε) ≃ a + ηε²[f0, f1](a) as ε → 0+.]

slide-22
SLIDE 22

An example of the computation of a bracket for a PDE

Consider the simplest PDE control system

yt + yx = 0, x ∈ [0, L], y(t, 0) = u(t). (1)

It is a control system where, at time t, the state is y(t, ·) : (0, L) → R and the control is u(t) ∈ R. Formally it can be written in the form ẏ = f0(y) + uf1(y). Here f0 is linear and f1 is constant.

slide-23
SLIDE 23

Lie bracket [f0, f1](a) for ˙ y = f0(y) + uf1(y), with f0(a) = 0

[Figure: the same construction as before: y(2ε) ≃ a + ηε²[f0, f1](a) as ε → 0+.]

slide-24
SLIDE 24

An example of the computation of a bracket for a PDE (continued)

Let us consider, for ε > 0, the control defined on [0, 2ε] by u(t) := −η for t ∈ (0, ε), u(t) := η for t ∈ (ε, 2ε). Let y : (0, 2ε) × (0, L) → R be the solution of the Cauchy problem

yt + yx = 0, t ∈ (0, 2ε), x ∈ (0, L), (1)

y(t, 0) = u(t), t ∈ (0, 2ε), y(0, x) = 0, x ∈ (0, L). (2)

Then one readily gets, if 2ε ≤ L,

y(2ε, x) = η, x ∈ (0, ε), y(2ε, x) = −η, x ∈ (ε, 2ε), (3)

y(2ε, x) = 0, x ∈ (2ε, L). (4)
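As a sanity check, this explicit profile can be reproduced numerically. The sketch below (the grid size and the values of η, ε are our illustrative choices) uses an upwind scheme with CFL number 1, which transports the inflow data exactly:

```python
import numpy as np

L_len, eta, eps = 1.0, 0.7, 0.1
nx = 1000
dx = L_len / nx
dt = dx                                # CFL = 1: one cell per step, exact transport
nsteps = int(round(2 * eps / dt))      # integrate up to t = 2*eps
y = np.zeros(nx)
for n in range(nsteps):
    t = n * dt
    u = -eta if t < eps else eta       # the bang-bang control of the slide
    y[1:] = y[:-1].copy()              # shift the profile one cell to the right
    y[0] = u                           # inflow boundary condition y(t, 0) = u(t)

# y(2*eps, .) equals eta on (0, eps), -eta on (eps, 2*eps), 0 beyond:
print(y[50], y[150], y[500])
```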

slide-25
SLIDE 25

An example of the computation of a bracket for a PDE (continued)

|y(2ε, ·) − y(0, ·)|L2(0,L) / ε² → +∞ as ε → 0+. (1)

For every φ ∈ H2(0, L), one gets after suitable computations

lim_{ε→0+} ∫_0^L φ(x) (y(2ε, x) − y(0, x))/ε² dx = −ηφ′(0). (2)

So, in some sense, we could say that [f0, f1](0) = δ′_0. Note however that this derivative of a Dirac mass at 0 does not belong to the (natural) state space L2(0, L).
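The limit (2) can be checked directly from the explicit profile of y(2ε, ·): the pairing reduces to differences of an antiderivative of φ. A small sketch (the test function and the value of η are our illustrative choices):

```python
import math

# y(2*eps, .) equals eta on (0, eps), -eta on (eps, 2*eps), 0 beyond, so the
# pairing with phi can be computed exactly from an antiderivative of phi.
eta = 0.5
phi = lambda x: math.sin(2 * x)       # smooth test function, phi'(0) = 2
Phi = lambda x: -math.cos(2 * x) / 2  # antiderivative of phi

def pairing(eps):
    I = eta * (Phi(eps) - Phi(0.0)) - eta * (Phi(2 * eps) - Phi(eps))
    return I / eps**2

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, pairing(eps))
# the values approach -eta * phi'(0) = -1.0, the pairing of eta * delta'_0 with phi
```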

slide-26
SLIDE 26

Open problems for the global controllability

Do we have global controllability? This is open even with three boundary controls:

yt + yx + yxxx + yyx = 0, (1)

y(t, 0) = u1(t), y(t, L) = u2(t), yx(t, L) = u3(t). (2)

Note that, for (1)-(2),

1. The global controllability holds in large time (L. Rosier (1997)).

2. The global null controllability in small time holds in Lagrangian coordinates (L. Gagnon (2014)).

One has global controllability in small time for

yt + yx + yxxx + yyx = u4(t), (3)

y(t, 0) = u1(t), y(t, L) = u2(t), yx(t, L) = u3(t), (4)

(M. Chapouly (2009)). The proof uses the return method, as for the Navier-Stokes control system.

slide-27
SLIDE 27

The return method for KdV or Navier-Stokes equations

[Figure: the (t, y) plane, with time running from 0 to T.]

slide-28
SLIDE 28

The return method for KdV or Navier-Stokes equations

[Figure: the (t, y) plane.]

slide-29
SLIDE 29

The return method for KdV or Navier-Stokes equations

[Figure: small balls B1 (at t = 0) and B2 (at t = T) around 0 in the (t, y) plane.]

slide-30
SLIDE 30

The return method for KdV or Navier-Stokes equations

[Figure: a trajectory ȳ(t) of the control system going from 0 back to 0, together with B1 and B2.]

slide-31
SLIDE 31

The return method for KdV or Navier-Stokes equations

[Figure: balls B3 and B4 along the trajectory ȳ(t).]

slide-32
SLIDE 32

An example without small-time global controllability

We consider quantum systems whose dynamics can be described by a linear Schrödinger equation of the form

iψt(t, x) = (−(1/2)∆ + V(x) − ⟨E(t), x⟩) ψ(t, x), (t, x) ∈ (0, T) × RN. (1)

Here, N ∈ N∗ is the space dimension, ⟨·, ·⟩ is the usual scalar product on RN, V : x ∈ RN → R, E : t ∈ (0, T) → RN and ψ : (t, x) ∈ (0, T) × RN → C are a static potential, a time-dependent electric field, and the wave function, respectively. This equation represents a quantum particle in the potential V subject to the electric field E(t). Planck’s constant and the particle mass have been set to one. System (1) is a control system in which the state is the wave function ψ, which belongs to the unit L2(RN; C)-sphere, denoted by S, and the control is the electric field E.

slide-33
SLIDE 33

Global controllability in large time

We assume that

(1) V ∈ C∞(RN) and, ∀α ∈ NN such that |α| ≥ 2, ∂α_x V ∈ L∞(RN).

Theorem (U. Boscain, M. Caponigro, Th. Chambrion, P. Mason and M. Sigalotti (2009, 2012))

For generic V satisfying (1) the following holds. For every ε > 0 and ψ0, ψ1 ∈ S, there exist a time T > 0 and a piecewise constant function u : [0, T] → R such that the solution of

iψt(t, x) = (−(1/2)∆ + V(x) − ⟨E(t), x⟩) ψ(t, x), ψ(0) = ψ0, (2)

satisfies

|ψ(T) − ψ1|L2(RN) < ε. (3)

slide-34
SLIDE 34

Theorem (K. Beauchard, JMC and H. Teismann (2014))

Let b > 0, x0, ẋ0 ∈ RN, CN := (∫_{RN} e^{−|y|²} dy)^(1/2), and let ψ0 ∈ S be defined by

(1) ψ0(x) := (b^(N/4)/CN) e^{−(b/2)|x−x0|² + i⟨ẋ0, x−x0⟩}.

Let ψ1 ∈ S be a state that does not have a Gaussian profile, in the sense that

|ψ1(·)| ≠ (det(S)^(1/4)/CN) e^{−(1/2)|√S(·−γ)|²}, ∀γ ∈ RN, ∀S ∈ MN(R), S^T = S > 0.

Then there exist T > 0 and δ > 0 such that, for every E : [0, T] → RN, the solution ψ of

(2) iψt(t, x) = (−(1/2)∆ + V(x) − ⟨E(t), x⟩) ψ(t, x), ψ(0) = ψ0,

satisfies

|ψ(t) − ψ1|L2(RN) > δ, ∀t ∈ [0, T].

slide-35
SLIDE 35

Sketch of proof

We use Gaussian approximate solutions that are localized around classical trajectories; they are called “trajectory-coherent states”. This strategy was introduced by H. Teismann in 2005. For E : R → RN, we introduce the solutions xc : R → RN and Q : R → CN×N of

ẍc + ∇V(xc) = E(t), xc(0) = x0, ẋc(0) = ẋ0, (1)

Q̇ + Q² + V′′[xc] = 0, Q(0) = ib IdN. (2)

slide-36
SLIDE 36

We introduce the “classical action” S : (t, x) ∈ R × RN → R,

(1) S(t, x) := ∫_0^t ((1/2)|ẋc(s)|² − V[xc(s)]) ds + ⟨ẋc(t), x − xc(t)⟩,

and the approximate solution

(2) ψ̃(t, x) := (b^(N/4)/CN) exp(Φ(t, x)),

where

Φ(t, x) := i(S(t, x) + (1/2)⟨Q(t)[x − xc(t)], x − xc(t)⟩) + ∫_0^t (i⟨xc(s), E(s)⟩ − Tr[Q(s)]/2) ds.

Then the key estimates are the following ones, where t is small enough:

|(ψ − ψ̃)(t)|L2(RN) ≤ C ∫_0^t |ℑ[Q(s)]⁻¹|^(3/2) ds, (3)

(b/2) IdN ≤ ℑ(Q(t)) ≤ (3b/2) IdN. (4)

...

slide-37
SLIDE 37

1. The Korteweg-de Vries equations

2. Controllability

3. Stabilization

slide-38
SLIDE 38

Double inverted pendulum (CAS, ENSMP/La Villette)

slide-39
SLIDE 39
slide-40
SLIDE 40
slide-41
SLIDE 41
slide-42
SLIDE 42
slide-43
SLIDE 43

Rapid stabilization in finite dimension

We consider the control system

(1) ẏ = f(y, u),

where the state is y ∈ Rn and the control is u ∈ Rm. We assume that f(0, 0) = 0. We are interested in the following question (rapid stabilization): is it true that, for every ν > 0, there exist a feedback law y ∈ Rn → u(y) ∈ Rm, C > 0 and r > 0 such that, for every solution of the closed-loop system ẏ = f(y, u(y)) with |y(0)| ≤ r, one has

(2) |y(t)| ≤ Ce−νt|y(0)|, ∀t ≥ 0?

If the answer is yes, one says that the rapid stabilization property holds for ẏ = f(y, u).

slide-44
SLIDE 44

A classical result on rapid stabilization in finite dimension

Theorem (Pole shifting theorem, M. Wonham (1967))

If the linearized control system at (0, 0) ∈ Rn × Rm,

ẏ = ∂f/∂y(0, 0)y + ∂f/∂u(0, 0)u, (1)

is controllable, then the rapid stabilization property holds for ẏ = f(y, u).
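For single-input systems, the pole-shifting theorem is constructive via Ackermann's formula, K = e_n^T C^{-1} p(A), with C the controllability matrix and p the desired characteristic polynomial. A numpy sketch (applied here to the spring-mass linearization that appears later in the talk; the chosen poles are our illustrative values):

```python
import numpy as np

def ackermann(A, b, poles):
    """Feedback gain K such that A - b K has the prescribed poles
    (single-input Ackermann formula K = e_n^T C^{-1} p(A))."""
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    coeffs = np.poly(poles)                       # desired characteristic polynomial
    pA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    row = np.linalg.solve(C.T, np.eye(n)[:, -1])  # e_n^T C^{-1}
    return row @ pA

# spring-mass linearization: x1' = x2, x2' = -x1 + u
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([[0.0], [1.0]])
K = ackermann(A, b, [-3.0, -3.0])
print(K)                                          # gain [8. 6.] for this example
print(np.linalg.eigvals(A - b @ K.reshape(1, -1)))  # both poles moved to -3
```

Larger decay rates ν simply correspond to placing the poles further into the left half-plane.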

slide-45
SLIDE 45

Stabilization and damping

Let us consider the linear KdV control system

yt + yx + yxxx = 0, y(t, 0) = y(t, L) = 0, yx(t, L) − yx(t, 0) = u(t), (1)

where, at time t ≥ 0, the state is y(t, ·) ∈ L2(0, L) and the control is u(t) ∈ R. Simple integrations by parts show that, along the solutions of (1), one has

(2) d/dt ∫_0^L y² dx = u(yx(t, L) + yx(t, 0)).

Hence it is tempting to consider the feedback law

(3) u(y) = −M(yx(t, L) + yx(t, 0)), with M > 0.

It has been proved by G. P. Menzala, C. F. Vasconcellos and E. Zuazua in 2002 that this feedback law leads to exponential stability of the closed-loop system if the length is not critical, even for the nonlinear KdV equation. Unfortunately, letting M → +∞ does not lead to rapid stabilization, i.e. to an exponential decay rate as large as one wants.

slide-46
SLIDE 46

Damping

For mechanical systems at least, a natural candidate for a control Lyapunov function is given by the total energy, i.e., the sum of potential and kinetic energies.

slide-47
SLIDE 47

Damping

For mechanical systems at least, a natural candidate for a control Lyapunov function is given by the total energy, i.e., the sum of potential and kinetic energies. Consider the classical spring-mass control system.

[Figure: a mass m attached to a spring, subject to an external force u.]

slide-48
SLIDE 48

[Figure: mass m, displacement x1, external force u.]

The control system is

ẋ1 = x2, ẋ2 = −(k/m)x1 + u/m, (Spring-mass)

where m is the mass of the point attached to the spring, x1 is the displacement of the mass (on a line), x2 is the speed of the mass, k is the spring constant, and u is the external force applied to the mass. The state is (x1, x2)tr ∈ R2 and the control is u ∈ R.

slide-49
SLIDE 49

The total energy E of the system is

E = (1/2)(kx1² + mx2²).

One has Ė = ux2. Hence, if x2 = 0, one cannot have Ė < 0. However, it is tempting to consider the feedback law u := −Mx2, where M > 0. Using the LaSalle invariance principle, one gets that this feedback law globally asymptotically stabilizes the spring-mass control system.

slide-50
SLIDE 50

An important limitation of the damping method

Let us consider the spring-mass control system with normalized physical constants (k = m = g = 1):

ẋ1 = x2, ẋ2 = −x1 + u.

Let V : R2 → R be defined by

V(x) = x1² + x2², ∀x = (x1, x2)tr ∈ R2.

One has V̇ = 2x2u, and it is tempting to take u := −Mx2, where M is some fixed positive real number. An a priori guess would be that, if we let M be quite large, then we get quite good convergence, as fast as we want.

slide-51
SLIDE 51

An important limitation of the damping method

Let us consider again the feedback u := −Mx2 for the spring-mass system with normalized constants. An a priori guess would be that, letting M be quite large, we get convergence as fast as we want. But this is completely wrong: on a given time interval [0, T], as M → +∞, x2 goes very quickly to 0 while x1 barely changes. This is the overdamping phenomenon.
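The overdamping phenomenon is visible on the spectrum of the closed-loop matrix: with u = −Mx2 the matrix is [[0, 1], [−1, −M]], and the slow root of s² + Ms + 1 = 0 tends to 0 as M → +∞. A quick numerical check (the sampled values of M are our arbitrary choices):

```python
import numpy as np

def decay_rate(M):
    """Exponential decay rate of x1' = x2, x2' = -x1 - M*x2,
    i.e. -max Re(lambda) over the spectrum of the closed-loop matrix."""
    A = np.array([[0.0, 1.0], [-1.0, -M]])
    return -np.max(np.linalg.eigvals(A).real)

for M in (0.5, 2.0, 10.0, 50.0):
    print(M, decay_rate(M))
# the rate is best at M = 2 (critical damping, double root at -1)
# and degrades like 1/M for large M: overdamping
```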
slide-52
SLIDE 52

ẋ1 = x2, ẋ2 = −x1 − (1/10)x2

slide-53
SLIDE 53

ẋ1 = x2, ẋ2 = −x1 − (1/2)x2

slide-54
SLIDE 54

ẋ1 = x2, ẋ2 = −x1 − x2

slide-55
SLIDE 55

ẋ1 = x2, ẋ2 = −x1 − 2x2

slide-56
SLIDE 56

ẋ1 = x2, ẋ2 = −x1 − 3x2

slide-57
SLIDE 57

ẋ1 = x2, ẋ2 = −x1 − 4x2

slide-58
SLIDE 58

ẋ1 = x2, ẋ2 = −x1 − 5x2

slide-59
SLIDE 59

ẋ1 = x2, ẋ2 = −x1 − 6x2

slide-60
SLIDE 60

ẋ1 = x2, ẋ2 = −x1 − 10x2

slide-61
SLIDE 61

ẋ1 = x2, ẋ2 = −x1 − 20x2

slide-62
SLIDE 62

A second KdV control system

(1) yt + yx + yxxx + yyx = 0, t ∈ (0, T), x ∈ (0, L), y(t, 0) = u(t), y(t, L) = 0, yx(t, L) = 0, t ∈ (0, T).

The control system (1) is locally null controllable: L. Rosier (2004).

Theorem (E. Cerpa and JMC (2013))

For every λ > 0, there exist C > 0, r > 0 and a feedback law y → u(y) such that, for this feedback law,

|y(0)|L2(0,L) ≤ r ⇒ |y(t)|²L2(0,L) ≤ Ce−λt|y(0)|²L2(0,L), ∀t > 0. (2)
slide-63
SLIDE 63

Proof: With M. Krstić’s backstepping approach

We look for a transformation y ∈ L2(0, L) → z ∈ L2(0, L) defined by

(1) z(x1) := y(x1) − ∫_{x1}^L k(x1, x2)y(x2) dx2,

such that the trajectory y of

(2) yt + yx + yxxx = 0, y(t, 0) = u(t), y(t, L) = 0, yx(t, L) = 0,

with the feedback law u(t) := ∫_0^L k(0, x2)y(t, x2) dx2, is mapped into the trajectory z = z(t, x), solution of the linear system

(3) zt + zx + zxxx + λz = 0, z(t, 0) = 0, z(t, L) = 0, zx(t, L) = 0.

Note that, for (3), one has (just multiply (3) by z and do some integrations by parts):

(4) |z(t)|²L2(0,L) ≤ e−λt|z(0)|²L2(0,L), ∀t ≥ 0.

slide-64
SLIDE 64

Kernel equation

This property of the transformation y → z holds if (and only if)

(1) k111 + k1 + k222 + k2 = −λk, for 0 < x1 < x2 < L,
    k(x1, L) = 0, in [0, L],
    k(x1, x1) = 0, in [0, L],
    k1(x1, x1) = (λ/3)(L − x1), in [0, L],

with ki := ∂k/∂xi and kiii := ∂³k/∂xi³. Moreover, if k is smooth enough (Lipschitz is sufficient), one can check that the same feedback law provides, for the initial nonlinear KdV control system, (local) asymptotic stability with an exponential decay rate at least equal to λ.

slide-65
SLIDE 65

Proof of the existence of k

Let us make the change of variables t = x2 − x1, s = x1 + x2 and define G(s, t) := k(x1, x2) on T0 := {(s, t); t ∈ [0, L], s ∈ [t, 2L − t]}. Then k satisfies the kernel equation if and only if

(1) 6Gtts + 2Gsss + 2Gs = −λG, in T0,
    G(s, 2L − s) = 0, in [L, 2L],
    G(s, 0) = 0, in [0, 2L],
    Gt(s, 0) = (λ/6)(s − 2L), in [0, 2L].

We transform this equation by integrating twice with respect to t. We get that (1) is equivalent to

(2) G(s, t) = −(λt/6)(2L − t − s) + (1/6) ∫_s^{2L−t} ∫_0^t ∫_0^τ (2Gsss + 2Gs + λG)(η, ξ) dξ dτ dη.
slide-66
SLIDE 66

To prove that such a function G = G(s, t) exists, we use the method of successive approximations. We take as an initial guess

(1) G^1(s, t) = −λ(2L − t − s)/6,

and define the recursion

(2) G^(n+1)(s, t) = (1/6) ∫_s^{2L−t} ∫_0^t ∫_0^τ (2G^n_sss + 2G^n_s + λG^n)(η, ξ) dξ dτ dη.

Performing some computations, we get for instance

(3) G^2(s, t) = (1/108) [ t³(λ − λ²L + λ²t/4)(2L − t − s) + (t³λ²/4)((2L − t)² − s²) ],

slide-67
SLIDE 67

More generally, one has the following formula:

(1) G^k(s, t) = Σ_{i=1}^k (a_i^k t^{2k−1} + b_i^k t^{2k}) ((2L − t)^i − s^i),

where the coefficients satisfy b_k^k = 0 and, more importantly, there exist positive constants M, B such that, for any k ≥ 1 and any (s, t) ∈ T0,

(2) |G^k(s, t)| ≤ (M B^k/(2k)!) (t^{2k−1} + t^{2k}).

This implies that the series Σ_{n=1}^∞ G^n(s, t) is uniformly convergent in T0. Therefore it defines a continuous function G : T0 → R,

(3) G(s, t) = Σ_{n=1}^∞ G^n(s, t).

Then one checks that G is a solution of our integral equation and that it is C¹ on T0.
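The successive-approximation argument can be illustrated on a simpler one-dimensional fixed point of the same type, G(t) = g(t) + ∫_0^t G(s) ds with g ≡ −1, whose exact solution is G(t) = −e^t; the partial sums converge with the same factorial-type speed as the bound (2). A sketch (the discretization choices are ours):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
h = t[1] - t[0]
term = -np.ones_like(t)      # first approximation: G^1 = g = -1
G = term.copy()
for n in range(1, 20):
    # next term: cumulative trapezoidal integral of the previous term
    term = np.concatenate(([0.0], np.cumsum((term[1:] + term[:-1]) / 2) * h))
    G += term                # partial sum of the series, G^1 + ... + G^(n+1)
err = np.max(np.abs(G + np.exp(t)))  # distance to the exact solution -e^t
print(err)
```

Each iterate contributes a term of size t^n/n!, so twenty terms already reproduce −e^t up to quadrature error.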

slide-68
SLIDE 68

References for the backstepping

1. Backstepping was initially a recursive method to stabilize finite-dimensional control systems of the form ẋ = f(x, y), ẏ = u.

2. First application to PDEs: JMC and B. d’Andréa-Novel (1998).

3. This method has been used on the discretization of partial differential equations by D. Bošković, A. Balogh and M. Krstić in 2003.

4. A key modification of the method, using a Volterra transformation, was introduced by D. Bošković, M. Krstić and W. Liu in 2001.

5. For a survey on this method with Volterra transformations, see the book by M. Krstić and A. Smyshlyaev in 2008.

slide-69
SLIDE 69

Return to the initial KdV control system

(1) yt + yx + yxxx + yyx = 0, t ∈ (0, T), x ∈ (0, L), y(t, 0) = 0, y(t, L) = 0, yx(t, L) = u(t), t ∈ (0, T).

We assume that

(2) L ∉ N := {2π√((k² + kl + l²)/3); k ∈ N∗, l ∈ N∗}.

Then the linearized control system around 0 is controllable and the nonlinear control system is locally controllable in small time. We are interested in the rapid (local) stabilization of the nonlinear system. Unfortunately, the backstepping approach does not work here.

slide-70
SLIDE 70

Rapid stabilization of the initial KdV-control system

Theorem (JMC and Q. Lü (2013))

Let us assume that L ∉ N. For every λ > 0, there exist C > 0, r > 0 and a feedback law y → u(y) such that, for this feedback law,

|y(0)|L2(0,L) ≤ r ⇒ |y(t)|²L2(0,L) ≤ Ce−λt|y(0)|²L2(0,L), ∀t > 0. (1)

Remarks

1. For the linearized KdV-Rosier control system, such a result was obtained before by E. Cerpa and E. Crépeau in 2009 for the H1 norm. But it seems that, due to some regularity issues, this does not allow one to get rapid stabilization for the nonlinear KdV-Rosier system.

2. The feedback law u = 0 already provides (local) exponential stability if L ∉ N: G. Perla Menzala, C. Vasconcellos and E. Zuazua (2002).

3. If L = 2π (the first critical length) and u = 0, one does not have asymptotic stability for the linearized system; however, one has asymptotic stability for the nonlinear system: JMC, J. Chu and P. Shang (2012). The decay rate is not exponential (it is 1/√t).

slide-71
SLIDE 71

Proof of the rapid stabilizability

The backstepping approach does not work. We need to use a more general transformation: y ∈ L2(0, L) → z ∈ L2(0, L) is now defined by

(1) z(x1) := y(x1) − ∫_0^L k(x1, x2)y(x2) dx2.

(Every linear transformation y ∈ L2(0, L) → z ∈ L2(0, L) can be written in this form.) Again, we want that the trajectory y of

(2) yt + yx + yxxx = 0, y(t, 0) = 0, y(t, L) = 0, yx(t, L) = u(t),

with the feedback law u(t) := ∫_0^L kx1(0, x2)y(t, x2) dx2, is mapped into the trajectory z = z(t, x), solution of the linear system

(3) zt + zx + zxxx + λz = 0, z(t, 0) = 0, z(t, L) = 0, zx(t, L) = 0.

slide-72
SLIDE 72

Kernel equation

This property of the transformation y → z holds if (and only if)

(1) k111 + k1 + k222 + k2 + λk = λδ(x1 − x2), in (0, L)²,
    k(x1, 0) = k(x1, L) = k2(x1, 0) = k2(x1, L) = 0, in (0, L),
    k(0, x2) = k(L, x2) = 0, in (0, L),

where δ(x1 − x2) is the Dirac mass on the diagonal of the square [0, L] × [0, L]. Next step: prove the existence of a solution to the kernel equation (1).

slide-73
SLIDE 73

How to prove the existence of k

Let us define an unbounded linear operator A : D(A) ⊂ L2(0, L) → L2(0, L) as follows:

D(A) := {ϕ; ϕ ∈ H3(0, L), ϕ(0) = ϕ(L) = 0, ϕx(0) = ϕx(L)}, (1)

Aϕ := −ϕxxx − ϕx. (2)

The operator A is skew-adjoint and has compact resolvent. Furthermore, since L ∉ N, L ∉ 2πN∗, which, as one easily checks, implies that 0 is not an eigenvalue of A. Denote by {iµj}j∈Z, µj ∈ R, the eigenvalues of A, organized in the following way:

. . . ≤ µ−2 ≤ µ−1 < 0 < µ0 ≤ µ1 ≤ µ2 ≤ . . . . (3)

Since the control is of dimension 1 and the linearized control system is controllable, all these eigenvalues are simple. Let us write {ϕj}j∈Z for the corresponding eigenfunctions with |ϕj|L2(0,L) = 1 (j ∈ Z). It is well known that {ϕj}j∈Z constitutes an orthonormal basis of L2(0, L).

slide-74
SLIDE 74

For j ∈ Z, let ψj : [0, L] → C be the solution of

(1) ψj′′′ + ψj′ + λψj − iµjψj = 0 in (0, L),
    ψj(0) = ψj(L) = 0, ψj′(L) − ψj′(0) = 1.

The idea is to search for k in the following form:

(2) k(x1, x2) = Σ_{j∈Z} cj ψj(x1) ϕj(x2).

...

slide-75
SLIDE 75

Stabilization of the KdV equation for critical lengths

We assume that the length is critical but that

(1) L ∉ 2πN∗.

Then the power series expansion up to the order 2 is sufficient for the local controllability. (If (1) does not hold, an expansion up to the order 3 is necessary - and sufficient - for the local controllability.) Let Hu ⊂ L2(0, L) be the uncontrollable part of the linearized control system around 0. For y ∈ L2(0, L), let yu be the orthogonal projection of y on Hu for the L2 scalar product. Let

(2) yc := y − yu.

slide-76
SLIDE 76

Then, one has the following theorem.

Theorem (JMC and I. Rivas (2014))

There exist T > 0, r > 0, λ > 0, C > 0 and a continuous T-periodic time-varying feedback law u : R × Hu → R, (t, h) → u(t, h), such that, for every solution y of the closed-loop system

(1) yt + yx + yxxx + yyx = 0, y(t, 0) = y(t, L) = 0, yx(t, L) = u(t, yu),

with |y(0, ·)|L2(0,L) ≤ r, one has

(2) |yc(t)|²L2(0,L) + |yu(t)|L2(0,L) ≤ Ce−λt (|yc(0)|²L2(0,L) + |yu(0)|L2(0,L)).
slide-77
SLIDE 77

Sketch of the proof in a finite dimensional framework

Let n, m and k be three positive integers. We consider the system given by ˙ x = Ax + Bu and ˙ y = Ey + Q(x, x), (1) where A ∈ Rn×n, B ∈ Rn×m, E ∈ Rk×k and Q is a quadratic map from Rn × Rn into Rk with n ∈ N \ {0}. Equation (1) defines a control system where the state is (xtr, ytr)tr ∈ Rn+k with x ∈ Rn and y ∈ Rk and where the control is u ∈ Rm.

slide-78
SLIDE 78

We assume the existence of T > 0 such that the following properties hold:

(P1) There exists ρ1 ∈ (0, 1) such that (ẋ = Ax) ⇒ (|x(T)|² ≤ ρ1|x(0)|²), (1)

(P2) |eET y| ≤ |y|, ∀y ∈ Rk, (2)

(P3) There exist δ > 0, C0 > 0 and v : [0, T] × Sk−1 → Rn such that

v ∈ L∞([0, T] × Sk−1; Rn), (3)

|v(t, b) − v(t, b′)| ≤ C0|b − b′|, ∀t ∈ (0, T), ∀b, b′ ∈ Sk−1, (4)

(x̃′ = Ax̃ + v(t, b), ỹ′ = Eỹ + Q(x̃, x̃), x̃(0) = 0, ỹ(0) = 0) ⇒ (x̃(T) = 0, ỹ(T) · eTEb ≤ −δ), ∀b ∈ Sk−1. (5)

slide-79
SLIDE 79

Asymptotic controllability

Let us point out that P1, P2 and P3 imply asymptotic controllability to (0, 0). However, they do not imply that the system can be stabilized by means of (continuous) feedback laws u(x, y) (R. Brockett (1983), JMC (1991)). However, we are going to see that one can stabilize the system by means of time-varying feedback laws u(t, x, y) (which are continuous with respect to x and y).

Remark

Let us recall that most controllable systems can be stabilized (in finite time) by means of time-varying feedback laws (JMC (1995)).

slide-80
SLIDE 80

An example

Denote Hq(M) is the q-th singular homology group of M with coefficients in Z, where M is a topological space and q a non negative integer. We consider the control system ˙ z = f(z, u), where the state is z ∈ Rk and the control is u ∈ Rm and f(0, 0) = 0. Then,

Theorem (JMC (1991))

If the system ˙ z = f(z, u) is locally asymptotically stabilizable by means of a (continuous) stationary feedback law vanishing at 0, then (1) f∗({(z, u); f(z, u) = 0, |z| < ε and |u| < ε}) = Hn−1(Rn \ {0}) = Z, for every ε > 0 small enough. The control system ˙ x1 = −x1 + u1, ˙ x2 = −x2 + u2, ˙ y1 = x2

1 − x2 2, ˙

y2 = 2x1x2, (2) satisfies P1, P2 and P3 but does not satisfy (1): The left hand side of (1) is 2Z.

slide-81
SLIDE 81

For ε > 0, let us consider the following periodic time-varying feedback law uε : R × Rk → Rm:

uε(t, y) := ε|e−tEy|^(1/2) v(t, e−tEy/|e−tEy|), t ∈ [0, T), y ∈ Rk \ {0}, and uε(t, y) := 0, t ∈ [0, T), y = 0, (1)

uε(t + T, y) = uε(t, y), t ∈ R, y ∈ Rk. (2)

We are interested in the asymptotic behavior of the solutions of the closed-loop system

ẋ = Ax + Buε(t, y) and ẏ = Ey + Q(x, x). (3)

slide-82
SLIDE 82

The following theorem shows that the feedback law uε just defined leads to global asymptotic stability provided that ε > 0 is small enough.

Theorem

There exists ε0 > 0 such that, for every ε ∈ (0, ε0], there exist C > 0 and λ > 0 such that, for every solution (x, y) of the closed-loop system

(1) ẋ = Ax + Buε(t, y) and ẏ = Ey + Q(x, x),

one has

(2) |x(t)|² + |y(t)| ≤ Ce−λt (|x(0)|² + |y(0)|), ∀t ∈ [0, +∞).

slide-83
SLIDE 83

Our next result allows one to stabilize nonlinear control systems for which a “good quadratic approximation” is given by our previous control system. The control system now takes the following more general form:

(1) ẋ = Ax + Rx(x, y) + Bu, ẏ = Ly + Q(x, x) + Ry(x, y),

where the state is (xtr, ytr)tr ∈ Rn+k, with x ∈ Rn and y ∈ Rk, and the control is u ∈ Rm. We assume that Rx : Rn × Rk → Rn and Ry : Rn × Rk → Rk are both continuous. Our next result deals with the asymptotic stability of 0 for the closed-loop system

(2) ẋ = Ax + Buε(t, y) + Rx(x, y) and ẏ = Ly + Q(x, x) + Ry(x, y).

slide-84
SLIDE 84

Theorem

Let us assume the existence of η > 0 and M > 0 such that, for every (x, y) ∈ Rn × Rk with |x| + |y| ≤ 1,

|Rx(εx, ε²y)| ≤ Mε^(1+η), ∀ε ∈ (0, 1), (1)

|Ry(εx, ε²y)| ≤ Mε^(2+η), ∀ε ∈ (0, 1). (2)

Then there exists ε0 > 0 such that, for every ε ∈ (0, ε0], there exist C > 0, ρ > 0 and λ > 0 such that, for every solution (x, y) of

(3) ẋ = Ax + Buε(t, y) + Rx(x, y), ẏ = Ly + Q(x, x) + Ry(x, y),

with |x(0)|² + |y(0)| ≤ ρ, one has

(4) |x(t)|² + |y(t)| ≤ Ce−λt (|x(0)|² + |y(0)|), ∀t ∈ [0, +∞).
(4)