SLIDE 1

Observer convergence: From necessary to sufficient conditions

Laurent PRALY and Vincent ANDRIEU

IHP, June 2016

SLIDE 2

§ 1/12

Observation Problem

SLIDE 3

§1.1

Observation Problem: the model is

$$\frac{dx}{dt}(t) = \dot x = f(x, t, u(t)) \qquad (*)$$

where u collects the exogenous actions, and, in reality,

$$y(t) \overset{?}{=} h(x, t)\,.$$

Observation problem: Find (= estimate) x(t), a solution of (*), that makes this hold.

SLIDE 4

§1.2

Observation Problem: Observer = any device solving this problem. Focus on observers which are dynamical systems of the form

$$\dot\xi = \varphi(\xi, y)\,, \qquad \hat x = \eta(\xi, y)$$

with y the measurement, x̂ the estimate and ξ finite dimensional.

This is a very simplified/degraded form of observer: it is fragile to data loss and unable to give any information on confidence. Compare with stochastic filters or set-valued observers. We are interested only in the convergence

$$\hat x(t) \to x(t) \quad \text{as}\ t \to \infty\,.$$
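To make the (ϕ, η) observer interface concrete, here is a minimal simulation sketch. The model (a harmonic oscillator), the gains and the forward-Euler discretization are illustrative assumptions, not taken from the talk:

```python
import numpy as np

# Illustrative model: harmonic oscillator with y = x1 (an assumed example).
def f(x):                                  # model dynamics  x' = f(x)
    return np.array([x[1], -x[0]])

def h(x):                                  # measurement map y = h(x)
    return x[0]

# Observer in the form  xi' = phi(xi, y),  xhat = eta(xi, y):
# here a Luenberger-type copy of the model with output injection.
L = np.array([2.0, 2.0])                   # hypothetical gains
def phi(xi, y):
    return f(xi) + L * (y - h(xi))

def eta(xi, y):
    return xi                              # the estimate is the observer state

dt, T = 1e-3, 20.0
x = np.array([1.0, 0.0])                   # true initial condition
xi = np.array([0.0, 0.0])                  # observer initial condition
for _ in range(int(T / dt)):               # forward Euler on both systems
    y = h(x)
    x = x + dt * f(x)
    xi = xi + dt * phi(xi, y)
print("x =", x, " xhat =", eta(xi, y))     # xhat(t) -> x(t)
```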

SLIDE 5

§ 2/12

Context and main property

SLIDE 6

§2.1

The technical context

  • The model is defined by two functions f and h:

$$\dot x = f(x)\,, \qquad y = h(x) \tag{1}$$

with x in an open set S_x of R^n and y in R^p. Its solutions are denoted X(x, t).

  • The observer is defined by two functions ϕ and η:

$$\dot\xi = \varphi(\xi, y)\,, \qquad \hat x = \eta(\xi, y) \tag{2}$$

with ξ in an open set S_ξ of R^m and x̂ in S_x. Its solutions are denoted Ξ_y(ξ, t) or Ξ((x, ξ), t).

  • d_x, d_ξ and d_y are distances on R^n, R^m and R^p respectively.

We omit time-dependence to simplify notations. No loss of generality as long as we do not want to consider families of models indexed by inputs (exogenous actions).

SLIDE 7

§2.2

The technical context. The functions f, h, ϕ and η are assumed to be such that:

  • When (x, ξ) is in S_x × S_ξ, the corresponding solution of (1),(2), (X(x, t), Ξ((x, ξ), t)), is unique and defined, with values in S_x × S_ξ, maximally on ]σ⁻_{S_x×S_ξ}(x, ξ), σ⁺_{S_x×S_ξ}(x, ξ)[.

  • If η does not depend on y it is uniformly continuous.
  • If η depends on y, both h and η are uniformly continuous.

SLIDE 8

§2.3

Main property. Property “Conv”: The zero error set

$$Z = \{(x, \xi) \in S_x \times S_\xi \ :\ x = \eta(\xi, h(x))\}$$

(the right-hand side η(ξ, h(x)) is the estimate x̂) contains an asymptotically stable subset Z_ω with S_x × S_ξ as domain of attraction. More precisely, there exist i) a function β of class¹ KL and ii) a continuous function γ : S_x × S_ξ → R₊ such that:

$$\sigma^+_{S_x \times S_\xi}(x, \xi) = \sigma^+_{S_x}(x) \qquad \forall (x, \xi) \in S_x \times S_\xi\,,$$

and, for all t in [0, σ⁺_{S_x}(x)),

$$d_{x,\xi}\Big(\big(X(x, t), \Xi((x, \xi), t)\big)\,,\ Z_\omega\Big) \;\leq\; \gamma(x, \xi)\ \beta\Big(d_{x,\xi}\big((x, \xi), Z_\omega\big)\,,\ t\Big)\,.$$

Because of the factor γ(x, ξ), this is not “equi”-asymptotic stability.

Notation: S_x^ω = Cartesian projection of Z_ω on S_x = set of estimable points x.

¹A function β : R₊ × R₊ → R₊ is said of class KL if, for each s in R₊, the function r → β(r, s) is continuous, strictly increasing and zero at zero, and, for each r in R₊, the function s → β(r, s) is strictly decreasing and satisfies lim_{s→∞} β(r, s) = 0.

SLIDE 9

§2.4

Variation of main property. Property “Convtune”: Given an integer m and an open subset S_ξ of R^m, for any compact subset C of S_x and for all pairs (ε_s, ε_t) of strictly positive real numbers, we can find i) a compact subset Γ of S_ξ, ii) a locally Lipschitz function ϕ : R^m × R^p → R^m, and iii) a uniformly continuous function η : R^m × R^p → S_x such that, for the observer given by these functions, we have, for all (x, ξ) in S_x × R^m:

  1. σ⁺_{S_x×R^m}(x, ξ) = σ⁺_{S_x}(x);

  2. for all (x, ξ) in C × Γ such that σ⁺_{S_x}(x) > ε_t, we have:

$$d_x\Big(X(x, t)\,,\ \eta\big(\Xi((x, \xi), t), h(X(x, t))\big)\Big) \;\leq\; \varepsilon_s \qquad \forall t \in [\varepsilon_t, \sigma^+_{S_x}(x))\,.$$

As fast as we want (ε_t) to as small as we want (ε_s), but maybe not convergence.

SLIDE 10

Part I: Necessary conditions

SLIDE 11

§ 3/12

Necessary condition 1 for Conv:

Detectability

SLIDE 12

§3.1

Necessary condition 1: Detectability. Proposition 1: [V. Andrieu, G. Besançon, U. Serres] Under Assumption Conv, the model is forward detectable, i.e. for all x_a and x_b in S_x satisfying:

$$h(X(x_a, t)) = h(X(x_b, t)) \qquad \forall t \geq 0\,,$$

we have:

$$\lim_{t \to +\infty} d_x\big(X(x_a, t)\,,\ X(x_b, t)\big) = 0\,.$$

SLIDE 13

§3.2

Necessary condition 1: Detectability

[Figure: two model solutions X(x_a, t), X(x_b, t) issued from x_a, x_b, with h(x_a) = h(x_b) and h(X(x_a, t)) = h(X(x_b, t)); the distance between the two solutions converges to 0.]

SLIDE 14

§ 4/12

Necessary condition 2 for Conv:

“The graph of an injective set-valued map is asymptotically stable”

SLIDE 15

§4.1

Necessary condition 2: Asymptotic stability of the graph of a map

Proposition 2: Under Assumption Conv, there exists an injective set-valued map x ∈ S_x^ω → η_inv(x):

  1. which is a right inverse of η given h, i.e.

$$\eta(\xi, h(x)) = x \qquad \forall \xi \in \eta_{inv}(x)\,,\ \forall x \in S_x^\omega\,;$$

  2. whose graph is Z_ω, i.e.

$$Z_\omega = \big\{(x, \xi) \in S_x^\omega \times S_\xi \ :\ \xi \in \eta_{inv}(x)\big\}\,,$$

and therefore is asymptotically stable with S_x × S_ξ as domain of attraction.

SLIDE 16

§4.2

Necessary condition 2: Asymptotic stability of the graph of a map

Away from singularities and with sufficient differentiability, it is necessary to have:

dimension m of ξ (observer state) ≥ dimension n of x (model state) − dimension p of y (measurement).

SLIDE 17

§4.3

Supplementary assumption. Assumption “Obs”: The system

$$\dot\xi = \varphi(\xi, y, t)\,, \qquad \hat x = \eta(\xi, y, t)$$

(= the observer) with state ξ, input y and output x̂ is instantaneously observable at S_ξ uniformly in y in h(S_x), i.e. for each continuous time function s → y(s) ∈ h(S_x), there is no pair of distinct points ξ_a and ξ_b in S_ξ and no σ in [0, min{σ⁺_{S_ξ}(ξ_a), σ⁺_{S_ξ}(ξ_b)}) such that we have:

$$\eta(\Xi_y(\xi_a, t), y(t), t) = \eta(\Xi_y(\xi_b, t), y(t), t) \qquad \forall t \in [0, \sigma]\,.$$

SLIDE 18

§4.4

Necessary condition 2 with Obs: Asymptotic stability of the graph of a map

Proposition 2 (continued): If, besides Assumption Conv, Assumption Obs also holds, then η_inv is single-valued. Hence, it is necessary that there exists a function x ∈ S_x^ω → η_inv(x) ∈ S_ξ:

  1. which is left invertible given h, i.e.

$$\eta(\eta_{inv}(x), h(x)) = x \qquad \forall x \in S_x^\omega\,;$$

  2. whose graph is asymptotically stable with S_x × S_ξ as domain of attraction.

SLIDE 19

§4.5

Necessary condition 2 with Obs: Asymptotic stability of the graph of a map

Luenberger wrote in 1962: “Instead of requiring that the observer reconstructs the state vector x itself, we require only that it reconstructs some [ ] transformation [η_inv] of the state vector . . . It is clear that it would then be possible to reconstruct the state vector itself, provided that the transformation were invertible.”

SLIDE 20

§ 5/12

Necessary condition 3 for Convtune :

Instantaneous observability

SLIDE 21

§5.1

Necessary condition 3: Instantaneous observability. Proposition 3: [V. Andrieu, G. Besançon, U. Serres] Under Assumption Convtune, the model is instantaneously observable at S_x.

SLIDE 22

§5.2

Supplementary assumption. Assumption “Inj”: The observer output function η is injective given h, i.e. there exists a function α_η of class¹ K such that:

$$d_\xi(\xi_a, \xi_b) \;\leq\; \alpha_\eta\Big(d_x\big(\eta(\xi_a, h(x_a))\,,\ \eta(\xi_b, h(x_a))\big)\Big) \qquad \forall \big((x_a, \xi_a), \xi_b\big) \in Z_\omega \times S_\xi\,,$$

where, (x_a, ξ_a) being in Z_ω, η(ξ_a, h(x_a)) = x_a.

¹A function α : R₊ → R₊ is said of class K if it is continuous, strictly increasing and zero at zero. It is of class K∞ if it is onto R₊.

SLIDE 23

§5.3

Supplementary assumption: Assumption Inj. Away from singularities and with sufficient differentiability, Inj requires:

dimension m of ξ (observer state) ≤ dimension n of x (model state)

≈ (ξ, y) can be used as (maybe non-independent) coordinates for x.

SLIDE 24

§5.4

Supplementary assumption: Assumption Inj. Example (frequency estimation): For the model

$$\dot x_1 = x_2\,,\quad \dot x_2 = -x_1 x_3\,,\quad \dot x_3 = 0\,,\qquad y = x_1\,,$$

an observer satisfying Conv is:

$$\dot\xi_j = \varphi_j(\xi, y) = \lambda_j\,[\xi_j - y]\,,\quad j = 1, \ldots, m\,,\qquad \hat x = \eta(\xi) = \big(M(\xi)^\top M(\xi)\big)^{-1} M(\xi)^\top N(\xi)$$

where the λ_j are complex numbers with strictly negative real parts and M(ξ) and N(ξ) are defined as:

$$M(\xi) = \begin{pmatrix} \lambda_1^2 & \lambda_1 & \xi_1 \\ \vdots & \vdots & \vdots \\ \lambda_m^2 & \lambda_m & \xi_m \end{pmatrix}, \qquad N(\xi) = \begin{pmatrix} \lambda_1^2\, \xi_1 \\ \vdots \\ \lambda_m^2\, \xi_m \end{pmatrix}.$$

Given x̂, (M(ξ)ᵀM(ξ)) x̂ = M(ξ)ᵀN(ξ) are 3 polynomial equations of degree 4 in the m unknowns ξ ⇒ infinitely many solutions when m ≥ 4. Hence the observer output function η is not injective.

Assumption Inj does not hold.
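As a numerical sketch of this example (with m = 3, for which η is still injective), the following assumes real negative λ_j for simplicity, although the slide allows complex ones. The sign of the third column of M below comes from deriving the steady state ξ_j = (λ_j² x_1 + λ_j x_2)/(λ_j² + x_3) of ξ̇_j = λ_j [ξ_j − y]; the slide's sign convention for that column may differ:

```python
import numpy as np

# Frequency estimation: x1' = x2, x2' = -x1*x3, x3' = 0, y = x1.
lam = np.array([-1.0, -2.0, -3.0])         # m = 3 distinct decay rates (assumed)
dt, T = 1e-3, 40.0

x = np.array([1.0, 0.0, 4.0])              # true state, squared frequency x3 = 4
xi = np.zeros(3)
for _ in range(int(T / dt)):               # forward Euler on model and observer
    y = x[0]
    x = x + dt * np.array([x[1], -x[0] * x[2], 0.0])
    xi = xi + dt * lam * (xi - y)          # xi_j' = lam_j * (xi_j - y)

# eta: least-squares solve of M(xi) xhat = N(xi), using the steady-state
# identity lam^2*x1 + lam*x2 - xi*x3 = lam^2*xi (sign convention derived above).
M = np.column_stack([lam**2, lam, -xi])
N = lam**2 * xi
xhat, *_ = np.linalg.lstsq(M, N, rcond=None)
print("x =", x, " xhat =", xhat)           # xhat[2] should approach x3 = 4
```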

SLIDE 25

§ 6/12

Necessary condition 4 for Conv and Inj :

Weak differential detectability

SLIDE 26

§6.1

Necessary condition 4: Weak differential detectability. Proposition 4: [R. Sanfelice, L. P.] If Assumptions Conv and Inj hold and the matrix

$$\begin{pmatrix} \dfrac{\partial\eta}{\partial\xi}(\xi, y) & \dfrac{\partial\eta}{\partial y}(\xi, y) \end{pmatrix}$$

is invertible for each (ξ, y) in S_ξ × h(S_x), then the model is weakly differentially detectable, i.e. there exists a covariant 2-tensor P : S_x^ω → P_n^+ satisfying¹:

$$v^\top L_f P(x)\, v \;\leq\; 0 \qquad \forall (x, v) \in S_x^\omega \times \mathbb{S}^n \ :\ \frac{\partial h}{\partial x}(x)\, v = 0\,.$$

¹ $v^\top L_f P(x)\, v = \dfrac{\partial}{\partial x}\big(v^\top P(x)\, v\big)\, f(x) + v^\top \Big(P(x)\, \dfrac{\partial f}{\partial x}(x) + \dfrac{\partial f}{\partial x}(x)^\top P(x)\Big)\, v\,.$

SLIDE 27

§6.2

Necessary condition 4: Differential detectability

[Figure: the points x, x + sv, x + tf(x) and (x + sv) + tf(x + sv), with h(x) = y.]

$$d(s, t) = s \left[v + t\, \tfrac{f(x+sv)-f(x)}{s}\right]^\top P(x + t f(x)) \left[v + t\, \tfrac{f(x+sv)-f(x)}{s}\right]$$

We want d(s, t) ≤ d(s, 0), and

$$\lim_{(s,t) \to 0} \frac{d(s, t) - d(s, 0)}{s\, t} = v^\top L_f P(x)\, v\,.$$

SLIDE 28

§6.3

Necessary condition 4: Differential detectability. Strong differential detectability on R^n: there exists a covariant 2-tensor P : R^n → P_n^+ satisfying:

$$0 < \underline{p}\, I_n \leq P(x) \leq \overline{p}\, I_n \quad \forall x \in \mathbb{R}^n\,, \qquad v^\top L_f P(x)\, v \leq -q < 0 \quad \forall (x, v) \in \mathbb{R}^n \times \mathbb{S}^n \ :\ \frac{\partial h}{\partial x}(x)\, v = 0\,.$$

Proposition 5 (link with “tangent systems”): [R. Sanfelice, L. P.] Suppose the model is forward complete and we have strong differential detectability on R^n. Then the model is infinitesimally detectable on R^n, i.e. for each x ∈ R^n, the linear time-varying system given by the linearization of the model along X(x, t) is uniformly detectable, i.e., with the notations

$$A_x(t) = \frac{\partial f}{\partial x}(X(x, t))\,, \qquad C_x(t) = \frac{\partial h}{\partial x}(X(x, t))\,,$$

there exists a continuous (maybe unbounded) function t → K_x(t) such that the origin is uniformly stable for the linear system

$$\dot{\partial\xi} = \big[A_x(t) - K_x(t)\, C_x(t)\big]\, \partial\xi\,.$$

SLIDE 29

§ 7/12 Summary on necessary conditions

SLIDE 30

§7.1

Summary on necessary conditions. Notations: The model is

$$\dot x = f(x)\,, \qquad y = h(x)$$

The observer is

$$\dot\xi = \varphi(\xi, y)\,, \qquad \hat x = \eta(\xi, y)$$

Properties:

  • Conv = the zero error set {(x, ξ) : x = η(ξ, h(x))} contains an asymptotically stable set Z_ω.

  • Convtune = for all (ε_s, ε_t), there exist (ϕ, η) such that, for all t ∈ [ε_t, σ⁺_{S_x}(x)),

$$d_x\Big(X(x, t)\,,\ \eta\big(\Xi((x, \xi), t), h(X(x, t))\big)\Big) \leq \varepsilon_s$$

  • Obs = the observer is instantaneously observable.

  • Inj = η(ξ_a, y) = η(ξ_b, y) and y = h(η(ξ_a, y)) ⟹ ξ_a = ξ_b.

SLIDE 31

§7.2

Summary on necessary conditions

  • Conv ⟹ the model is forward detectable.

  • Conv ⟹ the graph of η_inv, an injective set-valued map, right inverse of η, is asymptotically stable.

  • Conv + Obs ⟹ η_inv is single-valued.

  • Convtune ⟹ the model is instantaneously observable.

  • Conv + Inj ⟹ weak differential detectability.

SLIDE 32

Part II: Sufficient conditions

SLIDE 33

§ 8/12

Sufficient condition 1:

Backward observability

(instead of detectability)

SLIDE 34

§8.1

Assumption: Backward observability. The model

$$\dot x = f(x)\,, \qquad y = h(x)$$

is backward observable at a neighborhood N of S_x if, for any pair of distinct points x_a and x_b in S_x, there exists t in ]max{σ⁻_N(x_a), σ⁻_N(x_b)}, 0] such that we have:

$$h(X(x_a, t)) \neq h(X(x_b, t))\,.$$

Reminder: Forward detectability,

$$h(X(x_a, t)) = h(X(x_b, t)) \quad \forall t \geq 0 \ \Longrightarrow\ \lim_{t \to +\infty} d_x\big(X(x_a, t), X(x_b, t)\big) = 0\,,$$

is necessary for Conv.

SLIDE 35

§8.2

Sufficient condition 1: Backward observability. Proposition 6: [V. Andrieu, L. P.]

(To simplify this presentation.) Assume the model is backward complete and backward observable at S_x. Assume also the existence of i) an injective C¹ function b : R^p → C^p, ii) a continuous function M : S_x → R₊, and iii) a negative real number ℓ such that, for each x and each t in (σ⁻_{S_x}(x), 0], we have

$$\big|\exp(-\ell t)\, b(h(X(x, t)))\big| + \left|\exp(-\ell t)\, \frac{\partial\, (b \circ h \circ X)}{\partial x}(x, t)\right| \;\leq\; M(x)\,.$$

SLIDE 36

§8.3

Sufficient condition 1: Backward observability. Proposition 6 (continued): Under these conditions, there exists a subset S of C^{n+1} of zero Lebesgue measure such that, for any diagonal matrix A with n + 1 complex eigenvalues λ_i arbitrarily chosen in C^{n+1} \ S and with real parts strictly smaller than ℓ, there exists a C¹ and injective function x ∈ S_x → η_inv(x) solution of

$$\frac{\partial \eta_{inv}}{\partial x}(x)\, f(x) = A\, \eta_{inv}(x) + \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} b(h(x))\,.$$

Proof: . . . Thank you Jean-Michel.

Corollary (Luenberger observer): There exists a continuous function η such that Conv is satisfied by

$$\dot\xi = A\, \xi + \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} b(y)\,, \qquad \hat x = \eta(\xi)\,.$$
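For a linear model, the corollary's observer can be made fully explicit: η_inv is then linear, η_inv(x) = T x, with T solving a Sylvester equation, and η can be taken as a left inverse of T. A sketch on an illustrative harmonic oscillator (the matrices, the eigenvalues and the choice b = identity are assumptions for the illustration; scipy is assumed available):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Linear toy model: x' = F x, y = C x (harmonic oscillator, n = 2).
F = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0]])

# Observer: xi' = A xi + 1 b(y), with b = identity and A diagonal Hurwitz,
# n + 1 = 3 eigenvalues (taken real here; the proposition allows complex
# ones with real part < l).
A = np.diag([-1.0, -2.0, -3.0])
one = np.ones((3, 1))

# eta_inv(x) = T x with T F = A T + 1 C, i.e. (-A) T + T F = 1 C.
T = solve_sylvester(-A, F, one @ C)
eta = np.linalg.pinv(T)                    # left inverse: xhat = eta @ xi

dt, steps = 1e-3, 20000
x, xi = np.array([[1.0], [0.0]]), np.zeros((3, 1))
for _ in range(steps):                     # forward Euler
    y = C @ x
    x = x + dt * (F @ x)
    xi = xi + dt * (A @ xi + one @ y)
print("x =", x.ravel(), " xhat =", (eta @ xi).ravel())
```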

SLIDE 37

§ 9/12

Sufficient condition 2:

Differential observability of order m

(instead of instantaneous observability)

SLIDE 38

§9.1

Assumption: Differential observability of order m. The model

$$\dot x = f(x)\,, \qquad y = h(x)$$

is differentially observable of order m at S_x if the function¹ x → H_m(x) is injective, where

$$H_m(x) = \begin{pmatrix} h(x) \\ L_f h(x) \\ \vdots \\ L_f^{m-1} h(x) \end{pmatrix} = \eta_{inv}(x)\,.$$

Reminder: Instantaneous observability,

$$h(X(x_a, t)) = h(X(x_b, t)) \quad \forall t \in [0, \sigma)\,,\ \forall \sigma > 0 \ \Longrightarrow\ x_a = x_b\,,$$

is necessary for Convtune.

¹ $L_f h(x) = \frac{\partial h}{\partial x}(x)\, f(x)$ and $h(X(x, t)) = \sum_{j=0}^{q} \frac{t^j}{j!}\, L_f^j h(x) + o(t^q)\,.$
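As a quick illustration of this definition, H_m can be computed symbolically. This sketch (assuming sympy) does it for the frequency-estimation model seen earlier:

```python
import sympy as sp

# Model of the frequency-estimation example:
# x1' = x2, x2' = -x1*x3, x3' = 0, y = h(x) = x1.
x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([x2, -x1 * x3, 0])

def lie(g):                                # L_f g = (dg/dx) f
    return sp.simplify((sp.Matrix([g]).jacobian(x) * f)[0])

H = [x1]                                   # h, L_f h, L_f^2 h, ...
for _ in range(4):
    H.append(lie(H[-1]))
print(H)                                   # [x1, x2, -x1*x3, -x2*x3, x1*x3**2]

# Away from x1 = x2 = 0 the first four components already determine
# (x1, x2, x3), so there the model is differentially observable of order 4.
```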

SLIDE 39

§9.2

Sufficient condition 2: Differential observability of order m. Proposition 8 (high-gain observer): [H. Khalil, A. Saberi], [S. Emelyanov, S. Korovin, S. Nikitin, M. Nikitina], [J. Gauthier, H. Hammouri, S. Othman], [Tornambè], . . . , [P. Bernard, V. Andrieu, L. P.] If the model is differentially observable of order m, there exist continuous functions ϕ_m and η such that the observer

$$\dot\xi_1 = \xi_2 + k_1 [y - \xi_1]\,,\ \ \ldots\,,\ \ \dot\xi_{m-1} = \xi_m + k_{m-1} [y - \xi_1]\,,\ \ \dot\xi_m = \varphi_m(\xi) + k_m [y - \xi_1]\,, \qquad \hat x = \eta(\xi)$$

satisfies Convtune when tuning the real numbers (k_1, . . . , k_m). We even have Conv when H_m is an immersion. This can be extended to the case with control,

$$\dot x = f(x, u)\,, \qquad y = h(x)\,,$$

when we have forward observability uniformly in u.
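A sketch of this high-gain observer on an illustrative model already written in observability coordinates (a Duffing oscillator; the model, ε and the classical tuning k_1 = 2/ε, k_2 = 1/ε² are my choices, not from the talk):

```python
import numpy as np

# Model (assumed example): x1' = x2, x2' = -x1 - x1**3, y = x1,
# so H_2(x) = (x1, x2) = x is injective and phi_2 is the model nonlinearity.
eps = 0.05
k1, k2 = 2.0 / eps, 1.0 / eps**2           # classical high-gain tuning

def phi2(xi):
    return -xi[0] - xi[0]**3

dt, T = 1e-4, 10.0
x = np.array([1.0, -0.5])
xi = np.zeros(2)
for _ in range(int(T / dt)):               # forward Euler
    y = x[0]
    x = x + dt * np.array([x[1], -x[0] - x[0]**3])
    xi = xi + dt * np.array([xi[1] + k1 * (y - xi[0]),
                             phi2(xi) + k2 * (y - xi[0])])
print("x =", x, " xhat =", xi)             # smaller eps gives faster decay
```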

SLIDE 40

§10/12

Sufficient condition 3:

Uniform infinitesimal backward observability

(instead of weak differential detectability)

SLIDE 41

§10.1

Assumption: Uniform infinitesimal backward observability. The model is uniformly infinitesimally backward observable on R^n if there exist strictly positive real numbers τ and ε such that we have¹:

$$\int_{-\tau}^{0} \Phi_x(s, 0)^\top C_x(s)^\top C_x(s)\, \Phi_x(s, 0)\, ds \;\geq\; \varepsilon\, I \qquad \forall x \in \mathbb{R}^n\,.$$

Reminder:

  • Weak differential detectability is necessary for Conv + Inj.
  • Strong differential detectability implies infinitesimal detectability.

¹ Φ_x is the transition matrix of $\dot\xi = A_x(t)\, \xi$.

SLIDE 42

§10.2

Sufficient condition 3: Uniform infinitesimal backward observability. Proposition 9: [R. Sanfelice, L. P.] If the model is uniformly infinitesimally backward observable then it is strongly differentially detectable.

SLIDE 43

§10.3

Sufficient condition 3: Uniform infinitesimal backward observability. Proposition 10 (local convergence, Riemannian observer): [R. Sanfelice, L. P.] If the model is uniformly infinitesimally backward observable then, with P given by the strong differential detectability, there exist strictly positive real numbers k̄, ε and r such that, with the observer given by¹

$$\dot\xi = f(\xi) - k\, \mathrm{grad}_P h(\xi)\, [h(\xi) - y]\,, \qquad \hat x = \xi\,,$$

with k ≥ k̄, the following holds²,³:

$$D^+ d(\hat x, x) \;\leq\; -r\, d(\hat x, x) \qquad \forall (x, \hat x) \ :\ d(\hat x, x) < \frac{\varepsilon}{k}\,.$$

¹ grad_P h(ξ) = P(ξ)⁻¹ (∂h/∂x)(ξ)ᵀ.
² d(x̂, x) is the Riemannian distance, induced by P, between x̂ and x.
³ $D^+ d(\hat x, x) = \limsup_{t \to 0^+} \dfrac{d(\hat X(\hat x, t), X(x, t)) - d(\hat x, x)}{t}\,.$
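A degenerate but runnable illustration of this observer, assuming a constant metric P (an illustrative choice, not from the slides), for which grad_P h(ξ) = P⁻¹ (∂h/∂x)(ξ)ᵀ and the update is explicit. For the linear oscillator below with P = I the detectability inequality only holds weakly (v⊤L_fP v = 0), so this shows the structure of the observer rather than the assumptions of Proposition 10:

```python
import numpy as np

# Model (assumed example): x1' = x2, x2' = -x1, y = h(x) = x1.
def f(x):
    return np.array([x[1], -x[0]])

dhdx = np.array([1.0, 0.0])                # constant dh/dx for h(x) = x1
P = np.eye(2)                              # assumed constant metric (P > 0)
Pinv = np.linalg.inv(P)
k = 5.0                                    # gain; kbar of Prop. 10 unknown here

dt, T = 1e-3, 40.0
x = np.array([1.0, 0.0])
xi = np.array([0.0, 0.0])
for _ in range(int(T / dt)):               # forward Euler
    y = x[0]
    x = x + dt * f(x)
    # xi' = f(xi) - k * grad_P h(xi) * [h(xi) - y]
    xi = xi + dt * (f(xi) - k * (Pinv @ dhdx) * (xi[0] - y))
print("x =", x, " xhat =", xi)
```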

SLIDE 44

§10.4

Supplementary assumption. Assumption “GM”: The function h : R^n → R^p is geodesically monotonic if there exists a C² function δ : R^p × R^p → R₊ such that:

  1. δ(y, y) = 0, δ(y_a, y_b) = δ(y_b, y_a), and

$$\left.\frac{\partial^2 \delta}{\partial y_a^2}(y_a, y_b)\right|_{y_a = y_b} > 0\,;$$

  2. for any pair (x_a, x_b) in R^n × R^n satisfying h(x_a) ≠ h(x_b) and any minimal P-geodesic γ* between x_a = γ*(s_a) and x_b = γ*(s_b), with s_a ≤ s_b, we have:

$$\frac{d}{ds}\, \delta\big(h(\gamma^*(s)), h(\gamma^*(s_a))\big) > 0 \qquad \forall s \in (s_a, s_b]\,.$$

Property: Geodesic monotonicity of h implies that its level sets are strongly geodesically convex and totally geodesic, and conversely when p = dim(y) = 1.

SLIDE 45

§10.5

Sufficient condition 3: Uniform infinitesimal backward observability. Proposition 11 (semi-global convergence): [R. Sanfelice, L. P.] If the model is uniformly infinitesimally backward observable and the function h : R^n → R^p is geodesically monotonic then, for any positive real number E, there exists a continuous function k_E : R^n → R such that, with the observer given by

$$\dot\xi = f(\xi) - k_E(\xi)\, \mathrm{grad}_P h(\xi)\, [h(\xi) - y]\,, \qquad \hat x = \xi\,,$$

the following holds:

$$D^+ d(\hat x, x) \;\leq\; -\frac{q}{4}\, d(\hat x, x) \qquad \forall (x, \hat x) \ :\ d(\hat x, x) < E\,.$$

SLIDE 46

§ 11/12 Summary on sufficient conditions

SLIDE 47

§11.1

Summary on sufficient conditions

  • Backward completeness and backward observability imply that a Luenberger observer satisfies Conv.

  • Differential observability of order m implies that a high-gain observer satisfies Convtune.

  • Uniform infinitesimal backward observability implies that a Riemannian observer satisfies Conv locally, and semi-globally whenever the function h is geodesically monotonic.

SLIDE 48

§ 12/12

Example: Non-uniformly contracting observer

SLIDE 49

§12.1

Example: Non-uniformly contracting observer. “Proposition”:

[Aeyels], [Takens], [J-P. Gauthier, H. Hammouri, I. Kupka], [J-M. Coron], [E. Sontag]

“Generically”, from the measurement y evaluated at 2n + 1 different time instants, we can reconstruct the model state.

Idea: By processing the measurement through 2n + 1 different systems with bounded forward solutions, we hope to be able to reconstruct the model state.

SLIDE 50

§12.2

Example: Non-uniformly contracting observer. The model is

$$\dot x_1 = x_2^3\,,\quad \dot x_2 = -x_1\,,\qquad y = h(x) := x_1$$

with (x_1, x_2) ∈ S_x = R² \ {(0, 0)}. All its solutions are periodic. The pattern system to make the observer is

$$\dot\xi = a\, \xi - b\, \xi^3 + y$$

with a and b strictly positive real numbers. Its flow is contracting (uniformly in y) only when $|\xi| > \sqrt{a/(3b)}$.

To each periodic solution of the model with $r = \sqrt{2 x_1^2 + x_2^4}$ small correspond 3 periodic solutions of the pattern system: 2 exponentially stable, 1 exponentially unstable.
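A sketch of this bistability, driving two copies of the pattern system with the same measured y (a, b, the time grid and the initial conditions are illustrative choices): for a small-r periodic solution of the model, the two copies settle on two different stable periodic solutions, so η_inv(x) is indeed set-valued:

```python
import numpy as np

# Model: x1' = x2**3, x2' = -x1, y = x1 (r = sqrt(2*x1**2 + x2**4) small).
a, b = 1.0, 1.0                            # pattern parameters (assumed)
dt, T = 1e-3, 50.0

x = np.array([0.1, 0.3])                   # r is small for this choice
xi = np.array([+1.2, -1.2])                # two pattern copies, same input y
for _ in range(int(T / dt)):               # forward Euler
    y = x[0]
    x = x + dt * np.array([x[1]**3, -x[0]])
    xi = xi + dt * (a * xi - b * xi**3 + y)
print("two steady patterns for the same y:", xi)   # distinct, near +-sqrt(a/b)
```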

SLIDE 51

§12.3

Example: Non-uniformly contracting observer

[Figure: simulation with r = 0.2 showing, versus time t, the stable and the unstable periodic solutions ξ̂ of one pattern system.]

⇒ A pattern system gives a set η_inv(x) with two elements.

The graph of η_inv is exponentially attractive.

SLIDE 52

§12.4

Example: Non-uniformly contracting observer

[Figure: modulus of injectivity (δ_ξ versus δ_x) with m patterns, for m = 1, 2, 3, 4.]

x → η_inv(x) is injective for m = 4 ⇒ make the observer with 4 patterns.

SLIDE 53

§12.5

Example: Non-uniformly contracting observer. The observer is:

$$\dot\xi = \varphi(\xi, y) = \begin{pmatrix} a_1 \xi_1 - b_1 \xi_1^3 + y \\ a_2 \xi_2 - b_2 \xi_2^3 + y \\ a_3 \xi_3 - b_3 \xi_3^3 + y \\ a_4 \xi_4 - b_4 \xi_4^3 + y \end{pmatrix}, \qquad \begin{pmatrix} \hat x_1 \\ \hat x_2 \end{pmatrix} = \eta(\xi)$$

[numerical table]

SLIDE 54

§12.6

Example: Non-uniformly contracting observer

[Figure: simulation with r = 0.2 and various observer initial conditions ξ, showing, versus time t, the four pattern states ξ̂_1, ξ̂_2, ξ̂_3, ξ̂_4 (left) and x_2 together with its estimate x̂_2 (right).]
