The algebraic approach to state and constant parameter estimation: Some experimental results — PDF document



slide-1
SLIDE 1

The algebraic approach to state and constant parameter estimation: Some experimental results

H. Sira-Ramírez
CINVESTAV-IPN, Departamento de Ing. Eléctrica, Sección de Mecatrónica
México, D.F., México

Ciudad Real, March 2006

slide-2
SLIDE 2

Contents of the Presentation

  • Introduction
  • Algebraic state estimation via time derivatives calculation
  • Control of a mass position system
  • Sinusoid frequency estimation
  • Conclusions

slide-3
SLIDE 3

The fundamental mathematical developments supporting the approaches to state estimation and constant parameter identification, to be presented in this talk, are due primarily to M. Fliess and his highly professional mathematical vision of real life engineering problems.

  • M. Fliess and H. Sira-Ramírez, "An algebraic framework for linear identification", ESAIM COCV, Vol. 9, pp. 151-168, 2003.
  • M. Fliess and H. Sira-Ramírez, "Reconstructeurs d'état", C.R. Acad. Sci. Paris, t. 332, (1), pp. 91-96, 2004.
  • M. Fliess, M. Mboup, H. Mounier, and H. Sira-Ramírez, "Questioning some paradigms of signal processing via concrete examples", Ch. 1 of Algebraic Methods in Flatness, Signal Processing and State Estimation, Editorial Lagares, México.
  • M. Fliess, "Analyse non standard du bruit", C.R. Acad. Sci. Paris, Ser. I 342, 2006.

slide-4
SLIDE 4

Algebraic state estimation via time derivatives calculation


slide-5
SLIDE 5

Algebraic state estimation via time derivatives calculation

  • In an observable system, the state estimation problem is intimately related to the problem of computing a sufficiently large number of successive time derivatives of the output and input signals (see Diop and Fliess, 1991).
  • In this work, we revisit a recently proposed non-asymptotic algebraic method for the approximate estimation of the system states from the calculation of a finite number of time derivatives of the output signal. The method furnishes specific formulae for the time derivatives of a measurable signal.

slide-6
SLIDE 6
  • The estimation method to be proposed may be combined with the notion of differential flatness to complete a feedback loop, with a desirable closed-loop dynamics, based on output feedback alone of a differentially flat system.
  • There are some other interesting contributions in the existing literature that propose non-asymptotic approaches to state estimation in dynamical control systems:
    – Diop, Grizzle, Moraal, Stephanopoulou (1994),
    – Diop, Grizzle and Chaplais (2000),
    – Ibrir (2004).

slide-7
SLIDE 7

Derivative calculations

Given a signal y(t), we want to compute a certain number of its time derivatives, with the following restrictions:

  • The information on y(t) is obtained "on line", and the time derivatives $\dot{y}(t), \ddot{y}(t), \cdots, y^{(k)}(t)$ must also be computed on line.
  • The signal y(t) is contaminated by noises whose statistics are unknown.
  • It is desired that the estimation process does not depend on the model of the dynamic system that generates the output signal y(t).

slide-8
SLIDE 8

Derivatives calculation

In a laboratory, one of the most popular methods for obtaining the first order time derivative of a signal y(t) is the so-called finite difference method. It consists in the following approximation to the derivative $\dot{y}(t)$:

$$\dot{y}_e(t_i) \approx \frac{y(t) - y(t_i)}{t - t_i}, \qquad t \ge t_i$$

where $t - t_i = \epsilon > 0$ is a known scalar and t is an arbitrary instant of time close to $t_i$.
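As a quick illustration (not part of the original slides; the step size and the test signal are arbitrary choices), the finite difference approximation can be sketched as:

```python
import math

def finite_difference(y, t_i, eps):
    """Forward finite-difference estimate of dy/dt at t_i,
    using the extra sample y(t_i + eps)."""
    return (y(t_i + eps) - y(t_i)) / eps

# Estimate the derivative of sin(t) at t = 1; the true value is cos(1).
est = finite_difference(math.sin, 1.0, 1e-4)
```

Shrinking `eps` improves the truncation error but, as noted on the next slide, amplifies any measurement noise on y.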

slide-9
SLIDE 9

Derivatives calculation

We note:

  • The method is quite general and does not depend on the system model.
  • The quality of the approximation depends on $\epsilon = t - t_i$.
  • The estimation process is not asymptotic, and the estimate is available "instantaneously", immediately after the instant t.
  • The method is quite sensitive to the presence of noise perturbations in the signal to be processed.

slide-10
SLIDE 10

Derivatives calculation

The finite difference approximation of the derivative is evidently based on a truncated expansion of the Taylor series of the underlying signal,

$$\tilde{y}(t) = y(t_i) + \dot{y}(t_i)\,(t - t_i), \qquad t \ge t_i$$

We insist upon the fact that the model is that of the signal, and not of the system producing it:

$$\frac{d^2 \tilde{y}}{dt^2} = 0$$

The local problem is reduced to computing the unmeasured state of a homogeneous, second order, linear time-invariant system.

slide-11
SLIDE 11

Derivatives calculation

A higher order model for the approximation of the output signal y(t), at $t = t_i$, may be proposed to be:

$$\tilde{y}(t) = \sum_{k=0}^{N-1} \frac{1}{k!}\, y^{(k)}(t_i)\,(t - t_i)^k, \qquad t \ge t_i$$

This approximation satisfies the homogeneous, linear, time-invariant differential equation:

$$\frac{d^N \tilde{y}}{dt^N} = 0$$

The local problem is then reduced to computing the states of a time-invariant, linear, homogeneous system of order N, from unknown initial conditions.

slide-12
SLIDE 12

Derivatives calculation

The linear approximation adopted, $\tilde{y}^{(N)}(t) = 0$, satisfies, in terms of operational transforms, the following relation:

$$\frac{d^N}{ds^N}\left[ s^N \tilde{Y}(s) \right] = 0$$

The expressions given by

$$s^{-k}\,\frac{d^N}{ds^N}\left[ s^N Y(s) \right] = 0, \qquad k = N-1,\, N-2,\, \cdots$$

contain, respectively, implicit information on the first, second, etc., derivatives of y(t) in an approximate manner.

slide-13
SLIDE 13

Example

Consider a fifth order approximation, around $t_i = 0$, of a sufficiently differentiable signal y(t):

$$\tilde{y}(t) = \sum_{k=0}^{5} \frac{1}{k!}\, t^k\, y^{(k)}(0)$$

We have $\frac{d^6 \tilde{y}}{dt^6} = 0$ and hence:

$$\frac{d^6}{ds^6}\left[ s^6 \tilde{y}(s) - s^5 y(0) - \cdots - y^{(5)}(0) \right] = 0$$

or

$$\frac{d^6}{ds^6}\left[ s^6 \tilde{y}(s) \right] = 0$$

slide-14
SLIDE 14

Example

Consider:

$$s^{-6}\,\frac{d^6}{ds^6}\left[ s^6 \tilde{y}(s) \right] = 0$$

We obtain the following time-varying realization of the underlying approximating system:

$$\dot{z}_1 = z_2 + 36\, t^5 y(t), \quad \dot{z}_2 = z_3 - 450\, t^4 y(t), \quad \dot{z}_3 = z_4 + 2400\, t^3 y(t),$$
$$\dot{z}_4 = z_5 - 5400\, t^2 y(t), \quad \dot{z}_5 = z_6 + 4320\, t\, y(t), \quad \dot{z}_6 = -720\, y(t), \quad z_1 = t^6 y(t)$$

The approximate derivatives of y(t) are linear combinations of the states of this unstable, time-varying, linear filter.
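A small numerical sanity check of this realization (a sketch, not from the slides; the fixed-step RK4 integrator and the test signal y = t³ are arbitrary choices). Starting from z = 0 at t = 0, the state z₁ should reproduce t⁶y(t) exactly whenever y⁽⁶⁾ = 0:

```python
def simulate_filter(y, T, h=1e-3):
    """Integrate the time-varying filter above with RK4, starting
    from z = 0 at t = 0; returns z1(T)."""
    def f(t, z):
        yt = y(t)
        return [
            z[1] + 36 * t**5 * yt,
            z[2] - 450 * t**4 * yt,
            z[3] + 2400 * t**3 * yt,
            z[4] - 5400 * t**2 * yt,
            z[5] + 4320 * t * yt,
            -720 * yt,
        ]
    z = [0.0] * 6
    t = 0.0
    for _ in range(int(round(T / h))):
        k1 = f(t, z)
        k2 = f(t + h/2, [a + h/2*b for a, b in zip(z, k1)])
        k3 = f(t + h/2, [a + h/2*b for a, b in zip(z, k2)])
        k4 = f(t + h, [a + h*b for a, b in zip(z, k3)])
        z = [a + h/6*(p + 2*q + 2*r + u)
             for a, p, q, r, u in zip(z, k1, k2, k3, k4)]
        t += h
    return z[0]

# y = t^3 has y^(6) = 0, so z1(T) must equal T^6 * y(T) = T^9.
z1 = simulate_filter(lambda t: t**3, T=1.0)
```

For y(t) = t³ this gives z1(1) ≈ 1, confirming the invariant z₁ = t⁶y(t) for polynomial signals of degree at most five.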

slide-15
SLIDE 15

Indeed (re-deriving the linear combinations consistent with the realization above):

$$\dot{y}(t) \approx t^{-7}\left[ 30 z_1 + t z_2 \right]$$
$$\ddot{y}(t) \approx t^{-8}\left[ 420 z_1 + 24 t z_2 + t^2 z_3 \right]$$
$$y^{(3)}(t) \approx t^{-9}\left[ 3360 z_1 + 252 t z_2 + 18 t^2 z_3 + t^3 z_4 \right]$$

  • The obtained formulae present a singularity at t = 0, which disappears for any t = ǫ > 0. The computation is feasible, provided the math processor yields an accurate quotient at t = ǫ.

slide-16
SLIDE 16

Example

We propose the following estimate of the first order time derivative of y(t) with respect to time:

$$\dot{y}_e(t) = \begin{cases} \text{arbitrary}, & 0 \le t < \epsilon \\[4pt] t^{-7}\left[ 30 z_1 + t z_2 \right], & t \ge \epsilon \end{cases}$$

with

$$\dot{z}_1 = z_2 + 36\, t^5 y(t), \quad \dot{z}_2 = z_3 - 450\, t^4 y(t), \quad \dot{z}_3 = z_4 + 2400\, t^3 y(t),$$
$$\dot{z}_4 = z_5 - 5400\, t^2 y(t), \quad \dot{z}_5 = z_6 + 4320\, t\, y(t), \quad \dot{z}_6 = -720\, y(t), \quad z_1 = t^6 y(t)$$

slide-17
SLIDE 17

Example

Similarly, we propose the following estimate of the second order time derivative of y(t) with respect to time:

$$\ddot{y}_e(t) = \begin{cases} \text{arbitrary}, & 0 \le t < \epsilon \\[4pt] t^{-8}\left[ 420 z_1 + 24 t z_2 + t^2 z_3 \right], & t \ge \epsilon \end{cases}$$

with the same filter states

$$\dot{z}_1 = z_2 + 36\, t^5 y(t), \quad \dot{z}_2 = z_3 - 450\, t^4 y(t), \quad \dot{z}_3 = z_4 + 2400\, t^3 y(t),$$
$$\dot{z}_4 = z_5 - 5400\, t^2 y(t), \quad \dot{z}_5 = z_6 + 4320\, t\, y(t), \quad \dot{z}_6 = -720\, y(t), \quad z_1 = t^6 y(t)$$

etc.

slide-18
SLIDE 18

Example


slide-19
SLIDE 19

Remark

The validity of the formulae for the calculation of the time derivatives is limited in its time horizon. For this reason, it becomes necessary to re-initialize the computations at some time $t_r > 0$. As the derivatives drift from their actual values, so will the estimated signal, computed on the basis of the truncated Taylor series approximation, drift from the measured signal values.

slide-20
SLIDE 20

Remark

An automatic resetting of the calculations can be devised on the basis of a weighted integral square error of the reconstructed signal deviation, together with a pre-specified threshold value for such a reconstruction error:

$$e = \int_{t_r+\epsilon}^{t} W\, |y(\sigma) - y_e(\sigma)|^2\, d\sigma, \qquad W > 0$$

$$y_e(t) = y(t_r+\epsilon) + \dot{y}(t_r+\epsilon)\,(t - t_r - \epsilon) + \tfrac{1}{2}\,\ddot{y}(t_r+\epsilon)\,(t - t_r - \epsilon)^2 + \cdots$$

slide-21
SLIDE 21

On-line time derivative calculation

$$\dot{y}_e(t) = \begin{cases} \tfrac{1}{2}\, y^{(3)}_e(t_r^-)\,(t - t_r)^2 + \ddot{y}_e(t_r^-)\,(t - t_r) + \dot{y}_e(t_r^-), & t \in [t_r,\, t_r + \epsilon) \\[4pt] \dfrac{n_1(t)}{d(t)}, & t > t_r + \epsilon \end{cases}$$

where $n_1(t) = 30\,(t - t_r)^5\, y(t) + z_1$ and $d(t) = (t - t_r)^6$;

$$\ddot{y}_e(t) = \begin{cases} y^{(3)}_e(t_r^-)\,(t - t_r) + \ddot{y}_e(t_r^-), & t \in [t_r,\, t_r + \epsilon) \\[4pt] \dfrac{n_2(t)}{d(t)}, & t > t_r + \epsilon \end{cases}$$

where $n_2(t) = -300\,(t - t_r)^4\, y(t) + 24\,(t - t_r)^5\, \dot{y}_e(t) + z_2$;

$$y^{(3)}_e(t) = \begin{cases} y^{(3)}_e(t_r^-), & t \in [t_r,\, t_r + \epsilon) \\[4pt] \dfrac{n_3(t)}{d(t)}, & t > t_r + \epsilon \end{cases}$$

where $n_3(t) = 1200\,(t - t_r)^3\, y(t) - 180\,(t - t_r)^4\, \dot{y}_e(t) + 18\,(t - t_r)^5\, \ddot{y}_e(t) + z_3$.

slide-22
SLIDE 22

Recall that z1, z2, z3 are given by the cascade (written in terms of $t - t_r$, consistent with the formulae above):

$$\dot{z}_1 = z_2 - 450\,(t - t_r)^4 y(t), \quad \dot{z}_2 = z_3 + 2400\,(t - t_r)^3 y(t), \quad \dot{z}_3 = z_4 - 5400\,(t - t_r)^2 y(t),$$
$$\dot{z}_4 = z_5 + 4320\,(t - t_r)\, y(t), \quad \dot{z}_5 = -720\, y(t)$$

$t_r$ is a calculation resetting instant, decided upon either by an integral square error criterion comparing y(t) and $y_e(t)$, or taken to be periodic.

slide-23
SLIDE 23

ǫ = 0.02, T = 0.05


slide-24
SLIDE 24

ǫ = 0.02, T = 0.2


slide-25
SLIDE 25

ǫ = 0.02, T = 0.5


slide-26
SLIDE 26

Control of a mass position system

Joint work with M.Sc. Carlos García Rodríguez of CINVESTAV

slide-27
SLIDE 27

Experimental results on the control of a mechanical system

We tested the proposed state estimation methods in the control of the following EPS mechanical system.

slide-28
SLIDE 28

Experimental results...

(Diagram: two carts of masses m1 and m2, coupled through springs k1, k2 and dampers c1, c2, with positions x1 and x2; the force F acts on the first cart.)

The mathematical model of the system is given by

$$m_1 \ddot{x}_1 + (k_1 + k_2)\, x_1 - k_2 x_2 + c_1 \dot{x}_1 = F$$
$$m_2 \ddot{x}_2 + k_2 (x_2 - x_1) + c_2 \dot{x}_2 = 0$$

slide-29
SLIDE 29

Experimental results...

The system is differentially flat, with flat output given by the position of the second cart, x2, which we can measure. Indeed, all variables are expressible as differential functions of the flat output:

$$x_2 = y, \qquad \dot{x}_2 = \dot{y}$$

$$x_1 = \frac{1}{k_2}\left[ m_2 \ddot{y} + c_2 \dot{y} + k_2 y \right], \qquad \dot{x}_1 = \frac{1}{k_2}\left[ m_2 y^{(3)} + c_2 \ddot{y} + k_2 \dot{y} \right]$$

$$F = \frac{m_1 m_2}{k_2}\, y^{(4)} + \frac{m_1 c_2 + c_1 m_2}{k_2}\, y^{(3)} + \left( m_1 + m_2 + \frac{m_2 k_1 + c_1 c_2}{k_2} \right) y^{(2)} + \left( c_1 + c_2 + \frac{c_2 k_1}{k_2} \right) \dot{y} + k_1 y$$

slide-30
SLIDE 30

Experimental results...

It is desired to follow an arbitrary trajectory for the flat output y = x2, given by the smooth function y*(t). The feedback control input is computed as:

$$F = \frac{m_1 m_2}{k_2}\, v + \frac{m_1 c_2 + c_1 m_2}{k_2}\, y^{(3)} + \left( m_1 + m_2 + \frac{m_2 k_1 + c_1 c_2}{k_2} \right) y^{(2)} + \left( c_1 + c_2 + \frac{c_2 k_1}{k_2} \right) \dot{y} + k_1 y$$

$$v = [y^*(t)]^{(4)} - \alpha_4\left( y^{(3)} - [y^*(t)]^{(3)} \right) - \alpha_3\left( \ddot{y} - \ddot{y}^*(t) \right) - \alpha_2\left( \dot{y} - \dot{y}^*(t) \right) - \alpha_1\left( y - y^*(t) \right) - \alpha_0 \int^{t} \left( y - y^*(\sigma) \right) d\sigma$$

with the coefficients α chosen so as to place the poles of the closed loop system in the stable portion of the complex plane.
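The control law above can be sketched as a plain function (a sketch only: the gains α₀, …, α₄ are hypothetical placeholders, and the physical parameters are the identified values reported for this platform; the cross-damping coefficient is written as m1·c2 + c1·m2, as obtained by direct substitution of the flat-output parametrization):

```python
# Sketch of the flatness-based tracking controller (assumed gains).
m1, m2 = 2.7945, 2.5434          # kg
k1, k2 = 360.1603, 723.07        # N/m
c1, c2 = 2.05, 1.8283            # N/(m/s)
alpha = [50.0, 100.0, 90.0, 40.0, 10.0]   # alpha_0 .. alpha_4 (hypothetical)

def control(y, dy, d2y, d3y, ref, int_err):
    """Compute F from the (estimated) flat-output derivatives.
    ref = (y*, dy*, d2y*, d3y*, d4y*); int_err = integral of (y - y*)."""
    ys, dys, d2ys, d3ys, d4ys = ref
    v = (d4ys
         - alpha[4] * (d3y - d3ys) - alpha[3] * (d2y - d2ys)
         - alpha[2] * (dy - dys) - alpha[1] * (y - ys)
         - alpha[0] * int_err)
    return ((m1 * m2 / k2) * v
            + ((m1 * c2 + c1 * m2) / k2) * d3y
            + (m1 + m2 + (m2 * k1 + c1 * c2) / k2) * d2y
            + (c1 + c2 + c2 * k1 / k2) * dy
            + k1 * y)
```

When the output coincides with the reference, all correction terms vanish and v reduces to [y*(t)]⁽⁴⁾, leaving the open-loop feedforward force.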

slide-31
SLIDE 31

Experimental results...

The implementation of the flatness based controller was achieved through the computation of the time derivatives of the measured output, using the following formulae:

$$\begin{bmatrix} \dot{y}_e(t) \\ \ddot{y}_e(t) \\ y^{(3)}_e(t) \end{bmatrix} = \begin{bmatrix} \frac{1}{(t-t_r)^7} & 0 & 0 \\[4pt] \frac{35}{(t-t_r)^8} & \frac{1}{(t-t_r)^7} & 0 \\[4pt] \frac{560}{(t-t_r)^9} & \frac{28}{(t-t_r)^8} & \frac{1}{(t-t_r)^7} \end{bmatrix} \begin{bmatrix} 42\,(t-t_r)^6\, y(t) + z_1 \\[2pt] -630\,(t-t_r)^5\, y(t) + z_2 \\[2pt] 4200\,(t-t_r)^4\, y(t) + z_3 \end{bmatrix}$$

where

$$\dot{z}_1 = z_2 - 882\,(t-t_r)^5 y(t), \quad \dot{z}_2 = z_3 + 7350\,(t-t_r)^4 y(t), \quad \dot{z}_3 = z_4 - 29400\,(t-t_r)^3 y(t),$$
$$\dot{z}_4 = z_5 + 52920\,(t-t_r)^2 y(t), \quad \dot{z}_5 = z_6 - 35280\,(t-t_r)\, y(t), \quad \dot{z}_6 = 5040\, y(t)$$
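These formulae can be checked numerically with t_r = 0 and zero initial filter states (a sketch, not from the slides; the RK4 integrator and the polynomial test signal are arbitrary choices). Note that re-deriving the y⁽³⁾ row gives 560/(t−t_r)⁹ for its first entry, and that value is used here:

```python
def estimate_derivatives(y, T, h=1e-3):
    """Integrate the six filter states with RK4 from t = 0 (taking
    t_r = 0, zero initial states); return (dy, d2y, d3y) at t = T."""
    def f(t, z):
        yt = y(t)
        return [
            z[1] - 882 * t**5 * yt,
            z[2] + 7350 * t**4 * yt,
            z[3] - 29400 * t**3 * yt,
            z[4] + 52920 * t**2 * yt,
            z[5] - 35280 * t * yt,
            5040 * yt,
        ]
    z = [0.0] * 6
    t = 0.0
    for _ in range(int(round(T / h))):
        k1 = f(t, z)
        k2 = f(t + h/2, [a + h/2*b for a, b in zip(z, k1)])
        k3 = f(t + h/2, [a + h/2*b for a, b in zip(z, k2)])
        k4 = f(t + h, [a + h*b for a, b in zip(z, k3)])
        z = [a + h/6*(p + 2*q + 2*r + u)
             for a, p, q, r, u in zip(z, k1, k2, k3, k4)]
        t += h
    yT = y(T)
    c1 = 42 * T**6 * yT + z[0]
    c2 = -630 * T**5 * yT + z[1]
    c3 = 4200 * T**4 * yT + z[2]
    dy = c1 / T**7
    d2y = 35 * c1 / T**8 + c2 / T**7
    d3y = 560 * c1 / T**9 + 28 * c2 / T**8 + c3 / T**7
    return dy, d2y, d3y

# The formulae are exact for polynomials of degree <= 6; here y = t^3,
# whose derivatives at t = 1 are 3, 6 and 6.
dy, d2y, d3y = estimate_derivatives(lambda t: t**3, T=1.0)
```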

slide-32
SLIDE 32

Experimental results...

All the parameters were properly identified by traditional methods:

m1 = 2.7945 kg, m2 = 2.5434 kg, k1 = 360.1603 N/m, k2 = 723.07 N/m, c1 = 2.05 N/(m/s), c2 = 1.8283 N/(m/s)

slide-33
SLIDE 33

Simulations for stabilization tasks

(Figure: simulated stabilization responses over 0 to 14 s: y*(t) and y(t) [m]; dy/dt and its estimate [m/s]; d2y/dt2 and its reconstruction [m/s2]; d3y/dt3 and its reconstruction [m/s3]; control input u(t) [N].)

slide-34
SLIDE 34

Experimental results for stabilization tasks

(Figure: experimental stabilization responses over 0 to 14 s: y*(t) and y(t) [m]; the estimated derivatives dy/dt [m/s], d2y/dt2 [m/s2] and d3y/dt3 [m/s3]; control input u(t) [N].)

slide-35
SLIDE 35

Simulation results for trajectory tracking tasks

(Figure: simulated tracking responses over 0 to 14 s: y*(t) and y(t) [m]; reference and estimated derivatives up to third order [m/s, m/s2, m/s3]; control input u(t) [N].)

slide-36
SLIDE 36

Experimental results for trajectory tracking tasks

(Figure: experimental tracking responses over 0 to 14 s: y*(t) and y(t) [m]; estimated derivatives up to third order [m/s, m/s2, m/s3]; control input u(t) [N].)

slide-37
SLIDE 37

Simulation results for trajectory tracking tasks

(Figure: simulated tracking responses over 0 to 30 s, including the reference amplitude envelope A(t)*: positions [m], derivative estimates [m/s, m/s2, m/s3], and control input u(t) [N].)

slide-38
SLIDE 38

Experimental results for trajectory tracking tasks

(Figure: experimental tracking responses over 0 to 30 s for a polynomial reference: positions [m], estimated derivatives [m/s, m/s2, m/s3], and control input u(t) [N].)

slide-39
SLIDE 39

Simulation results for trajectory tracking tasks

(Figure: simulated tracking responses over 0 to 50 s: positions [m], reference and estimated derivatives, the reference frequency w(t) [rad/s], and control input u(t) [N].)

slide-40
SLIDE 40

Experimental results for trajectory tracking tasks

(Figure: experimental tracking responses over 0 to 50 s: positions [m], estimated derivatives, the reference frequency w(t) [rad/s], and control input u(t) [N].)

slide-41
SLIDE 41

Sinusoid frequency estimation

Joint work with Juan Ramón Trapero and Vicente Feliu-Batlle of UCLM

slide-42
SLIDE 42

Some contributions

  • Kay and Marple, 1981: spectral-power-based non-parametric spectral techniques
  • Roy and Kailath, 1989: eigenvalue methods
  • Haykin, 1991: adaptive filters
  • Bittanti et al., 1997: adaptive notch filters
  • Li and Kedem, 1998: notch filter approach
  • Hsu, Ortega and Damm, 1999: globally convergent adaptive identifier
  • Bittanti and Savaresi, 2000: extended Kalman filter
  • Marino and Tomei, 2000: adaptive observer approach
  • Mojiri and Bakhshai, 2004: adaptive identifier for arbitrary periodic signals

slide-43
SLIDE 43

Single frequency estimation

Given a noisy sinusoidal signal

$$y(t) = A \sin(\omega t + \phi) + \xi(t)$$

compute, fast and reliably, the unknown angular frequency ω, the unknown amplitude A, and the unknown phase angle φ.

The unperturbed signal $x(t) = A \sin(\omega t + \phi)$ satisfies

$$\ddot{x} = -\omega^2 x, \qquad x(0) = A\sin(\phi), \qquad \dot{x}(0) = A\,\omega\cos(\phi)$$

Once ω is found, A and φ are computed as

$$\tan(\phi) = \omega\,\frac{x(0)}{\dot{x}(0)}, \qquad A = \sqrt{x(0)^2 + \left( \frac{\dot{x}(0)}{\omega} \right)^2}$$
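As a tiny sketch (not from the slides), the amplitude and phase recovery can be written directly; atan2 is used instead of a plain arctangent so that the quadrant of φ is resolved:

```python
import math

def amplitude_phase(x0, dx0, w):
    """Recover A and phi of x(t) = A sin(w t + phi) from x(0), x'(0) and w."""
    A = math.sqrt(x0**2 + (dx0 / w)**2)
    phi = math.atan2(w * x0, dx0)   # tan(phi) = w x(0) / x'(0)
    return A, phi

# Consistency check with A = 2, phi = 0.5, w = 3:
A, phi = amplitude_phase(2 * math.sin(0.5), 2 * 3 * math.cos(0.5), 3.0)
```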

slide-44
SLIDE 44

Clearly, x(0), $\dot{x}(0)$ and ω² are linearly identifiable; hence A, φ and ω are weakly linearly identifiable:

$$\begin{bmatrix} \frac{1}{s^2} & \frac{1}{s} & -s^{-2}x(s) \\[4pt] 0 & \frac{1}{s^2} & -s^{-2}\frac{dx(s)}{ds} \\[4pt] 0 & 0 & -s^{-2}\frac{d^2x(s)}{ds^2} \end{bmatrix} \begin{bmatrix} \dot{x}(0) \\ x(0) \\ \omega^2 \end{bmatrix} = \begin{bmatrix} x(s) \\[2pt] 2s^{-1}x(s) + \frac{dx(s)}{ds} \\[2pt] \frac{d^2x(s)}{ds^2} + 4s^{-1}\frac{dx(s)}{ds} + 2s^{-2}x(s) \end{bmatrix}$$

slide-45
SLIDE 45

Time domain

$$\ddot{x} = -\omega^2 x \quad\Longrightarrow\quad t^2 \ddot{x} = -\omega^2 t^2 x$$

Integrating by parts twice (here $\int^{(n)}$ denotes an n-fold iterated integral from 0 to t):

$$t^2 x(t) - 4 \int^{(1)} t x + 2 \int^{(2)} x = -\omega^2 \int^{(2)} t^2 x$$

$$\omega^2 = \frac{ -t^2 x(t) + 4 \int^{(1)} t x - 2 \int^{(2)} x }{ \int^{(2)} t^2 x }$$

Frequency domain

$$s^2 x(s) - s\, x(0) - \dot{x}(0) = -\omega^2 x(s)$$

$$\frac{d^2}{ds^2}\left[ s^2 x(s) \right] = -\omega^2\, \frac{d^2 x(s)}{ds^2}$$

$$s^2 \frac{d^2 x(s)}{ds^2} + 4 s\, \frac{d x(s)}{ds} + 2 x(s) = -\omega^2\, \frac{d^2 x(s)}{ds^2}$$

$$\frac{d^2 x(s)}{ds^2} + 4 s^{-1} \frac{d x(s)}{ds} + 2 s^{-2} x(s) = -\omega^2\, s^{-2} \frac{d^2 x(s)}{ds^2}$$

$$\omega^2 = -\,\frac{ \frac{d^2 x(s)}{ds^2} + 4 s^{-1} \frac{d x(s)}{ds} + 2 s^{-2} x(s) }{ s^{-2} \frac{d^2 x(s)}{ds^2} }$$

slide-46
SLIDE 46

$$\omega^2 = -\,\frac{ t^2 x(t) - 4 \int^{(1)} t x + 2 \int^{(2)} x }{ \int^{(2)} t^2 x } = \frac{n(t)}{d(t)}$$

$$\dot{z}_1 = z_2 + 4 t\, x(t), \quad \dot{z}_2 = -2 x(t), \quad n(t) = -t^2 x(t) + z_1$$
$$\dot{\zeta}_1 = \zeta_2, \quad \dot{\zeta}_2 = t^2 x(t), \quad d(t) = \zeta_1$$

Substitution of the unmeasured signal x(t) by the noisy signal y(t) = x(t) + ξ(t) leads us to propose the following estimate of ω²:

$$\hat{\omega}^2 = \begin{cases} \text{arbitrary}, & t \in [0, \epsilon) \\[4pt] \dfrac{F(s)\, n(t)}{F(s)\, d(t)}, & t \in [\epsilon, +\infty) \end{cases}$$

where F(s) is either a Butterworth low-pass filter or a chain of integrations, enhancing the signal-to-noise ratio within n(t) and d(t).
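A noise-free numerical sketch of this estimator (omitting the filter F(s); the RK4 integrator, the step size and the test frequency are arbitrary choices):

```python
import math

def estimate_omega_sq(y, T, h=1e-4):
    """Integrate the states z1, z2, zeta1, zeta2 with RK4 from zero
    initial conditions at t = 0 and return n(T)/d(T)."""
    def f(t, st):
        z1, z2, zeta1, zeta2 = st
        yt = y(t)
        return [z2 + 4*t*yt, -2*yt, zeta2, t*t*yt]
    st = [0.0, 0.0, 0.0, 0.0]
    t = 0.0
    for _ in range(int(round(T / h))):
        k1 = f(t, st)
        k2 = f(t + h/2, [a + h/2*b for a, b in zip(st, k1)])
        k3 = f(t + h/2, [a + h/2*b for a, b in zip(st, k2)])
        k4 = f(t + h, [a + h*b for a, b in zip(st, k3)])
        st = [a + h/6*(p + 2*q + 2*r + u)
              for a, p, q, r, u in zip(st, k1, k2, k3, k4)]
        t += h
    n = -T*T*y(T) + st[0]
    d = st[2]
    return n / d

# Noise-free check with x(t) = sin(2t): the identity gives omega^2 = 4
# exactly, for any t > 0.
w2 = estimate_omega_sq(lambda t: math.sin(2*t), T=1.0)
```

With a noisy y(t), the raw quotient n(t)/d(t) would be filtered as in the slide before dividing.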

slide-47
SLIDE 47

Experimental results

(Figure: measured noisy sinusoidal signal y(t) over 0 to 0.6 s.)

slide-48
SLIDE 48

Experimental results

(Figure: on-line estimates of ω [rad/s], A, and φ [rad] over 0 to 0.15 s.)

slide-49
SLIDE 49

Two frequency identification

Given a noisy measurement of the sum of two sinusoidal signals,

$$y(t) = A_1 \sin(\omega_1 t + \phi_1) + A_2 \sin(\omega_2 t + \phi_2) + \xi(t)$$

where ξ(t) is a zero mean, high frequency noise of completely unknown statistics, it is desired to compute, in a fast and reliable manner, the unknown frequencies ω1, ω2, the unknown amplitudes A1, A2, and the unknown phase angles φ1, φ2.

Define the unperturbed signal as x(t), i.e., y(t) = x(t) + ξ(t).

slide-50
SLIDE 50

The unperturbed signal x(t) satisfies:

$$x^{(4)} + (\omega_1^2 + \omega_2^2)\,\ddot{x} + \omega_1^2\,\omega_2^2\, x = 0$$

The parameters ω1, ω2 are weakly linearly identifiable; i.e., letting $X = \omega_1^2 + \omega_2^2$ and $Z = \omega_1^2\,\omega_2^2$, then X and Z are linearly identifiable, and:

$$\omega_1 = \sqrt{ \tfrac{1}{2}\left( X + \sqrt{X^2 - 4Z} \right) }, \qquad \omega_2 = \sqrt{ \frac{2Z}{X + \sqrt{X^2 - 4Z}} }$$

Our aim is to identify X and Z from:

$$x^{(4)} + X\ddot{x} + Z x = 0$$
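The recovery of ω1 and ω2 from X and Z is a two-liner (a sketch; the convention ω1 ≥ ω2 follows from taking the positive root):

```python
import math

def frequencies_from_XZ(X, Z):
    """Recover omega1 >= omega2 from X = w1^2 + w2^2 and Z = w1^2 * w2^2."""
    r = math.sqrt(X*X - 4*Z)
    w1 = math.sqrt((X + r) / 2)
    w2 = math.sqrt(2*Z / (X + r))
    return w1, w2

# Check with omega1 = 3, omega2 = 1, so that X = 10 and Z = 9:
w1, w2 = frequencies_from_XZ(10.0, 9.0)
```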

slide-51
SLIDE 51

In operational calculus notation we have:

$$s^4 x(s) + X s^2 x(s) + Z x(s) = p(1, s, s^2, s^3)$$

where p(1, s, s², s³) is a polynomial in s generated by the unknown initial conditions. Differentiating four times with respect to s annihilates p, and we obtain:

$$\left[ 12\frac{d^2x(s)}{ds^2} + 8s\frac{d^3x(s)}{ds^3} + s^2\frac{d^4x(s)}{ds^4} \right] X + \left[ \frac{d^4x(s)}{ds^4} \right] Z = -\left[ 24x(s) + 96s\frac{dx(s)}{ds} + 72s^2\frac{d^2x(s)}{ds^2} + 16s^3\frac{d^3x(s)}{ds^3} + s^4\frac{d^4x(s)}{ds^4} \right]$$

Multiplying now by $s^{-4}$ and $s^{-5}$ we obtain the following system of equations:

$$\left[ 12s^{-4}\frac{d^2x(s)}{ds^2} + 8s^{-3}\frac{d^3x(s)}{ds^3} + s^{-2}\frac{d^4x(s)}{ds^4} \right] X + \left[ s^{-4}\frac{d^4x(s)}{ds^4} \right] Z = -\left[ 24s^{-4}x(s) + 96s^{-3}\frac{dx(s)}{ds} + 72s^{-2}\frac{d^2x(s)}{ds^2} + 16s^{-1}\frac{d^3x(s)}{ds^3} + \frac{d^4x(s)}{ds^4} \right]$$

$$\left[ 12s^{-5}\frac{d^2x(s)}{ds^2} + 8s^{-4}\frac{d^3x(s)}{ds^3} + s^{-3}\frac{d^4x(s)}{ds^4} \right] X + \left[ s^{-5}\frac{d^4x(s)}{ds^4} \right] Z = -\left[ 24s^{-5}x(s) + 96s^{-4}\frac{dx(s)}{ds} + 72s^{-3}\frac{d^2x(s)}{ds^2} + 16s^{-2}\frac{d^3x(s)}{ds^3} + s^{-1}\frac{d^4x(s)}{ds^4} \right]$$
slide-52
SLIDE 52

In the time domain, we have a system of equations of the form:

$$\eta_1(t)\, X + \xi_1(t)\, Z = q(t)$$
$$\left[ \int_0^t \eta_1(\sigma)\, d\sigma \right] X + \left[ \int_0^t \xi_1(\sigma)\, d\sigma \right] Z = \int_0^t q(\sigma)\, d\sigma$$

whence, by Cramer's rule:

$$X = \frac{ q(t) \int_0^t \xi_1(\sigma)d\sigma - \xi_1(t) \int_0^t q(\sigma)d\sigma }{ \eta_1(t) \int_0^t \xi_1(\sigma)d\sigma - \xi_1(t) \int_0^t \eta_1(\sigma)d\sigma } := \frac{n_1(t)}{d(t)}$$

$$Z = \frac{ \eta_1(t) \int_0^t q(\sigma)d\sigma - q(t) \int_0^t \eta_1(\sigma)d\sigma }{ \eta_1(t) \int_0^t \xi_1(\sigma)d\sigma - \xi_1(t) \int_0^t \eta_1(\sigma)d\sigma } := \frac{n_2(t)}{d(t)}$$

where

$$q(t) = -t^4 x(t) - z_1: \quad \dot{z}_1 = z_2 - 16 t^3 x(t),\ \dot{z}_2 = z_3 + 72 t^2 x(t),\ \dot{z}_3 = z_4 - 96 t\, x(t),\ \dot{z}_4 = 24\, x(t)$$
$$\xi_1 = z_5: \quad \dot{z}_5 = z_6,\ \dot{z}_6 = z_7,\ \dot{z}_7 = z_8,\ \dot{z}_8 = t^4 x(t)$$
$$\eta_1 = z_9: \quad \dot{z}_9 = z_{10},\ \dot{z}_{10} = z_{11} + t^4 x(t),\ \dot{z}_{11} = z_{12} - 8 t^3 x(t),\ \dot{z}_{12} = 12 t^2 x(t)$$

slide-53
SLIDE 53

Since x(t) is not available, we propose the following estimates for X and Z, with x(t) replaced by y(t):

$$\begin{bmatrix} \hat{X} \\ \hat{Z} \end{bmatrix} = \begin{cases} \text{arbitrary}, & t \in [0, \epsilon] \\[6pt] \begin{bmatrix} n_{1f}(t)/d_f(t) \\ n_{2f}(t)/d_f(t) \end{bmatrix}, & t \in (\epsilon, +\infty) \end{cases}$$

where $n_{1f}(t)$, $n_{2f}(t)$ and $d_f(t)$ represent, respectively, the low pass filtering of $n_1(t)$, $n_2(t)$ and $d(t)$:

$$\hat{X} = \frac{F(s)\, n_1(t)}{F(s)\, d(t)}, \qquad \hat{Z} = \frac{F(s)\, n_2(t)}{F(s)\, d(t)}, \qquad F(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$

slide-54
SLIDE 54

Once X and Z are estimated, ω1 and ω2 are immediately obtained. The amplitudes and phases are obtained in terms of the linearly identifiable initial states x(0), $\dot{x}(0)$, $\ddot{x}(0)$, $x^{(3)}(0)$:

$$A_1 = \frac{ \sqrt{ \left( x(0)\,\omega_2^2 + \ddot{x}(0) \right)^2 + \frac{1}{\omega_1^2}\left( \dot{x}(0)\,\omega_2^2 + x^{(3)}(0) \right)^2 } }{ \omega_2^2 - \omega_1^2 }$$

$$A_2 = \frac{ \sqrt{ \left( x(0)\,\omega_1^2 + \ddot{x}(0) \right)^2 + \frac{1}{\omega_2^2}\left( \dot{x}(0)\,\omega_1^2 + x^{(3)}(0) \right)^2 } }{ \omega_1^2 - \omega_2^2 }$$

$$\phi_1 = \arctan\left[ \omega_1\, \frac{ x(0)\,\omega_2^2 + \ddot{x}(0) }{ \dot{x}(0)\,\omega_2^2 + x^{(3)}(0) } \right], \qquad \phi_2 = \arctan\left[ \omega_2\, \frac{ x(0)\,\omega_1^2 + \ddot{x}(0) }{ \dot{x}(0)\,\omega_1^2 + x^{(3)}(0) } \right]$$
slide-55
SLIDE 55

Indeed, the initial states are linearly identifiable:

$$\begin{bmatrix} \frac{1}{s^4} & \frac{1}{s^3} & \frac{1}{s^2} + \frac{X}{s^4} & \frac{1}{s} + \frac{X}{s^3} \\[4pt] 0 & \frac{1}{s^4} & \frac{2}{s^3} & \frac{3}{s^2} + \frac{X}{s^4} \\[4pt] 0 & 0 & \frac{2}{s^4} & \frac{6}{s^3} \\[4pt] 0 & 0 & 0 & \frac{6}{s^4} \end{bmatrix} \begin{bmatrix} x^{(3)}(0) \\ \ddot{x}(0) \\ \dot{x}(0) \\ x(0) \end{bmatrix} = \begin{bmatrix} \left[ s^{-2}y(s) \right] X + \left[ s^{-4}y(s) \right] Z + y(s) \\[4pt] \left[ 2s^{-3}y(s) + s^{-2}\frac{dy(s)}{ds} \right] X + \left[ s^{-4}\frac{dy(s)}{ds} \right] Z + 4s^{-1}y(s) + \frac{dy(s)}{ds} \\[4pt] \left[ 2s^{-4}y(s) + 4s^{-3}\frac{dy(s)}{ds} + s^{-2}\frac{d^2y(s)}{ds^2} \right] X + \left[ s^{-4}\frac{d^2y(s)}{ds^2} \right] Z + 12s^{-2}y(s) + 8s^{-1}\frac{dy(s)}{ds} + \frac{d^2y(s)}{ds^2} \\[4pt] \left[ 6s^{-4}\frac{dy(s)}{ds} + 6s^{-3}\frac{d^2y(s)}{ds^2} + s^{-2}\frac{d^3y(s)}{ds^3} \right] X + \left[ s^{-4}\frac{d^3y(s)}{ds^3} \right] Z + 24s^{-3}y(s) + 36s^{-2}\frac{dy(s)}{ds} + 12s^{-1}\frac{d^2y(s)}{ds^2} + \frac{d^3y(s)}{ds^3} \end{bmatrix}$$

slide-56
SLIDE 56

Experimental Results

(Figure: y(t) = 0.72 sin(2π 13.7 t + φ1) + 1.14 sin(2π 10.1 t + φ2) and the on-line estimates of ω1 and ω2 [rad/s].)

Experimental signal with the sum of two sinusoids and the results of the algebraic on-line frequency identification process.

slide-57
SLIDE 57

(Figure: on-line estimates of the amplitudes A1 and A2 versus time.)

Experimental results for the algebraic estimation of the amplitudes A1 and A2 of the sinusoidal signal components from their noisy sum.

slide-58
SLIDE 58

(Figure: on-line estimates of the phases φ1 and φ2 [rad] versus time.)

Experimental results for the algebraic estimation of the phases φ1 and φ2 of the sinusoidal signal components from their noisy sum.

slide-59
SLIDE 59

Conclusions

  • Algebraic state estimation and algebraic parameter estimation can be advantageously used in experimental set-ups, with the help of today's digital processing cards.
  • The areas of application include automatic control, signal processing, signal compression, artificial vision, fault detection, control without models, and many other interesting emerging engineering fields.
  • Embracing this line of work has been particularly easy and rewarding... since I have had the luck, and the privilege, of interacting with a true scientist and a marvellous friend.

slide-60
SLIDE 60

Thanks Michel! ...and a long life to you and Claudia.
