
slide-1
SLIDE 1

Approaching NSP-EMD with B-Splines

Laslo Hunhold

‘Oberseminar’ in SS 2018 Mathematisches Institut Universität zu Köln hosted by

  • Prof. Dr. Angela Kunoth

7th June 2018

1 / 14

SLIDE 13

introduction

intrinsic mode function (IMF) (1/2)

definition Let a, φ ∈ C²_b(R) with bounded derivatives.

a(t) cos(φ(t)) :∈ I is an IMF with accuracy ε ∈ (0, 1) if and only if for all t ∈ R

  • a(t) > 0, φ′(t) > 0,
  • |a′(t)| ≤ ε|φ′(t)|,
  • |φ′′(t)| ≤ ε|φ′(t)|.

heuristic

  • Physical reasonability.
  • Slowly varying amplitude.
  • Slowly varying frequency.

2 / 14
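The two ratio conditions can be checked numerically for a concrete component. A minimal sketch (the particular a and φ are illustrative choices, not from the talk):

```python
import numpy as np

# A slowly modulated component s(t) = a(t) cos(phi(t)); the derivatives
# are supplied in closed form, so no finite differences are needed.
t = np.linspace(0.0, 10.0, 2001)
a    = 2.0 + 0.1 * np.sin(0.2 * t)    # amplitude a(t) > 0
a1   = 0.02 * np.cos(0.2 * t)         # a'(t)
phi1 = 5.0 + 0.3 * np.sin(0.1 * t)    # phase derivative phi'(t) > 0
phi2 = 0.03 * np.cos(0.1 * t)         # phi''(t)

# Smallest accuracy eps in (0, 1) for which both ratio conditions hold:
eps = max(np.max(np.abs(a1) / np.abs(phi1)),
          np.max(np.abs(phi2) / np.abs(phi1)))
print(bool(np.all(a > 0)), bool(np.all(phi1 > 0)), eps)
```

A small eps reflects the heuristic: amplitude and frequency vary slowly relative to the oscillation itself.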

SLIDE 15

introduction

intrinsic mode function (IMF) (2/2)

example [plot of a(t) cos(φ(t)) together with its envelope a(t) and phase φ(t)]

3 / 14

SLIDE 24

introduction

empirical mode decomposition (EMD) (N.E. Huang et al., 1998)

input signal s : [p, q] → R

decomposition

  s(t) = Σ_{k=0}^{w} s_k(t) + r_{w+1}(t),   s_k ∈ I,   s_k(t) = a_k(t) cos(φ_k(t))

iterative

  r_0(t) = s(t),   r_{k+1}(t) = r_k(t) − s_k(t)

goal Find a_k and φ_k.

4 / 14
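The iterative residual scheme can be sketched with synthetic components (a toy illustration; the components are hand-made stand-ins, not the output of an actual EMD sifting):

```python
import numpy as np

# EMD bookkeeping: r_0 = s, r_{k+1} = r_k - s_k, so after subtracting all
# extracted components the residual equals the remaining trend r_{w+1}.
t = np.linspace(0.0, 1.0, 500)
components = [
    (1.0 + 0.2 * t) * np.cos(2 * np.pi * 40 * t),  # s_0: fast component
    (2.0 - 0.5 * t) * np.cos(2 * np.pi * 7 * t),   # s_1: slower component
]
trend = 0.5 * t                                    # final residual r_2
s = sum(components) + trend                        # input signal

r = s.copy()                                       # r_0 = s
for s_k in components:
    r = r - s_k                                    # r_{k+1} = r_k - s_k

recon_err = float(np.max(np.abs(r - trend)))
```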

SLIDE 29

introduction

operator-based signal separation (OSS) (S. Peng et al., 2008) (1/2)

idea Let P_k = P_k(a_k, φ_k), Q_k = Q_k(a_k, φ_k), R_k = R_k(a_k, φ_k). Find a differential operator D_{P_k,Q_k,R_k} analytically such that D_{P_k,Q_k,R_k} s_k = 0. Solve

  (P_k, Q_k, R_k) = arg min_{(P̃, Q̃, R̃)} ‖Q1(r_k − s̃)‖   s.t.  D_{P̃,Q̃,R̃} s̃ = 0,
                    ‖Q2 P̃‖ ≤ τ,  ‖Q3 Q̃‖ ≤ τ,  ‖Q4 R̃‖ ≤ τ

with τ > 0, Q1, Q2, Q3, Q4 ∈ {D0, D1, . . . } regularization operators, P̃ := P_k(ã, φ̃), Q̃ := Q_k(ã, φ̃), R̃ := R_k(ã, φ̃) and s̃ := ã(t) cos(φ̃(t)).

5 / 14

SLIDE 34

introduction

operator-based signal separation (OSS) (S. Peng et al., 2008) (2/2)

unconstrained formulation

  (P_k, Q_k, R_k) = arg min_{(P̃, Q̃, R̃)} ‖D_{P̃,Q̃,R̃} s̃‖ + λ‖Q1(r_k − s̃)‖ + µ(‖Q2 P̃‖ + ‖Q3 Q̃‖ + ‖Q4 R̃‖)

problem Want to control s̃, and thus s_k, in some way.

6 / 14

SLIDE 44

introduction

null space pursuit (NSP) (S. Peng, W.-L. Hwang et al., 2010)

solution Introduce a leakage factor γ:

  (P_k, Q_k, R_k) = arg min_{(P̃, Q̃, R̃)} ‖D_{P̃,Q̃,R̃} s̃‖ + λ(‖Q1(r_k − s̃)‖ + γ‖s̃‖) + µ(‖Q2 P̃‖ + ‖Q3 Q̃‖ + ‖Q4 R̃‖)

It controls how much information is retained in the residual signal.

remaining problems (OSS, NSP)

  • Post-processing is necessary to recover s̃ from (P̃, Q̃, R̃).
  • Very heuristic and thus hard to analyze.
  • Does not explicitly enforce the IMF properties.

Does it really help in understanding EMD?

7 / 14

SLIDE 49

vanishing operators

example (X. Hu, S. Peng, W.-L. Hwang, 2012)

  P_k(a_k, φ_k) = −2 a′_k/a_k,   Q_k(a_k, φ_k) = (φ′_k)² + 2 (a′_k/a_k)²,   R_k(a_k, φ_k) = 0,
  D_{P_k,Q_k,R_k} = d²/dt² + P_k(t) d/dt + Q_k(t).

problem Let a′′_k(t) ≠ 0. It holds

  D_{P_k,Q_k,R_k} s_k(t) = (a′′_k(t)/a_k(t)) s_k(t) ≠ 0.

8 / 14
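The failure mode can be checked numerically. The sketch below applies D (with Q as reconstructed above) to components whose derivatives are coded in closed form; for linear amplitude and phase the operator vanishes, while a nonzero a′′ leaves the stated defect:

```python
import numpy as np

def apply_D(a, a1, a2, phi, phi1, phi2):
    """Apply D = d^2/dt^2 + P d/dt + Q with P = -2 a'/a and
    Q = (phi')^2 + 2 (a'/a)^2 to s = a cos(phi), all derivatives exact."""
    s  = a * np.cos(phi)
    s1 = a1 * np.cos(phi) - a * phi1 * np.sin(phi)
    s2 = ((a2 - a * phi1**2) * np.cos(phi)
          - (2 * a1 * phi1 + a * phi2) * np.sin(phi))
    P = -2 * a1 / a
    Q = phi1**2 + 2 * (a1 / a)**2
    return s2 + P * s1 + Q * s

t = np.linspace(0.0, 10.0, 1001)
z = np.zeros_like(t)

# Linear amplitude, linear phase (a'' = 0): D s vanishes.
ok  = apply_D(2 + 0.3 * t, 0.3 + z, z, 5 * t, 5 + z, z)
# Quadratic amplitude (a'' = 0.2): D s = (a''/a) s, i.e. 0.2 cos(5t) here.
bad = apply_D(2 + 0.1 * t**2, 0.2 * t, 0.2 + z, 5 * t, 5 + z, z)
```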

SLIDE 57

vanishing operators

example (Guo et al., 2017)

solution Let A_k(a_k) := a′_k/a_k, Ω_k(φ_k) := 1/(φ′_k)².

  P_k(A_k, Ω_k) = Ω_k
  Q_k(A_k, Ω_k) = −2 P_k A_k + (1/2) dP_k/dt
  R_k(A_k, Ω_k) = P_k (A_k² − dA_k/dt) − (1/2) (dP_k/dt) A_k + 1
  D_{P_k,Q_k,R_k} = P_k(t) d²/dt² + Q_k(t) d/dt + R_k(t).

It holds D_{P_k,Q_k,R_k} s_k(t) = 0.

problem Yet another complicating step (A_k, Ω_k) → (a_k, φ_k). Can we find a simple general form?

9 / 14
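That this operator annihilates s_k can be verified numerically even for a chirped phase. A sketch with closed-form derivatives (the particular a and φ are arbitrary smooth choices):

```python
import numpy as np

# Verify D s_k = 0 for P = Omega = 1/(phi')^2, Q = -2 P A + (1/2) P',
# R = P (A^2 - A') - (1/2) P' A + 1, where A = a'/a.
t = np.linspace(0.1, 10.0, 1001)
a, a1, a2 = 2.0 + 0.3 * np.sin(t), 0.3 * np.cos(t), -0.3 * np.sin(t)
phi  = 5.0 * t + 0.5 * t**2          # linear chirp
phi1 = 5.0 + t                       # phi'(t) > 0
phi2 = np.ones_like(t)               # phi''(t)

A  = a1 / a
A1 = a2 / a - (a1 / a)**2            # A' = a''/a - (a'/a)^2
P  = 1.0 / phi1**2
P1 = -2.0 * phi2 / phi1**3           # P'
Q  = -2.0 * P * A + 0.5 * P1
R  = P * (A**2 - A1) - 0.5 * P1 * A + 1.0

s  = a * np.cos(phi)
s1 = a1 * np.cos(phi) - a * phi1 * np.sin(phi)
s2 = ((a2 - a * phi1**2) * np.cos(phi)
      - (2 * a1 * phi1 + a * phi2) * np.sin(phi))

residual = float(np.max(np.abs(P * s2 + Q * s1 + R * s)))
```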

SLIDE 64

B-Splines

approach

  • Express functions, e.g. the signal, as B-splines of order k ≥ 1:

      s(t) = Σ_{i=0}^{n−1} s_i B_{i,k}(t) ∈ C^{k−2}.

  • Least-squares fit of time-series data.

    [two plots: a time series and its least-squares B-spline fit]

  • Generate an extended grid with q ≥ 0 intermediate points between spline knots.
  • Preprocess B_{i,k} and its derivatives (e.g. with GSL’s gsl_bspline_deriv_eval_nonzero()) on the extended grid.
  • Directly optimize over (a_k, φ_k).

10 / 14
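The least-squares fit can be sketched with SciPy in place of the GSL routine named above (knot placement, noise level, and degree are illustrative choices; note that SciPy's k is the polynomial degree, i.e. order k + 1 in the slide's convention):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 400)
y = np.sin(x) + 0.01 * rng.standard_normal(x.size)   # noisy time series

deg = 3                                  # cubic pieces, C^2 smoothness
interior = np.linspace(0.0, 10.0, 14)[1:-1]
# Clamped knot vector: (deg+1)-fold end knots plus the interior knots.
knots = np.r_[[x[0]] * (deg + 1), interior, [x[-1]] * (deg + 1)]

spl = make_lsq_spline(x, y, knots, deg)  # least-squares B-spline fit
fit_err = float(np.max(np.abs(spl(x) - np.sin(x))))
# Derivatives come along for free: the derivative is again a B-spline.
der_err = float(np.max(np.abs(spl.derivative()(x) - np.cos(x))))
```

The local support of the basis is what keeps the fit matrix sparse (banded), which is the point of the first advantage on the next slide.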

SLIDE 71

B-Splines

advantages

  • Sparse least-squares-fit and evaluation matrices (even of derivatives), thanks to the local support of B_{i,k} and its derivatives.
  • B-splines are already very smooth (do we even need ‖Q2 P̃‖ ≤ τ, ‖Q3 Q̃‖ ≤ τ, ‖Q4 R̃‖ ≤ τ any more?).
  • Derivatives are free. Let m ≤ k − 2:

      s^(m)(t) = Σ_{i=0}^{n−1} s_i B^(m)_{i,k}(t).

  • Constant-time evaluation of s_k^(m) only with a_k and φ_k, using Leibniz’ rule and Faà di Bruno’s formula after preprocessing multinomials and partitions.

11 / 14
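The last point can be illustrated for m = 2: s′′ follows from (a, a′, a′′, φ′, φ′′) alone, via Leibniz’ rule on the product and the m = 2 instance of Faà di Bruno’s formula for cos ∘ φ (a sketch, validated against a central finite difference):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 200001)
h = t[1] - t[0]
a, a1, a2 = 2 + 0.3 * np.sin(t), 0.3 * np.cos(t), -0.3 * np.sin(t)
phi, phi1, phi2 = 5 * t + 0.5 * t**2, 5 + t, np.ones_like(t)

c0 = np.cos(phi)
c1 = -np.sin(phi) * phi1                          # (cos phi)'  (chain rule)
c2 = -np.cos(phi) * phi1**2 - np.sin(phi) * phi2  # (cos phi)'' (Faa di Bruno, m = 2)
s2 = a2 * c0 + 2 * a1 * c1 + a * c2               # Leibniz: (a c)''

s = a * c0
s2_fd = (s[2:] - 2 * s[1:-1] + s[:-2]) / h**2     # central-difference check
err = float(np.max(np.abs(s2[1:-1] - s2_fd)))
```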

SLIDE 81

B-Splines

NSP reinvestigation

Let s̃(t) := ã(t) cos(φ̃(t)). The NSP reformulated with B-splines is

  (a_k, φ_k) = arg min_{(ã, φ̃)} ‖D_{ã,φ̃} s̃‖ + λ(‖Q1(r_k − s̃)‖ + γ‖s̃‖)

Do we even need the differential operator?

  • The operator was adapted to obtain smooth P̃, Q̃ and R̃.
  • ã and φ̃ are already smooth by definition.

reduced system

  (a_k, φ_k) = arg min_{(ã, φ̃)} ‖Q1(r_k − s̃)‖ + γ‖s̃‖

We can do better than that.

12 / 14
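The role of γ in the reduced objective can be seen in a toy computation with Q1 = D0 (the identity): with γ = 0 the best candidate absorbs the fast component completely, while a larger γ favors leaking part of it back into the residual (the candidates are hand-picked for illustration, not minimizers):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
r = np.cos(2 * np.pi * 30 * t) + 0.3 * np.cos(2 * np.pi * 3 * t)  # residual r_k

def objective(s_tilde, gamma):
    # ||Q1 (r_k - s~)|| + gamma ||s~||, with Q1 = D0 (identity).
    return np.linalg.norm(r - s_tilde) + gamma * np.linalg.norm(s_tilde)

full = np.cos(2 * np.pi * 30 * t)   # extract the fast component completely
half = 0.5 * full                   # leak half of it into the residual
```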

SLIDE 89

proposed optimization problem

IMF requirements for t ∈ R

  • a_k, φ_k bounded and with bounded derivatives (given)
  • a_k(t) > 0, φ′_k(t) > 0
  • |a′_k(t)| ≤ ε|φ′_k(t)|, |φ′′_k(t)| ≤ ε|φ′_k(t)|

Let c(ã, φ̃) ∈ R be a cost function and ε > 0 fixed.

  (a_k, φ_k) = arg min_{(ã, φ̃)} c(ã, φ̃)   s.t.  ã(t) > 0,  φ̃′(t) > 0,  |ã′(t)| ≤ ε|φ̃′(t)|,  |φ̃′′(t)| ≤ ε|φ̃′(t)|

We can e.g. set c(ã, φ̃) = ‖Q(r_k − s̃)‖ with Q ∈ {D0, . . . , D_{k−2}}.

13 / 14
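A hypothetical feasibility check for these constraints on a sample grid (helper name and sample functions are illustrative; in practice ã, φ̃ and their derivatives would come from the B-spline representation on the extended grid):

```python
import numpy as np

def imf_feasible(a, a1, phi1, phi2, eps):
    """Check the four constraints of the proposed problem on a grid."""
    return bool(np.all(a > 0) and np.all(phi1 > 0)
                and np.all(np.abs(a1) <= eps * np.abs(phi1))
                and np.all(np.abs(phi2) <= eps * np.abs(phi1)))

t = np.linspace(0.0, 10.0, 1001)
# Slowly modulated candidate: feasible for eps = 0.1.
good = imf_feasible(2 + 0.1 * np.sin(t), 0.1 * np.cos(t),
                    5 + 0.2 * t, 0.2 * np.ones_like(t), eps=0.1)
# Amplitude varying as fast as the oscillation: |a'| <= eps |phi'| fails.
bad = imf_feasible(2 + 1.9 * np.sin(t), 1.9 * np.cos(t),
                   5 + 0.2 * t, 0.2 * np.ones_like(t), eps=0.1)
```

Such a check only qualifies a candidate; the fitness is judged separately by c, which is the strict separation noted in the conclusion.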

SLIDE 97

conclusion

  • The differential operator might not be necessary with B-splines.
  • The proposed problem strictly separates fitness and qualification.
  • Great flexibility in cost functions due to the power of preprocessing.

outlook

  • Implement the optimization problem.
  • Investigate other cost functions.
  • Compare with other approaches.

14 / 14