Spline approximation of random processes with singularity - PowerPoint PPT Presentation


SLIDE 1

Spline approximation of random processes with singularity

Konrad Abramowicz

Department of Mathematics and Mathematical Statistics, Umeå University

Stockholm, 2nd Northern Triangular Seminar, 2010

SLIDE 2

Coauthor: Oleg Seleznjev

Department of Mathematics and Mathematical Statistics, Umeå University

SLIDE 3

Outline

  • Introduction. Basic notation
  • Results
      • Optimal rate recovery
      • Undersmoothing
  • Numerical experiments and examples
  • Bibliography

SLIDE 4

Suppose a random process X(t), t ∈ [0, 1], with finite second moment is observed at a finite number of points (a sampling design). At any unsampled point t we approximate the value of the process, and the approximation performance on the entire interval is measured by mean errors. In this talk we deal with two problems:

◮ investigating the accuracy of such interpolators in mean norms,
◮ constructing a sequence of sampling designs with asymptotically optimal properties.

SLIDES 5-8

Basic notation

Let X = X(t), t ∈ [0, 1], be defined on the probability space (Ω, F, P). Assume that, for every t, the random variable X(t) lies in the normed linear space L2(Ω) = L2(Ω, F, P) of zero mean random variables with finite second moment, with elements identified when they are equivalent with respect to P. We set ||ξ|| = (Eξ²)^{1/2} for all ξ ∈ L2(Ω) and consider approximation based on the normed linear space Cm[0, 1] of random processes having continuous q.m. (quadratic mean) derivatives up to order m ≥ 0. We define the integrated mean norm for any X ∈ Cm[0, 1] by setting

||X||_p = (∫₀¹ ||X(t)||^p dt)^{1/p}, 1 ≤ p < ∞,

and the uniform mean norm ||X||_∞ = max_{t∈[0,1]} ||X(t)||.
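These norms are explicit for simple processes. As a sketch (not part of the talk), for standard Brownian motion W one has ||W(t)|| = √t, so ||W||_2 = (∫₀¹ t dt)^{1/2} = 1/√2 and ||W||_∞ = 1:

```python
import math

def mean_norm(t):
    # For standard Brownian motion W, E W(t)^2 = t, so ||W(t)|| = sqrt(t);
    # the pointwise mean norm needs no simulation.
    return math.sqrt(t)

def integrated_mean_norm(p, n=100000):
    # ||W||_p = ( int_0^1 ||W(t)||^p dt )^(1/p), midpoint rule on n cells
    s = sum(mean_norm((i + 0.5) / n) ** p for i in range(n)) / n
    return s ** (1.0 / p)

def uniform_mean_norm(n=1000):
    # ||W||_inf = max over [0, 1] of ||W(t)||, attained at t = 1
    return max(mean_norm(i / n) for i in range(n + 1))

print(integrated_mean_norm(2))  # -> 0.7071... = 1/sqrt(2)
print(uniform_mean_norm())      # -> 1.0
```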

SLIDES 9-12

Hölder's conditions and local stationarity

We define the classes of processes used throughout the paper. Let X ∈ Cm[a, b]. We say that

i) X ∈ Cm,β([a, b], C) if Y(t) = X^{(m)}(t) is Hölder continuous, i.e., if there exist 0 < β ≤ 1 and a positive constant C such that, for all t, t + s ∈ [a, b],

||Y(t + s) − Y(t)|| ≤ C|s|^β, (1)

ii) X ∈ Vm,β([a, b], V(·)) if Y(t) = X^{(m)}(t) is locally Hölder, i.e., if there exist 0 < β ≤ 1 and a positive continuous function V(·) such that, for all t, t + s ∈ [a, b], s > 0,

||Y(t + s) − Y(t)|| ≤ V(t)^{1/2} |s|^β, (2)

iii) X ∈ Bm,β([a, b], c(·)) if Y(t) = X^{(m)}(t) is locally stationary (see Berman (1974)), i.e., if there exist 0 < β ≤ 1 and a positive continuous function c(t) such that

||Y(t + s) − Y(t)|| / |s|^β → c(t)^{1/2} as s → 0, uniformly in t ∈ [a, b]. (3)

We say that X ∈ BVm,β((0, 1], c(·), V(·)) if X ∈ Cm[a, b] and its m-th q.m. derivative satisfies (2) and (3) for any [a, b] ⊂ (0, 1].

SLIDES 13-14

Composite splines

For any f ∈ Cl[0, 1], l ≥ 0, the piecewise Hermite polynomial Hk(t) := Hk(f, Tn)(t) of degree k = 2l + 1 is the unique solution of the interpolation problem

Hk^{(j)}(ti) = f^{(j)}(ti), i = 0, . . . , n, j = 0, . . . , l.

Define Hq,k(X, Tn), q ≤ k, to be the composite Hermite spline

Hq,k(X, Tn)(t) := Hq(X, Tn)(t) for t ∈ [0, t1], and Hk(X, Tn)(t) for t ∈ [t1, 1].

[Figure: a sample path interpolated by the spline on a knot interval [tj, tj+1].]
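A minimal runnable sketch (our own names, not from the talk) of the composite idea for q = 1, k = 3: linear interpolation on the first knot interval, where derivative information may be unavailable near the singularity, and cubic Hermite interpolation elsewhere:

```python
import bisect

def hermite_cubic(t, t0, t1, f0, f1, d0, d1):
    # Cubic Hermite piece on [t0, t1] matching values f0, f1 and slopes d0, d1.
    h = t1 - t0
    u = (t - t0) / h
    h00 = (1 + 2 * u) * (1 - u) ** 2
    h10 = u * (1 - u) ** 2
    h01 = u ** 2 * (3 - 2 * u)
    h11 = u ** 2 * (u - 1)
    return h00 * f0 + h10 * h * d0 + h01 * f1 + h11 * h * d1

def composite_spline(t, knots, f, df):
    # H_{1,3}-style interpolator: linear on [t_0, t_1], cubic Hermite on [t_1, 1].
    i = min(bisect.bisect_right(knots, t), len(knots) - 1) - 1
    t0, t1 = knots[i], knots[i + 1]
    if i == 0:  # first interval: function values only
        u = (t - t0) / (t1 - t0)
        return (1 - u) * f[0] + u * f[1]
    return hermite_cubic(t, t0, t1, f[i], f[i + 1], df[i], df[i + 1])

knots = [0.0, 0.25, 0.5, 0.75, 1.0]
vals = [t ** 3 for t in knots]
ders = [3 * t * t for t in knots]
print(composite_spline(0.6, knots, vals, ders))  # ≈ 0.216: cubics are reproduced exactly
```

On [t1, 1] the cubic Hermite piece reproduces any cubic exactly, mirroring the uniqueness of the degree-(2l + 1) interpolation problem above; on [0, t1] only function values enter.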

SLIDE 15

Quasi regular sequences

We consider quasi regular sequences (qRS) of sampling designs {Tn = Tn(h)} generated by a density function h(·) via

∫₀^{ti} h(t) dt = i/n, i = 1, . . . , n,

where h(·) is continuous for t ∈ (0, 1], and if h(·) is unbounded at t = 0, then h(t) → +∞ as t → 0+. We denote this property of {Tn} by: {Tn} is qRS(h). Observe that if h(·) is positive and continuous on [0, 1], then we obtain regular sequences. Denote the distribution function by H(t) = ∫₀^t h(v) dv, the quantile function by G(s) = H^{−1}(s), and its density by g(s) = G′(s), t, s ∈ [0, 1]; i.e., tj = G(j/n), j = 0, . . . , n.
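A small sketch (function names are ours): given the quantile function G = H^{−1}, the design knots are just t_i = G(i/n). For example, the regular-sequence density h(t) = 2t has H(t) = t² and G(s) = √s:

```python
import math

def knots_from_density(G, n):
    # Sampling design T_n generated by a density h via H(t_i) = i/n,
    # i.e. t_i = G(i/n) with G = H^{-1} the quantile function.
    return [G(i / n) for i in range(n + 1)]

# h(t) = 2t is positive and continuous on [0, 1] (a regular sequence);
# knots cluster where h is large, here near t = 1.
T = knots_from_density(math.sqrt, 4)
print(T)  # [0.0, 0.5, 0.707..., 0.866..., 1.0]
```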

SLIDE 16

Previous results

◮ (Seleznjev, Buslaev, 1999) The optimal approximation rate of linear methods for X ∈ C0,β[0, 1] is n^{−β}.
◮ (Seleznjev, 2000) Results on Hermite spline approximation when X ∈ Bm,β([0, 1], c(·)) and regular sequences of sampling designs are used:

||X − Hk(X, Tn)|| ∼ n^{−(m+β)} as n → ∞, m ≤ k.

SLIDES 17-19

Processes of interest

Let X(t), t ∈ [0, 1], be a stochastic process whose l-th q.m. derivative satisfies Hölder's condition on [0, 1] with 0 < α ≤ 1. Moreover, the process is q.m. differentiable up to order m on the left-open interval (0, 1]. The m-th derivative is locally Hölder with 0 < β ≤ 1 and locally stationary on any [a, b] ⊂ (0, 1] with the same β. We denote this by X ∈ BVm,β((0, 1], c(·), V(·)) ∩ Cl,α([0, 1], M).

Examples:
◮ X1(t) = Y(t^{1/2}), t ∈ [0, 1], where Y(t), t ∈ [0, 1], is a fractional Brownian motion with Hurst parameter H,
◮ X2(t) = t^{9/10} Y(t), t ∈ [0, 1], where Y(t), t ∈ [0, 1], is a zero mean stationary process with Cov(Y(t), Y(s)) = exp(−(t − s)²).

SLIDE 20

Problem formulation

We have a process whose l-th derivative is α-Hölder on [0, 1]. Can we get an approximation rate better than n^{−(l+α)}?

SLIDE 21

Regularly varying functions

A positive function f(·) is called regularly varying (on the right) at the origin with index ρ if, for any λ > 0,

f(λx)/f(x) → λ^ρ as x → 0+.

We denote this property by f ∈ Rρ(0+).
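A quick numerical illustration (not in the talk): f(x) = x^ρ log(1/x) is regularly varying at 0+ with index ρ, because the slowly varying log factor cancels from the ratio as x → 0+:

```python
import math

def f(x, rho=0.5):
    # Regularly varying at 0+ with index rho: a power times a slowly varying factor.
    return x ** rho * math.log(1.0 / x)

lam = 3.0
for x in (1e-2, 1e-8, 1e-12):
    print(f(lam * x) / f(x))  # approaches lam**0.5 = 3**0.5 ≈ 1.732 as x -> 0+
```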

SLIDES 22-24

Assumptions and conditions

Let us recall the notation: H(t) = ∫₀^t h(v) dv, G(s) = H^{−1}(s), and g(s) = G′(s), t, s ∈ [0, 1]. We formulate the following assumptions on the local Hölder function and the sequence generating density:

(A1) there exists a function r ∈ Rρ(0+), ρ > 0, such that r(s) ≥ g(s), s ∈ [0, a], for some a > 0,
(A2) there exists a function R ∈ Rρ(0+), ρ > 0, such that R(H(t)) ≥ V(t)^{1/2}, t ∈ [0, a], for some a > 0.

Moreover, we formulate the following conditions on the behavior of the introduced functions in a neighborhood of zero:

(C1) assumption (A1) holds with r(·) such that V(t) r(H(t))^{2(m+β)} → 0 as t → 0,
(C2) for a given 1 ≤ p < ∞, assumptions (A1) and (A2) hold with r(·) and R(·) such that, for some b > 0, R(H(t)) r(H(t))^{m+β} ∈ Lp[0, b].

SLIDES 25-26

Optimal rate recovery

In this section we use the convention that if p = ∞, then 1/p = 0.

Theorem
Let X ∈ BVm,β((0, 1], c(·), V(·)) ∩ Cl,α([0, 1], M), l + α ≤ m + β, be interpolated by a composite Hermite spline Hq,k(X, Tn), l ≤ q ≤ 2l + 1, m ≤ k ≤ 2m + 1, where Tn is a qRS(h). Let condition (C1) be satisfied if p = ∞, or alternatively, for 1 ≤ p < ∞, let condition (C2) hold. If, additionally,

r(s) = o(s^{(m+β)/(α+l+1/p)−1}) as s → 0, (4)

then

lim_{n→∞} n^{m+β} ||X − Hq,k(X, Tn)||_p = b^{m,β}_{k,p} ||c^{1/2} h^{−(m+β)}||_p.

SLIDE 27

For a composite spline Hq,k, we define asymptotic optimality of the sequence of sampling designs T*_n by

lim_{n→∞} ||X − Hq,k(X, T*_n)||_p / inf_{T∈Dn} ||X − Hq,k(X, T)||_p = 1,

where Dn := {Tn : 0 = t0 < t1 < . . . < tn = 1} denotes the set of all (n + 1)-point designs.

SLIDES 28-29

Asymptotically optimal density

Let γ := 1/(m + β + 1/p), 1 ≤ p < ∞. Define

h*(t) := c(t)^{γ/2} / ∫₀¹ c(s)^{γ/2} ds, t ∈ [0, 1],

and denote by g*(s), s ∈ [0, 1], the corresponding quantile density function.

Proposition
Let X ∈ BVm,β((0, 1], c(·), V(·)) ∩ Cl,α([0, 1], M), l + α ≤ m + β, such that V(t) = O(c(t)) as t → 0 and c, V ∈ Rρ(0+), be interpolated by a composite Hermite spline Hq,k(X, Tn), l ≤ q ≤ 2l + 1, m ≤ k ≤ 2m + 1, where Tn is a qRS(h). If the condition

g*(s) = o(s^{(m+β)/(α+l+1/p)−1}) as s → 0, (5)

is satisfied, then h*(·) is asymptotically optimal, and

lim_{n→∞} n^{m+β} ||X − Hq,k(X, Tn(h*))||_p = b^{m,β}_{k,p} ||c^{1/2}||_γ.
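As a numerical sketch (ours, not from the talk), the optimal density can be normalized by quadrature. For Example 1 below one has c(t) = (4t)^{−H}; with H = 0.8, m = 0, β = H, p = 2, the density h*(t) ∝ (4t)^{−Hγ/2} is integrable although unbounded at t = 0:

```python
H, m, beta, p = 0.8, 0, 0.8, 2
gamma = 1.0 / (m + beta + 1.0 / p)

def c(t):
    # local stationarity function of Example 1: c(t) = (4t)^{-H}
    return (4.0 * t) ** (-H)

N = 200000
# normalizing constant int_0^1 c(s)^{gamma/2} ds, midpoint rule
Z = sum(c((i + 0.5) / N) ** (gamma / 2) for i in range(N)) / N

def h_star(t):
    return c(t) ** (gamma / 2) / Z

# h* is a probability density: it integrates to 1 (checked on a different grid)
M = 50000
mass = sum(h_star((i + 0.5) / M) for i in range(M)) / M
print(mass)  # ≈ 1.0
```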

SLIDE 30

Remark

The Theorem and Proposition above extend to the class Bm,1((0, 1], c(·)) ∩ Cl,α([0, 1], M), l + α ≤ m + 1, with the assumptions and conditions formulated in terms of the local stationarity function c(·) instead of the function V(·).

SLIDE 31

Undersmoothing

SLIDE 32

Undersmoothing

For any q ≤ k < m, denote ck(t) := ||X^{(k+1)}(t)||², and consider the following modification of assumption (A2):

(A2′) there exists a function R ∈ Rρ(0+), ρ > 0, such that R(H(t)) ≥ ck(t)^{1/2}, t ∈ [0, a], for some a > 0.

Let us reformulate the conditions in terms of the function ck(·):

(C1′) assumption (A1) holds with r(·) such that ck(t) r(H(t))^{2(k+1)} → 0 as t → 0,
(C2′) for a given 1 ≤ p < ∞, assumptions (A1) and (A2′) hold with r(·) and R(·) such that, for some b > 0, R(H(t)) r(H(t))^{k+1} ∈ Lp[0, b].

Observe that in this case we do not require the local Hölder property, and the results are formulated for the locally stationary class of processes.

SLIDE 33

Theorem
Let X ∈ Bm,β((0, 1], c(·)) ∩ Cl,α([0, 1], M), l + α ≤ m + β, be interpolated by a composite Hermite spline Hq,k(X, Tn), l ≤ q ≤ 2l + 1, q ≤ k < m, where Tn is a qRS(h). Let condition (C1′) be satisfied if p = ∞, or alternatively, for 1 ≤ p < ∞, let condition (C2′) hold. If, additionally,

r(s) = o(s^{(k+1)/(α+l+1/p)−1}) as s → 0,

then

lim_{n→∞} n^{k+1} ||X − Hq,k(X, Tn)||_p = b^{k,1}_{k,p} ||ck^{1/2} h^{−(k+1)}||_p.

SLIDE 34

Asymptotically optimal density

We introduce the asymptotically optimal density when 1 ≤ p < ∞. Let γk := 1/(k + 1 + 1/p). Define

h*_k(t) := ck(t)^{γk/2} / ∫₀¹ ck(s)^{γk/2} ds,

and denote by g*_k(s), s ∈ [0, 1], the corresponding quantile density function.

Proposition
Let X ∈ Bm,β((0, 1], c(·)) ∩ Cl,α([0, 1], M), l + α ≤ m + β, such that ck ∈ Rρ(0+), be interpolated by a composite Hermite spline Hq,k(X, Tn), l ≤ q ≤ 2l + 1, q ≤ k < m, where Tn is a qRS(h). If the condition

g*_k(s) = o(s^{(k+1)/(α+l+1/p)−1}) as s → 0,

is satisfied, then h*_k(·) is asymptotically optimal, and

lim_{n→∞} n^{k+1} ||X − Hq,k(X, Tn(h*_k))||_p = b^{k,1}_{k,p} ||ck^{1/2}||_{γk}.

SLIDE 35

Numerical experiments and examples

When choosing the knot distribution, we consider densities of the form

hλ(t) = (1/λ) t^{1/λ − 1}.

This leads to ti = (i/n)^λ, and therefore such densities are called power densities.
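A one-line sketch of these designs (names ours): Hλ(t) = t^{1/λ}, so Gλ(s) = s^λ. λ = 1 gives the uniform design, while larger λ clusters knots near the singularity at t = 0:

```python
def power_design(lam, n):
    # Knots generated by h_lam(t) = (1/lam) * t**(1/lam - 1):
    # H_lam(t) = t**(1/lam), hence t_i = (i/n)**lam.
    return [(i / n) ** lam for i in range(n + 1)]

print(power_design(1.0, 4))  # uniform: [0.0, 0.25, 0.5, 0.75, 1.0]
print(power_design(2.1, 4))  # knots pushed toward t = 0
```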

SLIDES 36-38

Example 1

Let BH = BH(t), t ∈ [0, 1], denote a fractional Brownian motion with Hurst parameter 0 < H ≤ 1, i.e., a zero mean Gaussian process, starting at zero, with covariance function

KBH(s, t) = (1/2)(|t|^{2H} + |s|^{2H} − |t − s|^{2H}).

Consider now a time changed version of the process BH, X(t) := BH(√t), t ∈ [0, 1]. Then X ∈ BV0,H((0, 1], c(·), V(·)) ∩ C0,H/2([0, 1], 1), where c(t) = V(t) = (4t)^{−H}.

Let H = 0.8. We measure the error of the piecewise linear approximation in the uniform mean norm (p = ∞). The conditions of the Theorem are satisfied if λ > 2H/(H + 2/p). In our experiments we consider the following choices of λ: λ1,∞ = 1 and λ2,∞ = 2.1.
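The constants c(t) = V(t) = (4t)^{−H} can be checked directly (our sketch, not from the talk): since ||BH(a) − BH(b)|| = |a − b|^H, the increment norm of X(t) = BH(√t) is |√(t+s) − √t|^H, and dividing by s^H gives (1/(2√t))^H = (4t)^{−H/2} in the limit:

```python
import math

H = 0.8

def incr_norm(t, s):
    # ||X(t+s) - X(t)|| for X(t) = B_H(sqrt(t)): exact from the fBm covariance,
    # since ||B_H(a) - B_H(b)|| = |a - b|**H.
    return abs(math.sqrt(t + s) - math.sqrt(t)) ** H

t = 0.3
print((4 * t) ** (-H / 2))  # c(t)^{1/2}
for s in (1e-2, 1e-4, 1e-6):
    print(incr_norm(t, s) / s ** H)  # -> c(t)^{1/2} as s -> 0
```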

SLIDE 39

Figure: Comparison of the uniform mean errors for the uniform density hλ1,∞(·) and for hλ2,∞(·), in log-log scale.

SLIDE 40

Figure: Convergence of the n²-scaled uniform mean errors to the asymptotic constant for the generating density hλ2,∞(·).

SLIDES 41-43

Example 2

Let Y(t), t ∈ [0, 1], be a zero mean Gaussian process with covariance kernel KY(s, t) = exp{−(s − t)²}. Such a process has infinitely many q.m. derivatives; hence the rate of convergence of Hermite spline approximation is limited only by the order of the spline. We investigate a distorted version of the process, X(t) := t^{0.9} Y(t), t ∈ [0, 1]. Then X ∈ Bm,1((0, 1], cm(·)) ∩ C0,0.9([0, 1], 1) for any m ≥ 0, where cm(t) = ||Y^{(m+1)}(t)||².

SLIDE 44

Consider now an approximation by the composite Hermite spline H1,3. The conditions of the Theorem are satisfied when λ > 4/(0.9 + 1/p), with

c3(t) = 1680 t^{9/5} − (44631/1250) t^{−11/5} − (16929/125000) t^{−21/5} + (8424/5) t^{−1/5} + (4322241/108) t^{−31/5}.

To evaluate the integrated mean error (p = 2) we consider the following choices of the generating density parameter: λ1,2 = 1, λ2,2 = 3, λ3,2 = 4, λ4,2 = 5.

SLIDE 45

Figure: Comparison of the integrated mean errors (p = 2) for the densities hλ1,2(·), hλ2,2(·), hλ3,2(·), and hλ4,2(·), in log-log scale.

SLIDE 46

Figure: Ratio of the integrated mean errors (p = 2) for the densities hλ2,2(·), hλ3,2(·), and hλ4,2(·) to those for the optimal density h*3(·).

SLIDE 47

Bibliography

  • S.M. Berman. Sojourns and extremes of Gaussian processes. Ann. Probab. 2, 999-1026, 1974; corrections: 8, 999 (1980); 12, 281 (1984).
  • O. Seleznjev, A. Buslaev. On certain extremal problems in theory of approximation of random processes. East Jour. Approx. 5, 467-481, 1999.
  • O. Seleznjev. Spline approximation of random processes and design problems. Journal of Statistical Planning and Inference 84, 249-262, 2000.