
Nonparametric estimation in a multiplicative noise model

Charlotte Dion (1),(2), joint work with Fabienne Comte (2)
(1) LJK, UMR CNRS 5224, Université Grenoble Alpes, Grenoble
(2) MAP5, UMR CNRS 8145, Université Paris Descartes, Paris Cité
Monday, April 18, 2016

Charlotte Dion JPS 18/04/2016 Les Houches 1 / 22

Motivation: the model

Nonnegative random variable $X$: height, weight, ..., time between the first symptom of a disease and the death of the patient → survival data.
Interest: nonparametric estimation of
- the density function $f$,
- the survival function $\bar F(x) = \mathbb{P}(X > x) = \int_x^{+\infty} f(u)\,\mathrm{d}u$
(from which quantities such as $\mathbb{E}[X] = \int_0^{+\infty} \bar F(u)\,\mathrm{d}u$ and $\mathbb{E}[h(X)]$ follow).

Motivation: the model

Classical noise model: $Y_i = X_i + \varepsilon_i$, $i = 1, \dots, n$, with $\mathbb{E}[\varepsilon_i] = 0$. But often the noise depends on the level of the signal:
$Y_i = X_i + \alpha X_i \varepsilon_i$, $\alpha \in \mathbb{R}$, i.e. $Y_i = X_i(1 + \alpha\varepsilon_i)$.

Model studied here:
$Y_i = X_i U_i$, $i = 1, \dots, n$, $U_i \sim \mathcal{U}_{[1-a,1+a]}$, $0 < a < 1$, with $\mathbb{E}[U_i] = 1$.

→ What does it represent? A partial transmission of the information $X_i$, up to an error of order $\pm 100a\%$:
- unintentionally, during a survey,

What does it represent?

- deliberately, to mask some data.

[Figure: $X$, $Y$, and $X$ vs $Y$, for $a = 0.5$]

Motivation: literature, case $U_i \sim \mathcal{U}_{[0,1]}$

- Vardi (1989), Vardi and Zhang (1992): asymptotic framework; link with the deconvolution method. If $\varepsilon \sim \mathcal{E}(1)$, then $\exp(-\varepsilon) \sim \mathcal{U}_{[0,1]}$.
- The density function of a nonnegative random variable $Y$ is decreasing ⇔ $Y = XU$ with $U \sim \mathcal{U}_{[0,1]}$ independent of $X$.
- Asgharian et al. (2012): asymptotic nonparametric estimation of $\bar F$.
- Brunel et al. (2015): (non-asymptotic) adaptive estimator of $f$ and of the survival function $\bar F$, optimal rates of convergence.

Model

$Y_i = X_i U_i$, $i = 1, \dots, n$, $U_i \sim \mathcal{U}_{[1-a,1+a]}$, $0 < a < 1$, where
- the $(X_i)_{i=1,\dots,n}$ and $(U_i)_{i=1,\dots,n}$ are independent,
- the $X_i$ are i.i.d. with density $f$,
- the $U_i$ are i.i.d. with density $\mathcal{U}_{[1-a,1+a]}$, $a$ known,
- the $Y_i$ are observed, i.i.d. with density $f_Y$ on $\mathbb{R}^+$.
Issue: how can we estimate the density $f$ and the associated survival function $\bar F$?
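As a quick illustration, the model is straightforward to simulate. The signal law (a Gamma density) and the parameter values below are hypothetical choices, since the slides do not fix them:

```python
import numpy as np

rng = np.random.default_rng(0)
n, a = 2000, 0.25                      # sample size and noise level (illustrative)

# Hypothetical signal law: X_i i.i.d. Gamma(4, 1); the method itself
# never uses this knowledge, only the observed Y_i.
X = rng.gamma(shape=4.0, scale=1.0, size=n)
U = rng.uniform(1 - a, 1 + a, size=n)  # U_i ~ U[1-a, 1+a], so E[U_i] = 1
Y = X * U                              # only Y_1, ..., Y_n are observed

print(U.mean())                        # close to 1: the noise is centered at 1
print((Y / X).min(), (Y / X).max())    # multiplicative error stays in [1-a, 1+a]
```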

Notations

$L^2(\mathbb{R}^+) = \{ t : \mathbb{R}^+ \to \mathbb{R},\ \int_0^{+\infty} |t(x)|^2\,\mathrm{d}x < +\infty \}$, with the associated scalar product $\langle t, v\rangle = \int_0^{+\infty} t(x)v(x)\,\mathrm{d}x$ and norm $\|t\|^2 = \int_0^{+\infty} |t(x)|^2\,\mathrm{d}x$.
If $t$ is bounded: $\|t\|_\infty = \sup_{x\in\mathbb{R}^+} |t(x)|$.

Assumption: $f \in L^2(\mathbb{R}^+)$.

Density $f_Y$

$f_Y(y) = \frac{1}{2a} \int_{y/(1+a)}^{y/(1-a)} \frac{f(x)}{x}\,\mathrm{d}x, \quad y \in\, ]0, +\infty[.$
- If $\|f\|_\infty < +\infty$, then $\|f_Y\|_\infty < +\infty$.
- $y f_Y(y) \to 0$ as $y \to 0$ and as $y \to +\infty$.
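This expression follows from the standard density formula for a product of independent variables, using $f_U = \frac{1}{2a}\mathbf{1}_{[1-a,1+a]}$:

```latex
f_Y(y) = \int_0^{+\infty} f_U\!\Big(\frac{y}{x}\Big)\, f(x)\, \frac{\mathrm{d}x}{x}
       = \frac{1}{2a} \int_{\{x \,:\, 1-a \le y/x \le 1+a\}} \frac{f(x)}{x}\,\mathrm{d}x
       = \frac{1}{2a} \int_{y/(1+a)}^{y/(1-a)} \frac{f(x)}{x}\,\mathrm{d}x .
```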

Auxiliary function

For $t$ bounded, differentiable, with $t' \in L^2(\mathbb{R}^+)$,
$\mathbb{E}[t(Y_1) + Y_1 t'(Y_1)] = \frac{1}{2a} \int_0^{+\infty} t(y) \Big[ f\Big(\frac{y}{1+a}\Big) - f\Big(\frac{y}{1-a}\Big) \Big] \mathrm{d}y = \langle t, g\rangle,$
where
$g(x) := \frac{1}{2a} \Big[ f\Big(\frac{x}{1+a}\Big) - f\Big(\frac{x}{1-a}\Big) \Big], \quad x \in \mathbb{R}^+.$
In short, $\mathbb{E}[\psi_t(Y_1)] = \langle t, g\rangle$ with $\psi_t(y) := t(y) + y t'(y)$.

→ Strategy (different from the case $U \sim \mathcal{U}_{[0,1]}$):
- Build a projection estimator of $g$.
- Look for an inversion formula to get $f$.
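The key identity $\mathbb{E}[\psi_t(Y_1)] = \langle t, g\rangle$ can be checked numerically. The signal law ($X \sim \mathcal{E}(1)$) and the test function $t(y) = e^{-y}$ below are illustrative choices, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
n, a = 200_000, 0.25

f = lambda x: np.exp(-x)                       # hypothetical density of X: Exp(1)
X = rng.exponential(1.0, size=n)
Y = X * rng.uniform(1 - a, 1 + a, size=n)

# g(x) = (1/(2a)) [ f(x/(1+a)) - f(x/(1-a)) ]
g = lambda x: (f(x / (1 + a)) - f(x / (1 - a))) / (2 * a)

# t(y) = exp(-y)  =>  psi_t(y) = t(y) + y t'(y) = (1 - y) exp(-y)
psi_t = lambda y: (1 - y) * np.exp(-y)

lhs = psi_t(Y).mean()                          # Monte Carlo estimate of E[psi_t(Y_1)]

x = np.linspace(0.0, 50.0, 500_001)            # quadrature grid for <t, g>
vals = np.exp(-x) * g(x)
rhs = np.sum((vals[:-1] + vals[1:]) / 2 * np.diff(x))   # trapezoidal rule

print(lhs, rhs)    # the two sides agree up to Monte Carlo error
```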

From the definition of $g$ and successive substitutions $x \mapsto \frac{1+a}{1-a}x$:

$f\Big(\frac{x}{1+a}\Big) - f\Big(\frac{x}{1-a}\Big) = 2a\,g(x)$

$f(x) - f\Big(\frac{1+a}{1-a}x\Big) = 2a\,g((1+a)x)$

$f\Big(\frac{1+a}{1-a}x\Big) - f\Big(\Big(\frac{1+a}{1-a}\Big)^2 x\Big) = 2a\,g\Big(\frac{1+a}{1-a}(1+a)x\Big)$

$\vdots$

$f\Big(\Big(\frac{1+a}{1-a}\Big)^{N-1} x\Big) - f\Big(\Big(\frac{1+a}{1-a}\Big)^N x\Big) = 2a\,g\Big(\Big(\frac{1+a}{1-a}\Big)^{N-1}(1+a)x\Big)$

Summing the telescoping terms:

$f(x) - f\Big(\Big(\frac{1+a}{1-a}\Big)^N x\Big) = 2a \sum_{k=0}^{N-1} g\Big(\Big(\frac{1+a}{1-a}\Big)^k (1+a)x\Big),$

which leads to the definition

$f_N(x) := 2a \sum_{k=0}^{N-1} g\Big(\Big(\frac{1+a}{1-a}\Big)^k (1+a)x\Big).$
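This inversion formula is easy to check numerically when $f$, hence $g$, is known in closed form; taking $f$ to be the Exp(1) density is an illustrative choice:

```python
import numpy as np

a = 0.25
f = lambda x: np.exp(-x)                # hypothetical true density: Exp(1)
g = lambda x: (f(x / (1 + a)) - f(x / (1 - a))) / (2 * a)

def f_N(x, N):
    # f_N(x) = 2a * sum_{k=0}^{N-1} g( ((1+a)/(1-a))^k * (1+a) * x )
    r = (1 + a) / (1 - a)
    return 2 * a * sum(g(r ** k * (1 + a) * x) for k in range(N))

# The telescoping sum gives f(x) - f_N(x) = f(((1+a)/(1-a))^N x) -> 0,
# so f_N(1.0, N) converges to f(1) = exp(-1) as N grows.
for N in (1, 3, 10, 30):
    print(N, f_N(1.0, N))
```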

Projection

$f(x) - f_N(x) = f\big(((1+a)/(1-a))^N x\big)$, which gives $\|f - f_N\| \to 0$ as $N \to \infty$.
Notice that $f \in L^2(\mathbb{R}^+) \Rightarrow g \in L^2(\mathbb{R}^+)$.
Orthonormal basis of $L^2(\mathbb{R}^+)$: $(\varphi_j)_{j\in\mathbb{N}}$, with
$g(x) = \sum_{j=0}^{\infty} a_j(g)\varphi_j(x), \qquad a_j(g) = \langle \varphi_j, g\rangle.$
For $m \in \mathcal{M}_n \subset \mathbb{N}$, $g_m := \sum_{j=0}^{m-1} a_j(g)\varphi_j$ is the projection of $g$ on $S_m = \mathrm{Vect}\{\varphi_0, \varphi_1, \dots, \varphi_{m-1}\}$, with
$a_j(g) = \langle\varphi_j, g\rangle = \mathbb{E}[\varphi_j(Y_1) + Y_1\varphi_j'(Y_1)] = \mathbb{E}[\psi_{\varphi_j}(Y_1)].$

Estimator of $g$ and $f$

$\hat g_m = \sum_{j=0}^{m-1} \hat a_j \varphi_j, \qquad \hat a_j = \frac{1}{n}\sum_{i=1}^{n} [Y_i\varphi_j'(Y_i) + \varphi_j(Y_i)] = n^{-1}\sum_{i=1}^{n} \psi_{\varphi_j}(Y_i).$

Then, as $f_N(x) = 2a \sum_{k=0}^{N-1} g\big(((1+a)/(1-a))^k (1+a)x\big)$, we set

$\hat f_{N,m}(x) = 2a \sum_{k=0}^{N-1} \hat g_m\big(((1+a)/(1-a))^k (1+a)x\big).$

Choice: Laguerre basis

$\varphi_0(x) = \sqrt{2}\,e^{-x}$, $\varphi_k(x) = \sqrt{2}\,L_k(2x)e^{-x}$ for $k \ge 1$, $x \ge 0$, with $L_k$ the $k$-th Laguerre polynomial:
$L_k(x) = \sum_{j=0}^{k} (-1)^j \binom{k}{j} \frac{x^j}{j!}.$
→ Orthonormal basis: $\langle\varphi_j, \varphi_k\rangle = \delta_{j,k}$.
$\forall j \ge 0$, $\|\varphi_j\|_\infty \le \sqrt{2}$, and $\|\varphi_j'\|_\infty \le 2\sqrt{2}(j+1)$, where $\varphi_j'$ is the derivative of $\varphi_j$.
→ Every function in $L^2(\mathbb{R}^+)$ can be decomposed on this basis.
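This basis is available in NumPy via `numpy.polynomial.laguerre`; a quick numerical sanity check of the orthonormality claim (the grid and the tested index pairs are arbitrary choices):

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def phi(j, x):
    # phi_0(x) = sqrt(2) e^{-x},  phi_j(x) = sqrt(2) L_j(2x) e^{-x}
    c = np.zeros(j + 1)
    c[j] = 1.0                     # coefficient vector selecting L_j
    return np.sqrt(2.0) * lagval(2.0 * x, c) * np.exp(-x)

# Numerical check of orthonormality <phi_j, phi_k> = delta_{jk} on R+
x = np.linspace(0.0, 60.0, 600_001)
dx = x[1] - x[0]
for j, k in [(0, 0), (1, 1), (3, 3), (0, 1), (2, 5)]:
    ip = np.sum(phi(j, x) * phi(k, x)) * dx   # rough Riemann sum
    print(j, k, round(ip, 4))                 # ~1 on the diagonal, ~0 off it
```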

MISE: mean integrated squared error (1)

Definition: $\mathbb{E}[\|\hat g_m - g\|^2] = \|g - \mathbb{E}[\hat g_m]\|^2 + \mathbb{E}[\|\mathbb{E}[\hat g_m] - \hat g_m\|^2]$. Here $\mathbb{E}[\hat g_m] = g_m$. Thus
$\mathbb{E}[\|\hat g_m - g\|^2] = \underbrace{\|g - g_m\|^2}_{\text{bias term}} + \underbrace{\mathbb{E}[\|g_m - \hat g_m\|^2]}_{\text{variance term}}.$

MISE: mean integrated squared error (2)

Proposition. Assume that $\mathbb{E}[X_1^2] < +\infty$.
(i) The estimator $\hat g_m$ of $g$ satisfies
$\mathbb{E}[\|\hat g_m - g\|^2] \le \|g - g_m\|^2 + c_1\frac{m}{n} + c_2\frac{m^3}{n}, \qquad c_1 = 4,\ c_2 = 16\,\mathbb{E}[Y_1^2].$
(ii) The estimator $\hat f_{N,m}$ of $f$ satisfies
$\mathbb{E}[\|\hat f_{N,m} - f\|^2] \le \frac{8a^2}{(\sqrt{1+a} - \sqrt{1-a})^2}\Big( \|g - g_m\|^2 + c_1\frac{m}{n} + c_2\frac{m^3}{n} \Big) + 2\Big(\frac{1-a}{1+a}\Big)^N \|f\|^2.$

Adaptive selection procedure

$\mathbb{E}[\|\hat g_m - g\|^2] \le \|g - g_m\|^2 + c_1\frac{m}{n} + c_2\frac{m^3}{n}, \qquad c_1 = 4,\ c_2 = 16\,\mathbb{E}[Y_1^2].$
Discrete collection $\mathcal{M}_n = \{m \in \{1,\dots,n\} : m^3 \le n\}$,
$m_{\mathrm{th}} := \arg\min_{m\in\mathcal{M}_n} \Big\{ \|g - g_m\|^2 + c_1\frac{m}{n} + c_2\frac{m^3}{n} \Big\}.$
But $\|g - g_m\|^2 = \|g\|^2 - \|g_m\|^2$, so
$m_{\mathrm{th}} = \arg\min_{m\in\mathcal{M}_n} \Big\{ -\|g_m\|^2 + c_1\frac{m}{n} + c_2\frac{m^3}{n} \Big\}.$
Penalty term: $\mathrm{pen}(m) := \kappa_1\frac{m}{n} + \kappa_2\,\mathbb{E}[Y_1^2]\frac{m^3}{n} =: \mathrm{pen}_1(m) + \mathrm{pen}_2(m)$.
But $\mathbb{E}[Y_1^2]$ is unknown → $\widehat{C^2} = \frac{1}{n}\sum_{k=1}^{n} Y_k^2$, and
$\hat m = \arg\min_{m\in\mathcal{M}_n} \{ -\|\hat g_m\|^2 + \widehat{\mathrm{pen}}(m) \}, \qquad \widehat{\mathrm{pen}}(m) = 2\kappa_1\frac{m}{n} + 2\kappa_2\widehat{C^2}\frac{m^3}{n}.$
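A rough end-to-end sketch of the selection step in the Laguerre basis: estimate the coefficients $\hat a_j$, then minimize the penalized contrast. The signal law and the constants $\kappa_1, \kappa_2$ below are illustrative assumptions, since the slides do not give calibrated values:

```python
import numpy as np
from numpy.polynomial.laguerre import lagval, lagder

rng = np.random.default_rng(2)
n, a = 2000, 0.25
X = rng.exponential(1.0, size=n)          # hypothetical signal law
Y = X * rng.uniform(1 - a, 1 + a, size=n)

def phi(j, y):
    c = np.zeros(j + 1); c[j] = 1.0
    return np.sqrt(2.0) * lagval(2.0 * y, c) * np.exp(-y)

def phi_prime(j, y):
    c = np.zeros(j + 1); c[j] = 1.0
    # (sqrt(2) L_j(2y) e^{-y})' = sqrt(2) e^{-y} (2 L_j'(2y) - L_j(2y))
    return np.sqrt(2.0) * np.exp(-y) * (2 * lagval(2.0 * y, lagder(c)) - lagval(2.0 * y, c))

m_max = int(np.floor(n ** (1 / 3)))       # collection {m : m^3 <= n}
a_hat = np.array([np.mean(phi(j, Y) + Y * phi_prime(j, Y)) for j in range(m_max)])

# Penalized criterion: -||g_hat_m||^2 + pen_hat(m), with ||g_hat_m||^2 = sum_{j<m} a_hat_j^2
# and pen_hat(m) = 2*kappa1*m/n + 2*kappa2*C2*m^3/n.
kappa1, kappa2 = 1.0, 0.01                # illustrative calibration constants
C2 = np.mean(Y ** 2)
crit = [-np.sum(a_hat[:m] ** 2) + 2 * kappa1 * m / n + 2 * kappa2 * C2 * m ** 3 / n
        for m in range(1, m_max + 1)]
m_hat = 1 + int(np.argmin(crit))
print(m_hat)                              # selected dimension
```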

Oracle-type inequality

Final estimator:
$\hat f_{N,\hat m}(x) = 2a\sum_{k=0}^{N-1} \hat g_{\hat m}\Big(\Big(\frac{1+a}{1-a}\Big)^k (1+a)x\Big).$

Theorem. Assume that $f$ is bounded and that $\mathbb{E}[X_1^8] < +\infty$. For the final estimator $\hat f_{N,\hat m}$, there exists $\kappa_0$ such that for $\kappa_1, \kappa_2 \ge \kappa_0$,
$\mathbb{E}[\|\hat f_{N,\hat m} - f\|^2] \le \frac{16a^2}{(\sqrt{1+a} - \sqrt{1-a})^2}\Big( 6\inf_{m\in\mathcal{M}_n}\{\|g - g_m\|^2 + \mathrm{pen}(m)\} + \frac{C_a}{n} \Big) + \Big(\frac{1-a}{1+a}\Big)^N \|f\|^2,$
where $C_a$ is a positive constant depending on $a$ and $\|f\|_\infty$.

Results on simulated data

[Figure: Mixed gamma case, $n = 2000$, $a = 0.25$. 10 final estimators $\hat g_{\hat m}$ of $g$ (left), 20 estimators $\hat f_{N,\hat m}$ of $f$ (right).]

And for the survival function?

Function of interest: $\bar F(x) = 1 - F(x) = \int_x^{+\infty} f(u)\,\mathrm{d}u$, with $\bar F_Y$ the analogue for $Y$, and:
$G(x) := \int_x^{+\infty} g(u)\,\mathrm{d}u = \frac{1}{2a}\Big[ (1+a)\bar F\Big(\frac{x}{1+a}\Big) - (1-a)\bar F\Big(\frac{x}{1-a}\Big) \Big] = x f_Y(x) + \bar F_Y(x).$
$\bar F_N(x) := \frac{2a}{1+a}\sum_{k=0}^{N-1} \Big(\frac{1-a}{1+a}\Big)^k G\Big(\Big(\frac{1+a}{1-a}\Big)^k (1+a)x\Big)$
→ $G(0) = 1$, thus $\lim_{N\to\infty}\bar F_N(0) = 1$, which is coherent with $\bar F(0) = 1$.
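The limit at $0$ is just the geometric series with ratio $(1-a)/(1+a) < 1$, evaluated with $G(0) = 1$:

```latex
\bar F_N(0) = \frac{2a}{1+a}\sum_{k=0}^{N-1}\Big(\frac{1-a}{1+a}\Big)^k
\;\xrightarrow[N\to\infty]{}\;
\frac{2a}{1+a}\cdot\frac{1}{1-\frac{1-a}{1+a}}
= \frac{2a}{1+a}\cdot\frac{1+a}{2a} = 1.
```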

And for the survival function?

Assumption: $\mathbb{E}[X_1^2] < +\infty$; then $\bar F \in L^2(\mathbb{R}^+)$ and $G \in L^2(\mathbb{R}^+)$.
Orthogonal projection of $G$ on $S_m$: $G_m = \sum_{j=0}^{m-1} b_j(G)\varphi_j$, with
$b_j(G) := \langle G, \varphi_j\rangle = \mathbb{E}[Y\varphi_j(Y)] + \langle \bar F_Y, \varphi_j\rangle.$
Projection estimator:
$\widehat G_m = \sum_{j=0}^{m-1} \hat b_j\varphi_j, \qquad \hat b_j = \frac{1}{n}\sum_{i=1}^{n}\Big[ Y_i\varphi_j(Y_i) + \int_{\mathbb{R}^+}\varphi_j(x)\mathbf{1}_{Y_i \ge x}\,\mathrm{d}x \Big],$
$\widehat{\bar F}_{N,m}(x) = \frac{2a}{1+a}\sum_{k=0}^{N-1}\Big(\frac{1-a}{1+a}\Big)^k \widehat G_m\Big(\Big(\frac{1+a}{1-a}\Big)^k (1+a)x\Big).$

- Better results than a classical deconvolution strategy.
- Transposition of the method to the estimation of the survival function.
- A kernel estimator can be used instead of the projection estimator.
- Application to data protection.

References

- Belomestny, D., Comte, F. and Genon-Catalot, V. (2016). Laguerre estimation for k-monotone densities observed with noise. Preprint HAL.
- Brunel, E., Comte, F. and Genon-Catalot, V. (2015). Nonparametric density and survival function estimation in the multiplicative censoring model. To appear in TEST.
- Comte, F. and Dion, C. (2016). Nonparametric estimation in a multiplicative censoring model with symmetric noise. Preprint HAL.
- Sinha, B., Nayak, T. and Zayatz, L. (2011). Privacy protection and quantile estimation from noise multiplied data. Sankhya B.
- Vardi, Y. and Zhang, C.-H. (1992). Large sample study of empirical distributions in a random-multiplicative censoring model. The Annals of Statistics.

Thank you for your attention!

If $a$ is unknown?

- A K-sample is available where the signal is constant → we have observations of $U$: $U_1^{(1)}, \dots, U_K^{(1)}$
→ ML estimator: $\max_{1\le i\le K} |U_i^{(1)} - 1|$.
- Repeated observations of the $X_i$ are available: $Y_{i,k} = X_i U_{i,k}$, $k \in \{1,2\}$, $i = 1,\dots,n$, where $(U_{i,1})_i$ and $(U_{i,2})_i$ are independent samples from $\mathcal{U}_{[1-a,1+a]}$. Then
$\mathbb{E}\big[ Y_{i,1}^2 / Y_{i,2}^2 \big] = \frac{1 + a^2/3}{1 - a^2},$
so that
$\hat a_n = \sqrt{\frac{\bar W_n - 1}{\bar W_n + 1/3}}, \qquad \bar W_n = \frac{1}{n}\sum_{i=1}^{n} W_i,\ W_i := \frac{Y_{i,1}^2}{Y_{i,2}^2}.$
→ we can plug this estimator in.
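The moment-based estimator of $a$ from paired observations can be checked on simulated data; the signal law (Gamma) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n, a = 5000, 0.25
X = rng.gamma(4.0, 1.0, size=n)            # hypothetical signal law
U1 = rng.uniform(1 - a, 1 + a, size=n)     # two independent noise samples
U2 = rng.uniform(1 - a, 1 + a, size=n)
Y1, Y2 = X * U1, X * U2                    # repeated observations Y_{i,1}, Y_{i,2}

# W_i = Y_{i,1}^2 / Y_{i,2}^2 = U_{i,1}^2 / U_{i,2}^2 does not depend on X_i,
# and E[W_i] = (1 + a^2/3) / (1 - a^2), which inverts to
# a^2 = (E[W] - 1) / (E[W] + 1/3).
W_bar = np.mean((Y1 / Y2) ** 2)
a_hat = np.sqrt((W_bar - 1) / (W_bar + 1 / 3))
print(a_hat)                               # close to the true a = 0.25
```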

Real data: confidentiality protection

How can we alter the data so as to minimize the risk of disclosure while remaining able to recover the main characteristics of the original dataset?
Percentage of people under the poverty level for the 51 states of the USA: Sinha et al. (2011), $n = 51$, $a = 0.1$ (from the American Community Survey).

[Figure: $a = 0.1$ and $a = 0.5$]

Results on real data: $\hat f_{N,\hat m}$ better than $\hat f_{Y,m}$?

- Klein, M., Mathew, T. and Sinha, B. (2013). A Comparison of Statistical Disclosure Control Methods: Multiple Imputation Versus ... Research report series.

[Figure: Histogram of the real data $X_i$ with full multiplicative noise, $a = 0.5$, $Y_i = X_i U_i$. Dashed: projection estimator of $f$ on the $(X_i)_i$; solid: estimator $\hat f_{N,\hat m}$; solid: projection estimator of $f_Y$ on the $(Y_i)_i$.]

Results on real data

[Figure: Before: $Y$ vs $X$; after: $X_{\mathrm{new}}$ vs $X$]

→ Our estimates of Q1, Q3, min, and max are closer to those of $X$ than the values of Sinha et al. (2011).