

  1. Combining information from different sources: A resampling-based approach. S.N. Lahiri, Department of Statistics, North Carolina State University. May 17, 2013.

  2. Overview: Background; Examples/Potential applications; Theoretical framework; Combining information; Uncertainty quantification by the bootstrap.

  3. Introduction/Example - Ozone data. The EPA runs computer models to generate hourly ozone estimates (cf. the Community Multiscale Air Quality (CMAQ) system) at a resolution of 10-mile squares.

  4. Introduction/Example - Ozone data. There also exists a network of ground monitoring stations that report the O3 levels.

  5. Introduction. There are many other examples of spatially indexed datasets that report measurements on an atmospheric variable at different spatial supports. Our goal is to combine the information from different sources to come up with a better estimate of the true spatial surface.

  6. Introduction. Consider a function $m(\cdot)$ on a bounded domain $D \subset \mathbb{R}^d$ that we want to estimate using data from two different sources. Data Source 1: the resolution of Data Source 1 is coarse; it gives only an averaged version of $m(\cdot)$ over a grid, up to an additive noise. Thus, Data Source 1 corresponds to data generated by a satellite or by computer models at a given level of resolution.

  7. Introduction. Data Source 2, on the other hand, gives point-wise measurements of $m(\cdot)$ and has additive noise that is different from the noise variables of Data Source 1. Thus, Data Source 2 corresponds to data generated by ground stations or monitoring stations.

  8. Introduction. Error structure: we suppose that the noise variables within each source are correlated; further, the variables from the two sources are possibly cross-correlated. But we do NOT want to impose any specific distributional structure on the error variables or on their joint distributions. Goals: combine the data from the two sources to estimate the function $m(\cdot)$ at a given resolution (finer than that of Source 1), and quantify the associated uncertainty.

  9. Theoretical Formulation. For simplicity, suppose that $d = 2$ and $D = [0,1]^2$. Data Source 1: the underlying random process is given by $Y(\mathbf{i}) = m(\mathbf{i}; \Delta) + \epsilon(\mathbf{i})$, $\mathbf{i} \in \mathbb{Z}^d$, where $m(\mathbf{i}; \Delta) = \Delta^{-d} \int_{\Delta(\mathbf{i} + [0,1]^d)} m(\mathbf{s})\, d\mathbf{s}$, $\Delta \in (0, \infty)$, and where $\{\epsilon(\mathbf{i}) : \mathbf{i} \in \mathbb{Z}^d\}$ is a zero-mean second-order stationary process. The observed variables are $\{Y(\mathbf{i}) : \Delta(\mathbf{i} + [0,1)^d) \cap [0,1)^d \neq \emptyset\} \equiv \{Y(\mathbf{i}_k) : k = 1, \ldots, N\}$.
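A minimal simulation sketch of the Source-1 model may help fix ideas. The test surface $m$, the noise level, and the moving-average construction of the correlated noise below are illustrative assumptions, not from the talk:

```python
# A sketch of the Source-1 data model: cell averages m(i; Delta) over a
# coarse grid, plus correlated noise. The surface m, the noise level 0.1,
# and the moving-average noise are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def m(s):
    """A smooth test surface on [0, 1]^2 (hypothetical choice)."""
    return np.sin(2 * np.pi * s[..., 0]) * np.cos(np.pi * s[..., 1])

delta = 0.1                       # grid spacing Delta
G = int(1 / delta)                # cells per axis; N = G^2 cells cover [0,1]^2

# m(i; Delta): average of m over each Delta-cell, via midpoint quadrature.
q = (np.arange(20) + 0.5) / 20    # 20 x 20 quadrature points per cell
cell_means = np.empty((G, G))
for i in range(G):
    for j in range(G):
        sx, sy = np.meshgrid(delta * (i + q), delta * (j + q), indexing="ij")
        cell_means[i, j] = m(np.stack([sx, sy], axis=-1)).mean()

# Zero-mean, second-order stationary noise eps(i): a 3 x 3 moving average
# of iid normals, which induces correlation between neighboring cells.
w = rng.standard_normal((G + 2, G + 2))
eps = sum(w[a:a + G, b:b + G] for a in range(3) for b in range(3)) / 3.0

Y = cell_means + 0.1 * eps        # observed variables {Y(i_k) : k = 1, ..., N}
print(Y.shape)                    # (10, 10)
```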

  10. Data Source 1: coarse-grid data (spacing $= \Delta$). [Figure: a regular coarse grid of $\Delta$-cells covering the unit square $[0,1]^2$.]

  11. Data Source 2: point-support measurements. The underlying random process is given by $Z(\mathbf{s}) = m(\mathbf{s}) + \eta(\mathbf{s})$, $\mathbf{s} \in \mathbb{R}^d$, where $\{\eta(\mathbf{s}) : \mathbf{s} \in \mathbb{R}^d\}$ is a zero-mean second-order stationary process on $\mathbb{R}^d$. The observed variables are $\{Z(\mathbf{s}_i) : i = 1, \ldots, n\}$, where $\mathbf{s}_1, \ldots, \mathbf{s}_n$ are generated by iid uniform random vectors over $[0,1]^d$.
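A companion sketch of the Source-2 model, under the same illustrative assumptions (the noise is taken iid here only for brevity; the model requires only second-order stationarity):

```python
# A sketch of the Source-2 data model: point-support measurements at iid
# uniform sites. The surface m and the noise level are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def m(s):
    return np.sin(2 * np.pi * s[..., 0]) * np.cos(np.pi * s[..., 1])

n = 200
sites = rng.uniform(size=(n, 2))               # s_1, ..., s_n ~ Uniform([0,1]^2)
Z = m(sites) + 0.1 * rng.standard_normal(n)    # Z(s_i) = m(s_i) + eta(s_i)
print(sites.shape, Z.shape)                    # (200, 2) (200,)
```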

  12. Data Source 2: point-support data. [Figure: irregularly scattered point-observation sites in the unit square $[0,1]^2$.]

  13. Theoretical Formulation. Let $\{\varphi_j : j \ge 1\}$ be an orthonormal basis (ONB) of $L^2[0,1]^d$ and let $m(\cdot) \in L^2[0,1]^d$. Then $m(\mathbf{s}) = \sum_{j \ge 1} \beta_j \varphi_j(\mathbf{s})$, where $\sum_{j \ge 1} \beta_j^2 < \infty$. We consider a finite approximation $m(\mathbf{s}) \approx \sum_{j=1}^{J} \beta_j \varphi_j(\mathbf{s}) \equiv m_J(\mathbf{s})$. Our goal is to combine the data from the two sources to estimate the parameters $\{\beta_j : j = 1, \ldots, J\}$.
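As a concrete instance, here is a tensor-product cosine ONB of $L^2[0,1]^2$ and the truncated expansion $m_J$. The basis choice and test function are illustrative; the talk only assumes some ONB $\{\varphi_j\}$:

```python
# Sketch: a tensor-product cosine ONB of L^2[0,1]^2 and the truncated
# expansion m_J. Basis and test surface are illustrative choices.
import numpy as np

def phi_1d(k, x):
    """Orthonormal cosine basis of L^2[0,1]: 1, sqrt(2) cos(pi k x), ..."""
    return np.ones_like(x) if k == 0 else np.sqrt(2) * np.cos(np.pi * k * x)

def phi(j, s):
    """Tensor-product basis phi_j on [0,1]^2, indexed by pairs j = (k, l)."""
    k, l = j
    return phi_1d(k, s[..., 0]) * phi_1d(l, s[..., 1])

def m(s):
    return np.sin(2 * np.pi * s[..., 0]) * np.cos(np.pi * s[..., 1])

# beta_j = integral of m * phi_j over [0,1]^2, via a midpoint quadrature grid
# (the grid mean approximates the integral, since the domain has measure 1).
g = (np.arange(400) + 0.5) / 400
sx, sy = np.meshgrid(g, g, indexing="ij")
grid = np.stack([sx, sy], axis=-1)
J = [(k, l) for k in range(5) for l in range(5)]        # first 25 basis terms
beta = {j: (m(grid) * phi(j, grid)).mean() for j in J}

m_J = sum(beta[j] * phi(j, grid) for j in J)            # truncated expansion
print(np.sqrt(((m(grid) - m_J) ** 2).mean()))           # L^2 truncation error
```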

  14. Estimation on a fine grid. The finite approximation to $m(\cdot)$ may be thought of as a finer-resolution approximation with grid spacing $\delta \ll \Delta$. [Figure: a fine grid of spacing $\delta$ over the unit square.]

  15. Estimation of the $\beta_j$'s. From Data Set 1, $\{Y(\mathbf{i}_k) : k = 1, \ldots, N\}$, we have $\hat\beta_j^{(1)} = N^{-1} \sum_{k=1}^{N} Y(\mathbf{i}_k)\, \varphi_j(\mathbf{i}_k \Delta)$. It is easy to check that for $\Delta$ small, $E\hat\beta_j^{(1)} = N^{-1} \sum_{k=1}^{N} m(\mathbf{i}_k; \Delta)\, \varphi_j(\mathbf{i}_k \Delta) \approx N^{-1} \Delta^{-d} \sum_{k=1}^{N} \int_{(\mathbf{i}_k + [0,1]^d)\Delta} m(\mathbf{s}) \varphi_j(\mathbf{s})\, d\mathbf{s} = \int_{[0,1]^d} m(\mathbf{s}) \varphi_j(\mathbf{s})\, d\mathbf{s} \big/ [N \Delta^d] \approx \beta_j$.
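A numerical sketch of this estimator, for one basis function, checked against $\beta_j$ computed by quadrature. The test surface, basis function, and iid noise are illustrative assumptions:

```python
# Sketch: beta_hat^(1)_j = N^{-1} sum_k Y(i_k) phi_j(i_k Delta) on
# simulated coarse-grid data; compared with beta_j by fine quadrature.
import numpy as np

rng = np.random.default_rng(0)

def m(s):
    return np.sin(2 * np.pi * s[..., 0]) * np.cos(np.pi * s[..., 1])

def phi_j(s):
    """One tensor-product cosine basis function (illustrative choice)."""
    return 2.0 * np.cos(np.pi * s[..., 0]) * np.cos(np.pi * s[..., 1])

delta = 0.05
G = int(1 / delta)                    # N = G^2 cells, so N * delta^2 = 1

# Cell averages m(i; Delta) via midpoint quadrature; iid noise for brevity.
q = (np.arange(10) + 0.5) / 10
cells = np.empty((G, G))
for i in range(G):
    for j in range(G):
        sx, sy = np.meshgrid(delta * (i + q), delta * (j + q), indexing="ij")
        cells[i, j] = m(np.stack([sx, sy], axis=-1)).mean()
Y = cells + 0.05 * rng.standard_normal((G, G))

# beta_hat^(1)_j: average of Y(i_k) * phi_j(i_k * Delta) over the N cells.
ii, jj = np.meshgrid(np.arange(G), np.arange(G), indexing="ij")
corners = np.stack([ii * delta, jj * delta], axis=-1)
beta_hat1 = (Y * phi_j(corners)).mean()

# True beta_j = integral of m * phi_j over [0,1]^2, via a fine midpoint grid.
g = (np.arange(1000) + 0.5) / 1000
gx, gy = np.meshgrid(g, g, indexing="ij")
fine = np.stack([gx, gy], axis=-1)
beta_j = (m(fine) * phi_j(fine)).mean()
print(beta_hat1, beta_j)              # close for small Delta; beta_j = 4/(3*pi)
```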

  16. Estimation of the $\beta_j$'s. From Data Set 2, $\{Z(\mathbf{s}_i) : i = 1, \ldots, n\}$, we have $\hat\beta_j^{(2)} = n^{-1} \sum_{i=1}^{n} Z(\mathbf{s}_i)\, \varphi_j(\mathbf{s}_i)$. It is easy to check that as $n \to \infty$, $E[\hat\beta_j^{(2)} \mid \mathcal{S}] = n^{-1} \sum_{i=1}^{n} m(\mathbf{s}_i)\, \varphi_j(\mathbf{s}_i) \to \int_{[0,1]^d} m(\mathbf{s}) \varphi_j(\mathbf{s})\, d\mathbf{s} = \beta_j$ a.s., where $\mathcal{S}$ is the $\sigma$-field generated by the random vectors generating the data locations.
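The Source-2 estimator is a Monte Carlo average over the uniform sites, which is why it converges a.s. to $\beta_j$. A sketch with the same illustrative $m$ and $\varphi_j$ as above:

```python
# Sketch: beta_hat^(2)_j = n^{-1} sum_i Z(s_i) phi_j(s_i), a Monte Carlo
# average over iid Uniform([0,1]^2) sites. Illustrative m and phi_j.
import numpy as np

rng = np.random.default_rng(2)

def m(s):
    return np.sin(2 * np.pi * s[..., 0]) * np.cos(np.pi * s[..., 1])

def phi_j(s):
    return 2.0 * np.cos(np.pi * s[..., 0]) * np.cos(np.pi * s[..., 1])

n = 5000
sites = rng.uniform(size=(n, 2))
Z = m(sites) + 0.05 * rng.standard_normal(n)
beta_hat2 = (Z * phi_j(sites)).mean()
print(beta_hat2)      # approaches beta_j = 4/(3*pi) ~ 0.424 as n grows
```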

  17. Introduction. The estimator from Data Set $k \in \{1, 2\}$ is $\hat m^{(k)}(\cdot) = \sum_{j=1}^{J} \hat\beta_j^{(k)} \varphi_j(\cdot)$. We shall consider a combined estimator of $m(\cdot)$ of the form $\hat m(\cdot) = a_1 \hat m^{(1)}(\cdot) + a_2 \hat m^{(2)}(\cdot)$, where $a_1, a_2 \in \mathbb{R}$ and $a_1 + a_2 = 1$.
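Since both estimators expand in the same basis, the combination acts coefficient-wise; a one-line sketch (the numbers are hypothetical coefficient values):

```python
# Combining the two fitted surfaces = combining their coefficient vectors.
import numpy as np

def combine(beta1, beta2, a1):
    """Coefficients of a1 * m_hat^(1) + (1 - a1) * m_hat^(2)."""
    return a1 * np.asarray(beta1) + (1.0 - a1) * np.asarray(beta2)

print(combine([0.41, -0.02], [0.44, 0.01], a1=0.3))   # [0.431  0.001]
```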

  18. Combined estimator of $m(\cdot)$. Many choices of $a_1 \in \mathbb{R}$ (with $a_2 = 1 - a_1$) are possible. Here we seek an optimal choice of $a_1$ that minimizes the MISE, $E \int_{[0,1]^d} \bigl[\hat m(\mathbf{s}) - m_J(\mathbf{s})\bigr]^2\, d\mathbf{s}$. Evidently, this depends on the joint correlation structure of the error processes from Data Sources 1 and 2.

  19. Optimal $a_1$. More precisely, it can be shown that the optimal choice of $a_1$ is given by $a_1^0 = -\sum_{j=1}^{J} E\bigl[(\hat\beta_j^{(1)} - \hat\beta_j^{(2)})(\hat\beta_j^{(2)} - \beta_j)\bigr] \big/ \sum_{j=1}^{J} E\bigl[(\hat\beta_j^{(1)} - \hat\beta_j^{(2)})^2\bigr]$. Since each $\hat\beta_j^{(k)}$ is a linear function of the observations from Data Set $k \in \{1, 2\}$, the numerator and the denominator of the optimal $a_1$ depend on the joint covariance structure of the processes $\{\epsilon(\mathbf{i}) : \mathbf{i} \in \mathbb{Z}^d\}$ and $\{\eta(\mathbf{s}) : \mathbf{s} \in \mathbb{R}^d\}$. Note that the $\varphi_j$'s drop out of the formula for the MISE-optimal $a_1^0$ due to the ONB property of $\{\varphi_j : j \ge 1\}$.
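A routine calculation (not spelled out on the slide) explains the formula, including its sign. Write $D_j = \hat\beta_j^{(1)} - \hat\beta_j^{(2)}$ and $e_j = \hat\beta_j^{(2)} - \beta_j$, so that $\hat m - m_J$ has coefficients $a_1 D_j + e_j$. By Parseval's identity, $\mathrm{MISE}(a_1) = \sum_{j=1}^{J} E[a_1 D_j + e_j]^2 = a_1^2 \sum_j E D_j^2 + 2 a_1 \sum_j E[D_j e_j] + \sum_j E e_j^2$, a quadratic in $a_1$. Setting its derivative to zero yields the minimizer $a_1^0 = -\sum_j E[D_j e_j] \big/ \sum_j E[D_j^2]$, which is the displayed expression.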

  20. Joint correlation structure. We shall suppose that $\{\epsilon(\mathbf{i}) : \mathbf{i} \in \mathbb{Z}^d\}$ is second-order stationary (SOS) with covariogram $\sigma(\mathbf{k}) = \mathrm{Cov}(\epsilon(\mathbf{i}), \epsilon(\mathbf{i} + \mathbf{k}))$ for all $\mathbf{i}, \mathbf{k} \in \mathbb{Z}^d$; that $\{\eta(\mathbf{s}) : \mathbf{s} \in \mathbb{R}^d\}$ is SOS with covariogram $\tau(\mathbf{h}) = \mathrm{Cov}(\eta(\mathbf{s}), \eta(\mathbf{s} + \mathbf{h}))$ for all $\mathbf{s}, \mathbf{h} \in \mathbb{R}^d$; and that the cross-covariance between the $\epsilon(\cdot)$'s and the $\eta(\cdot)$'s is given by $\mathrm{Cov}(\epsilon(\mathbf{i}), \eta(\mathbf{s})) = \gamma(\mathbf{i} - \mathbf{s})$ for all $\mathbf{i} \in \mathbb{Z}^d$, $\mathbf{s} \in \mathbb{R}^d$, for some function $\gamma : \mathbb{R}^d \to \mathbb{R}$.

  21. Joint correlation structure. This formulation is somewhat non-standard, as the two component spatial processes have different supports. Example: consider a zero-mean SOS bivariate process $\{(\eta_1(\mathbf{s}), \eta_2(\mathbf{s})) : \mathbf{s} \in \mathbb{R}^d\}$ with autocovariance matrix $\Sigma(\cdot) = ((\sigma_{ij}(\cdot)))$. Let $\eta(\mathbf{s}) = \eta_1(\mathbf{s})$ and $\epsilon(\mathbf{i}) = \Delta^{-d} \int_{[\mathbf{i} + [0,1)^d]\Delta} \eta_2(\mathbf{s})\, d\mathbf{s}$, $\mathbf{i} \in \mathbb{Z}^d$. Then $\mathrm{Cov}(\epsilon(\mathbf{i}), \epsilon(\mathbf{i} + \mathbf{k}))$ depends only on $\mathbf{k}$ for all $\mathbf{i}, \mathbf{k} \in \mathbb{Z}^d$ (given by an integral of $\sigma_{22}(\cdot)$), and $\mathrm{Cov}(\epsilon(\mathbf{i}), \eta(\mathbf{s}))$ depends only on $\mathbf{i} - \mathbf{s}$ for all $\mathbf{i} \in \mathbb{Z}^d$, $\mathbf{s} \in \mathbb{R}^d$ (given by an integral of $\sigma_{12}(\cdot)$).

  22. Estimation of $a_1^0$. Recall that the optimal $a_1^0 = -\sum_{j=1}^{J} E\bigl[(\hat\beta_j^{(1)} - \hat\beta_j^{(2)})(\hat\beta_j^{(2)} - \beta_j)\bigr] \big/ \sum_{j=1}^{J} E\bigl[(\hat\beta_j^{(1)} - \hat\beta_j^{(2)})^2\bigr]$ depends on the population joint covariogram of the error processes, which is typically unknown. It is possible to derive an asymptotic approximation to $a_1^0$ that involves only some summary characteristics of these functions (such as $\int \tau(\mathbf{h})\, d\mathbf{h}$ and $\sum_{\mathbf{k} \in \mathbb{Z}^d} \sigma(\mathbf{k})$), and to use plug-in estimates, as sketched below.
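Here are simple plug-in estimates of these two summary characteristics. The residual inputs `eps_hat` and `eta_hat`, the lag cutoff, and the estimators themselves are illustrative assumptions, not the talk's estimators:

```python
# Sketch: plug-in estimates of sum_k sigma(k) (gridded residuals) and of
# the integral of tau(h) (point-support residuals). Illustrative choices.
import numpy as np

def sum_sigma(eps_hat, max_lag=3):
    """Estimate sum_{k in Z^2} sigma(k) by summing empirical
    autocovariances over lags |k1|, |k2| <= max_lag (a tuning choice)."""
    e = eps_hat - eps_hat.mean()
    G1, G2 = e.shape
    total = 0.0
    for k1 in range(-max_lag, max_lag + 1):
        for k2 in range(-max_lag, max_lag + 1):
            a = e[max(k1, 0):G1 + min(k1, 0), max(k2, 0):G2 + min(k2, 0)]
            b = e[max(-k1, 0):G1 + min(-k1, 0), max(-k2, 0):G2 + min(-k2, 0)]
            total += (a * b).mean()
    return total

def int_tau(eta_hat):
    """Estimate the integral of tau(h) over R^2 from zero-mean residuals at
    n iid Uniform([0,1]^2) sites, via
    E[(sum_i eta_i)^2 - sum_i eta_i^2] = n(n-1) * int int tau(s - t) ds dt,
    which approximates int tau(h) dh for short-range tau (edge effects
    aside). Assumes eta_hat is already centered by a fitted mean surface."""
    e = np.asarray(eta_hat)
    n = e.size
    return (e.sum() ** 2 - (e ** 2).sum()) / (n * (n - 1))

rng = np.random.default_rng(3)
print(sum_sigma(rng.standard_normal((20, 20))))   # ~ sigma(0) = 1 for iid input
print(int_tau(rng.standard_normal(400)))          # ~ 0 for white-noise input
```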

  23. Estimation of $a_1^0$. However, the limiting formulae depend on the asymptotic regime one employs (the relative growth rates of $n$ and $N$, and the strength of dependence). The accuracy of these approximations is not very good, even for $d = 2$, due to edge effects. These issues with the asymptotic approximations suggest that we may want to use a data-based method, such as the spatial block bootstrap or subsampling, which more closely mimics the behavior in finite samples.
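To make the resampling idea concrete, here is a deliberately simplified, hypothetical block-bootstrap sketch: tile $[0,1]^2$ into blocks, resample whole blocks with replacement jointly for both sources (preserving within-block and cross-source dependence), and replace the expectations in $a_1^0$ by bootstrap averages. The tuning choices ($b$, $B$), the single basis function, and the block-mean approximation are all assumptions of this sketch, not the talk's procedure:

```python
# Schematic spatial block bootstrap for a_1^0 (simplified sketch).
import numpy as np

rng = np.random.default_rng(4)

def m(s):
    return np.sin(2 * np.pi * s[..., 0]) * np.cos(np.pi * s[..., 1])

def phi_j(s):
    return 2.0 * np.cos(np.pi * s[..., 0]) * np.cos(np.pi * s[..., 1])

# --- simulate both sources (illustrative, as in the earlier sketches) ---
delta, G = 0.05, 20
ii, jj = np.meshgrid(np.arange(G), np.arange(G), indexing="ij")
corners = np.stack([ii * delta, jj * delta], axis=-1)
Y = m(corners + delta / 2) + 0.1 * rng.standard_normal((G, G))  # crude cell values
n = 1000
sites = rng.uniform(size=(n, 2))
Z = m(sites) + 0.1 * rng.standard_normal(n)

# --- assign every grid cell and every site to one of b x b spatial blocks ---
b = 4
nblk = b * b
cell_blk = (corners[..., 0] * b).astype(int) * b + (corners[..., 1] * b).astype(int)
site_blk = (sites[:, 0] * b).astype(int) * b + (sites[:, 1] * b).astype(int)

# Per-block sums/counts of the weighted observations Y * phi_j and Z * phi_j,
# so each bootstrap replicate is a weighted mean over the resampled blocks.
wY = (Y * phi_j(corners)).ravel()
wZ = Z * phi_j(sites)
cell_sum = np.bincount(cell_blk.ravel(), weights=wY, minlength=nblk)
cell_cnt = np.bincount(cell_blk.ravel(), minlength=nblk)
site_sum = np.bincount(site_blk, weights=wZ, minlength=nblk)
site_cnt = np.bincount(site_blk, minlength=nblk)

beta2_hat = wZ.mean()     # original-sample beta_hat^(2), playing beta_j's role
B = 500
num = den = 0.0
for _ in range(B):
    pick = rng.integers(0, nblk, size=nblk)     # resample blocks w/ replacement
    b1 = cell_sum[pick].sum() / cell_cnt[pick].sum()
    b2 = site_sum[pick].sum() / max(site_cnt[pick].sum(), 1)
    num += (b1 - b2) * (b2 - beta2_hat)         # bootstrap analog of E[D_j e_j]
    den += (b1 - b2) ** 2                       # bootstrap analog of E[D_j^2]

a1_hat = -num / den                             # plug-in estimate of a_1^0
print(a1_hat)
```

Resampling the same blocks for both sources is what lets the bootstrap moments pick up the cross-covariance $\gamma(\cdot)$ without ever modeling it explicitly.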
