
How to quantify nutrient export: Additive Biomass functions for spruce fit with Nonlinear Seemingly Unrelated Regression



1. How to quantify nutrient export: Additive Biomass functions for spruce fit with Nonlinear Seemingly Unrelated Regression
IBS-DR Biometry Workshop, Würzburg
Christian Vonderach, Forest Research Institute Baden-Württemberg
Würzburg, 07.10.2015

2. Outline
1 Introduction
2 Data
3 Methods applied: Nonlinear Seemingly Unrelated Regression
4 Results: NSUR fit, comparison
5 Discussion

3. What it is about
EnNa: Energywood and sustainability (funded by FNR)
Harvesting removes wood (i.e. C) and also nutrients (Ca, K, Mg, P, ...) → sustainability is required regarding C and also regarding Ca, K, Mg, P, ...
Nutrient balance: $NB = \underbrace{VW + DP - SI}_{\text{soil}} - HV$ with $HV = \sum_{\text{trees}} \sum_{\text{compartments}}$
Nutrient concentration differs between compartments → compartment-specific biomass functions are required
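As a minimal numeric illustration of the balance, the sketch below plugs values into $NB = (VW + DP - SI) - HV$. All flux values, compartment masses and nutrient concentrations are invented, and reading VW, DP and SI as soil input/output fluxes is an assumption for demonstration only.

```python
# Hypothetical annual fluxes of one nutrient (e.g. Ca) in kg/ha/yr; the values and the
# reading of VW, DP, SI as soil input/output fluxes are assumptions for illustration only.
VW = 4.0   # assumed soil input flux
DP = 2.5   # assumed soil input flux
SI = 1.0   # assumed soil output flux

# Harvest export HV: sum over harvested trees and their compartments of
# compartment biomass [kg] times nutrient concentration [kg nutrient per kg biomass].
harvested_trees = [  # trees assumed harvested on one hectare in the balance year
    {"coarse wood": (350.0, 0.0008), "small wood": (40.0, 0.0020), "needles": (25.0, 0.0045)},
    {"coarse wood": (280.0, 0.0008), "small wood": (35.0, 0.0020), "needles": (20.0, 0.0045)},
]
HV = sum(mass * conc for tree in harvested_trees for mass, conc in tree.values())

NB = (VW + DP - SI) - HV   # nutrient balance: soil terms minus harvest export
print(f"HV = {HV:.2f} kg/ha, NB = {NB:.2f} kg/ha")
```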

4. Collected data
spruce (Picea abies)
6 data compilations (incl. Wirth et al., 2004)
homogenisation of compartments: stump/B, coarse wood/B, small wood, needles
≈ 1200 trees (only referenced shown; +CH | DK | B)

5. Overview of collected data
[Figure: histograms of the number of data sets over age, dbh and height; scatterplots of aboveground biomass [kg] against dbh [cm] and of height [m] against dbh [cm]]

6. General methodological design
wanted: biomass functions for all compartments and for the total mass, maintaining additivity (see Parresol, 2001)
1. $BM_{total} = \sum BM_{comp}$ with $\operatorname{var}(\hat{y}_{total}) = \sum_{i=1}^{c} \operatorname{var}(\hat{y}_i) + 2 \sum_{i<j} \operatorname{cov}(\hat{y}_i, \hat{y}_j)$
2. Nonlinear Seemingly Unrelated Regression (NSUR)

7. General methodological design (continued)
NSUR requires a rectangular data set (i.e. no NAs), but some of the studies contain NAs → complete-case imputation

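To make the additivity constraint and the variance formula from slide 6 concrete, here is a minimal numpy sketch; the compartment predictions and their covariance matrix are invented purely for illustration.

```python
import numpy as np

# Compartment predictions for one tree and their (invented) covariance matrix.
y_hat = np.array([310.0, 45.0, 28.0])      # e.g. coarse wood, small wood, needles [kg]
cov_hat = np.array([
    [400.0, 30.0, 10.0],
    [ 30.0, 64.0,  8.0],
    [ 10.0,  8.0, 25.0],
])

# Additivity: the total prediction is the sum of the compartment predictions.
y_total = y_hat.sum()

# var(y_total) = sum of the variances + 2 * sum of the pairwise covariances,
# which equals 1' * Cov * 1.
ones = np.ones(len(y_hat))
var_total = ones @ cov_hat @ ones
var_total_check = np.trace(cov_hat) + 2 * cov_hat[np.triu_indices(3, k=1)].sum()
assert np.isclose(var_total, var_total_check)
print(y_total, var_total)
```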

9. SUR: seemingly unrelated regression I
Linear SUR regression, see Zellner (1962):
$y_{sur} = X\beta + \epsilon$ with $\epsilon \sim N(0, \Sigma_c \otimes I_N)$  (1)
with the stacked column vectors $y_{sur} = [y_1' \, y_2' \, \cdots \, y_M']'$, $\beta = [\beta_1' \, \beta_2' \, \cdots \, \beta_M']'$ and error term $\epsilon = [\epsilon_1' \, \epsilon_2' \, \cdots \, \epsilon_M']'$.
The design matrix $X$ is now block diagonal:
$X = \begin{pmatrix} X_1 & 0 & \cdots & 0 \\ 0 & X_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & X_M \end{pmatrix}$
where $N$ = number of observations and $M$ = number of equations.
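A small sketch of how the stacked response vector and the block-diagonal design matrix of eq. (1) can be assembled; the three equations, the intercept-plus-dbh regressors and the simulated data are placeholders, not the model actually used in the talk.

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical per-equation data: three compartment equations, each with its own
# design matrix (here simply an intercept and dbh) and response vector.
rng = np.random.default_rng(0)
N = 50                                   # observations (trees), shared by all equations
dbh = rng.uniform(10, 60, N)
X_blocks = [np.column_stack([np.ones(N), dbh]) for _ in range(3)]   # X_1, X_2, X_3
y_blocks = [rng.normal(5 + 0.4 * dbh, 1.0) for _ in range(3)]       # y_1, y_2, y_3

# Stack the responses and build the block-diagonal design matrix of eq. (1).
y_sur = np.concatenate(y_blocks)         # [y_1' y_2' y_3']'
X_sur = block_diag(*X_blocks)            # diag(X_1, X_2, X_3)
print(y_sur.shape, X_sur.shape)          # (150,) and (150, 6)
```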

10. SUR: seemingly unrelated regression II
The variance-covariance matrix of the errors is:
$\Sigma = \Sigma_c \otimes I_N = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1M} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2M} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{M1} & \sigma_{M2} & \cdots & \sigma_{MM} \end{pmatrix} \otimes I_N$  (2)
Zellner (1962, p. 350) and Rossi et al. (2005, p. 66): "In a formal sense, we regard (1) as a single-equation regression model [...]." "Given $\Sigma$, we can transform (1) into a system with uncorrelated errors [...] by a matrix $H$, so that $E(H\epsilon\epsilon'H') = H\Sigma H' = I$." "This means that, if we premultiply both sides of (1) by [$H$], the transformed system has uncorrelated errors."
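The Kronecker structure of eq. (2) and one concrete choice for the whitening matrix $H$ can be checked numerically. The matrix Sigma_c below is an arbitrary example, and taking $H$ as the transposed Cholesky factor of $\Sigma_c^{-1}$ is only one valid construction of a matrix with $H\Sigma H' = I$.

```python
import numpy as np

# A hypothetical 3x3 contemporaneous error covariance (Sigma_c) and its expansion
# to the full system covariance via the Kronecker product of eq. (2).
N = 50
Sigma_c = np.array([
    [1.00, 0.40, 0.20],
    [0.40, 0.80, 0.10],
    [0.20, 0.10, 0.50],
])
Sigma = np.kron(Sigma_c, np.eye(N))      # Sigma_c ⊗ I_N, shape (3N, 3N)

# One valid whitening matrix H: the transposed Cholesky factor of Sigma_c^{-1};
# premultiplying the system by H then decorrelates the errors, H Sigma H' = I.
H_c = np.linalg.cholesky(np.linalg.inv(Sigma_c)).T   # whitener for Sigma_c
H = np.kron(H_c, np.eye(N))                          # whitener for the full system
print(np.allclose(H @ Sigma @ H.T, np.eye(3 * N)))   # True
```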

11. SUR: seemingly unrelated regression III
The resulting model fulfills the LS assumptions and the LS estimator is (Zellner, 1962):
$\hat{\beta}_{sur} = (X'H'HX)^{-1} X'H'Hy = (X'\Sigma^{-1}X)^{-1} X'\Sigma^{-1}y$  (3)
where the covariance matrix of the estimator is:
$\operatorname{Var}(\hat{\beta}) = (X'\Sigma^{-1}X)^{-1}$  (4)
where $\Sigma^{-1} = \Sigma_c^{-1} \otimes I_N$  (5)
BUT: $\Sigma$ is not known and must be estimated from the data.
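Since $\Sigma$ is unknown, a common two-step (feasible GLS) version of eq. (3) first estimates $\Sigma_c$ from equation-wise OLS residuals and then plugs it into the GLS formula. The sketch below illustrates this on simulated data; it is a standard illustration, not the estimator used for the biomass fit itself.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
N, M = 50, 3
dbh = rng.uniform(10, 60, N)
X_blocks = [np.column_stack([np.ones(N), dbh]) for _ in range(M)]
y_blocks = [rng.normal(2 + 0.3 * (j + 1) * dbh, 1.0) for j in range(M)]

# Step 1: OLS per equation, collect the residuals.
resid = []
for Xj, yj in zip(X_blocks, y_blocks):
    bj, *_ = np.linalg.lstsq(Xj, yj, rcond=None)
    resid.append(yj - Xj @ bj)
E = np.column_stack(resid)               # N x M residual matrix
Sigma_c_hat = (E.T @ E) / N              # estimate of the contemporaneous covariance

# Step 2: GLS with Sigma^{-1} = Sigma_c^{-1} ⊗ I_N, eqs. (3)-(5).
X = block_diag(*X_blocks)
y = np.concatenate(y_blocks)
Sigma_inv = np.kron(np.linalg.inv(Sigma_c_hat), np.eye(N))
beta_sur = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ y)   # eq. (3)
var_beta = np.linalg.inv(X.T @ Sigma_inv @ X)                          # eq. (4)
print(beta_sur)
```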

12. Weighted nonlinear seemingly unrelated regression I
In the non-linear case the model is (see Parresol, 2001):
$y_{nsur} = f(X, \beta) + \epsilon$ with $\epsilon \sim N(0, \Sigma \otimes I_N)$  (6)
with the stacked column vectors $y_{nsur} = [y_1' \, y_2' \, \cdots \, y_M']'$, $f = [f_1' \, f_2' \, \cdots \, f_M']'$ and error term $\epsilon = [\epsilon_1' \, \epsilon_2' \, \cdots \, \epsilon_M']'$.
If a weighted regression is needed (as in this case):
$\Psi(\theta) = \begin{pmatrix} \Psi_1(\theta_1) & 0 & \cdots & 0 \\ 0 & \Psi_2(\theta_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \Psi_M(\theta_M) \end{pmatrix}$  (7)
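As an illustration of the block-diagonal weight matrix in eq. (7), the sketch below builds $\Psi(\theta)$ from a power-of-the-mean variance function per equation. This particular variance model, the expected values and the exponents are assumptions; the slide does not state the weights actually used.

```python
import numpy as np
from scipy.linalg import block_diag

def psi_block(mu, theta):
    """Diagonal weight block for one equation: var(eps_i) proportional to mu_i^(2*theta)."""
    return np.diag(mu ** (2.0 * theta))

N = 50
dbh = np.linspace(10, 60, N)
# Hypothetical expected compartment masses and variance exponents, one per equation.
mu_blocks = [300 * (dbh / 30) ** 2.3, 40 * (dbh / 30) ** 1.8, 25 * (dbh / 30) ** 1.5]
thetas = [0.9, 0.8, 0.7]

Psi = block_diag(*[psi_block(mu, th) for mu, th in zip(mu_blocks, thetas)])
print(Psi.shape)                         # (3N, 3N), block diagonal as in eq. (7)
```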

13. Weighted nonlinear seemingly unrelated regression II
Considering a univariate gnls fit, the estimated parameter vector minimises the (weighted) sum of squares of the residuals
$S(\beta) = \epsilon'\Psi^{-1}\epsilon = [Y - f(X, \beta)]'\,\Psi^{-1}\,[Y - f(X, \beta)]$  (8)
with weight matrix $\Psi$. In the NSUR model this term is updated to:
$S(\beta) = \epsilon'\Delta'(\Sigma^{-1} \otimes I)\Delta\epsilon = [Y - f(X, \beta)]'\,\Delta'(\Sigma^{-1} \otimes I)\Delta\,[Y - f(X, \beta)]$  (9)
where $\Delta = \sqrt{\Psi^{-1}}$ and $\Sigma$ is (still) not known.
Parresol (2001) estimates $\Sigma$ from the residuals of the univariate gnls fits ($i, j$):
$\hat{\sigma}_{ij} = \frac{\hat{\epsilon}_i'\,\Delta_i'\,\Delta_j\,\hat{\epsilon}_j}{\sqrt{(N - K_i)(N - K_j)}}$  (10)
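Eq. (10) translates directly into code: the cross-equation covariances are built from the weighted residuals of the separate univariate fits. The residuals, weight matrices and parameter counts below are placeholders standing in for actual gnls output.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 50, 3
K = [2, 2, 3]                                                # parameters per equation
eps = [rng.normal(0, 1 + j, N) for j in range(M)]            # residuals of each univariate fit
Delta = [np.diag(1 / np.sqrt(1 + 0.05 * np.arange(N)))] * M  # Delta_i = sqrt(Psi_i^{-1})

# eq. (10): sigma_ij = eps_i' Delta_i' Delta_j eps_j / sqrt((N - K_i)(N - K_j))
Sigma_c_hat = np.empty((M, M))
for i in range(M):
    for j in range(M):
        num = eps[i] @ Delta[i].T @ Delta[j] @ eps[j]
        Sigma_c_hat[i, j] = num / np.sqrt((N - K[i]) * (N - K[j]))
print(Sigma_c_hat)
```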

14. Weighted nonlinear seemingly unrelated regression III
To estimate $\beta$, we can use the Gauss-Newton minimisation method (Parresol, 2001):
$\beta_{n+1} = \beta_n + l_n \cdot \big[ F(\beta_n)'\,\hat{\Delta}'(\hat{\Sigma}^{-1} \otimes I)\hat{\Delta}\,F(\beta_n) \big]^{-1} F(\beta_n)'\,\hat{\Delta}'(\hat{\Sigma}^{-1} \otimes I)\hat{\Delta}\,[y - f(X, \beta_n)]$  (11)
where $F(\beta_n)$ is the Jacobian.
The covariance matrix of the parameter estimates is:
$\hat{\Sigma}_b = \big[ F(\beta_n)'\,\hat{\Delta}'(\hat{\Sigma}^{-1} \otimes I)\hat{\Delta}\,F(\beta_n) \big]^{-1}$  (12)
and the NSUR system variance is:
$\hat{\sigma}^2_{NSUR} = \frac{S(b)}{MN - K}$  (13)
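A compact sketch of the Gauss-Newton iteration (11) together with the covariance (12) and the system variance (13), for a toy two-equation power model $y_j = a_j \, dbh^{b_j}$. To keep the example short, the weight blocks $\Delta$ and $\Sigma_c$ are set to identity matrices, so this is an unweighted stand-in, not the full weighted NSUR fit.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 80
dbh = rng.uniform(10, 60, N)
y = np.concatenate([0.10 * dbh**2.3 + rng.normal(0, 3, N),
                    0.03 * dbh**2.0 + rng.normal(0, 3, N)])

def f(beta):
    a1, b1, a2, b2 = beta
    return np.concatenate([a1 * dbh**b1, a2 * dbh**b2])

def F(beta):                                  # Jacobian of f, stacked over both equations
    a1, b1, a2, b2 = beta
    Z = np.zeros(N)
    return np.column_stack([
        np.concatenate([dbh**b1, Z]),
        np.concatenate([a1 * dbh**b1 * np.log(dbh), Z]),
        np.concatenate([Z, dbh**b2]),
        np.concatenate([Z, a2 * dbh**b2 * np.log(dbh)]),
    ])

W = np.kron(np.eye(2), np.eye(N))             # (Sigma_c^{-1} ⊗ I) with Sigma_c = I here

beta = np.array([0.05, 2.0, 0.05, 2.0])       # starting values
for _ in range(50):                           # eq. (11), with step-halving to choose l_n
    J, r = F(beta), y - f(beta)
    step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    l_n = 1.0
    while np.sum((y - f(beta + l_n * step))**2) > np.sum(r**2) and l_n > 1e-4:
        l_n /= 2
    beta = beta + l_n * step

Sigma_b = np.linalg.inv(F(beta).T @ W @ F(beta))                # eq. (12)
sigma2_nsur = (y - f(beta)) @ W @ (y - f(beta)) / (2 * N - 4)   # eq. (13): MN - K
print(beta, sigma2_nsur)
```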

15. Wait, what about study effects?

16. Wait, what about study effects?
gnls cannot model random effects and hence the NSUR code cannot either, but we are not interested in these anyway ...

17. Wait, what about study effects? (continued)
Correction of the observations for the study effects:
$y_{corr} = y_{obs} - \big( \underbrace{f(A\beta + Bb, \nu)}_{\text{fixed + random effects}} - \underbrace{f(A\beta, \nu)}_{\text{fixed effects}} \big)$
[Figure: three panels of y over x showing the raw data, the nlme fit and the gnls fit]
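A minimal sketch of the correction formula above: the observations are shifted by the difference between the prediction including the study (random) effects and the fixed-effects-only prediction. A simple linear random-intercept model stands in for the nlme fit here, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)
n_per_study, n_studies = 30, 4
x = np.tile(np.linspace(5, 20, n_per_study), n_studies)
study = np.repeat(np.arange(n_studies), n_per_study)

beta = np.array([1.2, 0.9])                    # fixed effects: intercept, slope
b = rng.normal(0, 0.4, n_studies)              # random study intercepts
y_obs = beta[0] + b[study] + beta[1] * x + rng.normal(0, 0.5, x.size)

pred_fixed_random = (beta[0] + b[study]) + beta[1] * x   # f(A*beta + B*b, nu)
pred_fixed_only   = beta[0] + beta[1] * x                # f(A*beta, nu)

# y_corr = y_obs - (prediction with fixed + random effects - prediction with fixed effects only)
y_corr = y_obs - (pred_fixed_random - pred_fixed_only)

# After the correction the per-study means line up again; only residual noise remains.
for s in range(n_studies):
    print(s, round(y_obs[study == s].mean(), 2), round(y_corr[study == s].mean(), 2))
```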
