CoxFlexBoost: Fitting Structured Survival Models



1. CoxFlexBoost: Fitting Structured Survival Models
   Benjamin Hofner¹
   Institut für Medizininformatik, Biometrie und Epidemiologie (IMBE), Friedrich-Alexander-Universität Erlangen-Nürnberg
   Joint work with Torsten Hothorn and Thomas Kneib, Institut für Statistik, Ludwig-Maximilians-Universität München
   useR! 2009, Rennes
   ¹ benjamin.hofner@imbe.med.uni-erlangen.de

2. Introduction: Data Example - Intensive Care Patients with Severe Sepsis
   Response: 90-day survival
   Predictors: 14 categorical predictors (sex, fungal infection (y/n), ...), 6 continuous predictors (age, Apache II score, ...)
   Previous studies showed the presence of linear, non-linear, and time-varying effects.
   Aims: a flexible survival model for patients suffering from severe sepsis; identify prognostic factors (at appropriate complexity)
   Further details of the data set:
   Origin: Department of Surgery, Campus Großhadern, LMU Munich
   Period of observation: March 1993 – February 2005 (12 years)
   N: 462 septic patients (180 observations right-censored)

3. Introduction: Structured Survival Models
   Cox PH model: λ_i(t) = λ(t, x_i) = λ_0(t) exp(x_i'β)
   Generalization (structured survival models): λ_i(t) = exp(η_i(t)) with additive predictor
       η_i(t) = Σ_{l=1}^{L} f_l(x_i(t))
   Generic representation of covariate effects f_l(x_i(t)):
   a) linear effects: f_l(x_i(t)) = f_{l,linear}(x̃_i) = x̃_i β
   b) smooth effects: f_l(x_i(t)) = f_{l,smooth}(x̃_i)
   c) time-varying effects: f_l(x_i(t)) = f_{l,smooth}(t) · x̃_i  (or f_l(x_i(t)) = t β · x̃_i)
   where x̃_i is a covariate from x_i(t). Note: c) includes the log-baseline hazard (x̃_i ≡ 1).
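To make the generic representation concrete, here is a minimal R sketch (toy effect functions chosen for illustration, not the CoxFlexBoost API) that evaluates an additive predictor η_i(t) built from a linear, a smooth, and a time-varying component and returns the hazard λ_i(t) = exp(η_i(t)):

```r
## Toy sketch: structured hazard lambda_i(t) = exp(eta_i(t)) for one observation.
## f_linear, f_smooth, and f_tv are made-up stand-ins for fitted effects.
f_linear <- function(x, beta = 0.5) x * beta                   # a) linear effect
f_smooth <- function(x) sin(2 * x)                             # b) smooth effect (stand-in for a P-spline)
f_tv     <- function(t, x, beta = 0.2) log(t + 1) * beta * x   # c) time-varying effect

eta <- function(t, x1, x2, x3) f_linear(x1) + f_smooth(x2) + f_tv(t, x3)
hazard <- function(t, x1, x2, x3) exp(eta(t, x1, x2, x3))

hazard(t = 2, x1 = 1.3, x2 = 0.4, x3 = 1)   # hazard of observation i at time t = 2
```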

4. Introduction: Estimation
   Flexible terms f_{l,smooth}(·) can be represented using P-splines (Eilers & Marx, 1996).
   This leads to the penalized likelihood criterion
       L_pen(β) = Σ_{i=1}^{n} [ δ_i η_i(t_i) − ∫_0^{t_i} exp(η_i(t)) dt ] − Σ_{l=0}^{L} pen_l(β_l)
   NB: this is the full log-likelihood (not the Cox partial likelihood).
   Problem: estimation and, in particular, model choice.
   Notation: t_i observed survival time; δ_i indicator for non-censoring; pen_l(β_l) P-spline penalty for smooth effects.
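As a minimal sketch of this full-likelihood structure (the predictor eta_i below is an assumed example, not package code), the contribution of a single observation can be evaluated by numerical integration of the cumulative hazard; the penalties pen_l(β_l) would then be subtracted from the sum over all observations:

```r
## Full log-likelihood contribution of observation i:
##   l_i = delta_i * eta_i(t_i) - integral_0^{t_i} exp(eta_i(t)) dt
loglik_i <- function(eta_i, t_i, delta_i) {
  cumhaz <- integrate(function(t) exp(eta_i(t)), lower = 0, upper = t_i)$value
  delta_i * eta_i(t_i) - cumhaz
}

## Example with a simple time-varying predictor eta_i(t)
eta_i <- function(t) -1 + 0.3 * log(t + 1)
loglik_i(eta_i, t_i = 4, delta_i = 1)
```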

5. CoxFlexBoost
   Aim: maximization of the log-likelihood with different modeling alternatives.
   We use an iterative algorithm, likelihood-based boosting with component-wise base-learners.
   Therefore: use one base-learner g_j(·) for each covariate (or each model component), j ∈ {1, ..., J}.
   ⇒ Component-wise boosting is used as a means of estimation with intrinsic variable selection and model choice (as we will show now).



6. CoxFlexBoost: Some Details on CoxFlexBoost
   After some initializations, in each boosting iteration m (until m = m_stop):
   1.) All base-learners g_j(·) (i.e., all modeling possibilities) are fitted separately (based on penalized MLE).
   2.) Choose the best-fitting base-learner ĝ_{j*}, i.e., the base-learner that maximizes the unpenalized likelihood.
   3.) Add a fraction ν of the fit ĝ_{j*} to the model, i.e., a fraction ν of the parameter estimate β̂_{j*} to the estimate (ν = 0.1 in our case).
   What happens then? (Parameters of) previously selected base-learners are treated as a constant (offset) in the next iteration.
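The following self-contained R toy (made-up data; a constant-hazard survival model with simple linear base-learners) only illustrates the loop above and is not the CoxFlexBoost implementation. It runs steps 1.)–3.) with ν = 0.1:

```r
## Component-wise likelihood-based boosting, toy version.
set.seed(1)
n <- 200
X <- cbind(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
eta_true <- 0.8 * X[, "x1"]                    # only x1 has an effect
time  <- rexp(n, rate = exp(eta_true))
cens  <- rexp(n, rate = 0.2)
t_obs <- pmin(time, cens); delta <- as.numeric(time <= cens)

## full log-likelihood for constant hazards: sum(delta * eta - exp(eta) * t)
loglik <- function(eta) sum(delta * eta - exp(eta) * t_obs)

nu <- 0.1; mstop <- 100
eta <- rep(0, n); beta <- numeric(ncol(X)); selected <- integer(mstop)

for (m in seq_len(mstop)) {
  ## 1.) fit each (linear, hence unpenalized) base-learner separately, given eta as offset
  fit_j <- sapply(seq_len(ncol(X)), function(j)
    optimize(function(b) loglik(eta + b * X[, j]),
             interval = c(-5, 5), maximum = TRUE)$maximum)
  ## 2.) choose the base-learner with the best (unpenalized) log-likelihood
  ll_j   <- sapply(seq_len(ncol(X)), function(j) loglik(eta + fit_j[j] * X[, j]))
  j_star <- which.max(ll_j)
  ## 3.) update predictor and coefficient by a fraction nu of the fit
  eta <- eta + nu * fit_j[j_star] * X[, j_star]
  beta[j_star] <- beta[j_star] + nu * fit_j[j_star]
  selected[m] <- j_star
}
table(selected)   # x1 should be selected most often
round(beta, 2)
```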

7. CoxFlexBoost: Variable Selection and Model Choice
   Variable selection and model choice are achieved by
   selection of base-learners, i.e., component-wise boosting (steps 1.) and 2.)), and
   early stopping, i.e., estimating the optimal stopping iteration m_stop,opt via cross-validation, bootstrap, ...
   For variable selection (without model choice): define one base-learner per covariate, e.g., a flexible base-learner with 4 df.
   For variable selection and model choice: define one base-learner per modeling possibility.
   But the flexibility of the base-learners must be comparable! Otherwise, more flexible base-learners are preferred.
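Early stopping can be sketched in the same toy setting as above (again made-up data and helper functions, not the package's resampling routines): fit the boosting path on random halves of the data and pick the iteration that maximizes the average out-of-sample full log-likelihood:

```r
## Toy estimate of mstop_opt via subsampling (constant-hazard model, linear base-learners).
set.seed(2)
n <- 200
X <- cbind(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
time  <- rexp(n, rate = exp(0.8 * X[, "x1"]))
cens  <- rexp(n, rate = 0.2)
t_obs <- pmin(time, cens); delta <- as.numeric(time <= cens)

loglik <- function(eta, d, t) sum(d * eta - exp(eta) * t)

## returns the boosting path of coefficient vectors (one row per iteration)
boost_path <- function(X, d, t, nu = 0.1, mstop = 100) {
  eta <- rep(0, nrow(X)); beta <- numeric(ncol(X))
  path <- matrix(0, mstop, ncol(X))
  for (m in seq_len(mstop)) {
    fit <- sapply(seq_len(ncol(X)), function(j)
      optimize(function(b) loglik(eta + b * X[, j], d, t),
               c(-5, 5), maximum = TRUE)$maximum)
    ll  <- sapply(seq_len(ncol(X)), function(j) loglik(eta + fit[j] * X[, j], d, t))
    j   <- which.max(ll)
    eta <- eta + nu * fit[j] * X[, j]
    beta[j] <- beta[j] + nu * fit[j]
    path[m, ] <- beta
  }
  path
}

B <- 10; mstop <- 100
oob_ll <- replicate(B, {
  idx  <- sample(n, n / 2)                        # fit on one half ...
  path <- boost_path(X[idx, ], delta[idx], t_obs[idx], mstop = mstop)
  apply(path, 1, function(b)                      # ... evaluate on the other half
    loglik(X[-idx, ] %*% b, delta[-idx], t_obs[-idx]))
})
which.max(rowMeans(oob_ll))                       # estimated mstop_opt
```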

8. CoxFlexBoost: Specifying Flexibility by Degrees of Freedom
   Specifying the flexibility via the degrees of freedom (df) is more intuitive than specifying it via the smoothing parameter κ.
   df can be used to make smooth effects comparable to other modeling components (e.g., linear effects).
   E.g., use a pre-specified initial df_j (= 4) and solve df(κ_j) − df_j = 0 for κ_j, where
       df(κ_j) = trace( (F_j^[0] + κ_j K_j)^{-1} F_j^[0] )   (Gray, 1992),
   with F_j^[0] the Fisher matrix and F_j^[0] + κ_j K_j the penalized Fisher matrix.
   Problem 1: df(κ_j) is not constant over the boosting iterations. But simulation studies showed no big deviation from the initial df_j.
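A minimal sketch of this df-to-κ calibration (a toy working model in which crossprod(B) stands in for the Fisher matrix F_j^[0]; not the package internals): build a B-spline design matrix, a second-order difference penalty K_j, and solve df(κ_j) = 4 with uniroot():

```r
library(splines)
set.seed(3)

x  <- runif(200)
B  <- bs(x, df = 20, degree = 3, intercept = TRUE)  # B-spline design matrix
D  <- diff(diag(ncol(B)), differences = 2)          # 2nd-order difference operator
Kj <- crossprod(D)                                  # penalty matrix K_j = D'D
Fj <- crossprod(B)                                  # stand-in for the Fisher matrix F_j^[0]

df_kappa <- function(kappa)                         # df(kappa) = trace((F + kappa K)^{-1} F)
  sum(diag(solve(Fj + kappa * Kj, Fj)))

target_df <- 4
kappa_j <- uniroot(function(k) df_kappa(k) - target_df,
                   interval = c(1e-8, 1e8))$root
c(kappa = kappa_j, df = df_kappa(kappa_j))          # df(kappa_j) is (numerically) 4
```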


9. CoxFlexBoost: Degrees of Freedom, Problem 2
   For P-splines with higher-order differences (d ≥ 2): df > 1 as κ → ∞, since a polynomial of order d − 1 remains unpenalized.
   Solution: decomposition for differences of order d = 2 (based on Kneib, Hothorn & Tutz, 2009):
       f_smooth(x) = β_0 + β_1 x + f_smooth,centered(x),
   where β_0 + β_1 x is the unpenalized, parametric part and f_smooth,centered(x) is the deviation from the polynomial.
   Add the unpenalized part as separate, parametric base-learners.
   Assign df = 1 to the centered effect (and add it as a P-spline base-learner).
   Analogously for time-varying effects:
       f_smooth(x) · t = β_0 · t + β_1 x · t + f_smooth,centered(x) · t.
   Technical realization (see Fahrmeir, Kneib & Lang, 2004): decompose the vector of regression coefficients β into (β̃_unpen, β̃_pen) using a spectral decomposition of the penalty matrix.

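A generic R sketch of this reparameterization (assumed to mirror the decomposition above, not taken from the package): the eigenvectors of K = D'D with positive eigenvalues span the penalized deviation, while the two-dimensional null space corresponds to the unpenalized intercept and linear term:

```r
library(splines)
set.seed(4)

x <- runif(200)
B <- bs(x, df = 20, degree = 3, intercept = TRUE)  # B-spline design matrix
D <- diff(diag(ncol(B)), differences = 2)
K <- crossprod(D)                                  # penalty matrix, rank ncol(B) - 2

eig <- eigen(K, symmetric = TRUE)
pen <- eig$values > 1e-10                          # directions that are actually penalized
X_pen   <- B %*% eig$vectors[, pen] %*% diag(1 / sqrt(eig$values[pen]))
X_unpen <- cbind(1, x)                             # unpenalized part: intercept + linear effect

## X_unpen enters as separate parametric base-learners (df = 1 each);
## X_pen is the centered smooth deviation, with an identity (ridge-type)
## penalty after the rescaling above, to which df = 1 can be assigned.
dim(X_unpen); dim(X_pen)
```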

10. CoxFlexBoost: Simulation Results (in short)
    Properties of CoxFlexBoost:
    Good variable selection strategy.
    Good model choice strategy if only linear and smooth effects are used.
    Selection bias in favor of time-varying base-learners (if present) ⇒ standardizing time could be a solution.
    Estimates are better if the decomposition for model choice is used (compared to one flexible base-learner with 4 df).
