

  1. Trajectory Modeling by Shape. Nicholas P. Jewell, Departments of Statistics & School of Public Health (Biostatistics), University of California, Berkeley. Victorian Centre for Biostatistics, Murdoch Children's Research Institute, March 6, 2014.

  2. Thanks
  • Joint work with Brianna Heggeseth (Williams College)
  References
  • Heggeseth, BC and Jewell, NP. The impact of covariance misspecification in multivariate Gaussian mixtures on estimation and inference: an application to longitudinal modeling. Statistics in Medicine, 2013, 32, 2790-2803.
  • Heggeseth, BC and Jewell, NP. Vertically shifted mixture models for clustering longitudinal data by shape. Submitted for publication.

  3. “Understanding our world requires conceptualizing the similarities and differences between the entities that compose it” (Robert Tryon and Daniel Bailey, 1970)

  4. How does BMI change with age? National Longitudinal Study of Youth (NLSY), 1979 to 2008.

  5. How does BMI change with age? National Longitudinal Study of Youth (NLSY), 1979 to 2008.

  6. How does BMI change with age? National Longitudinal Study of Youth (NLSY), 1979 to 2008.

  7. Typical Longitudinal Analysis
  • Use Generalized Estimating Equations (GEE) to estimate the mean outcome, and how it changes over time, adjusting for covariates:
    - regression parameter estimation is consistent despite potential covariance misspecification
    - efficiency can be gained through use of a more appropriate working correlation structure
    - robust (sandwich) standard error estimators are available
  • But, with a heterogeneous population:
    - BMI does not change much for some people as they age
    - BMI changes considerably for other people as they age
  • We don't wish to average out these separate trajectories by modeling the mean over time
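Editorial sketch (not part of the original slides): the GEE analysis described above can be carried out with statsmodels; the data file and the column names (id, age, bmi) are hypothetical.

```python
# Hypothetical GEE fit for a longitudinal BMI analysis (illustrative only).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Long-format data: one row per person-visit, with columns id, age, bmi (names assumed).
df = pd.read_csv("nlsy_bmi_long.csv")

model = smf.gee(
    "bmi ~ age + I(age**2)",                   # mean model for BMI over age
    groups="id",                               # repeated measures within person
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),   # working correlation structure
    family=sm.families.Gaussian(),
)
result = model.fit()   # robust (sandwich) standard errors are reported by default
print(result.summary())
```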

  8. Finite Mixture Models
  • Data for n individuals: y_i = (y_{i1}, ..., y_{i m_i}), measured at times t_i = (t_{i1}, ..., t_{i m_i}).
  • We assume K latent trajectories in the population, occurring with frequencies π_1, ..., π_K, where π_k > 0 and Σ_{k=1}^{K} π_k = 1.
  • The (conditional) mixture density is
      f(y | t, θ) = π_1 f(y | t, β_1, Σ_1) + · · · + π_K f(y | t, β_K, Σ_K),
    where each component f(y | t, β_k, Σ_k) is a multivariate Gaussian with mean μ_k and covariance Σ_k, and θ = (π_1, ..., π_K; β_1, ..., β_K; Σ_1, ..., Σ_K).
  • In most trajectory software, (conditional) independence (Σ_k = σ_k² I) is assumed as the working correlation structure.
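As an editorial illustration (not the authors' software), the mixture density on this slide can be evaluated directly with scipy; the helper function and the toy values below are assumptions of the sketch.

```python
# Evaluate f(y | t, theta) = pi_1 N(y; mu_1, Sigma_1) + ... + pi_K N(y; mu_K, Sigma_K).
import numpy as np
from scipy.stats import multivariate_normal

def mixture_density(y, means, covs, weights):
    """Multivariate Gaussian mixture density at one individual's outcome vector y."""
    return sum(
        pi_k * multivariate_normal.pdf(y, mean=mu_k, cov=Sigma_k)
        for pi_k, mu_k, Sigma_k in zip(weights, means, covs)
    )

# Working independence corresponds to Sigma_k = sigma_k^2 * I:
m = 4                                        # number of observations for this individual
means = [np.full(m, 20.0), np.linspace(20.0, 30.0, m)]
covs = [1.5 * np.eye(m), 2.0 * np.eye(m)]    # sigma_k^2 * I for k = 1, 2
weights = np.array([0.6, 0.4])

y = np.array([19.5, 21.0, 22.0, 23.5])
density = mixture_density(y, means, covs, weights)
```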

  9. Finite Mixture Models
  • The mean vector μ_k is related to the observation times as follows:
    - Linear: (μ_k)_j = β_0 + β_1 t_{ij}
    - Quadratic: (μ_k)_j = β_0 + β_1 t_{ij} + β_2 t_{ij}²
    - Splines in the observation times
    where the regression model (and coefficients) are assumed the same within each cluster, and t_{ij} is the j-th observation time for the i-th individual, 1 ≤ j ≤ m_i.
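A small sketch of the linear and quadratic mean specifications as polynomial functions of an individual's observation times; the function name and the numbers are illustrative only.

```python
# Cluster mean mu_k evaluated at an individual's (possibly irregular) observation times.
import numpy as np

def mean_trajectory(times, beta, degree=1):
    """degree=1: beta0 + beta1*t (linear); degree=2: beta0 + beta1*t + beta2*t^2 (quadratic)."""
    t = np.asarray(times, dtype=float)
    X = np.vander(t, N=degree + 1, increasing=True)   # columns: 1, t, t^2, ...
    return X @ np.asarray(beta, dtype=float)

# Quadratic mean for one cluster at irregular observation times (toy coefficients):
mu_k = mean_trajectory([2.0, 3.5, 6.0, 9.0], beta=[18.0, 0.4, 0.02], degree=2)
```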

  10. Finite Mixture Models
  • Group membership probabilities may depend on covariates:
      π_k = exp(γ_k z) / Σ_{j=1}^{K} exp(γ_j z)
    where z is a set of the same or different covariates. This expands θ to include the γ_k's as well.
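A minimal sketch of these multinomial-logit membership probabilities; the covariate and coefficient values are hypothetical (in practice one γ_k is usually fixed at zero for identifiability).

```python
# pi_k(z) = exp(gamma_k . z) / sum_j exp(gamma_j . z), a softmax over the K groups.
import numpy as np

def membership_probs(z, gammas):
    """z: covariate vector (with intercept); gammas: (K, len(z)) group-specific coefficients."""
    scores = np.asarray(gammas, dtype=float) @ np.asarray(z, dtype=float)
    scores -= scores.max()                  # subtract the max for numerical stability
    expscores = np.exp(scores)
    return expscores / expscores.sum()

# K = 3 groups, z = (1, covariate); the reference group has gamma fixed at zero.
pi = membership_probs([1.0, 27.5], gammas=[[0.0, 0.0], [0.3, -0.05], [-0.8, 0.04]])
assert np.isclose(pi.sum(), 1.0)
```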

  11. Estimation for Mixture Models
  • Maximum likelihood estimation for θ via the EM algorithm
  • K is pre-specified; it can be chosen using the BIC
  • Parameter estimators are not consistent under covariance misspecification (White, 1982; Heggeseth and Jewell, 2013)
  • Robust (sandwich) standard error estimators are available
  • How bad can the bias in regression estimators be? What influences its size?
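As a generic illustration of EM fitting with BIC-based choice of K, scikit-learn's GaussianMixture can be applied to balanced data; this is a stand-in for the specialized trajectory software discussed in the talk, not the authors' method, and the simulated data are invented.

```python
# Choose the number of components K by BIC for an EM-fitted Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
Y = np.vstack([
    rng.normal(loc=20.0, scale=1.0, size=(100, 5)),                     # flat trajectories
    rng.normal(loc=np.linspace(20, 30, 5), scale=1.0, size=(100, 5)),   # increasing trajectories
])

bic = {}
for K in range(1, 6):
    gm = GaussianMixture(n_components=K, covariance_type="full", random_state=0).fit(Y)
    bic[K] = gm.bic(Y)

best_K = min(bic, key=bic.get)   # smallest BIC wins
```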

  12. Misspecified Covariance Structure: Bias and Separation of Trajectories
  • Separated components lead to little bias even when you wrongly assume independence.
  • [Figure: black dashed lines are the true means, solid lines the estimated means. One scenario: SE_I(β̂_01) = 0.01, SE_R(β̂_01) = 0.01; the other: SE_I(β̂_01) = 0.02, SE_R(β̂_01) = 0.06.]

  13. Misspecified Covariance Structure: Bias and Level of Dependence
  • Components with little dependence lead to small bias even when you wrongly assume independence.
  • [Figure: black dashed lines are the true means, solid lines the estimated means. One scenario: SE_I(β̂_01) = 0.02, SE_R(β̂_01) = 0.06; the other: SE_I(β̂_01) = 0.03, SE_R(β̂_01) = 0.04.]

  14. NLSY Data Analysis
  • Covariance makes a difference to the estimated trajectories, so it is hard to estimate the bias from a misspecified covariance.

  15. How Do We Group These Blocks?

  16. Group by Color

  17. Group by Shape

  18. How Do We Group These Blocks?

  19. Group by Color or Shape

  20. How Do We Group These (Regression) Lines? [Figure: several regression lines, y against x.]

  21. Group by Intercept [Figure: y against x.]

  22. Group by Level [Figure: y against x.]

  23. Group by Shape (Slope) [Figure: y against x.]

  24. Simulated Data [Figure: simulated trajectories, Y against Time.] How could we group these individuals?

  25. Simulated Data [Figure: simulated trajectories, Y against Time, with some observations highlighted.] How could we group these individuals?

  26. Real Longitudinal Data
  • Center for the Health Assessment of Mothers and Children of Salinas (CHAMACOS) Study
    - In 1999-2000, enrolled 601 pregnant women in the agricultural Salinas Valley, CA
    - Mostly Hispanic agricultural workers
    - Aim: determine whether exposure to pesticides and other chemicals impacts children's growth patterns (BMI, neurological measures, etc.)
  • First, focus on studying/estimating the growth patterns of the children
  • Second, determine if early-life predictors are related to the patterns:
    - pesticide/chemical exposure in utero
    - ODT, PDT, PDE, BPA (bisphenol A)

  27. CHAMACOS Data [Figure: children's BMI against age in months.] How could we group these individuals?

  28. Cluster Analyses
  • Clustering is the task of assigning a set of objects into groups so that the objects in the same group are more similar to each other than to those in other groups.
  • What does it mean for objects to be more similar or more dissimilar?
    - Distance matrix
  • Why do we cluster objects?

  29. Standard Clustering Methods
  • Partition methods
    - Partition objects into K groups so that an objective function of dissimilarities is minimized or maximized.
    - Example: the K-means algorithm
  • Model-based methods
    - Assume a model that includes a grouping structure and estimate its parameters.
    - Example: finite mixture models

  30. K-means Algorithm
  • Input: data for n individuals in vector form. For individual i, the observed data vector is y_i = (y_{i1}, ..., y_{im}).
  • Measure of dissimilarity: squared Euclidean distance. The dissimilarity between the 1st and 2nd individuals is
      d(y_1, y_2) = ||y_1 − y_2||² = (y_{11} − y_{21})² + · · · + (y_{1m} − y_{2m})²
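A quick numerical check of this dissimilarity with numpy, on toy vectors.

```python
# d(y1, y2) = ||y1 - y2||^2, the squared Euclidean distance between two individuals.
import numpy as np

y1 = np.array([15.0, 15.5, 16.0, 16.5])
y2 = np.array([22.0, 24.0, 26.5, 29.0])

d = np.sum((y1 - y2) ** 2)                        # elementwise differences, squared and summed
assert np.isclose(d, np.linalg.norm(y1 - y2) ** 2)
```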

  31. K-means Algorithm
  • Goal: partition the individuals into K sets C = {C_1, C_2, ..., C_K} so as to minimize the within-cluster sum of squares
      Σ_{k=1}^{K} Σ_{y_i ∈ C_k} ||y_i − μ_k||²
    where μ_k is the mean vector of the individuals in C_k.
  • (K must be known before starting K-means. There are many ways to choose K from the data that try to minimize the dissimilarity within each cluster while maximizing the dissimilarity between clusters: for example, the use of silhouettes.)
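An illustrative run (not the analysis from the talk): K-means on simulated balanced trajectories, with K chosen by the average silhouette width mentioned above.

```python
# K-means with K selected by the average silhouette width.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
Y = np.vstack([
    rng.normal(loc=5.0, scale=0.5, size=(50, 6)),                     # low, flat
    rng.normal(loc=np.linspace(5, 15, 6), scale=0.5, size=(50, 6)),   # increasing
    rng.normal(loc=12.0, scale=0.5, size=(50, 6)),                    # high, flat
])

scores = {}
for K in range(2, 6):
    labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(Y)
    scores[K] = silhouette_score(Y, labels)

best_K = max(scores, key=scores.get)   # largest average silhouette width
```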

  32. Application to Simulated Data [Figure: K-means grouping of the simulated trajectories, Y against Time.]

  33. Application to Simulated Data [Figure: K-means grouping of the simulated trajectories, Y against Time, with the group trajectories highlighted.] How would you describe or interpret the group trajectories?

  34. Finite Mixture Model Applied to CHAMACOS Data [Figure: Mixture Model with Independence; BMI against age in months.]

  35. Finite Mixture Model Applied to CHAMACOS Data [Figure: Mixture Model with Independence; BMI against age in months.]

  36. Finite Mixture Model Applied to CHAMACOS Data [Figure: Mixture Model with Exponential; BMI against age in months.]

  37. Clustering by Shape
  • Interested in shape, not just level (which appears to dominate clustering techniques)
  • Want a method that:
    - works with irregularly sampled data
    - includes a way to estimate the relationship between baseline risk factors and group membership
    - groups individuals according to the outcome pattern over time, ignoring the level

  38. Clustering by Shape: Options
  • Estimate slopes between neighboring observations and cluster on the “derived” observations
  • Fit splines for each individual, differentiate, and cluster on the coefficients of the resulting derivative
  • Use partition-based clustering methods (like PAM) with either the Pearson correlation as a dissimilarity measure,
      d_corr(x, y) = 1 − Corr(x, y),
    or the cosine-angle measure of dissimilarity,
      d_cos(x, y) = 1 − (Σ_{j=1}^{m} x_j y_j) / √((Σ_{j=1}^{m} x_j²)(Σ_{j=1}^{m} y_j²))
  • Vertically shift individual trajectories
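A sketch of the correlation and cosine-angle dissimilarities using scipy's built-in distance functions, which match the definitions above; the toy trajectories are assumptions of this example.

```python
# correlation(x, y) = 1 - Pearson r;  cosine(x, y) = 1 - cosine of the angle between x and y.
import numpy as np
from scipy.spatial.distance import correlation, cosine

x = np.array([5.0, 6.0, 7.0, 8.0])        # rising trajectory at a low level
y = np.array([15.0, 16.1, 16.9, 18.0])    # similar shape, shifted to a higher level

d_corr = correlation(x, y)   # close to 0: the shapes agree even though the levels differ
d_cos = cosine(x, y)         # invariant to rescaling, but not to adding a constant
```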

  39. Vertical Shifting
  • For each individual, calculate
      y*_i = y_i − m_i^{-1} Σ_{j=1}^{m_i} y_{ij}
  • Each individual now has mean zero, and so level is removed from any resulting clustering
  • Apply a clustering technique to the shifted data, e.g. a finite mixture model
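A minimal sketch of the vertical-shifting step; K-means stands in here for the finite mixture model of the talk, and the simulated trajectories are illustrative.

```python
# Center each individual's trajectory at zero, then cluster the shifted data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
Y = np.vstack([
    rng.normal(loc=np.linspace(5, 15, 6), scale=0.5, size=(50, 6)),    # rising, low level
    rng.normal(loc=np.linspace(20, 30, 6), scale=0.5, size=(50, 6)),   # rising, high level
    rng.normal(loc=12.0, scale=0.5, size=(50, 6)),                     # flat
])

# y*_i = y_i - (1/m_i) * sum_j y_ij : subtract each row's own mean.
Y_star = Y - Y.mean(axis=1, keepdims=True)

# After shifting, level is gone, so the two rising groups should share a cluster
# and the flat group should form the other (K-means used only as an illustration).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Y_star)
```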
