Causal Phenotype Discovery via Deep Networks
play



1. Causal Phenotype Discovery via Deep Networks. Dave Kale¹,², Zhengping Che¹, M. Taha Bahadori¹, Wenzhe Li¹, Yan Liu¹, Randall Wetzel². ¹University of Southern California, Computer Science; ²Laura P. and Leland K. Whittier VPICU, Children's Hospital LA. November 20, 2015. Kale/Che (USC/VPICU), Learning Causal Phenotypes, November 20, 2015, slide 1/27.

2. Disclosures and Funding.
Disclosures:
• D. Kale, Z. Che, T. Bahadori, W. Li, and Y. Liu have no commercial or financial interests related to this work.
• R. Wetzel is CEO of Virtual PICU (VPS) Systems, LLC.
Funding:
• D. Kale is funded by an Innovation in Engineering Fellowship from the Alfred E. Mann Institute at USC.
• The VPICU is funded by a grant from the Laura P. and Leland K. Whittier Foundation.

3. Outline
1 Background: why and how of computational phenotyping — phenotypes: representations of illness; computational phenotyping; phenotyping as representation learning
2 Phenotyping clinical time series with deep learning — deep learning for time series; causal analysis of phenotypic representations
3 Experiments — setup; prediction results; visualization of causal phenotypes
4 Conclusion
5 References


5–8. Electronic (or computational) phenotyping (built up over four slides):
• Rules/algorithms that define diagnostic/inclusion criteria [PheKB].
• Classifiers that answer the question "does patient have X?" [AL14] [AP14].
• Clusters of patients with similar symptoms/signs [MK12] [SWS15].
• Latent factors/bases for diagnoses, procedures, etc. [HGS14] [ZW14] (figure: patients × procedures and patients × diagnoses matrices factored into phenotypes with importance weights).

9. Computational phenotyping of critical illness. Our setting: learning critical illness phenotypes from multivariate PICU time series (figure: sample traces of DBP, SBP, CRR, ETCO2, FIO2, TGCS, Gluc, HR, pH, RR, SAO2, Temp, and UO over roughly 14 hours). Prior approaches: deformable motifs [SDK11], Bayesian clustering [MK12], multi-task GPs [GP15], subspace clustering [BK15].

10–11. Phenotyping as representation learning.
Medicine: phenotypes, biomarkers [BD01] — 1 measurable attributes of patient/disease; 2 independent of other biomarkers; 3 separate patients into meaningful groups; 4 improve outcome prediction, risk assessment; 5 clinically plausible, interpretable.
Machine learning: features, representations [BCV13] — 1 measurable properties of objects; 2 independent, disentangle factors of variation; 3 form natural clusters; 4 useful for discriminative, predictive tasks; 5 interpretable, provide insight into the problem.

12–13. Deep learning of representations. Representation learning: learn a transformation of the data useful for some task. Main tool: neural networks (feedforward nets, ConvNets, RNNs, etc.)
• Date back to the 1940s; largely abandoned in the 1990s.
• Revived as "deep learning" in the 2000s (new methods, big data, faster hardware).
• State of the art in vision, speech, NLP (Google, Apple, Microsoft, Facebook).
• Biologically inspired (if not biologically plausible).
• Maximally varying, nonlinear functions.
• Exploit labeled and unlabeled data.
• Layers yield increasing abstraction.

14. Deep learning of representations. Representation learning: learn a transformation of the data useful for some task. Main tool: neural networks (feedforward nets, ConvNets, RNNs, etc.)
Output: ŷ = g(h_L W_out + b_out), where g is a sigmoid for binary classification, a softmax for multiclass classification, or the identity for regression.
Hidden: h_ℓ = h(h_{ℓ−1} W_ℓ + b_ℓ), where h is sigmoid or tanh (traditional) or the rectified linear unit h(a) = max(0, a) (popular).
Input: h_0 = x.
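The layer equations above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the talk — the function names and shapes are hypothetical, using rectified-linear hidden units and a sigmoid output for binary classification:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def relu(a):
    return np.maximum(0.0, a)

def forward(x, weights, biases, w_out, b_out):
    """Forward pass: h_0 = x, h_l = relu(h_{l-1} W_l + b_l),
    y_hat = sigmoid(h_L W_out + b_out)."""
    h = x
    for W, b in zip(weights, biases):
        h = relu(h @ W + b)            # hidden layers: rectified linear units
    return sigmoid(h @ w_out + b_out)  # output layer: sigmoid for binary labels
```

With random weights this already produces well-formed probabilities; training (next slide) only changes the parameter values, not this computation.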

15. Deep learning of representations. Train using gradient descent.
Cost: C(y, x; {W_ℓ, b_ℓ}) (denoted C).
Update: W_ℓ(i, j) ← W_ℓ(i, j) − α ∂C/∂W_ℓ(i, j).
Computing the gradients via backpropagation, with a_ℓ(j) = h_{ℓ−1} W_ℓ(:, j) + b_ℓ(j):
∂C/∂W_ℓ(i, j) = ∂C/∂h_ℓ(j) · ∂h_ℓ(j)/∂a_ℓ(j) · ∂a_ℓ(j)/∂W_ℓ(i, j), where
∂h_ℓ(j)/∂a_ℓ(j) = g′(a_ℓ(j)),
∂a_ℓ(j)/∂W_ℓ(i, j) = h_{ℓ−1}(i),
∂C/∂h_ℓ(j) = Σ_k W_{ℓ+1}(j, k) · ∂C/∂a_{ℓ+1}(k).
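The chain-rule factorization above can be written out explicitly for a one-hidden-layer net. This is a hedged sketch: sigmoid units and a squared-error cost are assumptions made for illustration (the talk does not fix a cost function), and the gradient is checked against finite differences:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cost(x, y, W1, b1, W2, b2):
    # C = 0.5 * ||y_hat - y||^2, sigmoid hidden and output units (illustrative)
    h1 = sigmoid(x @ W1 + b1)
    y_hat = sigmoid(h1 @ W2 + b2)
    return 0.5 * np.sum((y_hat - y) ** 2)

def grad_W1(x, y, W1, b1, W2, b2):
    """Backpropagation: dC/dW1(i,j) = dC/dh1(j) * h1'(a1(j)) * x(i)."""
    a1 = x @ W1 + b1
    h1 = sigmoid(a1)
    y_hat = sigmoid(h1 @ W2 + b2)
    delta2 = (y_hat - y) * y_hat * (1 - y_hat)  # dC/da2 at the output layer
    dC_dh1 = delta2 @ W2.T                      # sum_k W2(j,k) * dC/da2(k)
    delta1 = dC_dh1 * h1 * (1 - h1)             # dC/da1, sigmoid derivative
    return x.T @ delta1                         # dC/dW1
```

A gradient-descent step is then `W1 -= alpha * grad_W1(...)`, matching the update rule on the slide.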

16–19. Neural nets combine different views of CP (computational phenotyping), built up over four slides:
• Output layer: classifier.
• Hidden layers: latent factors/bases (figure: per-patient hidden-unit activation values).
• Multiclustering [BC13].
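The "hidden layers as latent factors/bases" view amounts to reading off hidden-unit activations as per-patient features. A minimal sketch, assuming sigmoid hidden units and already-trained weights (all names here are illustrative, not from the talk):

```python
import numpy as np

def hidden_features(x, weights, biases):
    """Run only the hidden layers and return each layer's activations.
    Each hidden unit's activation in [0, 1] can be read as a candidate
    latent phenotype 'factor' for the patient(s) encoded in x."""
    h = x
    activations = []
    for W, b in zip(weights, biases):
        h = 1.0 / (1.0 + np.exp(-(h @ W + b)))  # sigmoid hidden units
        activations.append(h)
    return activations
```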

20. Major challenge of neural nets: interpretation.
• No predefined semantics (vs. graphical models).
• Learned bases are not guaranteed to be uncorrelated or independent (vs. PCA, ICA).
• Information is contained in distributed activations, so interpreting individual features is unreliable [SZ14].


22. Deep learning for time series: window-based approach (figure: a net mapping a window to features h^(ℓ) and a classification ŷ).
• Apply a neural net (NNet) to fixed-size windows (subsequences).
• Classification, feature extraction.
• Captures correlations across variables and time.
• Relatively few, weak model assumptions.
• Can learn to detect smooth, trajectory-like patterns.
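The window-based approach amounts to slicing each multivariate series into fixed-size subsequences before applying the net. A small sketch, assuming a T × D series whose windows are flattened into input vectors (the window width and stride are illustrative choices, not values from the talk):

```python
import numpy as np

def make_windows(series, width, stride):
    """Slice a multivariate time series (T x D) into fixed-size, possibly
    overlapping subsequences, each flattened into one net input vector."""
    T = series.shape[0]
    starts = range(0, T - width + 1, stride)
    return np.stack([series[s:s + width].reshape(-1) for s in starts])
```

Each row of the result can be fed to a feedforward net for classification or feature extraction, exactly as the slide's pipeline suggests.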
