  1. Discussion: Bridging Causal Inference Theory & Practice Elizabeth A. Stuart, PhD Associate Dean for Education, Professor; Johns Hopkins Bloomberg School of Public Health @lizstuartdc September 19, 2019

  2. Elizabeth Stuart Disclosures (Relationship: Company(ies)) • Speakers Bureau, Advisory Committee, Board Membership, Consultancy, Review Panel: PCORI, funding ME-1502-27794 (co-PI Dahabreh; completed); former Chair of Clinical Trials Advisory Panel (honorarium) • Ownership Interests: stock holdings in Abbott Labs, Abbvie, Chemtura, Express Scripts, Johnson & Johnson, Medtronic, Merck, Procter & Gamble, Phillips, Quest Diagnostics

  3. Causal inference overview

  4. We see only one potential outcome for each unit

  5. Defining Causal Effects • A causal effect is a comparison of potential outcomes for the same units • e.g., for Joe, what would happen if today he does or doesn’t take an antidepressant • e.g., for a group of patients, the average difference in mortality if they all receive Drug A vs. Drug B • Causal effects are not comparisons of outcomes for different groups of people • But we can use those people to estimate causal effects!

  6. So how do we estimate causal effects? • Randomized experiments give us unbiased estimates of effects • Treatment group provides good estimate of what would happen to full sample under treatment; same for control group • But we often can’t randomize • Ethics • Concerns about generalizability • Instead use non-experimental studies • The catch is that non-experimental studies will always involve some assumptions about the unobserved potential outcomes
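The point of slides 4-6 can be made concrete with a small simulation; everything here (the variable names, the effect size of 1.0, the confounding mechanism) is an invented illustration, not an analysis from the talk. Because a simulation generates both potential outcomes for every unit, we can check a randomized contrast and a confounded one against the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Both potential outcomes exist for every unit, but in real data
# we would observe only one of them.
severity = rng.normal(size=n)          # prognostic covariate
y0 = severity + rng.normal(size=n)     # outcome without treatment
y1 = y0 + 1.0                          # outcome with treatment; true effect = 1

true_ate = np.mean(y1 - y0)

# Randomized assignment: independent of the potential outcomes.
z_rand = rng.binomial(1, 0.5, size=n)
est_rand = y1[z_rand == 1].mean() - y0[z_rand == 0].mean()

# Non-experimental assignment: sicker patients more likely treated.
p_treat = 1.0 / (1.0 + np.exp(-2.0 * severity))
z_obs = rng.binomial(1, p_treat)
est_obs = y1[z_obs == 1].mean() - y0[z_obs == 0].mean()

print(round(true_ate, 2), round(est_rand, 2), round(est_obs, 2))
```

The randomized contrast lands near the true effect of 1, while the confounded contrast overstates it because the treated group was different to begin with; this is exactly the gap that the assumptions in non-experimental studies try to close.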

  7. The three talks • There is a long history of rigorous methodological work examining how to estimate causal effects in non-experimental settings • But still MANY questions about complications that come up in practice • These 3 talks aim to bridge this gap and develop methods for the questions that come up in clinical practice • Dr. Neugebauer: How to study dynamic treatment regimes, where treatment decisions depend on (evolving) response to earlier treatments? • Dr. Gutman: How to compare outcomes across a set of treatment choices (more than two)? • Dr. Zeger: How can we help predict what is best for each individual?

  8. Dr. Neugebauer: The main idea • Many treatment decisions depend on a patient's current health status, which may itself be a function of earlier treatment decisions • Possible, but hard, to study in randomized designs (would need large samples, a long time period, etc.) • Can we take advantage of large-scale EHR data to estimate the effects of dynamic regimes? • Yes! With careful attention to time ordering, confounding, and the challenges that arise from infrequent/unspecified timing of monitoring in EHR data
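One way to see what a "dynamic regime" is: the quantity of interest is the mean outcome if everyone followed a decision rule, where the stage-2 treatment depends on an evolving biomarker. The toy simulator below is my own construction (an invented model, not Dr. Neugebauer's), and it sidesteps the hard part, confounding in observed EHR data, because a simulator can simply run each rule counterfactually:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

def mean_outcome_under_rule(threshold):
    """Mean outcome if everyone follows the rule: intensify treatment
    at stage 2 iff the post-stage-1 biomarker exceeds `threshold`
    (a hypothetical decision rule)."""
    biomarker = rng.normal(loc=1.0, size=n)      # evolves after stage 1
    a2 = (biomarker > threshold).astype(float)   # rule-based stage-2 treatment
    y = 2.0 + a2 * (biomarker - 0.5) + rng.normal(size=n)
    return y.mean()

aggressive = mean_outcome_under_rule(0.0)    # intensify for nearly everyone
conservative = mean_outcome_under_rule(2.0)  # intensify only if biomarker > 2
print(round(aggressive, 2), round(conservative, 2))
```

With EHR data we cannot rerun the world under each rule, so methods for dynamic regimes have to reconstruct these counterfactual means from observed treatment histories, with careful handling of time-varying confounding and irregular monitoring.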

  9. Dr. Neugebauer: Questions/Thoughts • Challenges in defining the potential adaptive intervention • Importance of diagnostics • Especially in terms of data support (was happy to see that) • How can we understand: 1) how many people actually experience the adaptive regimes of interest, and 2) how similar/different they are from one another? • These diagnostics are relatively straightforward in the simple two-group/two-time-point case; how can we make them easy here?

  10. Dr. Gutman: The main idea • Most causal inference methods were developed for two treatment conditions, but many clinical questions involve a choice among several options • May be hard to do a randomized trial with enough subjects • So instead take advantage of large-scale non-experimental data • The catch is that causal inference becomes even harder • Instead of seeing only 1/2 of the potential outcomes we see 1/3, or 1/4, or 1/5 • It even becomes hard to define the target estimand! • Reference population • Comparison conditions • Matching and imputation approaches to estimate the effects
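The "1/3, or 1/4, or 1/5" point is just the potential-outcomes table with more columns. A tiny sketch (invented numbers) with k = 3 treatments: each unit reveals exactly one of its three potential outcomes, so two-thirds of the table is missing by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 9, 3

# Full potential-outcome table: column j = outcome under treatment j.
potential = rng.normal(loc=[0.0, 0.5, 1.0], size=(n, k))

# Each unit receives exactly one of the k treatments...
assigned = rng.integers(0, k, size=n)

# ...so we observe one cell per row; the rest are missing.
observed = np.full((n, k), np.nan)
rows = np.arange(n)
observed[rows, assigned] = potential[rows, assigned]

frac_missing = np.isnan(observed).mean()   # (k - 1) / k = 2/3
print(observed.round(2))
```

Any pairwise contrast (say treatment 0 vs. treatment 2) has to be defined for a common reference population and then estimated from rows that by design never contain both cells, which is why matching and imputation approaches, and careful estimand definitions, matter more as k grows.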

  11. Dr. Gutman: Questions/Thoughts • Really nice to see a thoughtful discussion of these issues, including the challenges induced by differing eligibility criteria • Also a nice example of replicating a randomized trial as much as possible in terms of defining exposures, outcomes, timing, etc. • Builds on a long tradition (Rubin, Cochran), now “trial emulation” (Hernan) • And does a sensitivity analysis to unobserved confounding • Data requirements are large, especially with more and more treatment groups • How to develop the right diagnostics, tied to the estimands, that show whether the data are sufficient for the comparisons

  12. Dr. Zeger: The main idea • We want to figure out “what works for whom” by borrowing information on “similar” individuals who came before • Define subsets of individuals who are similar to one another, to then learn and predict for future “similar” individuals • Use Bayesian methods to combine population level data with expert judgement • The ultimate goal: Implement this in a learning healthcare system
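The "combine population-level data with expert judgement" step has a familiar minimal form: a conjugate Bayesian update. The numbers below are invented for illustration (this is not the model in the talk): an expert prior on a response rate is pooled with outcomes from previously observed "similar" patients:

```python
# Expert judgement encoded as a Beta prior on a response rate:
# Beta(8, 2) has mean 0.8 and the weight of ~10 prior "patients".
prior_a, prior_b = 8, 2

# Data from similar past patients: 30 responders out of 100.
successes, failures = 30, 70

# Conjugate Beta-Binomial update: just add counts.
post_a = prior_a + successes          # 38
post_b = prior_b + failures           # 72
posterior_mean = post_a / (post_a + post_b)

print(round(posterior_mean, 3))       # between prior mean 0.8 and data rate 0.3
```

Because the data carry 100 observations against a prior worth about 10, the posterior sits near the data; with only a handful of similar patients the expert prior would dominate, which is the intended behaviour in a learning healthcare system.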

  13. Dr. Zeger: Questions/Thoughts • Great goal: helping clinical decision making for each patient rely on previous knowledge (both in the data and expert opinion) • But this is hard! Remember the point about individual causal effects • How to define “similar”? Background characteristics? Prognosis without treatment? • How to deal with confounding when considering different treatment choices? • How to reflect uncertainty?

  14. Common themes • With more complex questions come more data needs • Need for thoughtful data collection in non-experimental studies • Common assumption of unconfounded treatment assignment • How can we make better use of data or qualitative methods to understand plausibility of this assumption? • Sensitivity analyses crucial • Will always be untestable assumptions in non-experimental settings • Question is how to make those more plausible, or how to assess robustness of results to violation of them
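A minimal sketch of what a sensitivity analysis can look like (my own illustration with made-up numbers, using the omitted-variable bias approximation for a linear model): posit how strongly an unmeasured confounder U would shift the outcome, posit how imbalanced U would be across groups, and report how the estimate moves over that grid:

```python
# Hypothetical naive treated-vs-control difference from a
# non-experimental comparison (invented number).
naive_estimate = 2.0

# Grid of assumptions about an unmeasured confounder U:
#   effect_on_y: change in outcome per unit of U
#   imbalance:   difference in mean U between treated and control
corrected = {}
for effect_on_y in (0.0, 0.5, 1.0):
    for imbalance in (0.0, 0.5, 1.0):
        bias = effect_on_y * imbalance        # approximate confounding bias
        corrected[(effect_on_y, imbalance)] = naive_estimate - bias

for key, value in sorted(corrected.items()):
    print(key, value)
```

Reading the grid tells us how strong the unmeasured confounding would have to be to explain the estimate away; here the estimate stays positive across the whole grid, but a naive estimate of 0.8 would be wiped out by the strongest scenario.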

  15. Common themes (cont.) • Very impressive work from all three presenters • Answering important clinical questions with rigorous and innovative methods THANK YOU!

  16. Learn More • https://www.elizabethstuart.org/ • https://www.pcori.org/research-results/about-our-research/research-methodology/methodology-standards-academic-curriculum • including a module on causal inference by me and Dr. Zeger!

  17. Thank You! Elizabeth A. Stuart Associate Dean for Education, Professor @lizstuartdc September 19, 2019

  18. Questions?
