

  1. Partnering and Pragmatic Trials in a Learning Health Care System
     AcademyHealth Annual Research Meeting
     By: Anna Spier, June 25, 2017

  2. J-PAL's U.S. Health Care Delivery Initiative (HCDI)
     HCDI develops rigorous evidence of strategies to improve the quality and value of health care delivery by:
     • Funding randomized evaluations
     • Connecting policymakers, practitioners, and researchers to spur policy-relevant research on key health care issues
     • Building evaluation capacity
     • Disseminating evidence to policymakers
     J-PAL | AcademyHealth

  3. I. Why RCTs? II. Building Research Partnerships III. Examples

  4. What is the effect of Medicaid?
     • "Medicaid is worthless or worse than no insurance" – Not true: increases in utilization, perceived access and quality, reductions in financial strain, and improvement in self-reported health
     • "Covering the uninsured will get them out of the Emergency Room" – Not true: Medicaid increases use of the ER (overall and for a broad range of visit types)
     • "Health insurance expansion saves money" – Not true in the short run: increases in health care use. In the long run, remains to be seen: increases in preventive care and improvements in self-reported health

  5. Randomized evaluations can provide clear answers
     • Not always obvious what the effects of a given policy are
       – Ex: Those with insurance are in worse health than those without insurance. Conclude that insurance makes people sicker? Or that individuals in poor health are more likely to seek out insurance?
     • Randomized evaluations randomly assign individuals to treatment (program) or control (status quo)
     • By construction, the treatment group and the control group will have the same characteristics, on average
       – Observable: age, income, measured health, etc.
       – Unobservable: motivation, social networks, unmeasured health, etc.
     • Clear attribution of subsequent differences to treatment (program)
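The "same characteristics, on average" point can be sketched in a few lines of Python. This is an illustrative simulation only (the population and the age variable are made up, not from any study in the deck): randomly split a sample in half and check that an observable characteristic is balanced across arms.

```python
import random
import statistics

random.seed(42)

# Simulated study population: 1,000 individuals with a made-up observable (age)
people = [{"id": i, "age": random.randint(20, 80)} for i in range(1000)]

# Random assignment: shuffle the list, then split it in half
random.shuffle(people)
treatment = people[: len(people) // 2]
control = people[len(people) // 2 :]

# By construction, the two groups should look similar on average
mean_age_t = statistics.mean(p["age"] for p in treatment)
mean_age_c = statistics.mean(p["age"] for p in control)
print(f"treatment mean age: {mean_age_t:.1f}")
print(f"control mean age:   {mean_age_c:.1f}")
```

The same logic balances unobservables (motivation, unmeasured health) in expectation, which is exactly what no shuffling of observational data can guarantee.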

  6. Limited use of RCTs in U.S. health care delivery
     • Finkelstein and Taubman, Science 2015
     • Review of empirical papers in top medical, economics, and health services journals: 18% of U.S. health care delivery interventions randomized
     • Greater use of RCTs for U.S. medical studies
       – 80% of U.S.-based medical treatment studies randomized
       – True of both drug (86%) and non-drug (66%) interventions
     • Greater use of RCTs for other social policy
       – 36% of U.S. education studies
       – 46% of international development studies

  7. I. Why RCTs? II. Building Research Partnerships III. Examples

  8. Typical process
     1. Initial scoping – Partner has an interesting research question; interested in an RCT
     2. Identify interested researcher – Based on research interests/bandwidth
     3. Defined intervention – Is randomization feasible? Is the sample size sufficient? Is the partner committed?
     4. Design pilot – Work out implementation hurdles; proof of concept
     5. Full RCT – Incorporate what was learned from the pilot

  9. Ideal research questions
     • Intervention that is:
       – Well-defined (protocol-driven, often well-established)
       – Policy-relevant/academically interesting
     • Serves a large enough sample to detect anticipated effects
     • Ability to randomize access to the intervention (typically requires a capacity constraint or phased roll-out)
     • For private sector: aligned with business interests

  10. Ideal partnerships
     • Willingness to experiment
     • Large institution (statistical power)
     • Access to administrative data
     • Realistic expectations
     • Executive-level support/sponsorship
     • Engaged researcher
       – Respects partner's priorities
       – Works with partner to assess feasibility of evaluation
       – Thinks creatively about designing evaluation to address practical concerns
       – Helps navigate institutional or legal obstacles to data

  11. Benefits of administrative data
     Compared to surveys, administrative data may:
     • Reduce research costs
     • Lessen logistical burden
     • Enable long-term follow-up
     • Improve accuracy of findings
     Image Credit: The Noun Project: Vaibhav Radhakrishnan; Chameleon Design; Aaron K. Kim; Alexander Bogomolov

  12. Common challenges to randomization
     • PROGRAM DESIGN
       – Resources exist to extend the program to everyone in the study area
       – Program has strict eligibility criteria
       – Program is an entitlement
       – Sample size is small
     • IMPLEMENTATION
       – Difficult for service providers to adhere to random assignment due to logistical or political reasons
       – Control group finds out about the treatment, benefits from the treatment, or is harmed by the treatment

  13. When does a randomized evaluation not make sense?
     • Too small: sample is too small to pick up a reasonable impact
     • Too early: still ironing out logistics
     • Too late: already serving everyone who is eligible, and no randomization was built in
     • When a positive impact has been proven, and we have the resources to serve everyone
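The "too small" criterion can be made concrete with a standard back-of-the-envelope power calculation. This sketch is not from the presentation; the 20% baseline readmission rate and 5-point target effect are hypothetical numbers chosen for illustration. It uses the usual normal approximation for comparing two proportions at 5% significance and 80% power.

```python
import math

def n_per_arm(p1: float, p2: float,
              z_alpha: float = 1.96,   # two-sided 5% significance
              z_beta: float = 0.84     # 80% power
              ) -> int:
    """Sample size per arm to detect a change from rate p1 to rate p2."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Detecting a drop from a 20% to a 15% readmission rate
print(n_per_arm(0.20, 0.15))
```

Roughly 900 patients per arm for a 5-point effect: halving the detectable effect roughly quadruples the required sample, which is why small programs often fail the "too small" test.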

  14. I. Why RCTs? II. Building Research Partnerships III. Examples

  15. Nonprofit, community organization intervention: Care coordination
     • Partner: Camden Coalition of Healthcare Providers
     • Intervention: Support to high-need patients ("super-utilizers") for ongoing outpatient care and assistance in accessing social programs
     • Goal: Analysis of primary outcome (hospital readmissions) using Health Information Exchange

  16. Large health system intervention: Clinical Decision Support (CDS)
     • Partner: Aurora Health Care
     • Intervention: CDS notifies physicians in real time when they have ordered a diagnostic scan that is inconsistent with current professional guidelines
     • Goal: Determine how CDS impacts ordering behavior of physicians

  17. Private company intervention: Workplace wellness
     • Partner: BJ's Wholesale Club
     • Intervention: Diet, exercise, and mental health programming for employees over a 1-year period
     • Goal: Gauge impact on five categories of outcomes gathered from both primary and administrative data sources

  18. Nurse Family Partnership

  19. Feasibility: funding a novel expansion
     • SCDHHS Medicaid Waiver
     • Pay for Success (PFS) contract – contingent on randomized evaluation
     • PFS project partners: implementing NSO agencies, The Children's Trust, J-PAL, SC DHEC, philanthropists, Social Finance

  20. Nurse home visits today
     Innovative expansion of NFP in South Carolina
     Study population: 6,000 over four years
     Outcomes of interest: short- and long-run impact on a wide range of health, education, employment, criminal justice, and other outcomes

  21. Discussion & Questions
     Thank you!
     Anna Spier, Senior Policy Associate, J-PAL North America, aspier@mit.edu

  22. What Does HCDI Provide?
     • Partnership development between policymakers, practitioners, and researchers to spur policy-relevant research
     • Technical assistance in identifying and scoping opportunities for rigorous evaluation of innovative programs
     • Targeted research funding
     • Trainings and resources for researchers and policymakers regarding rigorous evaluation
     • Syntheses of existing evidence on key policy topics

  23. J-PAL's mission is to reduce poverty by ensuring that policy is informed by scientific evidence

  24. J-PAL's network of 145 professors uses randomized evaluations to inform policy

  25. We have 842 ongoing and completed projects across 8 sectors in 80 countries

  26. What is a Randomized Evaluation?
     Before the program starts, eligible individuals are randomly assigned to two groups so that they are statistically identical before the program.
     • Treatment and control: the two groups continue to be identical, except for the treatment
     • Any differences in outcomes between the groups can be attributed to the program

  27. Why Randomize?
     • Results from experiments can surprise us
     • Decision makers are more likely to trust the results
     • Higher confidence to scale up approaches or make changes in how money is spent
     Photo Credit: Shutterstock.com

  28. Why Randomize?
     [Chart: hospital readmissions by year, 2013–2015, showing the trend for the intervention group]

  29. Why Randomize?
     [Chart: hospital readmissions by year, 2013–2015, adding the counterfactual trend; the gap between intervention and counterfactual is the impact]

  30. Why Randomize?
     [Chart: same as previous, highlighting the impact as the difference between the intervention and counterfactual trends]
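The impact shown in these charts falls out directly once you have a randomized control group: the control group's outcome stands in for the counterfactual, so the estimated impact is just the treatment-minus-control difference. A minimal sketch, with entirely made-up readmission indicators (1 = readmitted):

```python
import statistics

# Hypothetical per-patient readmission outcomes after random assignment
treatment_readmitted = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
control_readmitted   = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]

rate_t = statistics.mean(treatment_readmitted)  # treatment readmission rate
rate_c = statistics.mean(control_readmitted)    # counterfactual (control) rate
impact = rate_t - rate_c                        # negative = fewer readmissions

print(f"estimated impact: {impact:+.2f}")
```

Without randomization, the dashed counterfactual line in the chart would have to be assumed; with it, the control group supplies that line by construction.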

  31. Future opportunities: what questions do you want answered?
     Examples include:
     • Choice of appropriate care (e.g., preventive screening uptake)
     • Who provides care? Where and how is care provided? (e.g., post-partum length of stay, telemedicine)
     • Efficient use of existing resources (e.g., staffing, scheduling)
     • Insurance contract and reimbursement design (e.g., reference pricing, limited network plans)
     • System-wide innovations (e.g., payment reform)
