Evidence-informed Health Policy: NASHP Pre-conference


SLIDE 1

11/17/2017 1

Evidence-informed Health Policy:

NASHP Pre-conference

October 23, 2017

Origins of the EiHP Workshop

  • Original workshops were developed as a joint project between CEbP and the Milbank Memorial Fund in 2009
  • Meant to equip policymakers in state and local government with knowledge, skills, and attitudes for understanding and applying research evidence
  • Adapted in 2014 to be delivered in different formats and timeframes, and to different audiences

SLIDE 2

Acknowledgements

  • Martha Gerrity and Mark Gibson
  • Jane Beyer
  • Milbank Memorial Fund
  • NASHP and PCORI
  • Pew-MacArthur Results First Initiative

To start

  • Introductions (of a sort)
  • Ground rules
  • Overview of the session
SLIDE 3

Ground rules

  • Please don’t hesitate to ask questions
  • Three hours is a long time, so:
    – We’ll be generous with the breaks, including a break to get lunch at noon
    – We understand if you need to step out
  • Participation in the polls and other activities will make this more interesting and useful

Overview of session

  • 10:30-10:45 Introduction, ground rules, overview
  • 10:45-11:25 Defining EiHP and Evidence basics
  • 11:25-11:30 Break
  • 11:30-12:00 Summarizing evidence and Common pitfalls
  • 12:00-12:20 Break for lunch
  • 12:20-1:10 Examples from the evidence
  • 1:10-1:30 Wrap-up, discussion, and questions
SLIDE 4

What is evidence?

  • For our purposes, evidence comes from research that:
    – Is intended to test the validity of a claim
    – Uses reproducible methods
    – Collects and interprets data using tests to distinguish between chance and true effects
    – Can be scrutinized by peers and the public
    – Is falsifiable!
  • More simply stated by W. Edwards Deming: “In God we trust. All others must bring data.”

What is EiHP?

  • An approach to health policy decisions that is informed by the best and most complete available research evidence
  • A structured way to use research to better understand what works, recognizing that:
    – Not all studies are created equal
    – Some studies may not be relevant to policymaking
    – Transparency in identifying and applying studies is important

SLIDE 5

What is EiHP?

  • Systematic process in which relevant research is:
    – Identified in an unbiased fashion
    – Critically interpreted to understand its quality
    – Combined to provide a better estimate of the real effects
    – Applied appropriately to policymaking
    – Re-assessed when new information becomes available

Why should I use EiHP?

  • When tackling complex issues, it’s useful to have a sense of:
    – What we know
    – What we don’t know
    – What we merely believe

SLIDE 6

Why should I use EiHP?

  • An analogy from medicine:
    – When there is a delay in adopting an effective therapy or discarding an ineffective or harmful therapy, lives are at stake
    – This hazard is magnified when we consider programs and policies that affect the lives of many more people

Why should I use EiHP?

  • EiHP improves the chances that a policy or investment will achieve the desired ends
    – And reduces the likelihood that a failed policy will have to be abandoned in the future
  • EiHP can be a starting point for engaging stakeholders with divergent views
  • When done deliberately and transparently, EiHP can increase public confidence in the policymaking process
  • Consider the alternatives:
    – Anecdotes
    – Opinion
    – Intuition

SLIDE 7

Understanding Evidence

The origins of epidemiology

SLIDE 8

The origins of epidemiology… and public health policy

(Image: John Snow memorial and pub. https://upload.wikimedia.org/wikipedia/commons/c/cb/John_Snow_memorial_and_pub.jpg)

Health in the 20th Century

  • Vaccination
  • Motor-vehicle safety
  • Safer workplaces
  • Control of infectious disease
  • Decline in deaths from heart attack and stroke
  • Safer, healthier foods
  • Maternal and prenatal care
  • Family planning
  • Fluoridation
  • Tobacco control policies

CDC. (1999). MMWR, 48(12):241-243.
SLIDE 9

Asking the right question

  • Having a standard way of framing questions you hope to answer with the evidence is critical
    – A research tool for assessing which studies are relevant
    – An exercise in establishing, a priori, what types of research and outcomes would influence the policy decision being contemplated
    – Agreement about desired outcomes and a process for reviewing and summarizing the evidence can help build consensus

Asking the right question

  • PICO(TS+)
    – Population
    – Intervention
    – Comparison
    – Outcome
    – (Timing, Setting, Policy context)
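The PICO(TS+) elements lend themselves to a simple structured record. Below is a minimal sketch in Python; the class name, fields, and summary wording are illustrative, not part of the workshop materials:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PicoQuestion:
    """A policy question framed with the PICO(TS+) elements."""
    population: str
    intervention: str
    comparison: str
    outcomes: list[str]
    timing: Optional[str] = None        # the optional TS+ elements
    setting: Optional[str] = None
    policy_context: Optional[str] = None

    def summary(self) -> str:
        return (f"In {self.population}, does {self.intervention} "
                f"(vs {self.comparison}) change {', '.join(self.outcomes)}?")

# The assertive community treatment example from later in the deck:
q = PicoQuestion(
    population="adults with serious mental illness",
    intervention="assertive community treatment",
    comparison="usual care",
    outcomes=["psychiatric hospitalization", "homelessness"],
)
print(q.summary())
```

Filling in such a record forces you to state, a priori, which outcomes would actually influence the decision being contemplated.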

SLIDE 10

Asking the right question: Population

  • Demographics
  • Conditions
  • Geography

Asking the right question: Intervention

  • Drug, device, or procedure
  • Diagnostic test
  • New methods of organizing or delivering care
  • Systems or process changes
  • Policy changes
SLIDE 11

Asking the right question: Comparison

  • Status quo
  • Placebo
  • Sham procedure
  • Alternate treatment

Asking the right question: Outcomes

  • Health or wellbeing (most important outcomes)
  • Surrogate measures
  • System performance
  • Process measures
SLIDE 12

Asking the right question: PICO example

Population: Adults with serious mental illness
Intervention: Assertive community treatment
Comparator: Usual care
Outcomes: Psychiatric hospitalization, emergency dept use, homelessness, psychiatric symptoms, medication adherence

The “reverse” PICO

  • When there are disagreements about the meaning or applicability of a study, it can help to reverse the PICO process
    – May clarify whether the study is really answering the question you are interested in

SLIDE 13

The Challenge of Using Evidence

  • There are an estimated 24 million studies in PubMed, each a potential piece of evidence
  • Studies often reach conflicting results
  • It’s easy to pick and choose the evidence that best supports a given position
  • How do you know what evidence is most accurate and reliable?

Why are some studies “good” and some studies “bad”?

  • Some studies are not designed to fairly answer the question they pose
  • Studies can be biased to favor certain results, intentionally or unintentionally
  • Conflict of interest can result in a bias toward favorable results
  • It’s time-consuming and takes some technical sophistication to sort through studies to assess quality and summarize results

SLIDE 14

The essence of epidemiology

  • How do you explore the relationship between an exposure and an outcome?
    – Hypothesize and observe
    – Hypothesize and experiment

The evidence hierarchy

Murad et al. (2016). Evidence-Based Medicine. Published Online First: 23 June 2016. doi:10.1136/ebmed-2016-110401
SLIDE 15

The evidence hierarchy

Murad et al. (2016). Evidence-Based Medicine. Published Online First: 23 June 2016. doi:10.1136/ebmed-2016-110401

Case series or reports

  • Simply describe a set of cases and their outcomes
  • Often used for rare conditions, or when a treatment or test is very new
  • Usually represent the experience of a single center
  • Should not be used to establish effectiveness of a treatment
  • Be especially wary of non-consecutive case series (meaning that the authors picked out only the cases they wanted to describe)

SLIDE 16

The risk of case reports

  • Porter and Jick (1981): addiction is rare in people treated with narcotics

The risk of case reports

Leung, et al. (2017). NEJM. 376(22).

SLIDE 17

Case-control studies

(Diagram: starting from cases and controls at the start of the study, look back in time to classify each person as exposed or unexposed)

Case-control example

(Diagram: starting from students with high vs low test scores, look back to whether each was in a large or small class)

SLIDE 18

Case-control studies

  • Advantages:
    – Quick and inexpensive
    – Particularly good for investigating rare outcomes
    – Dynamic populations
  • Disadvantages:
    – Recall bias
    – Cases and controls may not be representative (selection bias)
    – Confounding
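The basic case-control computation is an odds ratio: the odds of past exposure among cases versus controls. A minimal sketch with made-up counts (no real study implied):

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """OR = (a * d) / (b * c) for the classic 2x2 exposure table."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Hypothetical counts: 40 of 100 cases were exposed vs 20 of 100 controls
print(round(odds_ratio(40, 60, 20, 80), 2))  # 2.67
```

An odds ratio above 1 suggests the exposure is more common among cases, subject to the recall, selection, and confounding caveats above.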

Prospective cohort studies

(Diagram: a healthy population is divided into exposed and unexposed groups at the start of the study and followed forward in time to measure outcomes)

SLIDE 19

Prospective cohort example

(Diagram: a healthy population of students in small vs large classes is followed forward to compare test scores)

Prospective cohort studies

  • Advantages:
    – Eliminate recall bias
    – Can examine multiple outcomes
    – Allow estimation of the incidence of the outcome
    – Better at detecting long-term harms than other study designs
  • Disadvantages:
    – Expensive and take a long time
    – Loss to follow-up (attrition bias)
    – Still subject to confounding

SLIDE 20

Retrospective cohort studies

(Diagram: exposed and unexposed groups are identified in past records and their subsequent outcomes are compared)

Retrospective cohort example

(Diagram: small-class and large-class students are identified in past records and their test scores compared)

SLIDE 21

Retrospective cohort studies

  • Advantages:
    – Much easier to do than prospective cohorts
  • Disadvantages:
    – Only as good as the data set being used (claims data and diagnosis codes are often unreliable)
    – Data about confounding factors may be missing and can’t be adjusted for

Randomized controlled trial (RCT)

(Diagram: a population of interest is randomized to intervention or control and followed forward to measure outcomes)

SLIDE 22

RCT example

(Diagram: primary school students are randomized to a large or small class and followed forward to compare test scores)

RCTs

  • Advantages:
    – Eliminates confounding
    – Can be blinded to control for placebo effects and performance and detection bias (sometimes)
  • Disadvantages:
    – Expensive; may not follow people for long periods
    – Sometimes inappropriate to randomize
    – Can still be poorly designed, which leads to bias
    – May not detect rare or long-term harms

SLIDE 23

RCT-like studies

  • Cluster randomized studies
    – Intervention or control is randomly assigned to a “cluster” of people (e.g. at the level of a clinic, school, or correctional facility), but not at the individual level
    – Commonly done for social or health services studies
  • Quasi-randomized studies
    – Not truly randomized, but leverage natural experiments like the Oregon Health Plan lottery or social services that have waiting lists

An example: Physical activity

  • Does a program to slow the decline in physical activity among adolescents work?
  • Randomly assigned 10 secondary schools to a multicomponent physical activity intervention (Physical Activity 4 Everyone) or to usual physical education
  • Over 24 months, 1,150 students participated in the study
  • In the PA4E schools, students had significantly greater levels of moderate-vigorous physical activity as measured by an accelerometer

Sutherland, et al. (2016). Am J Prev Med. 51(2):195–205.

SLIDE 24

Baby simulators for preventing teen pregnancy?

Brinkman, et al. (2016). http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(16)30384-1/abstract

Summarizing and Assessing Evidence

SLIDE 25

The evidence hierarchy

Murad et al. (2016). Evidence-Based Medicine. Published Online First: 23 June 2016. doi:10.1136/ebmed-2016-110401

Systematic review and meta-analysis

Systematic Review

  • Focused summary of research on a topic that uses clearly defined steps to:
    – Perform a comprehensive search for the evidence
    – Select which studies to include
    – Assess quality of each study

Meta-analysis

  • Systematic review PLUS a conclusion that quantitatively combines results across studies
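The quantitative combination step can be sketched as a fixed-effect, inverse-variance weighted average: each study's estimate is weighted by the inverse of its variance, so more precise studies count more. The effect estimates below are hypothetical, not drawn from any review:

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance weighted pooled estimate with a 95% CI."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical trials reporting log risk ratios (illustration only)
pooled, ci = fixed_effect_meta([-0.30, -0.10, -0.25], [0.15, 0.10, 0.20])
print(round(pooled, 3), [round(x, 3) for x in ci])
```

Real meta-analyses go further (random-effects models, heterogeneity statistics), but the weighting idea is the core of "combined to provide a better estimate of the real effects."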

SLIDE 26

How evidence is synthesized

  • A systematic search of the literature is done
  • Studies are selected for inclusion based on pre-specified criteria
  • The studies are individually assessed for their quality and risk of bias
  • Included studies are summarized and, when appropriate, statistical methods are applied to better estimate the true results (and risks)
  • A judgment is made about the overall quality of the literature and its limitations

Why are systematic reviews so important?

  • Single trials rarely settle an issue
  • Reproducibility confirms the effect
  • Reviews refine our estimate of the size of the intervention effect
  • Reviews can expose unintended harms that might not have been detected in individual trials

SLIDE 27

What makes a good systematic review?

  • A well-defined and appropriately narrow question
  • Comprehensive search strategy that is unlikely to miss any relevant studies
  • Clear inclusion and exclusion criteria and an accounting of study disposition
  • Quality assessment of the included studies
  • Appropriate techniques to combine results
  • Sensitivity analysis
    – i.e. what if we only look at RCTs (or good-quality studies, or studies with more than 500 people, etc.)?

(Diagram: each single study is assessed for risk of bias; the body of evidence is assessed for inconsistency, indirectness, imprecision, and publication bias; these feed into the evidence summary)

SLIDE 28

GRADE(ing) the evidence

From: https://www.nlm.nih.gov/nichsr/hta101/HTA_101_FINAL_7-23-14.pdf

Plain Language GRADEs

SLIDE 29

Supportive housing – General

  • Good-quality review of 8 SRs, 7 RCTs, and 5 quasi-experimental studies (limited to adults)
  • Moderate-quality evidence
  • Consistent improvements in housing outcomes
  • Subset of Housing First programs also showed reductions in emergency dept. use and hospitalization
  • Mixed evidence on behavioral health and substance use outcomes
  • Some evidence of racial differences in outcomes

Rog, D.J., Marshall, T., Dougherty, R.H., George, P., Daniels, A.S., Shoma Ghose, S., & Delphin-Rittmon, M.E. (2014). Permanent Supportive Housing: Assessing the Evidence. Psychiatric Services, 65(3), 287-294.

Supportive housing – Program evaluations

  • Numerous program evaluations in the gray literature
  • Many simple before-and-after designs, but some with more rigorous quasi-experimental designs
  • Many only reported savings for those who remain housed
  • Most reported significant cost-savings, ranging from about $1,000 to $10,000 per person per year

SLIDE 30

Supportive housing – Cost-offsets

  • Good-quality narrative systematic review of 4 RCTs, 8 quasi-experimental studies, and 22 before-and-after studies of Housing First that reported on costs
  • 21 of 22 before-and-after studies, 4 of 8 quasi-experimental studies, and 1 of 4 RCTs showed cost-savings
  • Authors questioned whether it is certain that Housing First programs will pay for themselves

Applying evidence quality to policymaking

  • High-quality evidence gives us confidence that if a program or policy were replicated, the results would be similar to what the studies found
  • Very-low-quality evidence gives us very little confidence that the program or policy would produce the observed benefits, and future studies might change the conclusions

SLIDE 31

Analysis of Evidence and Value Judgments

(Diagram: EVIDENCE feeds INFORMATION ABOUT OUTCOMES, which feeds DECISIONS; scientific judgments link evidence to outcomes, and public values and preferences link outcomes to decisions)

Based on a graphic published in D.M. Eddy, (1990). “Clinical Decision Making: From Theory to Practice—Anatomy of a Decision.” Journal of the American Medical Association, 263(3): 441-3.

Domains of policy deliberation

Key questions for policymakers to ask:

1. What is the quality of the evidence? Based on this evidence, how confident should I be that the program or policy will produce the benefit I’m looking for?
2. What exactly was the intervention in the study that produced the results? Could anything besides the intervention have influenced the results?
3. Who produced the evidence? Are there conflicts of interest that might overstate the effectiveness?
4. What about other studies? Do they reach similar conclusions?
5. Who are other stakeholders for this issue (providers, patients, payers, government agencies), and would they interpret the data differently?

SLIDE 32

Common Pitfalls

Do the endpoints matter?

DeVincenzo, et al. (2015). NEJM. 373(21):2048-2058

  • Study examining a novel treatment for RSV
  • Adults were inoculated with RSV and quarantined
  • Randomized to drug or placebo
  • Followed for 16 days in quarantine and 28 days total for:
    – Amount of viral RNA and time to non-detectable levels
    – Symptom severity
    – Mucus weight

SLIDE 33

Inferential reasoning

  • The existence of a risk factor does not guarantee that interventions designed to address the risk factor will be successful
    – Sugar-sweetened beverage (SSB) consumption is associated with adolescent obesity
    – An RCT of a year-long intervention targeting adolescents was successful in reducing SSB consumption but had no effect on weight at 2-year follow-up
Ebbeling, et al. (2012). NEJM. 367(15):1407-16.

Placebo effects for self-reported outcomes

                          Sham surgery   Knee arthroscopy
WOMET score improvement   27%            25%
Pain score improvement    33%            31%

Sihvonen, et al. (2013). NEJM. 369(26):2515-24.

SLIDE 34

Regression to the mean

  • Particularly limits the usefulness of uncontrolled before-and-after studies of utilization

Johnson, et al. (2015). Health Affairs. 34(8):1312-19.
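A quick simulation shows why uncontrolled before-and-after comparisons of "high utilizers" mislead: select people because of a noisy high measurement, and their next measurement falls on average even with no intervention at all. All numbers below are invented for illustration:

```python
import random

random.seed(0)
true_rate = [random.gauss(5, 1) for _ in range(10_000)]   # each person's stable utilization
year1 = [t + random.gauss(0, 2) for t in true_rate]       # noisy year-1 measurement
year2 = [t + random.gauss(0, 2) for t in true_rate]       # independent year-2 noise

high = [i for i, y in enumerate(year1) if y > 8]          # "target the high utilizers"
avg1 = sum(year1[i] for i in high) / len(high)
avg2 = sum(year2[i] for i in high) / len(high)
print(f"selected group, year 1: {avg1:.2f}; year 2: {avg2:.2f}")  # year 2 is lower
```

The drop happens with no program at all, which is exactly why a concurrent control group matters.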

Relative and absolute benefits

  • Statistical significance ≠ real-world significance
    – The magnitude of the difference may still be small
  • Consider a drug that is advertised as reducing the risk of a heart attack by 33%
    – That could mean it reduces the absolute risk of a heart attack from 60% to 40%
    – It could also mean that it reduces the absolute risk from 3% to 2%
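The arithmetic behind this can be made explicit: the same 33% relative risk reduction corresponds to very different absolute risk reductions, and therefore very different numbers needed to treat. The NNT framing is an addition here, not from the slides:

```python
def risk_summary(control_risk, treated_risk):
    """Relative risk reduction, absolute risk reduction, number needed to treat."""
    rrr = (control_risk - treated_risk) / control_risk
    arr = control_risk - treated_risk
    nnt = 1 / arr
    return rrr, arr, nnt

# Both scenarios from the slide show a "33%" relative reduction:
print(risk_summary(0.60, 0.40))  # ARR of 20 points: treat 5 people to prevent 1 event
print(risk_summary(0.03, 0.02))  # ARR of 1 point: treat 100 people to prevent 1 event
```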

SLIDE 35

Inadequate follow-up

  • An example of multigenerational outcomes:
    – The Moving to Opportunity (MtO) social experiment was a large randomized trial that compared housing vouchers requiring relocation to low-poverty neighborhoods with usual vouchers
    – Initial economic outcomes of MtO were mostly negative
    – Long-term follow-up of children from the study found major differences in income later in life

Chetty, et al. (2015). American Economic Review. 106(4):1-46.

Publication bias

  • Positive studies are more likely to be published than negative studies (particularly for smaller studies)
  • Sometimes negative data are intentionally withheld
  • This creates a risk for systematic reviewers
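A small simulation illustrates the reviewer's risk: run many trials of a treatment with no true effect, "publish" only the nominally significant positive ones, and the published literature shows a spurious benefit. Entirely simulated data:

```python
import random
import statistics

random.seed(1)

def small_trial(n=25, true_effect=0.0):
    """One small trial; returns its mean and a crude one-sided z-test result."""
    xs = [random.gauss(true_effect, 1) for _ in range(n)]
    mean = statistics.fmean(xs)
    se = statistics.stdev(xs) / n ** 0.5
    return mean, mean / se > 1.96   # "significant and positive"

results = [small_trial() for _ in range(2000)]
all_trials = statistics.fmean(m for m, _ in results)
published = statistics.fmean(m for m, sig in results if sig)
print(f"all trials: {all_trials:+.3f}, 'published' only: {published:+.3f}")
```

The full set of trials averages near zero (the truth), while the "published" subset suggests a clear positive effect.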
SLIDE 36

Subgroup analysis

  • Classic post-hoc analysis of the ISIS-2 trial, which showed overwhelming benefit of aspirin for people with heart attacks
  • The benefit didn’t extend to Libras or Geminis
  • Subgroups should be pre-specified and reasonable
  • The larger the number of subgroups, the greater the chance that a finding will be different due to chance alone

Peto. (2011). Br J Cancer, 104(7):1057–1058
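The multiplicity arithmetic behind this warning: with independent subgroups each tested at alpha = 0.05, the chance of at least one spurious "significant" finding is 1 - 0.95^k. This is a textbook approximation, not an analysis of ISIS-2:

```python
def any_false_positive(n_subgroups, alpha=0.05):
    """Chance of at least one chance 'finding' across independent subgroup tests."""
    return 1 - (1 - alpha) ** n_subgroups

for k in (1, 6, 12):
    print(k, round(any_false_positive(k), 2))
# With 12 subgroups (e.g. the astrological signs), roughly a 46% chance of a fluke
```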

Conflicts of interest (COI)

  • Cochrane review of the effects of industry sponsorship on published results
    – 27% more likely to report efficacy
    – 34% more likely to reach positive conclusions about the drug or device
  • “…industry sponsored drug and device studies are more often favorable to the sponsor’s products than non-industry sponsored drug and device studies due to biases that cannot be explained by standard 'Risk of bias' assessment tools…”

Lundh, et al. (2017). Cochrane Database of Systematic Reviews. Issue 2:MR000033.

SLIDE 37

Guidelines are only as good as the evidence

  • 20% of Class I recommendations were downgraded, reversed, or omitted in later guidelines
  • Recommendations based on opinion, observational data, or a single RCT were the most likely to be reversed (3-fold higher than those based on multiple trials or meta-analyses)

Neuman et al. (2014). JAMA, 311(20):2092-2100.

Common understanding of “cost-effective”

  • When someone tells you that an intervention is cost-effective, what does that mean?

SLIDE 38

Types of economic analysis

  • Cost-effectiveness: costs of intervention ÷ natural effects (e.g. deaths, unintended pregnancies, etc.). How much does it cost per unit of effect?
  • Cost-utility: costs of intervention ÷ measures of utility (QALYs, DALYs). How much does it cost per utility-adjusted effect?
  • Cost-benefit: costs of intervention ÷ monetized effects or utilities. How much does it cost compared to the costs avoided by the outcomes?

What is a QALY?

  • Quality-adjusted life years (QALYs) are a compound measure of added length of life combined with a “discount” for lower quality of life in some disease states
  • The degree of discounting for less-than-perfect health is derived by various methods
  • Allows estimation of the cost per QALY, which can be used to establish “cost-effectiveness” (if a willingness-to-pay threshold is agreed on) or to compare various options
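The cost-per-QALY calculation itself is simple; the difficulty lives in estimating the inputs. A sketch with invented numbers (the costs, utility weight, and threshold are all illustrative):

```python
def cost_per_qaly(added_cost, life_years_gained, utility_weight):
    """Cost-utility ratio: QALYs = life-years gained x quality-of-life weight."""
    return added_cost / (life_years_gained * utility_weight)

# Invented inputs: $50,000 of added cost buys 2 extra years lived at utility 0.8
icer = cost_per_qaly(50_000, 2, 0.8)
print(f"${icer:,.0f} per QALY")                  # $31,250 per QALY
willingness_to_pay = 100_000                     # example threshold, not a standard
print("under threshold:", icer <= willingness_to_pay)  # True
```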
SLIDE 39

Cost-benefit analysis

  • Monetizes outcomes of an intervention (directly or indirectly) to allow comparison of potential savings to the costs of the intervention

Cost-benefit analysis

  • Colorado Adolescent Maternity Program study in 2014
  • Immediate post-partum placement of LARC vs standard care with respect to repeat pregnancy
  • Each dollar invested in LARC saves $6.50 for the Colorado Medicaid program over 3 years

Repeat pregnancy rate:
               1 year   2 years   3 years
LARC           2.6%     8.1%      17.7%
Standard care  20.1%    46.5%     83.7%

Han, L. et al. (2014). AJOG. 211:24.e1-7.
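The underlying cost-benefit figure is simply dollars saved per dollar invested. The dollar amounts below are invented to be consistent with the reported $6.50-per-$1 result, not taken from the study:

```python
def benefit_cost_ratio(savings, program_cost):
    """Dollars saved per dollar invested."""
    return savings / program_cost

# Hypothetical: a $1,000 LARC placement that avoids $6,500 in downstream spending
print(benefit_cost_ratio(6_500, 1_000))  # 6.5
```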

SLIDE 40

Research on the Effects of Health Insurance

“Nobody dies because they don’t have access to health care.” – Rep. Raul Labrador at a town hall meeting on health care in May 2017

SLIDE 41

Oregon Health Insurance Experiment

  • In 2008, Oregon had resources to offer Medicaid coverage to some people on a waiting list
  • Offers to enroll were extended through a random lottery
  • 29,834 people received invitations to enroll, and 45,088 who did not became the control group
  • Because many people who were selected by lottery did not apply or were not eligible, and about 15% of the control group obtained coverage, being selected by the lottery equated to a roughly 25% increase in the probability of having Medicaid coverage

Oregon Health Insurance Experiment

  • Followed for 2 years to assess the effects of coverage on:
    – Mortality
    – Physical health outcomes
    – Behavioral health outcomes
    – Access to preventive services
    – Utilization
    – Personal finances
    – Self-reported health and quality of life

SLIDE 42

Oregon Health Insurance Experiment

  • At 1 year, there was no statistically significant difference in mortality
    – Death was a rare event in both groups, roughly 0.8% vs 0.7%
    – The very wide confidence interval includes the possibility of a substantial increase or decrease in mortality at 1 year
  • Case closed?

Finkelstein et al. (2012). Q J Econ, 127(3):1057–1106.
Woolhandler & Himmelstein. (2017). Ann Int Med, 167(6):424-431
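To see why the interval is wide relative to the difference, one can compute a rough unadjusted Wald interval from the slide's group sizes and approximate rates. This is illustration only; the study's own adjusted, instrumental-variable analysis differs:

```python
import math

def risk_diff_ci(p1, n1, p2, n2):
    """Unadjusted Wald 95% CI for a difference in proportions."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# Approximate 1-year mortality: 0.8% of 29,834 selected vs 0.7% of 45,088 controls
diff, (lo, hi) = risk_diff_ci(0.008, 29_834, 0.007, 45_088)
print(f"difference {diff:.3%}, 95% CI {lo:.3%} to {hi:.3%}")  # the interval spans zero
```

The interval crosses zero and is wide compared with the 0.1-point observed difference, which is exactly the slide's point about rare outcomes.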

Health Insurance and Mortality

  • A recent review by Himmelstein and Woolhandler summarizes the results of multiple studies examining the effects of insurance on mortality
  • Mix of RCTs, quasi-experimental, and observational studies published over the past 40 years

Woolhandler & Himmelstein. (2017). Ann Int Med, 167(6):424-431

SLIDE 43

Health Insurance and Mortality

  • The association between insurance and mortality is difficult to study for several reasons:
    – Mortality from treatable medical conditions is uncommon in non-elderly adults
    – Exposure to insurance is dynamic
    – Observational studies have significant confounding and potential reverse causality
    – Hard to adjust for baseline health status, since this is often unknown in the uninsured
    – Before-and-after studies may capture secular trends

Woolhandler & Himmelstein. (2017). Ann Int Med, 167(6):424-431

Health Insurance and Mortality

  • Quasi-experimental studies
    – Comparison of mortality trends in 3 states (ME, NY, AZ) expanding Medicaid to neighboring states without expansion
    – Uninsurance rate fell by 3.2%
    – Relative risk of mortality fell by about 6% (or 20 deaths per 100,000)
    – Larger effects among middle-aged adults, non-whites, and in poorer counties

Woolhandler & Himmelstein. (2017). Ann Int Med, 167(6):424-431

SLIDE 44

Health Insurance and Mortality

  • Quasi-experimental studies
    – Comparison of mortality trends in Massachusetts vs propensity-matched counties in other states
    – Relative risk of mortality fell by about 3% (or 8 deaths per 100,000)
    – Larger effects in poorer counties

Woolhandler & Himmelstein. (2017). Ann Int Med, 167(6):424-431

Health Insurance and Mortality

  • Other lines of evidence
    – Survey-based cohorts in the US demonstrate that coverage is associated with 10% to 20% reductions in the risk of mortality
    – Incremental implementation of universal coverage programs in Taiwan and Canada showed associations between coverage and accelerated declines in mortality

Woolhandler & Himmelstein. (2017). Ann Int Med, 167(6):424-431

SLIDE 45

Health Insurance and Mortality

  • Limitations of the studies mean there is uncertainty about the effects of insurance on mortality in non-elderly adults
  • However, without the benefit of formal systematic review and meta-analysis, the balance of available evidence suggests that insurance coverage may be associated with modest reductions in mortality
  • Case closed?

Other Benefits of Health Insurance

  • Mortality is a high bar as far as outcomes go
  • It’s reasonable to look at other outcomes as:
    – Part of a “chain of evidence” that would suggest a mortality benefit
    – Other important measures of health and wellbeing, regardless of association with mortality

SLIDE 46

Other Benefits of Health Insurance

  • Recent review of studies examining the broader effects of health insurance on various outcomes:
    – Access and utilization
    – Chronic disease diagnosis and management
    – Self-reported health and quality of life
    – Financial security

Sommers et al. (2017). NEJM, 377(6):586-593.

Other Benefits of Health Insurance

  • Access and utilization
    – Greater likelihood of accessing outpatient services and having a usual source of care
    – Increased use of preventive services
    – Improved adherence to prescription drugs
    – Mixed data on ED utilization

Sommers et al. (2017). NEJM, 377(6):586-593.

SLIDE 47

Other Benefits of Health Insurance

  • Chronic disease diagnosis and management
    – Increased rates of diagnosis and treatment of conditions like diabetes and high blood pressure
    – Better outcomes for people with depression

Sommers et al. (2017). NEJM, 377(6):586-593.

Other Benefits of Health Insurance

  • Self-reported health and quality of life
    – Significantly improved self-rated health compared to the prior year in the OHIE
    – Authors point to evidence of an association between self-reported health and 5- to 10-year mortality

Sommers et al. (2017). NEJM, 377(6):586-593.

SLIDE 48

Other Benefits of Health Insurance

  • Financial security
    – Reduced out-of-pocket spending
    – Reduced catastrophic spending and medical debt
    – Reduced personal bankruptcy

Sommers et al. (2017). NEJM, 377(6):586-593.

Effects of Health Insurance

  • Consistent evidence of improved access to care, use of preventive services, and chronic disease diagnosis and management
  • Improved outcomes for people with depression
  • Improved self-rated health and quality of life
  • Improved financial security
  • Mixed evidence with greater uncertainty about the effects on ED utilization and mortality

Sommers et al. (2017). NEJM, 377(6):586-593.

SLIDE 49

EiHP Lessons

  • Single studies (even RCTs) need to be put in context
  • Certainty about the effects will vary by outcome
  • When evidence is limited and definitive studies are not likely to be done, it may be necessary to use chain-of-evidence approaches

Medicaid ACO Models: Comparing Oregon and Colorado

SLIDE 50

ACO Models: Oregon & Colorado

  • Analysis of two different Medicaid ACO models that were implemented near-contemporaneously in Colorado and Oregon
  • Focused on:
    – Expenditures and utilization
    – Access
    – Appropriateness of care for certain conditions
    – Preventable hospitalizations

McConnell et al. (2017). JAMA, 177(4):538-545.

ACO Models: Oregon & Colorado

SLIDE 51

ACO Models: Oregon & Colorado

Oregon CCO:
  • Primary care medical homes
  • Risk-adjusted global budgets with financial risk
  • Required reductions in spending growth
  • High utilizer programs
  • ED use reduction programs
  • Hospital-to-home transitions
  • Spending on social determinants
  • Integration of oral and behavioral health

Colorado RCCO:
  • Primary care medical homes
  • FFS with added PMPM for care coordination
  • No required reductions in spending growth, but anticipated
  • Encouraged use of APMs over FFS
  • High utilizer programs
  • ED use reduction programs
  • Spending on social determinants
  • Centralized data collection for clinic performance

McConnell et al. (2017). JAMA, 177(4):538-545.

ACO Models: Oregon & Colorado

  • Essentially asked how things changed in Oregon compared to Colorado in the two years following implementation of their respective models
  • Difference-in-differences analysis
  • 18 months of pre-reform data and 24 months of post-reform data
  • Excluded dual eligibles from the analysis
  • Propensity score weighting based on age, sex, rural residence, and Chronic Illness and Disability Payment System risk score to account for differences in the populations

McConnell et al. (2017). JAMA, 177(4):538-545.
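The difference-in-differences estimator itself is just two subtractions: the change in Oregon minus the change in Colorado, which nets out any trend shared by both states. The utilization numbers below are invented for illustration, not the study's results:

```python
def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """(change in the treated group) minus (change in the control group)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical ED visits per 1,000 member-months:
effect = did_estimate(treat_pre=48.0, treat_post=44.0,
                      control_pre=47.0, control_post=46.0)
print(effect)  # -3.0
```

Here Oregon fell 3 more visits than the shared trend predicts; the real analysis adds propensity weighting and regression adjustment on top of this basic contrast.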

SLIDE 52

ACO Models: Oregon & Colorado

  • Expenditures
    – Standardized expenditures
  • Utilization
    – ED visits, inpatient days, PC visits
  • Access
    – Well-child visits and preventive visits for adolescents and adults
  • Appropriateness
    – Four measures of appropriate or low-value care
  • Quality
    – Preventable ED visits and hospitalizations

McConnell et al. (2017). JAMA, 177(4):538-545.

ACO Models: Oregon & Colorado

  • Expenditures
    – No statistically significant differences in standardized total expenditures
    – Over two years, there was a statistically significant difference in inpatient expenditures, with Oregon spending more

McConnell et al. (2017). JAMA, 177(4):538-545.

SLIDE 53

ACO Models: Oregon & Colorado

  • Utilization
    – Statistically significant differences in ED and PC visits, with Oregon having lower utilization rates
    – Small but statistically significant difference in inpatient days, with Oregon having higher utilization rates

McConnell et al. (2017). JAMA, 177(4):538-545.

ACO Models: Oregon & Colorado

  • Access
    – At two years, statistically significant differences in well-child visits (ages 3-6), adolescent well-care visits, and adult preventive care visits, with greater access in Oregon

McConnell et al. (2017). JAMA, 177(4):538-545.

SLIDE 54

ACO Models: Oregon & Colorado

  • Appropriateness
    – No statistically significant differences in use of inappropriate care, with the exception of slightly greater avoidance of imaging for uncomplicated headache in Oregon

McConnell et al. (2017). JAMA, 177(4):538-545.

ACO Models: Oregon & Colorado

  • Quality
    – Statistically significant differences in avoidable ED visits and preventable acute inpatient admissions, with lower observed rates in Oregon

McConnell et al. (2017). JAMA, 177(4):538-545.

SLIDE 55

ACO Models: Oregon & Colorado

  • Limitations
    – Snapshot of certain indicators within each outcome domain
    – Expenditures didn’t include prescription drug spending
    – Results have to be contextualized within the broader national trend of decreased growth in spending and utilization between 2010 and 2014

McConnell et al. (2017). JAMA, 177(4):538-545.

ACO Models: Oregon & Colorado

  • Conclusions
    – Similar changes in expenditures
    – Oregon observed greater improvements in certain measures of access, utilization, and quality

McConnell et al. (2017). JAMA, 177(4):538-545.

SLIDE 56

In the end…

  • Good evidence is a tool for good governing
  • It allows more confidence that a public policy will:
    – Achieve its intended goal
    – Be the best use of limited resources
    – Not have to be abandoned in 5 years