A Comparison of Strategies for Assessing Fidelity to Evidence-Based Interventions (PowerPoint PPT Presentation)



SLIDE 1

A Comparison of Strategies for Assessing Fidelity to Evidence-Based Interventions

Shannon Wiltsey Stirman, PhD, National Center for PTSD and Stanford University

@slwiltsey

SLIDE 2

Acknowledgements

  • Coauthors
    • Candice Monson, PhD (Co-PI)
    • Norman Shields, PhD (Co-I)
    • Patricia Carreno
    • Kera Mallard
    • Matthew Beristianos
    • Sharon Hasslen
  • Research Funding
    • Canadian Institutes of Health Research RN327031
    • National Institute of Mental Health R01 MH106506
  • The authors have no conflicts of interest to report.
SLIDE 3

Importance of assessing fidelity

  • A measure of the success of implementation strategies such as training
  • A key implementation outcome (Proctor et al., 2009)
  • Necessary to understand unexpected outcomes, e.g., "voltage drop" (Schoenwald et al., 2010)
  • Fidelity support has been shown to improve training outcomes (Lu et al., 2014) and decrease turnover (Aarons et al., 2009)

SLIDE 4

Relationship between fidelity and outcomes

  • Mixed findings regarding observation
    • A meta-analysis found no overall relationship (Webb et al., 2010)
    • Fidelity predicted changes in depression (Webb et al., 2010)
    • Temporal confounds
    • Subsequent findings for CPT for PTSD (Farmer et al., 2015)
  • Self-report
    • Some researchers have found associations with outcomes (Hanson et al., 2015)
  • Clinical worksheets
    • Fidelity predicted subsequent symptom change (Stirman et al., 2015)

SLIDE 5

Exploring associations with clinical outcomes

  • Implications for data collection
    • Important to rule out temporal confounds
    • Potential for moderating variables to also impact fidelity
  • Possible strategies
    • Rate all early sessions and examine their impact on subsequent symptom change
    • Look at session-to-session change
  • All strategies require numerous fidelity ratings
    • Lower-burden, reliable methods would advance this line of research

SLIDE 6

Considerations in assessing fidelity

  • Observation. Advantages: accuracy. Disadvantages: rater agreement, time.
  • Self-report. Advantages: less time intensive than observation. Disadvantages: accuracy unknown, response bias.
  • Clinical documentation. Advantages: integrated into care, accessible. Disadvantages: clinician burden, response bias.
  • Interview. Advantages: interviewer can probe for details. Disadvantages: clinician burden, potential response bias.
  • Survey. Advantages: typically brief. Disadvantages: clinician burden.
  • Work samples (e.g., CBT worksheets). Advantages: integrated into care, minimizes clinician burden. Disadvantages: requires rating.

SLIDE 7

Method

SLIDE 8

Study design

  • Fidelity to cognitive processing therapy (CPT) was assessed in a sample of clinician participants from a study on implementation support strategies
  • Clinician participants completed the following:
    • One-time interview
    • Monthly self-report (re: adherence to CPT)
    • Session note with adherence checklist
    • CPT worksheets
    • Recordings of therapy sessions

SLIDE 9

Sample Characteristics

Therapists

  • N=40
  • 32% M; 68% F
  • Age 42 (SD=11)
  • 86% White; 4% Hispanic
  • 49% PhD/PsyD/MD; 33% Master's; 18% Bachelor's/Other
  • Years of practice = 11 (SD=8)
  • 36% Private Practice; 21% Community Mental Health; 11% Federal; 18% Provincial; 15% Other

Clients

  • N=77
  • 41% M; 57% F; 1% T
  • Age 40 (SD=14)
  • 75% White; 3% Black; 3% South Asian; 5% Hispanic/Latino; 9% Other
  • 78% English first language; 9% French
  • 40% Military or Veteran
  • 65% 12 or more years of education

SLIDE 10

Observer ratings

  • Raters were trained to 90% agreement on adherence and competence ratings
  • Raters reviewed full audio of CPT sessions
  • Dichotomous ratings of adherence for unique and essential CPT items
  • Seven-point scale for competence on each CPT item
  • Decision rules to foster agreement

SLIDE 11

Interview

  • Interviewers asked about:
    • The extent to which the therapist followed the CPT protocol
    • The type, nature, and frequency of adaptations
  • Global rating of adherence (generally adherent vs. generally non-adherent)

SLIDE 12

Self-Reports

  • In the past month, how closely have you followed the CPT protocol with your cases? (0-3 scale)
  • Clinical note checklist
    • Checked off each unique and essential item completed in a given session

SLIDE 13

CPT worksheets


SLIDE 14

Example ABC worksheet:

  • A (Activating Event: something happens): Commanding officer making orders that got us into crossfire.
  • B (Belief: I tell myself something): "People in authority cannot be trusted. He put us in harm's way."
  • C (Consequence: I feel something): I feel fearful and distrusting. I avoid people in authority, or argue with them about their decisions when I have to interact with them.

A second entry: A: Tom told me to get over it. B: I hate him. C: Upset.

  • Does it make sense to tell yourself "B" above? Yes. He doesn't understand what happened and he's said hurtful things.
  • What can you tell yourself on such occasions in the future? It's probably best not to talk with him about it.

Clinical Note: "CPT session 3. Reviewed ABC sheets, identified stuck points. Assigned ABC sheets and trauma narrative for homework."

Worksheets combined with clinical notes can provide richer information

SLIDE 15

CPT worksheet

  • Rater scores each section for adherence (0-1)
  • Assigns competence rating for each column/section
  • Previous research found high inter-rater reliability
  • Adherence k=.68-.98
  • Competence ICC=.63-.89

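The adherence kappas and competence ICCs above are inter-rater reliability statistics. As a rough sketch only (not the study's code), an ICC(2,1) (two-way random effects, absolute agreement, single rater) can be computed from a targets-by-raters score matrix; the demo numbers below are the classic Shrout and Fleiss (1979) example data, not CPT worksheet ratings.

```python
def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` has one row per rated target (e.g., a worksheet section),
    with one score per rater in each row."""
    n, k = len(scores), len(scores[0])  # targets, raters
    grand = sum(map(sum, scores)) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    # Mean squares for targets (rows), raters (columns), and residual.
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((scores[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Shrout & Fleiss (1979) example: 6 targets rated by 4 raters.
ratings = [[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
           [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]]
print(round(icc2_1(ratings), 2))  # → 0.29
```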

SLIDE 16

Results

SLIDE 17

Results: Observer

  • Adherence M=.85, SD=.25 (0-1 scale)
  • Competence M=2.98, SD=1.13 (0-6 scale)
  • Feasibility
    • 60-75 minutes per rating
    • 40 clinicians turned in 485 sessions
  • Reliability: k=.87 (adherence), ICC=.78 (competence)
  • Treated as the "gold standard"
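The kappa reported for adherence is Cohen's chance-corrected agreement between raters on the dichotomous items. A minimal pure-Python sketch, using invented ratings rather than the study's data:

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical ratings."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected agreement if each rater assigned categories independently
    # at their own marginal rates.
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[cat] * c2[cat] for cat in c1) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical dichotomous adherence ratings (1 = element delivered)
# from two raters across ten session elements.
rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
print(round(cohen_kappa(rater_a, rater_b), 2))  # → 0.78
```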

SLIDE 18

Results: Interview

  • Rater agreement: simple to reach 95% agreement
  • Feasibility:
    • 30 completed (75%)
    • One-hour interview (included other topics)
    • Coding is brief
  • Less precise, as it encompasses a larger timeframe
  • 37% rated as generally adherent (0-1 scale)
  • Agreement with observer ratings
    • Adherence: r=.048, p=.57
    • Competence: rpb=.12, p=.16
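rpb above denotes a point-biserial correlation: a Pearson correlation in which one variable is dichotomous, such as the interview's generally adherent vs. non-adherent rating. A minimal sketch with invented numbers, not the study's data:

```python
import statistics

def point_biserial(binary, continuous):
    """Point-biserial correlation: Pearson r between a 0/1 variable
    and a continuous variable."""
    n = len(binary)
    group1 = [c for b, c in zip(binary, continuous) if b == 1]
    group0 = [c for b, c in zip(binary, continuous) if b == 0]
    p = len(group1) / n
    # Population standard deviation of all continuous scores.
    s = statistics.pstdev(continuous)
    return (statistics.mean(group1) - statistics.mean(group0)) / s * (p * (1 - p)) ** 0.5

# Invented data: dichotomous interview rating (1 = generally adherent)
# against observer competence scores on the 0-6 scale.
adherent = [1, 0, 1, 1, 0, 0, 1, 0]
competence = [3.5, 2.0, 4.0, 3.0, 2.5, 1.5, 3.5, 2.0]
print(round(point_biserial(adherent, competence), 2))  # → 0.9
```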

SLIDE 19

Monthly self-report

  • M=2.36, SD=.65 (1-4 scale)
  • Feasibility: ~65% response rate (including dropouts)
  • Received 56 reports that could be matched with a randomly selected rating
  • Agreement with observer rating
    • Adherence: r=.42, p=.001
    • Competence: r=.13, p=.31

SLIDE 20

Clinical note checklist (self-report)

  • M=.73, SD=.30 (73% of session elements)
  • Feasibility: depends on the system
    • We requested one per month because it wasn't embedded
  • Received 42 checklists that could be matched with randomly selected observer ratings
  • Agreement with observer ratings
    • Adherence: r=.87, p=.00
    • Competence: r=.77, p=.003

SLIDE 21

Worksheet Ratings

  • Adherence M=.15, SD=.05 (0-1 scale)
  • Competence M=.25, SD=.17 (0-2 scale)
  • Feasibility: depends on the system of collection
    • Challenges in matching with some sessions
    • Therapists posted worksheets for clinical challenges
    • 12 could be matched
  • Correspondence with observer ratings:
    • Adherence: r=.08, p=.85
    • Competence: r=.21, p=.62

SLIDE 22

Discussion

SLIDE 23

Discussion

  • Clinical notes appeared to be the most reliable proxies for observer ratings
  • Monthly self-reports may be adequate under some circumstances
  • Data on worksheets should be interpreted with caution
    • In previous research, worksheet ratings correlated highly with observer ratings
    • Low sample size in this study
  • Data collection procedures need to be carefully considered
  • Interviews should probably use a different rating scale

SLIDE 24

Future directions

  • Larger datasets
  • Prospective research
  • Consider/refine strategies for data capture
  • Examine associations with outcomes
SLIDE 25

Contact

  • sws1@Stanford.edu