SLIDE 1

Evaluation in Today’s Political, Social and Technological Climate

Kathryn Newcomer September 8, 2017

SLIDE 2

“Evidence-based Policy,” “Data-Driven Decision-making” – the New Normal?

SLIDE 3

Question to Address Today

• What is the impact of the “evidence-based policy” imperative, as well as the current political, social and technological climate, on evaluation practice in the public and non-profit sectors?

SLIDE 4

What Are the Challenges for Evaluators in Providing Evidence to Inform Policymaking?

• What constitutes sufficient evidence?
• How transferable is evidence?
• When and where do we underestimate the role played by the “impactees”?
• Where is the capacity to support both the demand and supply of evidence?

SLIDE 5

Contrasting Views on Evidence-Based Policy #1

• We need to collect data to test if programs work or do not work.

Versus

• We need to learn which program mechanisms work for whom, where and under what circumstances.

SLIDE 6

Contrasting Views on Evidence-Based Policy #2

• Policy should be made at the top and based on evidence.

Versus

• Policy is “made” through implementation processes at multiple levels by multiple actors with different types of data available to them.

SLIDE 7

Contrasting Views on Evidence-Based Policy #3

• Program impact can be measured precisely.

Versus

• Measuring program impact is difficult as programs and intended impactees change and evolve.

SLIDE 8

Contrasting Views on Evidence-Based Policy #4

• Randomized Controlled Trials (RCTs) are the gold standard for research and evaluation design.

Versus

• Research designs must be matched to answer the question raised; RCTs are appropriate for certain impact questions.

SLIDE 9

Contrasting Views on Evidence-Based Policy #5

• Proven program models can be replicated in multiple locations as long as they are implemented with fidelity to the original design.

Versus

• Program mechanisms may be replicated in multiple locations as long as they are adapted to meet local conditions.

SLIDE 10

Contrasting Views on Evidence-Based Policy #6

• Benefit-cost analysis should be used to compare social programs.

Versus

• Benefit-cost analysis is difficult to use to compare social programs given the challenge of costing out benefits, especially those accruing over time (see the note below).
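One reason benefits that accrue over time are hard to cost out is that they must be discounted to a present value before programs can be compared, and the ranking can be sensitive to the assumed discount rate. As a minimal sketch, assuming a constant discount rate $r$ over a horizon of $T$ years (both assumptions for illustration, not figures from the slides):

$$\mathrm{PV}(\text{benefits}) = \sum_{t=0}^{T} \frac{B_t}{(1+r)^t}$$

where $B_t$ is the benefit realized in year $t$. Two programs with identical total benefits can rank differently once the timing of those benefits and the choice of $r$ are taken into account.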

SLIDE 11

Why isn’t There Agreement About the Quality of Evidence?

• Differing professional standards and “rules” or criteria for evidence, e.g., lawyers, engineers, economists
• Disagreements about methodologies within professional groups, e.g., RCTs
• The constancy of change in problems and the characteristics of the targeted “impactees”

SLIDE 12

We Overstate the Ease of Flow of Evidence

• Study conclusion: It plays a causal role there.
• It plays a wide (enough) causal role.
• Policy prediction: It will play a causal role here.

Source: Cartwright, N. (2013). Knowing what we are talking about: why evidence doesn't always travel. Evidence & Policy: A Journal of Research, Debate and Practice, 9(1), 97-112.

SLIDE 13

We Underestimate the Role of Volition Among Impactees and their Decision-making

SLIDE 14

We Underestimate the Evolving Sources of Complexity Affecting the Production of Relevant Evidence

• Change in the nature of problems to be addressed by government and the philanthropic sector
• Change in the context in which programs and policies are implemented
• Changing priorities of political leaders – and under Trump?

SLIDE 15

We Overstate the Current Evaluation Capacity Among Decision-Makers in Government

SLIDE 16

Evaluation Capacity = Both Demand and Supply

• How clear is the understanding between providers and requestors on what sort of data (evidence) is needed?
• Are there sufficient resources to respond to demands for specific sorts of evidence?
• How can evaluators instruct users about how to assess the quality and appropriateness of evidence?

SLIDE 17

Transmission Process

• Just as there are many producers, there are many potential users of the evidence provided, e.g., different policy designers and implementers in complex service delivery networks
• Understanding and strengthening the linkage between the producers of evaluative data and the many potential users of that information requires time and resources

SLIDE 18

Evaluators Need to Help Information Users Frame Pertinent Questions and then Match the Questions with the Appropriate Evaluation Approach

Questions Relevant to Users → Evaluation Design

SLIDE 19

Match Evaluation Approach to Questions

Objective #1: Describe program activities
Illustrative questions:
  • How extensive and costly are the program activities?
  • How do implementation efforts vary across sites, beneficiaries, regions?
  • Has the program been implemented sufficiently to be evaluated?
Possible designs:
  • Monitoring
  • Exploratory Evaluations
  • Evaluability Assessments
  • Multiple Case Studies

Objective #2: Probe targeting & implementation
Illustrative questions:
  • How closely are the protocols implemented with fidelity to the original design?
  • What key contextual factors are likely to affect achievement of intended outcomes?
  • How do contextual constraints affect the implementation of an intervention?
  • How does a new intervention interact with other potential solutions to recognized problems?
Possible designs:
  • Multiple Case Studies
  • Implementation or Process Evaluations
  • Performance Audits
  • Compliance Audits
  • Problem-Driven Iterative Adaptation

Objective #3: Measure the impact of policies & programs
Illustrative questions:
  • What are the average effects across different implementations of the intervention?
  • Has implementation of the program or policy produced results consistent with its design (espoused purpose)?
  • Is the implementation strategy more (or less) effective in relation to its costs?
Possible designs:
  • Experimental Designs/RCTs
  • Non-experimental Designs: Difference-in-difference, Propensity score matching, etc. (see the sketch after this table)
  • Cost-effectiveness & Benefit-Cost Analysis
  • Systematic Reviews & Meta-Analyses

Objective #4: Explain how/why programs & policies produce (un)intended effects
Illustrative questions:
  • How/why did the program have the intended effects?
  • To what extent has implementation of the program had important unanticipated negative spillover effects?
  • How likely is it that the program will have similar effects in other communities or in the future?
Possible designs:
  • Impact Pathways and Process Tracing
  • System Dynamics
  • Configurational Analysis
SLIDE 20

A Delicate Balancing Act: Accountability vs. Learning

There is an ongoing tension between producing evidence to demonstrate accountability and producing evidence to promote learning.

SLIDE 21

• Please join us in November 2017 in Washington, D.C.!
• “From Learning to Action” is the theme of our American Evaluation Association Annual Conference (3,500+ attendees and 120+ workshops & panels). In line with this theme, I have worked with a committee of 17 (from 7 countries) to plan our approach, and we have challenged participants to:
  • think creatively about innovative ways to engage audiences at the annual conference – beyond panels and posters;
  • invite evaluators or evaluation users who might not normally attend AEA, but are clearly stakeholders in our work, to participate in conference sessions; and
  • submit a 60-second video on Learning from Evaluation to highlight how we can foster learning from evaluation in a variety of settings.

SLIDE 22

Relevant References

• Dahler-Larsen, Peter. 2012. The Evaluation Society. Stanford University Press.
• Donaldson, S., C. Christie, and M. Mark (editors). 2015. Credible and Actionable Evidence, 2nd Edition. Sage.
• Head, B. 2015. “Toward More ‘Evidence-Informed’ Policy Making?” Public Administration Review, Vol. 76, Issue 3, pp. 472-484.
• Kahneman, D. 2011. Thinking, Fast and Slow. Farrar, Straus and Giroux.
• Mayne, J. 2010. “Building an evaluative culture: The key to effective evaluation and results management.” Canadian Journal of Program Evaluation, 24(2), 1-30.
• Newcomer, K. and C. Brass. 2016. “Forging a Strategic and Comprehensive Approach to Evaluation within Public and Nonprofit Organizations: Integrating Measurement and Analytics within Evaluation.” American Journal of Evaluation, Vol. 37(1), 80-99.
• Olejniczak, K., E. Raimondo, and T. Kupiec. 2016. “Evaluation units as knowledge brokers: Testing and calibrating an innovative framework.” Evaluation, Vol. 22(2), 168-189.
• Sunstein, C. and R. Hastie. 2015. Wiser: Getting Beyond Groupthink to Make Groups Smarter. Harvard Business Review Press.
• World Bank Group. 2015. Mind, Society and Behavior.

SLIDE 23

Thank You! Questions?

I can be reached at newcomer@gwu.edu
