Measurement Tools for Improvement Efforts
Jane A. Taylor, Ed.D., Improvement Advisor to IHI


slide-1
SLIDE 1

Measurement Tools for Improvement Efforts

Jane A. Taylor, Ed.D. Improvement Advisor to IHI

slide-2
SLIDE 2

ME Forum 2019 Orientation

As part of our extensive program, with CPD hours awarded based on actual time spent learning, credit hours are offered based on attendance per session. Delegates must attend a minimum of 80% of a session to qualify for the allocated CPD hours.

  • Less than 80% attendance per session = 0 CPD hours
  • 80% or higher attendance per session = full allotted CPD hours

Total CPD hours for the forum are awarded based on the sum of CPD hours earned from all individual sessions.

Conflict of Interest: The speaker(s) or presenter(s) in this session has/have no conflict of interest or disclosure in relation to this presentation.

slide-3
SLIDE 3

Tools for Improvement

  • View systems and processes
  • Gather information
  • Organize information
  • Understand variation
  • Understand relationships
slide-4
SLIDE 4

VIEW SYSTEMS AND PROCESSES

MAPPING

slide-5
SLIDE 5

System View of Service Delivery

[Figure, adapted from OUT OF THE CRISIS by W.E. Deming: Suppliers → Inputs → processes A, B, C, D → Outputs → Consumers, with feedback loops for consumer research, design and re-design, and tests of procedures and methods]

slide-6
SLIDE 6

Flow Charts

3 Simple Symbols

slide-7
SLIDE 7

GATHERING INFORMATION

DATA COLLECTION FORMS OPERATIONAL DEFINITIONS

slide-8
SLIDE 8

Data Collection Forms

  • Answer specific questions posed in the planning phase of the improvement cycle
  • Make the recording of observations easy, efficient and accurate
  • Facilitate data analysis during the study phase of the improvement cycle
  • Always TEST the form first

slide-9
SLIDE 9

Form for Collecting Data

[Check sheet with a tally column for each defect type — Pin Hole, Tint Spot, Bubble, Stone, Smudge, Top Drip — and a Total column]
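A check sheet like this can be tallied in a few lines of code. The sketch below is illustrative only: the defect names come from the slide, but the observations themselves are made up.

```python
from collections import Counter

# Hypothetical stream of observed paint defects, one entry per observation.
observations = [
    "pin hole", "bubble", "pin hole", "smudge", "tint spot",
    "bubble", "pin hole", "top drip", "stone", "bubble",
]

# Tally each defect type, mirroring the columns of the check sheet.
tally = Counter(observations)
total = sum(tally.values())

for defect, count in tally.most_common():
    print(f"{defect:>10}: {count}")
print(f"{'TOTAL':>10}: {total}")
```

Because the tally is built from individual observations, the same records can later be re-counted by shift, machine, or operator without collecting new data.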

slide-10
SLIDE 10

Data Collection Forms

slide-11
SLIDE 11

Data Collection Forms – Variables for Stratification

  • Stratification: monthly data summarizing surgical complications
     – Stratify by surgeon
     – Stratify by age
     – Stratify by OR
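Stratification is simply grouping the same records by a different variable. A minimal stdlib sketch, with entirely invented complication records:

```python
from collections import defaultdict

# Hypothetical monthly surgical-complication records (illustrative only).
cases = [
    {"surgeon": "A", "age_band": "18-40", "or_room": 1, "complication": True},
    {"surgeon": "B", "age_band": "65+",   "or_room": 2, "complication": False},
    {"surgeon": "A", "age_band": "65+",   "or_room": 1, "complication": True},
    {"surgeon": "B", "age_band": "18-40", "or_room": 2, "complication": True},
    {"surgeon": "A", "age_band": "41-64", "or_room": 2, "complication": False},
]

def stratify(records, key):
    """Count cases and complications within each level of one variable."""
    groups = defaultdict(lambda: {"cases": 0, "complications": 0})
    for r in records:
        g = groups[r[key]]
        g["cases"] += 1
        g["complications"] += int(r["complication"])
    return dict(groups)

# The same data, viewed three ways:
by_surgeon = stratify(cases, "surgeon")
by_age = stratify(cases, "age_band")
by_room = stratify(cases, "or_room")
print(by_surgeon)
```

The point of the slide is exactly this: one data set, three views, and each view may reveal a pattern the summary total hides.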

slide-12
SLIDE 12

Operational Definitions

What is a fall? What is a ventilator-associated pneumonia? What is discharge within 2 hours of medically ready? What is on-time? What is clean?

Operational definitions allow for consistent and accurate data collection; they give communicable meaning to a concept.

slide-13
SLIDE 13

Components of Operational Definition

Developing an operational definition requires agreement on two things:

  • 1. A method of measurement
     – Which device? (clock, wristwatch, stopwatch?)
     – To what degree of precision? (nearest hour, 5 minutes, minute, second?)
  • 2. A set of criteria for judgment
     – What counts as "late", an "error", "a fall"?

Page 37

13
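Both components of an operational definition can be expressed directly as code, which forces the agreement the slide asks for. In this sketch the measurement method (elapsed minutes, rounded to the nearest minute) and the criterion (a 15-minute threshold) are assumptions chosen for illustration:

```python
from datetime import datetime

# Criterion for judgment (assumed for illustration): more than 15 minutes
# past the scheduled time counts as "late".
LATE_THRESHOLD_MINUTES = 15

def is_late(scheduled: datetime, actual: datetime) -> bool:
    """Method of measurement: elapsed minutes, rounded to the nearest minute."""
    delay_minutes = round((actual - scheduled).total_seconds() / 60)
    return delay_minutes > LATE_THRESHOLD_MINUTES

# Two observers applying this definition to the same event must agree:
sched = datetime(2019, 9, 1, 9, 0)
print(is_late(sched, datetime(2019, 9, 1, 9, 10)))  # within threshold
print(is_late(sched, datetime(2019, 9, 1, 9, 30)))  # past threshold
```

Note that the boundary case (exactly 15 minutes) is decided by the `>` in the code; writing the definition down is what surfaces such boundary decisions.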

slide-14
SLIDE 14

The Importance of Operational Definitions

  • If data are collected differently by different people, or differently each time they are collected, it is hard to know whether changes in the data are due to the changes tested or to inconsistencies in data collection.

Page 37

14

slide-15
SLIDE 15

Qualitative Data

Interviews (1:1)

  • "What was it like?" rather than "Did you like it?"
  • What did you feel, hear, sense?
  • What did it mean to you when this happened?

Focus groups – limit to 5 questions, similar users

  • Start with a general question
  • Then more specific questions about the issue you want to learn about
  • Ask the group what other questions you might ask
  • Feed back with "all things considered" (provide a summary)
  • Ask if you captured the conversation accurately
slide-16
SLIDE 16

Observation

"Stand where the work is done long enough and you will figure out what needs to be done." (attributed to Taiichi Ohno)

slide-17
SLIDE 17

Taiichi Ohno's Seven Wastes (MUDA)

  • Time on Hand (Waiting)
  • Transportation
  • Defective Products
  • Processing
  • Movement
  • Stock on Hand (Inventory)
  • Overproduction

slide-18
SLIDE 18

Other qualitative data

Observation:

  • Introduce yourself
  • It is about watching the work process, not the person
  • Make notes: waste, overproduction, rework, items not fit for use, communication gaps
  • Draw pictures – a spaghetti diagram

slide-19
SLIDE 19

Spaghetti Diagram: Before

slide-20
SLIDE 20

Spaghetti Diagram: After

slide-21
SLIDE 21

ORGANIZING INFORMATION

DIAGRAMS: DRIVER, AFFINITY, CAUSE AND EFFECT, INTERRELATIONSHIP

slide-22
SLIDE 22

Driver Diagram: Best theory to date to get results

  • Primary drivers: systems, processes, structures and norms that need to change to get improvement
  • Secondary drivers: places in the system or processes where changes need to occur, or discrete moments in time where improvements are needed
  • Changes: ideas based on evidence, observation, or experience that get results
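The aim → primary driver → secondary driver → change hierarchy maps naturally onto a nested data structure, which is one way teams keep a driver diagram editable as the theory evolves. A sketch in which every name is a placeholder, not part of the source deck:

```python
# Hypothetical driver-diagram skeleton (all names are placeholders).
driver_diagram = {
    "aim": "Improve outcome X by date Y",
    "primary_drivers": [
        {
            "name": "System/process that must change",
            "secondary_drivers": [
                {
                    "name": "Place in the process where change occurs",
                    "changes": ["Specific change idea 1", "Specific change idea 2"],
                },
            ],
        },
    ],
}

def list_changes(diagram):
    """Flatten every change idea, paired with its driver path."""
    out = []
    for p in diagram["primary_drivers"]:
        for s in p["secondary_drivers"]:
            for c in s["changes"]:
                out.append((p["name"], s["name"], c))
    return out

print(len(list_changes(driver_diagram)))
```

Flattening the tree this way gives the test queue: every change idea, traceable back to the part of the theory it is meant to move.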
slide-23
SLIDE 23

Driver Diagram Example from NZ

What's Your Theory? Bennett and Provost, Quality Progress, July 2015, p. 37.

slide-24
SLIDE 24

Aim: By July 4, 2020, improve school readiness for children of color and American Indian children to less than 10%

Primary Driver: Cross Sector and Family Collaboration
Secondary Drivers: Clinics, Schools, Family

Changes:
  • Screen at WIC visits
  • Develop reliable screening & referral process to schools
  • Develop registry to follow up with referrals
  • Create shared consent agreements
  • Connect screening and identification of problems with access to resources
  • Work with family partners on messaging the value and importance of screening and services
  • Develop cultural humility and incorporate it into approaches
  • Identify early learning resources, e.g., Head Start, libraries, ECE
  • Screen at places families and children frequent; engage the most underserved families
  • Support transition out of service (Part C)
  • Inform policy for system coordination, e.g., data sharing
  • Focus on access to services
  • Hire staff who reflect cultural groups
  • Coordinate with other schools
  • Refer to EL programs
  • Develop follow-through core services
  • Carefully follow those not on track at age 3
  • Use Medicaid funding for referrals; cooperate with DHS to bill for services for those without an IEP or IFSP

Primary Driver: Communication

Internal:
  • Track referrals at reliable intervals
  • Establish follow-up protocol
  • Connect with families for follow-up support
  • Share project status with leadership
  • Recruit a leader to communicate project status internally and externally

External:
  • Communicate referral outcomes and status to referring providers
  • Share project status, results, partnerships, barriers (storyboard)

slide-25
SLIDE 25

Cause and Effect Diagram (or Fishbone Diagram)

Image source: Wikipedia

slide-26
SLIDE 26

Affinity Diagram

slide-27
SLIDE 27

Interrelationship Diagram

slide-28
SLIDE 28

UNDERSTANDING VARIATION

RUN AND CONTROL CHARTS; FREQUENCY PLOTS; PARETO CHART

slide-29
SLIDE 29

Sources of Data (Figure 2.1)

Page 26

Data are documented observations or the results of performing a measurement process.

Data can be obtained by perception (for example, observation) or by performing a measurement process.

29

slide-30
SLIDE 30

Shewhart’s Theory of Variation (1931)

Special Causes—those causes that are not part of the system all the time or do not affect everyone, but arise because of specific, assignable circumstances.

Common Causes—those causes inherent in the system over time; they affect everyone working in the system and affect all outcomes of the system.
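Shewhart's distinction is operationalized with control limits: points beyond the limits signal a possible special cause. A minimal individuals-chart sketch with made-up data; the 1.128 constant is the standard d2 factor for moving ranges of two, and real charts use the full set of Shewhart rules, not just this one:

```python
import statistics

# Hypothetical individual measurements; the last point looks unusual.
data = [50, 52, 49, 51, 50, 48, 51, 50, 49, 72]

center = statistics.mean(data)
# Sigma estimated from the average moving range (I-chart convention),
# so a single wild point does not inflate the limits.
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
sigma = statistics.mean(moving_ranges) / 1.128
ucl, lcl = center + 3 * sigma, center - 3 * sigma

# Points outside the limits suggest a special cause worth investigating.
special = [x for x in data if x > ucl or x < lcl]
print(round(ucl, 1), round(lcl, 1), special)
```

Points inside the limits are treated as common-cause variation: reacting to them individually (tampering) makes the system worse, which is why the special/common distinction matters.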

slide-31
SLIDE 31

Run Chart

A graphical display of data plotted in some type of order. It has also been called a time series or a trend chart.

Page 67

31

slide-32
SLIDE 32

Analyzing a Run Chart

  • Start with a simple visual analysis.
  • Which direction is "goodness"?
  • They are testing changes here. Do they have improvement yet?

32

slide-33
SLIDE 33

May Display More Than One Measure on a Graph

33

slide-34
SLIDE 34

Figure 3.13, Page 78

34

slide-35
SLIDE 35

When Do We Start a Run Chart?

35

slide-36
SLIDE 36

Why a run chart? Why not just a table?

Month:   Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov
Measure:  83  80  81  84  83  85  68  87  89  92  91

HC Data Guide, p 68

[Run chart of the measure, Jan 06 through Mar 07, y-axis 60–100 percent, with a goal line at 90%]

36
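The table above already contains everything a run chart needs; plotting aside, the conventional first step is the median center line and a count of points on each side of it. A sketch using the slide's own numbers:

```python
import statistics

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov"]
measure = [83, 80, 81, 84, 83, 85, 68, 87, 89, 92, 91]

# Run charts conventionally use the median as the center line.
center = statistics.median(measure)
above = sum(1 for m in measure if m > center)
below = sum(1 for m in measure if m < center)
print(center, above, below)
```

The run chart's advantage over the table is visible even here: the July dip (68) and the late-year climb toward the 90% goal are patterns in time that a row of eleven numbers does not make obvious.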

slide-37
SLIDE 37

How Should We Look at Data?

[Run chart of delay time (hours) for a before-and-after test, annotated at the point the change was made]

37


slide-43
SLIDE 43

March 1997, The Joint Commission Journal on Quality Improvement, Vol. 23, No. 3.

We are increasingly realizing not only how critical measurement is to the quality improvement we seek but also how counterproductive it can be to mix measurement for accountability or research with measurement for improvement.

43

slide-44
SLIDE 44

Data for Improvement, Accountability and Research in Health Care

Aspect | Improvement | Accountability | Research
Aim | Improvement of care (processes, systems, and outcomes) | Comparison, choice, reassurance | New generalizable knowledge
Methods: test observability | Test observable | No test, evaluate current performance | Test blinded or controlled
Bias | Accept consistent bias | Measure and adjust to reduce bias | Design to eliminate bias
Sample size | "Just enough" data, small sequential samples | Obtain 100% of available, relevant data | "Just in case" data
Flexibility of hypothesis | Hypothesis flexible, changes as learning takes place | No hypothesis | Fixed hypothesis
Testing strategy | Sequential tests | No tests | One large test
Determining if a change is an improvement | Run charts or Shewhart control charts | No focus on change | Hypothesis, statistical tests (t-test, F-test, chi square, p-values)
Confidentiality of the data | Data used only by those involved with improvement | Data available for public consumption and review | Research subjects' identities protected
Frequency of use | Daily, weekly, monthly | Quarterly, annually | At end of project

slide-45
SLIDE 45

Data for Judgment vs. Improvement

[Two run charts, Jan 06 through May 07: "Percent of Patients Counseled on Smoking Cessation" (y-axis 20–100%) and "Percent of Smokers Who Have Not Smoked for Two Months" (y-axis 10–60%), each annotated with the changes tested: article, free gum or patch, support group, e-mail buddies]

HC Data Guide, p 29, 30

45

slide-46
SLIDE 46

Data for Judgment vs. Improvement

[Two charts of the same data, Jan 06 through May 07: "Ave Patient Satisfaction Scores" and "Patient Satisfaction Percentile Ranking" (both y-axes 60–100), each annotated with the changes tested: scripting, rec process, test wait chg]

HC Data Guide, p 31

46

slide-47
SLIDE 47

Stages of Facing Reality: Reaction to Data

  • "The data are wrong."
  • "The data are right, but it's not a problem."
  • "The data are right; it is a problem; but it is not my problem."
  • "I accept the burden of improvement."

47

from Escape Fire, Don Berwick (2002 Forum Speech), pages 287-288

slide-48
SLIDE 48

Three Categories of Measures

  • Outcome Measures: Voice of the customer or patient. How is the system performing? What is the result?
  • Process Measures: Voice of the workings of the system. Are the parts/steps in the system performing as planned?
  • Balancing Measures: Looking at a system from different directions/dimensions.
     – What happened to the system as we improved the outcome and process measures?
     – The unanticipated negative consequences, or other factors influencing the outcome

HC Data Guide, p 36

48

slide-49
SLIDE 49

Family of Measures for Improvement Projects

  • Health care systems are very complex.
     – Any single measure used as the sole means of determining improvement to a particular system is inadequate.
     – Multiple measures are necessary to evaluate the impact of our changes on the many facets of the system.
     – Improvement projects typically require a family of 5-8 key global measures.

49

Page 61

slide-50
SLIDE 50

Project Measures: Categories

Page 36

50

slide-51
SLIDE 51

Page 64

slide-52
SLIDE 52

Stratification of Data

HC Data Guide, p 50

52

slide-53
SLIDE 53

Stratification of a Run Chart

HC Data Guide, p 50

53

slide-54
SLIDE 54

Fig 3.21

54

slide-55
SLIDE 55

Same Scale

slide-56
SLIDE 56

Small Multiples

  • Multiple run charts viewed on one page
  • All these run charts are about the same measure, but for a different location, provider, or segment of the population
  • Each has the same scale vertically and horizontally
  • Allows for rapid comparison

56

slide-57
SLIDE 57

Better Measures?

More Learning and More Improvement

  • Improvement is about change
  • Change is best seen using data
  • Data over time are useful for understanding whether and how we are improving
  • Use a small family of measures
  • Outcome measures: what matters
  • Process measures should be sensitive to change and address the key processes that would lead to improvement
  • Balancing measures – are we doing unintentional harm?
  • How much data? Just enough!
slide-58
SLIDE 58

Vilfredo Federico Damaso Pareto was an Italian engineer, sociologist, economist, political scientist and philosopher. He made several important contributions to economics, particularly in the study of income distribution and in the analysis of individuals' choices. He introduced the concept of Pareto efficiency and helped develop the field of microeconomics. He was also the first to discover that income follows a Pareto distribution, a power-law probability distribution. The Pareto principle was named after him, built on observations of his such as that 80% of the land in Italy was owned by 20% of the population. He also contributed to the fields of sociology and mathematics.

58

slide-59
SLIDE 59

Pareto Principle

[Pareto chart of total ADEs by medication (counts from about 50 to 450): Heparin, Coumadin, Morp/S, Insulin, Digitalis, Pot C, Amp/P, Lov, Con, Cycl, Albt, Cef/t, Other — separating the "vital few" from the "useful many"]

59

slide-60
SLIDE 60

Cumulative Percent Frequency

Pareto Diagram

Source: R. Lloyd, Quality Health Care: A Guide to Developing and Using Indicators. Jones and Bartlett Publishers, 2004:309.

60
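The cumulative-percent line that defines a Pareto diagram is a short computation: sort categories descending, then accumulate. A sketch with invented error counts (not the ADE data from the chart above):

```python
# Hypothetical error counts by category (illustrative only).
counts = {"dose": 120, "omission": 80, "wrong drug": 40, "timing": 30, "other": 10}

# Sort descending, then accumulate percentages — the Pareto ordering.
ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
total = sum(counts.values())

cumulative = 0
for name, n in ordered:
    cumulative += n
    print(f"{name:>10}: {n:4d}  cum {100 * cumulative / total:5.1f}%")
```

Reading the cumulative column shows where the "vital few" end: here the top two invented categories already account for roughly 71% of the total.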

slide-61
SLIDE 61

Pareto Chart of Causes

[Two Pareto charts. Type of accident (counts 5,000–30,000): cars, falls, pedestrian, drowning, fire, motorcycle, poisoning, choking, guns, bicycles, electrocution. Causes of wrecks (counts 10–50): intoxication, weather, poor visibility, mechanical, distractions, medication, road maintenance, road design.]

Method of determining causes: district captain using investigator's observations and the Highway Patrol procedures. IH: 31-11, 31-10/12

slide-62
SLIDE 62

Page 143

Stratification of Wards with Pareto Chart

62

HCDG: Page 143

slide-63
SLIDE 63

63

Data Distribution

Any time you gather a little or a lot of data, you end up forming some sort of distribution.

slide-64
SLIDE 64

Characteristics of a Distribution

Shape Center Spread

64

slide-65
SLIDE 65

A Tale of Two Clinics

  • Imagine that you want to select a medical clinic for you and your family.
  • Two clinics (A & B):
     – They are both an equal driving distance from your home
     – They both received the same number of star ratings from a local quality assessment organization
     – They both have an average wait time to see the doctor of 45 minutes

Which of the two clinics would you pick based on this information?

65


slide-66
SLIDE 66

Clinic A and Clinic B: two distributions with the identical mean (X̄ = 45; average wait time is 45 min). Are they the same? Why are these two distributions different?

66

slide-67
SLIDE 67

Clinic A and Clinic B (X̄ = 45 for both)

They are different because they have different measures of dispersion, or spread. The dispersion of the data in Distribution A is not as wide as it is in Distribution B: Distribution A has a smaller standard deviation than Distribution B.

67
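The clinic comparison can be made concrete with two made-up wait-time samples that share a mean of 45 but differ in spread:

```python
import statistics

# Illustrative wait times in minutes; both sets average exactly 45.
clinic_a = [43, 44, 45, 45, 46, 47]   # tight around the mean
clinic_b = [15, 30, 45, 45, 60, 75]   # same mean, much wider spread

mean_a, mean_b = statistics.mean(clinic_a), statistics.mean(clinic_b)
sd_a, sd_b = statistics.stdev(clinic_a), statistics.stdev(clinic_b)

print(mean_a, mean_b)                   # identical centers
print(round(sd_a, 1), round(sd_b, 1))   # very different spreads
```

This is the slide's point in two numbers: the mean alone cannot distinguish the clinics, but the standard deviation can, and most patients would prefer the predictable clinic.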

slide-68
SLIDE 68

Stratification with Histogram/Frequency Plot

HC Data Guide, p 139

68

slide-69
SLIDE 69

UNDERSTANDING RELATIONSHIPS

slide-70
SLIDE 70

Scatter Plots: Moving Beyond One Variable

IH Ch. 33; DG Ch. 4, pp. 8-9; QHC Ch. 7, pp. 244-256

Is there a relationship between these two variables, X and Y? If so, what influences what?

  • As X increases, do you think Y will also increase?
  • As X increases, do you think Y will decrease?
  • Or do you think there is no relationship between X and Y?

70

slide-71
SLIDE 71

The Health Care Data Guide: Learning from Data for Improvement. Lloyd Provost and Sandra Murray, Jossey-Bass, 2011, p. 145.

[Scatter plot examples: strong positive r, strong negative r, weak positive r, weak negative r, no correlation (r ≈ 0)]
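Pearson's r is the number behind the "strong/weak, positive/negative" panels. A stdlib sketch with invented data:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient for paired samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
strong_pos = [2, 4, 6, 8, 10]   # moves with x
strong_neg = [10, 8, 6, 4, 2]   # moves against x
print(round(pearson_r(x, strong_pos), 2))  # near +1
print(round(pearson_r(x, strong_neg), 2))  # near -1
```

Keep in mind that r measures linear association only: a curved relationship can yield r near zero, which is one more reason the scatterplot itself, not just the coefficient, is worth looking at.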

slide-72
SLIDE 72

Stratification with Scatter Plot

Page 143

Stratification Using Symbols to Distinguish Each Department

72

slide-73
SLIDE 73

A Final Thought on Scatterplots

They help you:

  • Understand relationships
  • Understand the direction and strength of the relationships

Scatterplots do not prove anything!

73

slide-74
SLIDE 74

Both/And

Shewhart's Principle for Presenting Data

Whenever an average, range, or histogram is used to summarize data, the summary should not mislead the user into taking any action that the user would not take if the data were presented in a time series.

Source: D. Wheeler, Understanding Variation: The Key to Managing Chaos, SPC Press, 1993.

74