1

Response burden: trends and consequences

Dag Gravem¹, Gustav Haraldsen and Tora Löfgren
Statistics Norway
¹ dfg@ssb.no

BLUE-ETS Conference on Business Burden and Motivation in NSI Surveys
Statistics Netherlands, Heerlen, March 22 & 23, 2011

2

Some initial reflections on measuring response burden and NSIs

  • Measuring response burden is important to NSIs, but why?

– Response burden may influence motivation and data quality
– Response burden may indicate questionnaire problems
– Authorities, businesses and their interest organizations are concerned about businesses’ bottom line

But should we listen to them?

  • Response burden is measured in many different ways by NSIs, making international comparison difficult…
  • Often, a coherent strategy for measuring response burden and using the results is difficult to find.

3

Response burden measurement at Statistics Norway

  • Like many other NSIs, Statistics Norway has not had a systematic strategy for measuring differences and changes in response burden.
  • Still, we have been interested in differences…

– between different questionnaires
– between different modes (e.g. paper and web)
– between different subsamples
– before and after going from paper to web
– before and after questionnaires undergo content/layout changes

4

Response burden measurement at Statistics Norway

  • …and we do have some data that allows us to investigate some of these aspects in a longitudinal perspective

– 2004: 1 paper questionnaire
– 2006: 6 web questionnaires
– 2010: 12 web questionnaires

  • All belong to the structural business survey family of questionnaires. One questionnaire per industry.

– Some questionnaire items in common, some are industry specific

  • Measured by appending a response burden questionnaire to the regular questionnaires

5

The instrument: the response burden questionnaire

  • Measured both time burden and perceived response burden. We focus on the perceived response burden.
  • Key question: ”Did you find it easy or burdensome/difficult to complete the questionnaire?”

– Very easy
– Quite easy
– Neither easy nor burdensome/difficult
– Quite burdensome/difficult
– Very burdensome/difficult

  • If burdensome/difficult: follow-up question on reasons for perceived response burden

6

2010 list of problems

  • The number of questions
  • Poor visual layout
  • Needed help to collect necessary information
  • Had to wait for information not yet available
  • The web application had low usability
  • Technical problems with the web application
  • Complicated calculations
  • Mismatch between questions and available information
  • Difficult to judge which response alternative was the right one
  • Unclear terms or definitions

7

Perceived response burden: what are the trends for our questionnaires?

  • Responses to the key question converted to a perceived response burden (prb) index. Positive figures indicate easy to complete, negative figures difficult to complete.

[Bar chart: prb index per questionnaire/industry, ranged from most difficult to easiest in 2004/2006.]

Questionnaire/industry                 2010-2004/06   2010   2004/2006
Manufacturing, one business (2004)         0,14       0,18     0,32
Service industry                           0,01       0,2      0,21
Domestic trade                             0,01       0,11     0,1
Hotels and restaurants                     0,02       0,12     0,1
Transport and communication                0,04       0,13     0,09
International sea transport                0,14       0,16     0,02
Manufacturing, multi business (2004)       0,07       0,15     0,08
Construction                               0,05       0,12     0,17

8

The manufacturing questionnaires

  • Switching from paper to web does not make the response tasks easier, despite the fact that many businesses prefer web questionnaires
  • The level and change in perceived response burden seem to be correlated with business structure

– One business companies had few problems in 2004, but were hit quite hard when the mode changed to web
– Multi business companies had quite a lot of problems in 2004, but were hit less hard when the mode changed to web

9

Reasons for response burden, 2004 and 2010. Manufacturing questionnaire.

Several establishments.

[Bar chart: percent of respondents citing each cause, 2004 vs 2010. Causes: Mismatch, Need of help, Terms, Calculations, Resp. categories, Wait for information, Other, No. of questions, Layout, E-usability, E-technical. Y-axis: Percent, 10–60.]

10

Reasons for response burden, 2004 and 2010. Manufacturing questionnaires

  • Web related reasons are not important (!)
  • The web questionnaire rather than the web application seems to be the source of the problem
  • The differences between 2004 and 2010 need to be looked into more closely

– Partly a result of outsourcing revision tasks to respondents?

11

Differences in perceived response burden between web questionnaires in 2006 and 2010

  • Differences are small; the index is more positive for 5 of 6 questionnaires
  • In 2010, the percentage using web questionnaires had increased from 30–40 to 80–90 percent

– No indication that ”followers” have more problems than early adopters of web technology
– Increased computer competence?

  • From 2006 to 2010, some of the web questionnaires became shorter, some were about equally long, and some became longer.

– This does not seem to have much of an impact on perceived response burden

12

The web questionnaire that changed the most: International sea transport

  • 2010: questionnaire length increased significantly for a large subsample
  • Despite this, the prb index went from 0,02 to 0,16

– The 2006 sample was very small, so actual differences may be more moderate

  • But what about the reasons for response burden? Did they change?


13

Response burden reasons for the sea transport questionnaires in 2006 and 2010.

[Bar chart: percent of respondents citing each reason, 2006 vs 2010. Reasons: Mismatch, No. of questions, Calculations, Terms, Resp. categories, E-usability, E-technical, Other, Layout, Need of help, Wait for information. Y-axis: Percent, 10–60.]

14

The web questionnaire that changed the most: International sea transport

  • Five reasons are down, two are a bit up, two markedly up, and two new reasons are introduced.
  • New industry-specific items may have introduced new problems

– But we remember that calculations, response categories, need of help and wait for information were up for the construction questionnaire

  • Anyway, retrieval and judgment problems appear to have increased in importance
  • Perceived response burden is down, although new sources of response burden have become part of the prb cocktail.

15

Summing up so far…

  • Increased response burden seems to go together with a greater awareness of response burden sources (manufacturing questionnaires)
  • But reduced response burden does not necessarily go together with reduced concerns about sources of response burden (sea transport questionnaire)
  • What seems to be just as important is how much the questionnaires are changed
  • A consciousness-raising process regarding response burden reasons? Consciousness raised by…

– high response burden
– salient changes in the questionnaire
– other factors

16

How different filters affected the distribution of response burden reasons. All structural business questionnaires in 2010.

[Bar chart: percent of respondents citing each reason, by filter: mid category excluded, mid category included, mid category only. Reasons: No. of questions, Layout, E-usability, E-technical, Terms, Calculations, Mismatch, Response categories, Need of help, Wait for information, Other. Y-axis: Percent, 5–50.]

17

The question on response burden reasons also worked for one ”non-target” group

  • The ”mid category” agreed with those reporting high prb on the reasons for response burden:

– Calculations, mismatch, response categories plus need of help finding information: information retrieval and judgment!

  • Still, those reporting high prb gave the clearest message regarding what the problems were
  • We plan to ask all respondents in future prb surveys about sources of response burden.

18

Perceived response burden: does it have any consequences for response quality?

  • Respondents taking the response task seriously -> thorough, time consuming response process -> find the task difficult

– High quality data
– High perceived response burden

  • Respondents “satisficing” -> not very concerned about definitions, calculations, response categories etc. -> find the task easy

– Low quality data
– Low perceived response burden

  • Both categories of respondents may make errors

– To what extent do they?


19

Average number of quality violations among respondents to the Norwegian wage and salary survey (2008), dependent on how easy or burdensome they found the questionnaire

[Bar chart: average rate of errors (violations of controls) by level of perceived response burden. Share of respondents per category: Very easy 16,9 %, Rather easy 49,2 %, Neither 8,8 %, Rather heavy 19,3 %, Very heavy 5,6 %. Y-axis: 0,0–12,0.]

Method: measuring number of violations of one item nonresponse control and three consistency checks
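The grouping behind this chart can be sketched as follows. The respondent records and the number of controls (one item nonresponse control plus three consistency checks, four in total) come from the slide; the example data and field names are purely illustrative:

```python
# Hypothetical respondent records: perceived burden category plus the
# outcome of four edit controls (True = violation). The data values are
# made up; only the method (count violations per respondent, average
# within each burden category) follows the slide.
respondents = [
    {"burden": "Very easy",    "violations": [True, True, False, True]},
    {"burden": "Very easy",    "violations": [False, True, False, False]},
    {"burden": "Rather heavy", "violations": [True, False, False, False]},
]

def avg_violations(records):
    """Average number of control violations per burden category."""
    totals, counts = {}, {}
    for rec in records:
        cat = rec["burden"]
        totals[cat] = totals.get(cat, 0) + sum(rec["violations"])
        counts[cat] = counts.get(cat, 0) + 1
    return {cat: totals[cat] / counts[cat] for cat in totals}

print(avg_violations(respondents))  # → {'Very easy': 2.0, 'Rather heavy': 1.0}
```

Plotting these per-category averages against the category shares gives a chart of the kind shown above.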

20

Conclusions

  • Because more respondents thought the questionnaire was easy than difficult, errors made by respondents who thought the questions were easy to answer were a bigger quality problem than errors made by those who found it difficult and burdensome.
  • “Those who complain are the cleverest when it comes to finding problems. The errors made by those who are most content are the most serious errors.”

  • It’s time to get more scientific about measuring perceived response burden and its relation to data quality!

21

Thank you!