26:010:557 / 26:620:557 Social Science Research Methods
Dr. Peter R. Gillett, Associate Professor, Department of Accounting & Information Systems, Rutgers Business School – Newark & New Brunswick
February 24, 2006


SLIDE 1: 26:010:557 / 26:620:557 Social Science Research Methods

  • Dr. Peter R. Gillett
  • Associate Professor, Department of Accounting & Information Systems
  • Rutgers Business School – Newark & New Brunswick
  • February 24, 2006

SLIDE 2: Overview

  • Measurement Theory
  • Research Design
  • Internal Validity
  • External Validity
  • Research Design Principles
  • Experimental Validity
    – Internal, External, Construct, Statistical
  • Conclusion

SLIDE 3: Research Design

  • Research Design is the plan and structure of investigation
    – Framework
    – Organization
    – Configuration of elements
  • Research Design has two purposes
    – To answer research questions
    – To control variance
      · Experimental
      · Extraneous
      · Error

SLIDE 4: Research Design

  • Research Design tells us
    – What observations to make
    – How to make them
    – How to analyze their quantitative representations
  • Recall that power = 1 – beta risk = probability of correctly rejecting a false null hypothesis
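The power relationship recalled above can be made concrete with a short simulation. The sketch below is my own illustration, not from the lecture: the function name and the known-variance z-test are assumptions. It estimates power as the proportion of simulated experiments that correctly reject a false null hypothesis.

```python
import random

def estimate_power(effect_size, n_per_group, trials=2000, seed=1):
    """Monte Carlo power estimate for a two-group comparison.

    Power = 1 - beta = probability of rejecting the null when the true
    group difference is `effect_size` (in SD units; sigma = 1 is assumed
    known, so a two-sided z-test at alpha = .05 is used).
    """
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        mean_diff = sum(treated) / n_per_group - sum(control) / n_per_group
        se = (2.0 / n_per_group) ** 0.5   # std. error of the mean difference
        if abs(mean_diff / se) > 1.96:    # two-sided critical value, alpha = .05
            rejections += 1
    return rejections / trials
```

With a medium effect (0.5 SD) and 64 participants per group the estimate lands near the conventional 0.80; with a true effect of zero, the rejection rate falls back toward alpha.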

SLIDE 5: Research Design

  • Example: detecting admissions discrimination
    – A simple design randomly assigns males or females to colleges and compares admission rates
    – A factorial design crosses Gender with three levels of Ability (in this case both active variables) and tests the interaction
    – Note that parallel tests at different levels of ability would not provide evidence that is as clear
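The factorial design in the example above amounts to a cross-partition of factor levels: every combination of Gender and Ability defines one cell. A minimal sketch (the function name and factor labels are mine, not from the slides):

```python
import itertools

def factorial_cells(**factors):
    """Return every cell of a complete factorial design: one dict per
    combination of factor levels (the cross-partition)."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in itertools.product(*factors.values())]

# The 2 x 3 admissions example: Gender crossed with three Ability levels.
cells = factorial_cells(gender=["male", "female"],
                        ability=["low", "medium", "high"])
```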

SLIDE 6: Research Design

  • Maxmincon
    – Maximize systematic variance
    – Minimize error variance
    – Control extraneous variance
  • N.B. Here we are considering the variance of the dependent variable

SLIDE 7: Research Design

  • Experimental variance
    – Plan and conduct the research so that the experimental conditions are as different as possible
  • Extraneous variance
    – Choose participants who are as homogeneous as possible on extraneous independent variables
    – Whenever possible, assign subjects to experimental groups and conditions randomly, and assign conditions and other factors to experimental groups randomly
    – Control extraneous variables by building them into the design
    – Match participants and assign them to experimental groups at random
  • Error variance
    – Reduce errors
    – Increase the reliability of measures
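The random-assignment advice above can be sketched in a few lines. This is an illustrative helper under my own naming, not a procedure from the lecture:

```python
import random

def randomize_groups(participants, n_groups, seed=None):
    """Shuffle participants, then deal them round-robin into n_groups of
    as-equal-as-possible size (simple random assignment)."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    groups = [[] for _ in range(n_groups)]
    for i, person in enumerate(pool):
        groups[i % n_groups].append(person)
    return groups
```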

SLIDE 8: Research Design

  • Experiments
    – In an experiment, the researcher manipulates or controls one or more of the independent variables
  • Nonexperiments
    – In nonexperimental research, the nature of the variables precludes manipulation (e.g., sex, intelligence, occupation)
  • "The ideal of science is the controlled experiment" (K&L, p. 467)

SLIDE 9: Research Design

  • Four "faulty" designs
  • Notation:
    – X: X is manipulated
    – ~X: X is not manipulated (i.e., subject not given X)
    – (X): X is not manipulated but measured or imagined

SLIDE 10: Research Design

  • Design 19.1: One Group
      (a) X   Y (Experimental)
      (b) (X) Y (Nonexperimental)
  • "One-Shot Case Study"
  • Scientifically worthless

SLIDE 11: Research Design

  • Design 19.2: One Group, Before – After (Pretest – Posttest)
      (a) Yb X   Ya (Experimental)
      (b) Yb (X) Ya (Nonexperimental)
  • The group is compared to itself
  • Threats: measurement, history, maturation, regression

SLIDE 12: Research Design

  • Design 19.3: Simulated Before – After
      X Y
        Yb
  • Here we cannot tell whether the two groups were equivalent before X

SLIDE 13: Research Design

  • Design 19.4: Two Groups, No Control
      (a) X    Y (Experimental)
          ~X   ~Y
      (b) (X)  Y (Nonexperimental)
          (~X) ~Y
  • The groups are assumed to be equal on all other variables

SLIDE 14: Research Design

  • Criteria
    – Does the design adequately test the hypothesis?
    – Does the design adequately control independent variables?
      · Randomize whenever possible: select participants at random; assign participants to groups at random; assign experimental treatments to groups at random
      · Control independent variables so that extraneous, unwanted sources of systematic variance have minimal opportunity to operate
    – Generalizability

SLIDE 15: Research Design

  • Internal and External Validity
    – Campbell 1957; Campbell and Stanley 1963
    – The primary yardsticks by which the quality of research contributions is judged
    – These goals can and do conflict with each other
  • Internal Validity
    – Did the experimental manipulation really make a significant difference?

SLIDE 16: Internal Validity

  • Threats / alternative explanations
    – Measurement: measuring participants changes them
    – History: events occurring in the specific experimental situation may have influenced the outcome
    – Maturation: subjects generally may have changed or grown over time
    – Statistical regression: regression towards the mean
    – Instrumentation: changes in the measurement device, instrument, or process
    – Selection: characteristics of the subjects selected could have influenced the outcome
    – Attrition / experimental mortality: loss of subjects in some treatments or with certain characteristics
    – Interaction

SLIDE 17: Internal Validity

  • In a longitudinal study, we take repeated measurements of subjects at different points in time
  • What are the strengths and weaknesses of such studies as regards internal validity?

SLIDE 18: Internal & External Validity

  • "Campbell and Stanley (1963) say that internal validity is the sine qua non of research design, but that the ideal design should be strong in both internal and external validity, even though they are frequently contradictory." (K&L, p. 477)

SLIDE 19: External Validity

  • To what populations can the conclusions from an experiment be generalized?
    – Representativeness
      · Ecological representativeness
      · Variable representativeness
  • Threats
    – Reactive / interaction effects of testing
    – Interaction effects of selection biases
    – Reactive effects of experimental arrangements
    – Multiple-treatment interference
  • These threats could all have influenced outcomes, and therefore compromise generalizability beyond the subjects actually studied

SLIDE 20: Research Design Principles

  • Design is data discipline
  • A design is formally a subset of the Cartesian product of the independent variable(s) and the dependent variable
  • A complete design is based on a cross-partition of the independent variables
  • We will not discuss incomplete designs
  • Analysis of variance is a statistical technique appropriate for experimental designs; it is not appropriate if participants cannot be assigned at random and there are unequal numbers of cases in the cells of the factorial design

SLIDE 21: Research Design Principles

  • Design 20.1: Experimental Group – Control Group: Randomized Participants
      [R] X  Y (Experimental)
          ~X Y (Control)
  • "Best" for many purposes

SLIDE 22: Research Design Principles

  • Control group
    – Formerly meant exclusively the group that did not receive a treatment
    – This is less obvious when there are multiple levels of treatment
    – More generally, it now means the particular group against which comparisons are made

SLIDE 23: Research Design Principles

  • Design 20.2: Experimental Group – Control Group: Matched Participants
      [Mr] X  Y (Experimental)
           ~X Y (Control)
  • Participants are matched on one or more attributes and randomly assigned to the two groups

SLIDE 24: Research Design Principles

  • Matching versus Randomization
    – Randomization is preferred
    – In practice, matching may be necessary
      · Matching by equating participants
      · Frequency distribution matching method (can be tricky with multiple variables)
      · Holding variables constant
      · Incorporating nuisance variables into the research design
      · Participants acting as their own controls
    – Matching is only relevant when the matching variables are correlated with the dependent variable
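Design 20.2's "match, then randomize within pairs" idea can be sketched as follows. This is my own illustration (names and single-attribute matching via sorting are assumptions, not the lecture's procedure):

```python
import random

def matched_pairs_assign(participants, key, seed=None):
    """Sort participants on the matching attribute, pair adjacent
    participants, and randomly split each pair between the experimental
    and control groups (an odd leftover participant is dropped)."""
    rng = random.Random(seed)
    ordered = sorted(participants, key=key)
    experimental, control = [], []
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i], ordered[i + 1]]
        rng.shuffle(pair)           # randomization within the matched pair
        experimental.append(pair[0])
        control.append(pair[1])
    return experimental, control
```

Because the pair members are near-equal on the matching attribute, that attribute's extraneous variance is roughly balanced across the two groups, while the within-pair shuffle preserves random assignment.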

SLIDE 25: Research Design Principles

  • Design 20.3: Before and After Control Group (Pretest – Posttest)
      (a) [R]  Yb X  Ya (Experimental)
               Yb ~X Ya (Control)
      (b) [Mr] Yb X  Ya (Experimental)
               Yb ~X Ya (Control)

SLIDE 26: Research Design Principles

  • Design 20.3 supplies a control group against which the difference Ya – Yb can be checked
  • However, difference scores are problematic unless the experimental effect is strong
  • In addition, the pretest can have a sensitizing effect on participants, which decreases both internal and external validity
    – Pretests should be avoided when the testing procedures are unusual

SLIDE 27: Research Design Principles

  • Difference scores can be problematic: differences may be small compared to the error of measurement
  • Residualized scores may be used instead
    – Predict posttest scores from pretest scores based on correlations (regression)
    – Subtract predicted posttest scores from actual posttest scores
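The residualizing procedure above is simple linear regression followed by subtraction. A minimal sketch (the function name is mine; this uses ordinary least squares on a single pretest predictor):

```python
def residualized_scores(pretest, posttest):
    """Predict each posttest score from the pretest score by least
    squares, then return actual minus predicted posttest scores."""
    n = len(pretest)
    mean_x = sum(pretest) / n
    mean_y = sum(posttest) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(pretest, posttest))
    sxx = sum((x - mean_x) ** 2 for x in pretest)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # Residual = actual posttest minus regression-predicted posttest
    return [y - (intercept + slope * x) for x, y in zip(pretest, posttest)]
```

Unlike raw difference scores, these residuals are uncorrelated with the pretest, which is why they behave better when the experimental effect is weak relative to measurement error.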

SLIDE 28: Research Design Principles

  • Design 20.4: Simulated Before – After, Randomized
      [R] X Ya
            Yb
  • Improves Design 19.3 by adding randomization, but is still fairly weak

SLIDE 29: Research Design Principles

  • Design 20.5: Three Group, Before – After
      [R] Yb X  Ya (Experimental)
          Yb ~X Ya (Control 1)
             X  Ya (Control 2)
  • Improves on Design 20.3 by adding a way of avoiding the confounding effects of the pretest
SLIDE 30: Research Design Principles

  • Design 20.6: Four Group, Before – After
      [R] Yb X  Ya (Experimental)
          Yb ~X Ya (Control 1)
             X  Ya (Control 2)
             ~X Ya (Control 3)
  • Strong, satisfying, potent controls; combines the best designs so far (20.1 and 20.3); somewhat impractical; incomplete

SLIDE 31: Research Design Principles

  • We can think of Design 20.6 as a factorial design, crossing the experimental manipulation with pretest – no pretest
  • A factorial design is the structure of research in which two or more independent variables are juxtaposed in order to study their independent and interactive effects on a dependent variable
  • For those who want to read more, Chapter 21 gives more examples, and also considers alternative designs such as correlated group designs, repeated trials designs, and analysis of covariance

SLIDE 32: Experimental Validity

  • Internal Validity
  • External Validity
  • Construct Validity
  • Statistical Conclusion Validity

SLIDE 33: Experimental Validity

  • Internal validity: additional threats (see Cook and Campbell, 1979)
    – Ambiguity regarding causal influence: does A cause B, or does B cause A?
    – Diffusion of treatments: experimental and control groups share treatment information
    – Compensatory equalization of treatments: administrative and constituency reluctance to tolerate inequity
    – Compensatory rivalry by respondents: social competition reduces or reverses treatment differences
    – Resentful demoralization: outcomes affected by reaction to not receiving the desirable treatment
    – Local history effects

SLIDE 34: Experimental Validity

  • Construct validity: threats
    – Inadequate preoperational explication: operationalization not appropriate
    – Mono-operation bias: only one exemplar and/or measure used
    – Mono-method bias: all manipulations represented, or measures recorded, in the same way
    – Hypothesis-guessing: subjects behave as they believe the experimenters want
    – Evaluation apprehension: respondents attempt to present themselves as competent and healthy
    – Experimenter expectancies: data obtained can be biased by the experimenters' expectancies
    – Confounding constructs and levels of constructs: in testing whether A affects B, limited levels of A are varied and few levels of B measured
    – Interactions
    – Restricted generalizability across constructs: results apply to the constructs examined, not to related but distinct constructs

SLIDE 35: Experimental Validity

  • Statistical conclusion validity: threats
    – Low statistical power
    – Violation of assumptions of the statistical test
    – "Fishing" and error rates
    – Reliability of measures
    – Reliability of treatment implementation
    – Random irrelevancies
    – Random heterogeneity of respondents

SLIDE 36: Experimental Validity

  • Generally, trade-offs must be made between the different kinds of validity
  • Precedence
    – For theory-building
      · Internal, Construct, Statistical Conclusion, External
    – For applied research
      · Internal, External, Construct (effect), Statistical Conclusion, Construct (cause)

SLIDE 37: Experimental Validity

  • Classic text: "Quasi-Experimentation", Cook & Campbell, Houghton Mifflin, 1979
  • New edition: "Experimental and Quasi-Experimental Designs for Generalized Causal Inference", Shadish, Cook & Campbell, Houghton Mifflin, 2002