26:010:557 / 26:620:557 Social Science Research Methods
Dr. Peter R. Gillett
Associate Professor, Department of Accounting & Information Systems
Rutgers Business School, Newark & New Brunswick
February 24, 2006
Types of validity: Internal, External, Construct, Statistical
Design: a framework, an organization, a configuration of elements
- To answer research questions
- To control variance:
  - Experimental variance
  - Extraneous variance
  - Error variance
Experimental variance:
- Design, plan, and conduct the research so that the experimental conditions are as different as possible
Extraneous variance:
- Choose participants who are as homogeneous as possible on extraneous independent variables
- Whenever possible, assign subjects to experimental groups and conditions randomly, and assign conditions and other factors to experimental groups randomly
- Control extraneous variables by building them into the design
- Match participants and assign them to experimental groups at random
Error variance:
- Reduce errors
- Increase reliability of measures
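The random-assignment step above can be sketched in a few lines of Python (a minimal illustration, not from the slides; the function name and data are invented):

```python
import random

def randomly_assign(participants, conditions, seed=None):
    """Shuffle participants and deal them round-robin into conditions.

    With random assignment, extraneous participant characteristics are
    expected to spread evenly across conditions, so they contribute
    noise rather than systematic (extraneous) variance.
    """
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    assignment = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        assignment[conditions[i % len(conditions)]].append(p)
    return assignment

subjects = [f"S{i}" for i in range(12)]
assigned = randomly_assign(subjects, ["treatment", "control"], seed=42)
# 12 subjects split 6/6 between the two conditions, in random order
```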
In an experiment, the researcher manipulates or controls one or more independent variables. In nonexperimental research, the nature of the independent variables precludes manipulation; they are measured as they occur.
- X: X is manipulated
- ~X: X is not manipulated (i.e., subjects not given X)
- (X): X is not manipulated but measured or imagined
- Does the design adequately test the hypotheses?
- Does the design adequately control the independent variables?
  - Randomize whenever possible:
    - Select participants at random
    - Assign participants to groups at random
    - Assign experimental treatments to groups at random
  - Control independent variables so that extraneous, unwanted variance is minimized
- Generalizability
Internal validity: threats / alternative explanations
- Measurement: measuring participants changes them
- History: events occurring in the specific experimental situation may have influenced the outcome
- Maturation: subjects generally may have changed or grown over time
- Statistical regression: regression towards the mean
- Instrumentation: changes in the measurement device, instrument, or process
- Selection: characteristics of the subjects selected could have influenced the outcome
- Attrition / experimental mortality: loss of subjects in some treatments or with certain characteristics
- Interaction
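Statistical regression is easy to see in a simulation (a hypothetical sketch, not part of the slides): select subjects with extreme pretest scores, and their retest mean drifts back toward the population mean even with no treatment at all.

```python
import random

random.seed(1)

# True ability is stable; each observed score adds independent noise.
true_scores = [random.gauss(100, 10) for _ in range(10_000)]
pretest  = [t + random.gauss(0, 10) for t in true_scores]
posttest = [t + random.gauss(0, 10) for t in true_scores]

# "Select" the 500 subjects who scored highest on the pretest.
top = sorted(range(len(pretest)), key=lambda i: pretest[i])[-500:]

pre_mean  = sum(pretest[i] for i in top) / len(top)
post_mean = sum(posttest[i] for i in top) / len(top)

# With no treatment at all, the extreme group's mean falls back
# toward the population mean of 100 on retest.
print(f"pretest mean of top group:  {pre_mean:.1f}")
print(f"posttest mean of top group: {post_mean:.1f}")
```

This is why a design that selects extreme scorers and then measures improvement invites a regression artifact as an alternative explanation.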
External validity: to what populations can the conclusions from an experiment be generalized?
- Representativeness:
  - Ecological representativeness
  - Variable representativeness
- Threats:
  - Reactive / interaction effects of testing
  - Interaction effects of selection biases
  - Reactive effects of experimental arrangements
  - Multiple-treatment interference
- Design is data discipline
- A design is formally a subset of the Cartesian product of the independent variables
- A complete design is based on a cross-partition of the independent variables
- We will not discuss incomplete designs
- Analysis of variance is a statistical technique appropriate to such designs
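The variance partition behind analysis of variance can be shown with a toy computation (illustrative only; the scores are made up): total variability splits into a between-groups (experimental) component and a within-groups (error) component, and F is the ratio of their mean squares.

```python
# One-way ANOVA by hand: partition the total sum of squares into
# between-group (experimental) and within-group (error) parts.
groups = {
    "A": [4.0, 5.0, 6.0],
    "B": [7.0, 8.0, 9.0],
    "C": [10.0, 11.0, 12.0],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)

ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())
ss_within = sum((x - sum(g) / len(g)) ** 2
                for g in groups.values() for x in g)
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)

df_between = len(groups) - 1               # 3 groups -> 2
df_within = len(all_scores) - len(groups)  # 9 scores  -> 6
F = (ss_between / df_between) / (ss_within / df_within)

# SS_between + SS_within recovers SS_total exactly.
print(f"SS_between={ss_between}, SS_within={ss_within}, F={F:.1f}")
```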
Randomization is preferred; in practice, matching may be necessary:
- Matching by equating participants
- Frequency distribution matching method
  - Can be tricky with multiple variables
- Holding variables constant
- Incorporating nuisance variables into the research design
- Participants acting as their own controls
Matching is only relevant when the variables matched on are related to the dependent variable.
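Matching followed by random assignment can be sketched as matched-pair randomization (a hypothetical illustration; the function name and data are invented):

```python
import random

def matched_pair_assign(participants, covariate, seed=None):
    """Sort by the matching variable, pair off neighbors, then flip a
    coin within each pair.  Each pair sends one member to each
    condition, so the groups are balanced on the covariate while the
    assignment itself stays random.  Assumes an even number of
    participants (zip silently drops an odd one out)."""
    rng = random.Random(seed)
    ordered = sorted(participants, key=covariate)
    treatment, control = [], []
    for a, b in zip(ordered[0::2], ordered[1::2]):
        if rng.random() < 0.5:
            a, b = b, a
        treatment.append(a)
        control.append(b)
    return treatment, control

# Match on a pretest score (made-up data).
people = [("S1", 10), ("S2", 52), ("S3", 12), ("S4", 50),
          ("S5", 31), ("S6", 29)]
t, c = matched_pair_assign(people, covariate=lambda p: p[1], seed=7)
```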
Pretests should be avoided when the testing itself is likely to change participants (a reactive effect).
- We can think of Design 20.6 as a factorial design
- A factorial design is the structure of research in which two or more independent variables are juxtaposed in order to study their independent and interactive effects on a dependent variable
- For those who want to read more, Chapter 21 gives further detail
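A minimal 2x2 factorial layout (hypothetical cell means, not from the slides) shows what "independent and interactive effects" means: main effects compare marginal means, and the interaction asks whether the effect of factor A differs across the levels of factor B.

```python
# Cell means of a hypothetical 2x2 factorial: factor A (a1, a2)
# crossed with factor B (b1, b2).
cell = {("a1", "b1"): 10.0, ("a1", "b2"): 12.0,
        ("a2", "b1"): 14.0, ("a2", "b2"): 22.0}

# Main effect of A: difference between the marginal means of a2 and a1.
main_A = ((cell["a2", "b1"] + cell["a2", "b2"]) / 2
          - (cell["a1", "b1"] + cell["a1", "b2"]) / 2)

# Main effect of B, likewise.
main_B = ((cell["a1", "b2"] + cell["a2", "b2"]) / 2
          - (cell["a2", "b1"] + cell["a1", "b1"]) / 2)

# Interaction: does the effect of A change across the levels of B?
# A nonzero value means the factors do not act independently.
interaction = ((cell["a2", "b2"] - cell["a1", "b2"])
               - (cell["a2", "b1"] - cell["a1", "b1"]))

print(main_A, main_B, interaction)
```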
Internal validity: additional threats (see Cook and Campbell):
- Ambiguity regarding causal influence: does A cause B, or does B cause A?
- Diffusion of treatments: experimental and control groups share treatment information
- Compensatory equalization of treatments: administrative and constituency reluctance to tolerate inequity
- Compensatory rivalry by respondents: social competition reduces or reverses treatment differences
- Resentful demoralization: outcomes affected by reaction to not receiving the desirable treatment
- Local history effects
Construct validity: threats
- Inadequate preoperational explication: operationalization not appropriate
- Mono-operation bias: only one exemplar and/or measure used
- Mono-method bias: all manipulations represented or measures recorded in the same way
- Hypothesis-guessing: subjects behave as they believe the experimenters want
- Evaluation apprehension: respondents attempt to present themselves as competent and healthy
- Experimenter expectancies: data obtained can be biased by the experimenters' expectancies
- Confounding constructs and levels of constructs: in testing whether A affects B, limited levels of A are varied and few levels of B measured
- Interactions
- Restricted generalizability across constructs: results apply to the constructs examined, not to related but distinct constructs
- Internal, Construct, Statistical Conclusion, External
- Internal, External, Construct (effect), Statistical