ASSESSING THE MEASUREMENT MODEL RELIABILITY AND VALIDITY USING SPSS/AMOS (PowerPoint Presentation)


SLIDE 1

ASSESSING THE MEASUREMENT MODEL RELIABILITY AND VALIDITY

USING SPSS/AMOS

SLIDE 2

RELIABILITY AND VALIDITY

SLIDE 3

Measurement Reliability

■ An instrument consistently measures the variable of interest
■ In order for an instrument to be valid, it must also be reliable
– A reliable instrument, however, is not necessarily valid

[Illustration: scale readings of 125 pounds vs. 140 pounds]

SLIDE 4

Creating Reliable Measures

■ Test-Retest Method
■ Alternative-Form Method
■ Internal Consistency Method

– Split-half reliability
– Item-total reliability

■ Use Established Measures
■ Assessing Reliability of Research Workers

– Inter-observer or inter-coder agreement

■ Reliability coefficients should be at least equal to .70 to demonstrate a reliable measure.
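One internal-consistency approach from the list above can be illustrated in a few lines of plain Python: split-half reliability with the Spearman-Brown correction, computed on invented item scores. This is a sketch of the idea, not SPSS output, and the function names are my own.

```python
def pearson(x, y):
    # Pearson correlation between two equal-length score lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(scores):
    # Odd/even item split: each respondent gets two half-test scores.
    half1 = [sum(row[0::2]) for row in scores]
    half2 = [sum(row[1::2]) for row in scores]
    r = pearson(half1, half2)
    # Spearman-Brown correction projects the half-test correlation
    # back to the reliability of the full-length test.
    return 2 * r / (1 + r)

# Rows = respondents, columns = items (made-up 5-point ratings).
scores = [
    [4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5],
    [3, 3, 2, 3], [4, 4, 4, 5], [1, 2, 1, 1],
]
reliability = split_half_reliability(scores)
```

For these invented scores the corrected coefficient comes out well above the .70 threshold quoted above, so the measure would count as reliable.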

SLIDE 5

Measurement Validity

■ Does the empirical measure observe what it intends to observe?
■ Does the measure appropriately (adequately and accurately) reflect the meaning of the concept?

SLIDE 6

Creating Valid Measures

■ Content Validity

– Face Validity – Expert Panel Validity

■ Criterion Validity

– Predictive Validity – Concurrent Validity

■ Construct Validity

– Convergent Validity – Discriminant Validity

SLIDE 7

Validity (cont.)

CONTENT VALIDITY
Refers to the representativeness of what is being measured relative to the intended concepts (capturing all the dimensions of the social concept).

CRITERION VALIDITY
Established when the scores obtained on one measure can be accurately compared to those obtained with a more direct or already validated measure of the same phenomenon.

CONSTRUCT VALIDITY
Refers to the adequacy of the measuring instrument for measuring the theoretical concepts and relationships; also the adequacy of the logical structure of the conceptualization and operationalization.

SLIDE 8
SLIDE 9

GROUP WORK: ASSESSING CONTENT VALIDITY

SLIDE 10

Group Work

■ Finalize your construct conceptualization and measurement (your group should have the name of the construct, definition, dimensions, and indicators or items).
■ Each member of each group will be given a piece of paper to judge the suitability of the items/indicators developed by a group to measure a construct.
■ Only one group member will stay and explain the construct to the raters.
■ The raters will evaluate the suitability of each item.
■ The raters will give their judgement to the group that leads the presentation.
■ Calculate the content validity index/ratio. (Who will calculate?)
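For the last step, the content validity index and Lawshe's content validity ratio can be computed directly from the raters' sheets. A sketch with invented rater judgements on a 1-4 relevance scale; `i_cvi` and `cvr` are hypothetical helper names, not part of any package.

```python
def i_cvi(ratings, relevant_min=3):
    # Item-level CVI: proportion of raters scoring the item as
    # relevant (3 or 4 on a 1-4 relevance scale).
    return sum(1 for r in ratings if r >= relevant_min) / len(ratings)

def cvr(ratings, essential_min=3):
    # Lawshe's content validity ratio: (n_e - N/2) / (N/2),
    # where n_e is the number of raters calling the item essential.
    n = len(ratings)
    n_e = sum(1 for r in ratings if r >= essential_min)
    return (n_e - n / 2) / (n / 2)

# Rows = items, columns = the five raters (made-up judgements).
item_ratings = [
    [4, 3, 4, 3, 4],
    [4, 4, 3, 2, 3],
    [2, 3, 2, 2, 1],
]
icvis = [i_cvi(r) for r in item_ratings]
s_cvi_ave = sum(icvis) / len(icvis)  # scale-level CVI, averaging method
```

Items with a low I-CVI (the third one here) are candidates for revision or deletion before the main study.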

SLIDE 11

DISCUSS WHAT YOU SHOULD DO TO IMPROVE THE CONTENT VALIDITY

SLIDE 12

TYPES OF MEASUREMENT MODELS

Reflective OR Formative Measurement Models

SLIDE 13

Type of Measurement Models

■ Measurement models can be:
– Reflective Measurement Model
– Formative Measurement Model

SLIDE 14

Reflective vs. Formative

■ Reflective indicators are seen as functions of the latent construct, and changes in the latent construct are reflected in changes in the indicator (manifest) variables.
■ Reflective indicators are represented as single-headed arrows pointing from the latent construct outward to the indicator variables; the associated coefficients for these relationships are called outer loadings.

SLIDE 15

Reflective vs. Formative

■ Reflective indicators are considered “effects” of the LVs. In other words, the LVs cause or form the indicators (Chin, 1998b).
■ All reflective indicators measure the same underlying phenomenon, namely the LV. Whenever the LV changes, all reflective indicators should change accordingly, which refers to internal consistency (Bollen, 1984). Consequently, all reflective indicators should correlate positively (Urbach & Ahlemann, 2010, p. 11).
■ Direction of causality is from construct to measure.
■ Indicators are expected to be correlated.
■ Dropping an indicator from the measurement model does not alter the meaning of the construct.
■ Takes measurement error into account at the item level.
■ Similar to factor analysis.
■ Typical for management and social science research.

SLIDE 16

Reflective vs. Formative

■ In contrast, formative indicators are assumed to cause a latent construct, and changes in the indicators determine changes in the value of the latent construct (Diamantopoulos & Winklhofer, 2001; Diamantopoulos, Riefler & Roth, 2008).
■ Formative indicators are represented by single-headed arrows pointing inward from the indicator variables toward the latent construct; the associated coefficients for these formative relationships are called outer weights.

SLIDE 17

Reflective vs. Formative

■ Formative indicators cause or form the LV by definition (Chin, 1998b). These indicators are viewed as the cause variables that reflect the conditions under which the LV is realized. Since there is no direct causal relationship between the LV and the indicators (but vice versa), formative indicators may even be inversely related to each other. In other words, formative indicators of the same LV do not necessarily have to correlate (Bollen, 1984; Rossiter, 2002).
■ Direction of causality is from measure to construct (Urbach & Ahlemann, 2010, p. 11).
■ Indicators are not expected to be correlated.
■ Dropping an indicator from the measurement model may alter the meaning of the construct.
– No such thing as internal consistency reliability.
■ Based on multiple regression.
– Need to take care of multicollinearity.

SLIDE 18

Reflective vs. Formative (Albers, 2010)

■ Reflective (Satisfaction):
– I feel well in this hotel.
– I’m always happy to stay in this hotel.
– I recommend this hotel to others.
■ Formative (Satisfaction):
– The service is good.
– The personnel is friendly.
– The room is well-equipped.

SLIDE 19
SLIDE 20
SLIDE 21

ASSESSING THE REFLECTIVE MEASUREMENT MODEL

For this workshop we will focus on the Reflective Measurement Model.

SLIDE 22

Assessing Teacher Commitment Scale Construct Validity

■ Data: TCOMM.sav
■ Questionnaire: See Attachment
■ Teacher Commitment Dimensions

Item         Dimension                  Conceptual and Operational Definition
KOM1–KOM5    Commitment to School       See next table
KOM6–KOM10   Commitment to Student      See next table
KOM11–KOM13  Commitment to Teaching     See next table
KOM14–KOM17  Commitment to Profession   See next table

SLIDE 23

Teacher Commitment as a Multidimensional Scale

SLIDE 24

HANDS-ON ACTIVITIES WITH SPSS

SLIDE 25

SOME NOTES ON ASSESSING CONSTRUCT VALIDITY

For the reflective model

SLIDE 26

Reflective Measurement Models

■ Indicator reliability
– Squared loadings
■ Internal Consistency
– Composite reliability
– Cronbach’s alpha
■ Convergent validity
– Average Variance Extracted (AVE)
■ Discriminant Validity
– Fornell-Larcker Criterion
– Cross loadings
– HTMT ratio

SLIDE 27

Indicator Reliability

■ The indicator reliability denotes the indicator variance that is explained by the latent variable.
■ The value is between 0 and 1.
■ When indicator and latent variables are standardized, the indicator reliability equals the squared indicator loading.
■ Normally it should be at least 0.25 to 0.5.
■ However, reflective indicators should be eliminated from measurement models if their loadings within the PLS model are smaller than 0.7 (Hulland, 1999, p. 198).
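As a small sketch (hypothetical loadings, not the TCOMM data): squaring each standardized loading gives the indicator reliability, and the 0.7 loading cutoff cited above flags weak indicators.

```python
# Standardized outer loadings for four hypothetical indicators.
loadings = {"KOM1": 0.82, "KOM2": 0.74, "KOM3": 0.63, "KOM4": 0.91}

# Indicator reliability = squared standardized loading.
indicator_reliability = {k: round(v ** 2, 3) for k, v in loadings.items()}

# Hulland's (1999) rule of thumb: drop indicators loading below 0.7.
to_drop = [k for k, v in loadings.items() if v < 0.7]
```

Here KOM3 loads at 0.63, so its indicator reliability (about 0.40) sits inside the lenient 0.25-0.5 band but fails the stricter 0.7 loading cutoff.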

SLIDE 28

Indicator Reliability

SLIDE 29

Internal Consistency (Cronbach’s α)

  • Measures the reliability of indicators
  • The value is between 0 and 1
  • In early phases 0.7 is acceptable, but in later phases values of 0.8 or 0.9 are more desirable (Nunnally, 1978)

$$\alpha = \frac{N}{N-1}\left(1 - \frac{\sum_{i=1}^{N} \sigma_i^2}{\sigma_t^2}\right)$$

N = number of indicators assigned to the factor
$\sigma_i^2$ = variance of indicator i
$\sigma_t^2$ = variance of the sum of all assigned indicators’ scores
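The formula can be checked with a few lines of Python on invented scores; population variances are used on both sides of the ratio, so the result is consistent with the definition above. Again an illustration, not SPSS output.

```python
def variance(xs):
    # Population variance (dividing by n); any convention works as long
    # as item and total variances use the same one.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(scores):
    # alpha = (N / (N - 1)) * (1 - sum(item variances) / total variance)
    n_items = len(scores[0])
    item_vars = [variance([row[i] for row in scores]) for i in range(n_items)]
    total_var = variance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Rows = respondents, columns = the N indicators of one factor (made up).
scores = [
    [4, 5, 4], [2, 3, 2], [5, 4, 5],
    [3, 3, 3], [4, 4, 5], [1, 2, 1],
]
alpha = cronbach_alpha(scores)
```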

SLIDE 30

Composite Reliability

  • Measures the reliability of indicators
  • The value is between 0 and 1
  • Composite reliability should be 0.7 or higher to indicate adequate convergence or internal consistency (Gefen et al., 2000)

$$\rho_c = \frac{\left(\sum_i \lambda_i\right)^2}{\left(\sum_i \lambda_i\right)^2 + \sum_i \operatorname{var}(\varepsilon_i)}$$

$\lambda_i$ = loading of indicator i of a latent variable
$\varepsilon_i$ = measurement error of indicator i
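A sketch of the computation with hypothetical standardized loadings; under standardization the error variance of each indicator can be taken as 1 − λᵢ², which is the assumption coded below.

```python
def composite_reliability(loadings):
    # rho_c = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    sum_l = sum(loadings)
    sum_err = sum(1 - l ** 2 for l in loadings)  # var(e_i), standardized case
    return sum_l ** 2 / (sum_l ** 2 + sum_err)

cr = composite_reliability([0.82, 0.74, 0.91])  # hypothetical loadings
```

These three loadings give a composite reliability of about 0.865, comfortably above the 0.7 threshold.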

SLIDE 31

Average Variance Extracted (AVE)

  • Comparable to the proportion of variance explained in factor analysis
  • Value ranges from 0 to 1
  • AVE should exceed 0.5 to suggest adequate convergent validity (Bagozzi & Yi, 1988; Fornell & Larcker, 1981)

2

i = squared loadings of indicator i of a latent variable

var(i ) = squared measurement error of indicator i

SLIDE 32

Discriminant Validity

  • Fornell & Larcker (1981) criterion

– A latent variable should explain the variance of its own indicators better than the variance of other latent variables
– The AVE of a latent variable should be higher than the squared correlations between the latent variable and all other latent variables (Chin, 2010; Chin, 1998; Fornell & Larcker, 1981)

  • Cross loadings

– The loadings of an indicator on its assigned latent variable should be higher than its loadings on all other latent variables.

  • Heterotrait-Monotrait ratio

– The value should be lower than 0.90, or more conservatively, lower than 0.85.
– Alternatively, the value should be significantly lower than 1.
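The Fornell-Larcker criterion can be checked mechanically once the AVEs and latent-variable correlations are in hand. A sketch on invented numbers; the construct names and values are illustrative only.

```python
# Invented AVEs and latent-variable correlations for three constructs.
ave = {"School": 0.68, "Student": 0.61, "Teaching": 0.55}
corr = {
    ("School", "Student"): 0.52,
    ("School", "Teaching"): 0.47,
    ("Student", "Teaching"): 0.76,
}

def fornell_larcker_ok(ave, corr):
    # Flag construct pairs whose squared correlation reaches either AVE.
    violations = []
    for (a, b), r in corr.items():
        if r ** 2 >= min(ave[a], ave[b]):
            violations.append((a, b))
    return violations
```

Here Student and Teaching correlate at 0.76, and 0.76² ≈ 0.578 exceeds Teaching's AVE of 0.55, so discriminant validity between those two constructs would be questioned.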

SLIDE 33

ASSESSING THE FORMATIVE MEASUREMENT MODEL

SLIDE 34

Assessing Formative Measurement Models

SLIDE 35

MODERN APPROACH

Rasch Measurement Model

SLIDE 36

Rasch Measurement Model