  1. ASSESSING THE MEASUREMENT MODEL RELIABILITY AND VALIDITY USING SPSS/AMOS

  2. RELIABILITY AND VALIDITY

  3. Measurement Reliability
  [Figure: two scale readings, 125 pounds and 140 pounds, illustrating an inconsistent measure]
  ■ An instrument consistently measures the variable of interest.
  ■ In order for an instrument to be valid, it must also be reliable.
    – A reliable instrument, however, is not necessarily valid.

  4. Creating Reliable Measures
  ■ Test-Retest Method
  ■ Alternative-Form Method
  ■ Internal Consistency Method
    – Split-half reliability (see the sketch after this list)
    – Item-total reliability
  ■ Use Established Measures
  ■ Assessing Reliability of Research Workers
    – Inter-observer or inter-coder agreement
  ■ Reliability coefficients should be at least .70 to demonstrate a reliable measure.
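Since split-half reliability is named above, here is a minimal sketch of how it could be computed, assuming the item responses sit in a pandas DataFrame called `items` (the name and the odd/even split are illustrative assumptions, not part of the workshop materials):

```python
# Minimal sketch: split-half reliability with the Spearman-Brown correction.
# Assumes `items` is a pandas DataFrame with one column per item and one
# row per respondent (illustrative; not taken from the workshop data).
import numpy as np
import pandas as pd

def split_half_reliability(items: pd.DataFrame) -> float:
    odd = items.iloc[:, 0::2].sum(axis=1)    # half 1: odd-numbered items
    even = items.iloc[:, 1::2].sum(axis=1)   # half 2: even-numbered items
    r = np.corrcoef(odd, even)[0, 1]         # correlation between the halves
    return 2 * r / (1 + r)                   # Spearman-Brown corrected estimate
```

The corrected coefficient can then be checked against the .70 threshold mentioned above.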

  5. Measurement Validity
  ■ Does the empirical measure observe what it intends to observe?
  ■ Does the measure appropriately (adequately and accurately) reflect the meaning of the concept?

  6. Creating Valid Measures
  ■ Content Validity
    – Face Validity
    – Expert Panel Validity
  ■ Criterion Validity
    – Predictive Validity
    – Concurrent Validity
  ■ Construct Validity
    – Convergent Validity
    – Discriminant Validity

  7. Validity (cont.)
  ■ CONTENT VALIDITY: Refers to the representativeness of what is being measured relative to the intended concept (capturing all the dimensions of the social concept).
  ■ CRITERION VALIDITY: Established when the scores obtained on one measure can be accurately compared to those obtained with a more direct or already validated measure of the same phenomenon.
  ■ CONSTRUCT VALIDITY: Refers to the adequacy of the measuring instrument for measuring the theoretical concepts and relationships, as well as the adequacy of the logical structure of the conceptualization and operationalization.

  8. GROUP WORK: ASSESSING CONTENT VALIDITY

  9. Group Work
  ■ Finalize your construct conceptualization and measurement (your group should have the name of the construct, its definition, dimensions, and indicators or items).
  ■ Each member of each group will be given a piece of paper to judge the suitability of the items/indicators developed by the group to measure the construct.
  ■ Only one group member will stay and explain the construct to the raters.
  ■ The raters will evaluate the suitability of each item.
  ■ The raters will give their judgement to the group member who leads the presentation.
  ■ Calculate the content validity index/ratio. (Who will calculate?) See the sketch after this list.
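As a rough guide for the last step, here is a minimal sketch of the content validity ratio (Lawshe) and the item-level content validity index; the 0/1 rating coding and the example matrix are illustrative assumptions:

```python
# Minimal sketch: content validity ratio (CVR) and item-level content
# validity index (I-CVI). Assumes `ratings` has shape (n_raters, n_items)
# with 1 = "essential/relevant" and 0 = otherwise (coding is an assumption).
import numpy as np

def content_validity_ratio(ratings: np.ndarray) -> np.ndarray:
    n_raters = ratings.shape[0]
    n_essential = ratings.sum(axis=0)          # raters calling each item essential
    return (n_essential - n_raters / 2) / (n_raters / 2)

def item_cvi(ratings: np.ndarray) -> np.ndarray:
    return ratings.mean(axis=0)                # proportion judging the item relevant

ratings = np.array([[1, 1, 0],
                    [1, 0, 1],
                    [1, 1, 1],
                    [1, 1, 0]])
print(content_validity_ratio(ratings))         # item 1: (4 - 4/2) / (4/2) = 1.0
print(item_cvi(ratings).mean())                # scale-level CVI as mean of I-CVIs
```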

  10. DISCUSS WHAT YOU SHOULD DO TO IMPROVE THE CONTENT VALIDITY

  11. TYPES OF MEASUREMENT MODELS
  Reflective or Formative Measurement Models

  12. Types of Measurement Models
  ■ Measurement models can be:
    – Reflective Measurement Model
    – Formative Measurement Model

  13. Reflective vs. Formative
  ■ Reflective indicators are seen as functions of the latent construct; changes in the latent construct are reflected in changes in the indicator (manifest) variables.
  ■ Reflective indicators are represented as single-headed arrows pointing from the latent construct outward to the indicator variables; the associated coefficients for these relationships are called outer loadings.

  14. Reflective vs. Formative
  ■ Reflective indicators are considered “effects” of the LVs. In other words, the LVs cause or form the indicators (Chin, 1998b).
  ■ All reflective indicators measure the same underlying phenomenon, namely the LV. Whenever the LV changes, all reflective indicators should change accordingly, which refers to internal consistency (Bollen, 1984). Consequently, all reflective indicators should correlate positively (Urbach & Ahlemann, 2010, p. 11).
  ■ Direction of causality is from construct to measure.
  ■ Indicators are expected to be correlated.
  ■ Dropping an indicator from the measurement model does not alter the meaning of the construct.
  ■ Takes measurement error into account at the item level.
  ■ Similar to factor analysis.
  ■ Typical for management and social science research.

  15. Reflective vs. Formative
  ■ In contrast, formative indicators are assumed to cause a latent construct, and changes in the indicators determine changes in the value of the latent construct (Diamantopoulos & Winklhofer, 2001; Diamantopoulos, Riefler & Roth, 2008).
  ■ Formative indicators are represented by single-headed arrows pointing inward from the indicator variables to the latent construct; the associated coefficients for these formative relationships are called outer weights.

  16. Reflective vs. Formative
  ■ Formative indicators cause or form the LV by definition (Chin, 1998b). These indicators are viewed as the cause variables that reflect the conditions under which the LV is realized. Since there is no direct causal relationship from the LV to the indicators (only the reverse), formative indicators may even be inversely related to each other. In other words, formative indicators of the same LV do not necessarily have to correlate (Bollen, 1984; Rossiter, 2002).
  ■ Direction of causality is from measure to construct (Urbach & Ahlemann, 2010, p. 11).
  ■ Indicators are not expected to be correlated.
  ■ Dropping an indicator from the measurement model may alter the meaning of the construct.
    – There is no such thing as internal consistency reliability.
  ■ Based on multiple regression.
    – Need to take care of multicollinearity (see the sketch after this list).
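Since the multicollinearity check is the key diagnostic for formative indicators, here is a minimal sketch of variance inflation factors computed with plain least squares; the DataFrame `X` and its columns are illustrative assumptions:

```python
# Minimal sketch: variance inflation factors (VIF) for formative indicators.
# Assumes `X` is a pandas DataFrame of indicator scores (illustrative).
import numpy as np
import pandas as pd

def vif(X: pd.DataFrame) -> pd.Series:
    out = {}
    for col in X.columns:
        y = X[col].to_numpy()
        A = np.column_stack([np.ones(len(X)),             # intercept column
                             X.drop(columns=col).to_numpy()])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)      # regress on the rest
        r2 = 1 - ((y - A @ beta).var() / y.var())         # R^2 of that regression
        out[col] = 1 / (1 - r2)                           # VIF = 1 / (1 - R^2)
    return pd.Series(out)
```

VIF values above roughly 5 (or 3.3 by stricter conventions) are commonly read as a collinearity warning.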

  17. Reflective vs. Formative (Albers, 2010)
  [Figure: two diagrams of hotel-guest Satisfaction. Reflective panel: Satisfaction is reflected in the indicators “I feel well in this hotel”, “I’m always happy to stay in this hotel”, and “I recommend this hotel to others”. Formative panel: Satisfaction is formed by the indicators “The service is good”, “The personnel is friendly”, and “The room is well-equipped”.]

  18. ASSESSING THE REFLECTIVE MEASUREMENT MODEL
  For this workshop we will focus on the Reflective Measurement Model.

  19. Assessing Teacher Commitment Scale Construct Validity
  ■ Data: TCOMM.sav (a loading sketch in Python follows this slide)
  ■ Questionnaire: See Attachment
  ■ Teacher Commitment dimensions (conceptual and operational definitions: see next table):
    – KOM1 – KOM5: Commitment to School
    – KOM6 – KOM10: Commitment to Student
    – KOM11 – KOM13: Commitment to Teaching
    – KOM14 – KOM17: Commitment to Profession
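For readers who prefer scripting the hands-on steps, here is a minimal sketch of loading the workshop data outside SPSS; `pd.read_spss` requires the optional pyreadstat package, and the dimension slices simply follow the item table above:

```python
# Minimal sketch: load TCOMM.sav and slice the four commitment dimensions.
# pd.read_spss needs the pyreadstat dependency installed.
import pandas as pd

df = pd.read_spss("TCOMM.sav")

dimensions = {
    "Commitment to School":     [f"KOM{i}" for i in range(1, 6)],     # KOM1-KOM5
    "Commitment to Student":    [f"KOM{i}" for i in range(6, 11)],    # KOM6-KOM10
    "Commitment to Teaching":   [f"KOM{i}" for i in range(11, 14)],   # KOM11-KOM13
    "Commitment to Profession": [f"KOM{i}" for i in range(14, 18)],   # KOM14-KOM17
}
for name, items in dimensions.items():
    print(name, df[items].shape)   # sanity check: respondents x items
```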

  20. Teacher Commitment as a Multidimensional Scale

  21. HANDS-ON ACTIVITIES WITH SPSS

  22. SOME NOTES ON ASSESSING CONSTRUCT VALIDITY
  For the reflective model

  23. Reflective Measurement Models
  ■ Indicator reliability
    – Squared loadings
  ■ Internal consistency
    – Composite reliability
    – Cronbach’s alpha
  ■ Convergent validity
    – Average Variance Extracted (AVE)
  ■ Discriminant validity
    – Fornell-Larcker criterion
    – Cross loadings
    – HTMT ratio

  24. Indicator Reliability
  ■ Indicator reliability denotes the indicator variance that is explained by the latent variable.
  ■ The value is between 0 and 1.
  ■ When the indicator and latent variables are standardized, the indicator reliability equals the squared indicator loading.
  ■ Normally it should be at least 0.25 to 0.5.
  ■ However, reflective indicators should be eliminated from measurement models if their loadings within the PLS model are smaller than 0.7 (Hulland, 1999, p. 198). A computation sketch follows this slide.
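A minimal sketch of that calculation, with illustrative loading values (not taken from the workshop data):

```python
# Minimal sketch: indicator reliability as the squared standardized loading,
# flagging items below the 0.7 loading rule of thumb (Hulland, 1999).
import pandas as pd

loadings = pd.Series({"KOM1": 0.82, "KOM2": 0.75, "KOM3": 0.64})  # illustrative
indicator_reliability = loadings ** 2      # share of variance the LV explains
print(indicator_reliability)
print("candidates for removal:", list(loadings[loadings < 0.7].index))
```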

  25. Indicator Reliability

  26. Internal Consistency (Cronbach’s α)

  $$\alpha = \frac{N}{N-1}\left(1 - \frac{\sum_{i=1}^{N} \sigma_i^2}{\sigma_t^2}\right)$$

  where $N$ = number of indicators assigned to the factor, $\sigma_i^2$ = variance of indicator $i$, and $\sigma_t^2$ = variance of the sum of all assigned indicators’ scores (a computation sketch follows this slide).
  ■ Measures the reliability of indicators.
  ■ The value is between 0 and 1.
  ■ In an early phase 0.7 is acceptable, but in later phases values of 0.8 or 0.9 are more desirable (Nunnally, 1978).
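A minimal sketch of this formula, assuming the indicators for one factor sit in a pandas DataFrame (one column per indicator, one row per respondent):

```python
# Minimal sketch: Cronbach's alpha, directly following the formula above.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    n = items.shape[1]                           # N indicators
    item_vars = items.var(axis=0, ddof=1)        # sigma_i^2 for each indicator
    total_var = items.sum(axis=1).var(ddof=1)    # sigma_t^2 of the sum score
    return (n / (n - 1)) * (1 - item_vars.sum() / total_var)
```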

  27. Composite Reliability

  $$\rho_c = \frac{\left(\sum_i \lambda_i\right)^2}{\left(\sum_i \lambda_i\right)^2 + \sum_i \mathrm{var}(\varepsilon_i)}$$

  where $\lambda_i$ = loading of indicator $i$ of a latent variable and $\varepsilon_i$ = measurement error of indicator $i$; the coefficient is computed per reflective measurement model (per construct).
  ■ Measures the reliability of indicators.
  ■ The value is between 0 and 1.
  ■ Composite reliability should be 0.7 or higher to indicate adequate convergence or internal consistency (Gefen et al., 2000).
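A minimal sketch of the formula, assuming standardized indicators so that var(ε_i) = 1 − λ_i²; the loading values are illustrative:

```python
# Minimal sketch: composite reliability from standardized loadings,
# using var(error_i) = 1 - lambda_i^2 for standardized indicators.
import numpy as np

def composite_reliability(loadings) -> float:
    lam = np.asarray(loadings)
    error_var = 1 - lam ** 2                 # measurement error variances
    return lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())

print(composite_reliability([0.82, 0.75, 0.64]))   # ~0.78, above the 0.7 cutoff
```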

  28. Average Variance Extracted (AVE)

  $$\mathrm{AVE} = \frac{\sum_i \lambda_i^2}{\sum_i \lambda_i^2 + \sum_i \mathrm{var}(\varepsilon_i)}$$

  where $\lambda_i^2$ = squared loading of indicator $i$ of a latent variable and $\mathrm{var}(\varepsilon_i)$ = variance of the measurement error of indicator $i$.
  ■ Comparable to the proportion of variance explained in factor analysis.
  ■ The value ranges from 0 to 1.
  ■ AVE should exceed 0.5 to suggest adequate convergent validity (Bagozzi & Yi, 1988; Fornell & Larcker, 1981).
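The same illustrative loadings give a quick AVE check under the same standardization assumption:

```python
# Minimal sketch: AVE from standardized loadings; with standardized
# indicators this reduces to the mean of the squared loadings.
import numpy as np

def average_variance_extracted(loadings) -> float:
    sq = np.asarray(loadings) ** 2
    return sq.sum() / (sq.sum() + (1 - sq).sum())

print(average_variance_extracted([0.82, 0.75, 0.64]))  # ~0.55, above 0.5
```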

  29. Discriminant Validity
  ■ Fornell & Larcker (1981) criterion
    – A latent variable should explain the variance of its own indicators better than the variance of other latent variables.
    – The AVE of a latent variable should be higher than the squared correlations between the latent variable and all other variables (Chin, 2010; Chin, 1998; Fornell & Larcker, 1981). See the sketch after this list.
  ■ Cross loadings
    – The loadings of an indicator on its assigned latent variable should be higher than its loadings on all other latent variables.
  ■ Heterotrait-Monotrait (HTMT) ratio
    – The value should be lower than 0.9, or more conservatively lower than 0.85.
    – Alternatively, the ratio should differ significantly from 1.
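A minimal sketch of the Fornell-Larcker check; the construct scores and AVE values below are illustrative assumptions, not results from the workshop data:

```python
# Minimal sketch: Fornell-Larcker table with AVEs on the diagonal and
# squared construct correlations off the diagonal. Discriminant validity
# holds when each diagonal entry exceeds every entry in its row/column.
import numpy as np
import pandas as pd

def fornell_larcker(scores: pd.DataFrame, ave: pd.Series) -> pd.DataFrame:
    cols = list(scores.columns)
    mat = (scores.corr() ** 2).to_numpy()          # squared LV correlations
    np.fill_diagonal(mat, ave[cols].to_numpy())    # put the AVEs on the diagonal
    return pd.DataFrame(mat, index=cols, columns=cols)

rng = np.random.default_rng(0)
scores = pd.DataFrame(rng.standard_normal((100, 2)),
                      columns=["School", "Student"])   # illustrative LV scores
ave = pd.Series({"School": 0.58, "Student": 0.62})     # illustrative AVEs
print(fornell_larcker(scores, ave))
```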

  30. ASSESSING THE FORMATIVE MEASUREMENT MODEL
