
SLIDE 1

So you think you can dance? Some concrete suggestions and cautions in evaluating/validating claims of college readiness

Thanos Patelis Center for Assessment Presentation at the National Conference on Student Assessment San Diego, CA

June 23, 2015

SLIDE 2

Overview

  • Claims of college readiness
  • Framework
  • Some methodologies
  • Examples of results
  • Cautions
  • Takeaways

Validation Methods 2

SLIDE 3

Claims of College Readiness (Samples)

  • Achieve ‐ Published English and mathematics benchmarks in 2004 (Achieve, 2004). The definitions of college and career readiness took the form of the content students should know and be able to do to be fully prepared to succeed in credit‐bearing college courses or in certain types of occupations.

  • ACT ‐ Using final course grade data from a large sample of colleges, ACT (Allen & Sconing, 2005) modeled the probability of success in typical first‐year courses as a function of ACT test scores. Success was defined as a course grade of B or higher, and for each college the ACT test score that yielded a .50 probability of success was identified. The median of these scores represents the college readiness benchmark. Benchmarks were obtained for four common first‐year courses: English Composition, using the ACT English score as the predictor; College Algebra, using the ACT Mathematics score; Social Science, using the ACT Reading score; and Biology, using the ACT Science score.

  • David Conley and EPIC ‐ The work that stimulated the conversation around college readiness was a study to discover what students must know and be able to do in order to succeed in entry‐level university courses (Conley, 2003). In a two‐year study involving more than 400 faculty and staff members from twenty research universities, all members of the Association of American Universities (AAU), statements representing college success were developed through extensive meetings and reviews.

  • NCEE ‐ The National Center on Education and the Economy (NCEE) presented results of research on the requirements of community colleges (NCEE, 2013a; 2013b). The rationale for focusing on community colleges was that they offer a gateway to four‐year colleges and the workforce. Community college staff members were asked what kind and level of literacy in mathematics and English is required of a high school graduate if that student is going to have a good chance of succeeding in the first year of a typical community college program.

  • Ohio Board of Regents ‐ In 2007, Ohio enacted a requirement as follows: “Not later than December 31, 2012, the presidents, or equivalent position, of all state institutions of higher education, or their designees, jointly shall establish uniform statewide standards in mathematics, science, reading, and writing each student enrolled in a state institution of higher education must meet to be considered in remediation‐free status” (see 3345.061 (F) of the Ohio Revised Code: http://codes.ohio.gov/orc/3345.061). In December 2012, the presidents of Ohio’s public colleges and universities established a set of standards and expectations in English (reading, writing, speaking, viewing, and listening), mathematics (mathematical processes, number and operations, algebra, geometry, probability, and statistics), and science (biology, chemistry, computer science, engineering, geology, and physics). Additionally, they established a set of college readiness indicators purported to guarantee remediation‐free status at any public post‐secondary institution in Ohio. These cutoffs were set in English, reading, and mathematics for the ACT, SAT, ACCUPLACER, and COMPASS (Ohio Higher Ed, 2012).

  • PARCC – Level 4 represents (a) in ELA/literacy: students performing at this level demonstrate a strong command of the knowledge, skills, and practices embodied by the Common Core State Standards for English language arts/literacy assessed at grade 11. They are academically prepared to engage successfully in entry‐level, credit‐bearing courses in College English Composition, Literature, and technical courses requiring college‐level reading and writing, and are exempt from having to take and pass college placement tests in two‐ and four‐year public institutions of higher education designed to determine whether they are academically prepared for such courses without need for remediation; (b) in mathematics: students performing at this level demonstrate a strong command of the knowledge, skills, and practices embodied by the Common Core State Standards for Mathematics assessed at Algebra II or Mathematics III. They are academically prepared to engage successfully in entry‐level, credit‐bearing courses in College Algebra, Introductory College Statistics, and technical courses requiring an equivalent level of mathematics, and are likewise exempt from such placement tests.

  • Smarter Balanced ‐ Level 3 represents (a) in ELA/literacy: students who perform at the College Content‐Ready level in English language arts/literacy demonstrate reading, writing, listening, and research skills necessary for introductory courses in a variety of disciplines, as well as subject‐area knowledge and skills associated with readiness for entry‐level, transferable, credit‐bearing English and composition courses; and (b) in mathematics: students who perform at the College Content‐Ready level demonstrate foundational mathematical knowledge and quantitative reasoning skills necessary for introductory courses in a variety of disciplines, as well as subject‐area knowledge and skills associated with readiness for entry‐level, transferable, credit‐bearing mathematics and statistics courses.

  • Texas ‐ In 2009, Texas, with assistance from EPIC, developed the Texas College and Career Readiness Standards (THECB & TEA, 2009). The 79th Texas Legislature, Third Called Special Session, passed House Bill 1, the “Advancement of College Readiness in Curriculum.” Section 28.008 of the Texas Education Code seeks to increase the number of students who are college and career ready when they graduate high school. The legislation required the Texas Education Agency (TEA) and the Texas Higher Education Coordinating Board (THECB) to establish Vertical Teams (VTs) to develop college and career readiness standards in English/language arts, mathematics, science, and social studies. “These standards specify what students must know and be able to do to succeed in entry‐level courses at postsecondary institutions in Texas” (p. iii). The resulting standards were organized around four levels of specificity: (a) key content, (b) organizing components, (c) performance expectations, and (d) examples of performance indicators. A resource was provided to help schools understand the specifics (see www.txccrsc.org).

  • Virginia ‐ In January 2007, the Board of Education authorized VDOE to conduct studies to determine factors contributing to success in postsecondary education. In 2009 and 2010, respectively, the Virginia Board of Education approved revised Standards of Learning in mathematics and English. In 2011, Virginia developed college and career ready mathematics and English performance expectations involving Virginia’s community colleges and four‐year institutions. These performance expectations define the level of achievement students must reach to be academically prepared for success in entry‐level, credit‐bearing college courses in mathematics and English or further career and technical training after high school (VDOE, 2011). In English, the content covers reading (vocabulary, nonfiction reading, literary reading, reading analysis, and critical reading); writing (composing, revision and editing, and documentation and ethics); and communicating (speaking, listening, and collaborating). In mathematics, the expectations are organized as (a) problem solving, decision making, and integration; (b) understanding and applying functions; (c) procedure and calculation; and (d) verification and proof. (See: http://www.doe.virginia.gov/instruction/college_career_readiness/)

SLIDE 4

Validation Framework


Validity refers to the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests.

‐‐ AERA et al., 2014, p. 11

Explicit Statement of Proposed Interpretation of Test Scores

Types of Evidence:

  • Test Content
  • Response Process
  • Internal Structure
  • Relations to Other Variables
  • Consequences of Testing

The focus here is one type of validation evidence.

SLIDE 5

Relations to Other Variables

  • Historically, the most popular validation evidence (Jonson & Plake, 1998; Cizek, Rosenberg, & Koons, 2008).
  • Offers an empirical approach using external data sources to make comparisons about the information provided.
  • Takes advantage of data that exist concurrently with the assessment data in question, as well as what will exist in the near future.
  • Does not imply the same results – the actual methodology matters.
  • Along with content‐related evidence, the evidence produced is compelling.
  • Context matters – while national studies have been used to develop benchmarks and provide validation evidence, what college success means, what students experience, and what performance represents are context specific.

SLIDE 6

Components involved in Relations to Other Variables


Four components: The Sample, The Predictors, The Criterion, and The Methodology.

From Willingham, Lewis, Morgan, & Ramist, 1990; Patelis, 2001; 2002; Patelis, Camara, & Wiley, 2009

SLIDE 7

Components

  • Characteristics of the test and/or grades
  • Meaning of the scores
  • Score variations
  • Educational practices


The Predictors

  • There are factors that affect the comparability of scores from tests, including purpose/use, construct/content, administration conditions, scoring, the population of students tested, and opportunity to learn (Gong & DePascale, 2013, p. 8).

– One of these factors is the degree of experiential specificity of the predictors used. They can range from specific representations (course grades or end‐of‐course test scores) to assessments representing more generalized skills (e.g., ACT and SAT) (Anastasi & Urbina, 1997; Geisinger, 2002).

The Criterion

SLIDE 8

Components (cont’d)

  • Institutional variation
  • Types of institutions
  • Students represented
  • Nature of the data


The Sample

  • Student selection processes (i.e., admissions, course placement, opportunities to learn, etc.) can drastically impact the institutional variation, the types of institutions, and the students represented.
  • If college outcomes are used as a criterion, the selection process represented by the admissions process will affect the sample and validity coefficients (Young, 2001; Ling, Patelis, & Lewis, 2004; Shen, Sackett, Kuncel, et al., 2012).
  • A taxonomy of nine admissions models for post‐secondary institutions can assist in understanding the institutional variation, the types of institutions, and the students represented (Perfetto et al., 1999).

SLIDE 9

Relations to Other Variables


[Diagram: Elementary → Secondary → Post‐Secondary, with claims of college readiness linked to college outcomes]

  • Enrollment
  • Course Grades
  • Discipline Grade Point Average
  • Cumulative Grade Point Average
  • Retention/Persistence
  • Graduation
  • Faculty Ratings
  • Course Placement

Other Tests

  • 10th/11th Grades:
  • AP
  • PLAN
  • PSAT/NMSQT
  • 11th/12th Grades:
  • ACT
  • AP
  • SAT

Other Criteria

  • Course Grades
  • Cumulative Grade Point Average
  • Retention
  • Graduation
  • Teacher Ratings
  • Course Placements


The framework by Holland (2007) offers the basis for linking and evaluating the benchmarks with other variables.

SLIDE 10

Notes on Data

  • Data can be obtained as students move forward and test scores and other outcomes are collected.

– For tests administered this academic year, use test scores and other data from this academic year.

  • Convergent and concurrent validation

– Next year, additional test scores and other data can be gathered and used.

  • Convergent, concurrent, and predictive validation

– For tests administered in 10th and 11th grades, it may be necessary to wait 3‐4 years before students are in a post‐secondary institution.

  • Work with colleges/universities to get data
  • Use data from the National Student Clearinghouse to get college enrollment and persistence data

– Once students move through elementary and middle school and into high school, use historical student data.

  • Back map

  • Data can be obtained by administering high school tests to college students.

– This effort involves recruiting post‐secondary institutions and students, ensuring student motivation, and sample representativeness.

  • Use archival data (weaker argument)

– If last year’s scores have college readiness benchmarks (e.g., PLAN, PSAT/NMSQT, AP, others), examine the relationship between this year’s test scores and last year’s performance.

SLIDE 11

Linking Methodologies

Projections:

  • Logistic regression
  • Linear regression
  • Borderline groups approach
  • Equipercentile
  • Decision theory
  • Back mapping

Descriptive Information:

  • Correlation coefficients
  • Box plots of test scores by performance levels of college readiness
  • Cross tabulations of performance levels and college readiness
  • Scatter plots
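The descriptive options above can be sketched with a couple of small helpers. This is a minimal illustration, not an operational analysis: the performance levels, ready/not-ready flags, and score lists below are invented, and a real study would compute these from matched student records.

```python
from collections import Counter
import math

def pearson(x, y):
    # Pearson correlation coefficient between two lists of scores
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def crosstab(levels, ready):
    # Counts of (state performance level, college-ready?) pairs
    return Counter(zip(levels, ready))

# Invented example: state performance levels vs. college-ready flags,
# and a correlation between state test scores and another test's scores
levels = ["Below", "Meets", "Meets", "Exceeds", "Exceeds"]
ready = [False, False, True, True, True]
table = crosstab(levels, ready)
r = pearson([400, 450, 500, 550, 600], [18, 20, 23, 26, 29])
```

The cross tabulation shows how many students at each performance level reach the college-readiness benchmark; the correlation quantifies the overall strength of the relationship before any projection method is applied.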
SLIDE 12

Descriptive Results – Example

  • In addition to quantitative representations of the relationship between two test scores or other variables (i.e., correlation coefficients) or cross tabulations comparing performance levels and being college ready or not, a visual display can be very informative.
  • Box plots show the distribution of scores at each performance level on a high school state test.
  • This is a nice visual representation of performance on tests with college readiness benchmarks and on the state test.
  • The sample represents 42% and 45% of the 10th grade students in the state taking the PSAT/NMSQT and SAT, respectively (Patelis, Behuniak, & Tucker, 2001).


[Chart labels: “Steps toward college ready”; “College Readiness Benchmark”]

SLIDE 13

Linking Methodology ‐ Logistic Regression

  • This is a very popular approach used by ACT and the College Board in establishing college readiness benchmarks (as well as in providing evidence to support these benchmarks) for their tests (see, for example, Allen & Sconing, 2005; Wiley, Wyatt, & Camara, 2010).
  • Data representing predictors (the scores associated with the test) and criteria (other test scores or outcomes) for the same students are used.

– While the inference will differ, this methodology can be used with concurrent data, data representing a predictive type of relationship, or data looking backward.

  • Generally, use a cut‐off on the criterion that represents a dichotomy of either college readiness or college success.

– This is one advantage of logistic regression: the modeling of the relationship is focused around what is considered college readiness or college success.

  • Decide on a likelihood value(s) for estimating the projected score.

– This can be done using a policy capturing approach (e.g., Kobrin, Patterson, Wiley, & Mattern, 2012; Camara, 2015; Thacker, 2015), logical reasoning, or arbitrary values.
– This is another advantage of logistic regression: you can explicitly represent the tolerance for uncertainty.

  • Calculate the predictor score associated with the specified likelihood value(s) from the fitted regression between the predictors and criteria.
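The fit-then-invert sequence above can be sketched as follows. This is a simplified one-predictor illustration, assuming a plain gradient-ascent fit and invented score/success data; an operational study would use a statistical package's logistic regression on real matched student records.

```python
import math

def fit_logistic(x, y, lr=0.05, iters=20000):
    # Fit P(success) = 1 / (1 + exp(-(b0 + b1*x))) by simple gradient ascent.
    # (A sketch; in practice use a statistical package's logistic regression.)
    mx = sum(x) / len(x)
    xc = [xi - mx for xi in x]  # center scores for stable convergence
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, yi in zip(xc, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0 / len(x)
        b1 += lr * g1 / len(x)
    return b0, b1, mx

def benchmark_score(b0, b1, mx, p=0.50):
    # Invert the fitted logistic: the test score at which P(success) = p
    return mx + (math.log(p / (1 - p)) - b0) / b1

# Invented example: test scores and a dichotomous success criterion
scores = list(range(10, 31))
success = [0,0,0,0,0,0,1,0,0,1,0,1,1,1,0,1,1,1,1,1,1]
b0, b1, mx = fit_logistic(scores, success)
cut50 = benchmark_score(b0, b1, mx, 0.50)  # .50-likelihood benchmark
cut65 = benchmark_score(b0, b1, mx, 0.65)  # more conservative .65 benchmark
```

Raising the likelihood value from .50 to .65 moves the benchmark upward, which is how the tolerance for uncertainty is made explicit in this method.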

SLIDE 14

Logistic Regression – Example 1

  • Task: provide evidence to inform the proficiency cut‐score on an 11th grade state test.
  • Used PSAT/NMSQT test scores with the “steps to college readiness benchmark” for students who also took the state test. These particular benchmarks represent the scores at which a student has a 65 percent likelihood of attaining the SAT college readiness benchmark.

– The sample was 7,495 11th grade students in this state.
– It represented 36% of the 11th grade state test takers.
– Correlation coefficients of performance between sections on each test were substantial:

  • ELA/Critical Reading = 0.72
  • Mathematics = 0.82
  • Writing = 0.72

  • Issue: the sample was not representative of the 11th graders in this state.

– Performance on the state test was higher for the sample who took both tests.

  • Cognizant of this limitation, the analyses were done to explore the results and glean some qualified information.

SLIDE 15

Logistic Regression – Example 1 (cont’d)

  • The college readiness benchmark on the PSAT/NMSQT for 11th graders and the grade 11 state test proficiency cut‐score are shown.
  • As can be seen, the grade 11 proficiency cut‐score is well below the college readiness benchmark – even based on this relatively higher performing sample of 11th graders.

SLIDE 16

Borderline Group Method

  • Data representing predictors (the test scores associated with the test) and criteria (other test scores or outcomes) for the same students are used.
  • Using the cut‐score on the predictor, add/subtract one standard error of measurement of the predictor to/from the cut‐score.
  • For those students who score within one standard error of measurement around the cut‐score on the predictor, calculate the mean on the criterion. That mean is the projected score.
  • The calculation can also be made in the other direction, using the benchmark on the criterion and the standard error of measurement of the criterion to select the students and calculate the mean on the predictor.

[Diagram: a band of ±1 SEM around the cut‐score on the predictor selects the borderline students; their mean on the criterion is the projected score]
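The selection-and-average step can be sketched as follows. The scores are invented for illustration; an operational study would use matched student records and the test's published standard error of measurement.

```python
from statistics import mean

def borderline_projection(predictor, criterion, cut, sem):
    # Select students within +/- 1 SEM of the predictor cut-score,
    # then average their criterion scores to get the projected score.
    band = [c for p, c in zip(predictor, criterion) if cut - sem <= p <= cut + sem]
    return mean(band) if band else None

# Invented example: state-test scores (predictor), PSAT-like scores (criterion)
state = [40, 45, 48, 50, 52, 55, 60]
psat = [400, 450, 480, 500, 520, 550, 600]
projected = borderline_projection(state, psat, cut=50, sem=3)  # mean of 480, 500, 520
```

Because the method averages observed criterion scores rather than modeling the full relationship, it is simple to explain but, as noted later, sensitive to differences in the shapes of the two score distributions.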

SLIDE 17

Borderline Group – Example 1 (cont’d)

  • The projected PSAT/NMSQT scores for the grade 11 proficiency cut‐score are below the college readiness benchmark.
  • The borderline group method results are very similar to the logistic regression results.

SLIDE 18

Logistic Regression & Borderline Group – Example 2

  • Task: how does the cut‐score on the 10th grade state test compare to another test’s college readiness benchmark?
  • Used three years of PSAT/NMSQT test scores with college readiness benchmarks for students who also took the state test. These benchmarks represent the minimum scores at which students have a 65% chance of obtaining a 2.67 freshman year grade point average (FYGPA).

– Samples ranged from 8,794 to 15,307 students who took both tests.
– They represented approximately 11‐20% of the 10th grade state test takers.
– Correlation coefficients of performance between sections on the tests were substantial:

  • Reading = 0.74‐0.79
  • Mathematics = 0.79‐0.85

  • Issue: the sample was not representative of the 10th graders in this state.

– Performance on the state test was higher for the sample who took both tests.

  • Cognizant of this limitation, the analyses were done to explore the results and glean some qualified information.

SLIDE 19

Logistic Regression & Borderline Group – Example 2

            Reading                          Mathematics
       Borderline   LR .50   LR .65     Borderline   LR .50   LR .65
Yr 1      738        733      743          533        526      537
Yr 2      736        727      738          529        515      525
Yr 3      732        728      739          529        524      533

(LR = logistic regression at the .50 or .65 likelihood value.)


  • The projected 10th grade state test scores were above the “meeting the standard” cut‐score on the state test, but within the range associated with the performance level. The projections were actually near the center of the range of the performance level.

– This was expected based on the higher performance of the sample taking both the state test and the PSAT/NMSQT in 10th grade compared to all students taking the state test in 10th grade.
– The findings, considering the sampling bias, supported the rigor of the performance level associated with “meeting the standard”.

  • The borderline and logistic regression provided similar results.
SLIDE 20

Cautions

  • With a number of new tests being introduced that make claims of college readiness, gathering validation evidence to support these claims is crucial.
  • While there are a number of types of evidence, and the assessment program’s goals and theory of action should drive the type of evidence needed, gathering empirical evidence based on relations to other variables is a popular, effective approach that, if done well, can offer a compelling argument to support the claims being made.
  • The components involving the predictors, the criterion, and the sample being used must be clearly specified and their impact on the results noted.

– A perfect study does not exist.
– The limitations represented by the decisions to use certain predictors, criteria, and samples must be noted.
– But some evidence, properly documented and qualified, is better than no evidence.
– Better yet, a body of evidence, properly documented and qualified, is better still.

  • While there are limitations imposed by decisions on which predictors and criteria to use (assuming they’re reliable), in studies examining the relations to other variables, nothing is more impactful than the sample used.

SLIDE 21

Cautions (cont’d)

  • Timing of data

– If you use the ACT, SAT, or data with multiple time points, it is crucial to keep track of the timing.
– For example, students can take the SAT seven times a year starting in 9th grade.
– If you were to examine the relationship of the 10th grade state test to the SAT college readiness benchmark, it is important to select an administration or set of administrations and account for the number of times the SAT was taken.

  • Score variations may arise depending on the time interval between the administrations selected and on repeaters.


[Timeline: fall and spring administrations from 9th through 12th grade, marking when state test scores and SAT administrations occur]

  • ACT and SAT data generally represent the latest test scores for a cohort of students graduating high school in a certain year.

– Need to get the date when tests were taken, or data for specific administrations.
– Note that there are score variations by administration. So, if possible, replicate analyses by time intervals.

SLIDE 22

Cautions (cont’d)

  • Sample

– Missing data based on sampling can affect the magnitude of the validity coefficients.

  • This can happen when the criterion is another test in which not all students participate (e.g., ACT or SAT).

– Selection systems or attrition can affect the score variability and influence the results.
– Qualify the interpretation if the participation rates differ between predictors and criteria.

  • Educational Practices

– When examining the relationship of state test scores to other tests, school selection or course assignment practices can produce differential experiences that could create differences in the regression coefficients between the predictor and criterion by school.
– Examine (if at all possible) differences in the relationship between predictors and criteria by school.

  • Could utilize the intraclass correlation coefficient – similar to what one would do when using a multi‐level or hierarchical linear model analysis.

  • Institutional Variation

– Since post‐secondary institutions will have different admissions models, the relationship of state test scores to college outcomes will be influenced by the impact of these institutional practices.
– Replicate analyses across types of institutions or utilize some kind of multi‐level or hierarchical linear modeling.
– Example: a validation study examining the relationship of the SAT to college freshman GPA found different raw and adjusted correlation coefficients for all institutions (0.35 raw and 0.53 adj.) vs. more selective institutions accepting less than 50% of applicants (0.39 raw and 0.58 adj.) (Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008).
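The school- or institution-level clustering mentioned above can be quantified with a one-way intraclass correlation. This sketch assumes balanced groups (equal numbers of students per school) and uses invented scores; unbalanced data would call for a mixed-model estimate from a statistical package.

```python
from statistics import mean

def icc_oneway(groups):
    # ICC(1): share of score variance lying between groups (e.g., schools),
    # from a one-way ANOVA decomposition; assumes equal group sizes.
    n, k = len(groups), len(groups[0])
    grand = mean(v for g in groups for v in g)
    msb = k * sum((mean(g) - grand) ** 2 for g in groups) / (n - 1)
    msw = sum((v - mean(g)) ** 2 for g in groups for v in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented example: test scores clustered by school
schools = [[480, 500, 490], [520, 540, 530], [600, 610, 590]]
icc = icc_oneway(schools)  # high value means school membership matters
```

A large ICC signals that pooling all schools into one regression would mask school-level differences, which is the situation where multi-level or hierarchical linear modeling (or replication by school) is warranted.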

SLIDE 23

Cautions (cont’d)

  • Methods

– In using the logistic regression methods, limitations can arise when the proportions of the dichotomy in the criterion (e.g., percent at/above the benchmark is 90% vs. below is 10%) are substantially different.

  • The accuracy of the regression coefficients will be inflated.
  • Selecting a higher likelihood in calculating the projected score may help.
  • At least, examine it and qualify the findings.

– Methods relying on observed scores without modeling the relationship between the predictor and criterion (e.g., borderline group or equipercentile) may be affected by drastic differences in the distributions

  • f scores with respect to differences in the degree of non‐normality

between the distributions.

  • It is a good idea to examine the distributions.
  • Sampling bias can create the differential distributions.
  • Even if you use multiple methods to examine relationship or make projections,

lean toward selecting the results that are produced from methods more robust in the face of issues with the distributions (e.g., logistic regression).
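One quick way to examine the distributions is a skewness check. This sketch uses the population form of the Fisher-Pearson coefficient and invented score lists; an operational analysis would also inspect histograms or Q-Q plots.

```python
from statistics import mean, pstdev

def skewness(xs):
    # Fisher-Pearson skewness (population form): 0 for symmetric scores,
    # positive when the right tail is heavier.
    m, s = mean(xs), pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

# Invented example: compare predictor and criterion score shapes
predictor_skew = skewness([45, 50, 55, 50, 45, 55, 50])    # roughly symmetric
criterion_skew = skewness([400, 410, 405, 400, 410, 700])  # heavy right tail
```

A large gap between the two skewness values is a warning that observed-score methods such as the borderline group or equipercentile approaches may be distorted, and that the findings should be qualified accordingly.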

SLIDE 24

Takeaways

  • Assessment programs making claims of college readiness should implement validation studies examining the relations to other variables.

– This is part of a body of evidence that contributes to the theory of action of the assessment program.

  • There is a wide array of other variables that can be selected.

– Utilize third‐party sources (faculty ratings, colleges/universities, National Student Clearinghouse).

  • Care must be exercised in selecting the variables.

– Care and time are needed in selecting variables and the time intervals between predictors and criteria.

  • Sampling issues offer the most potential for impacting the results.
  • Educational practices and institutional variation necessitate examination of results by school and institution.

– Replication, or some way of accounting for school effects, should be exercised.

  • Utilize a multi‐method approach to help converge or triangulate on the results.
  • Some evidence, even if qualified, is better than no evidence.

– Use what you have; purposefully sample within a state to maximize representation; examine (or at least weight) by school/institution.

  • Build analyses and studies into the ongoing operations of the assessment program.
  • A testing vendor can do this, but be an informed consumer.

SLIDE 25

References

Achieve (2004). Ready or Not: Creating a High School Diploma that Counts. Washington, DC: Author. http://www.achieve.org/ReadyorNot
AERA, APA, & NCME (2014). Standards for educational and psychological testing. Washington, DC: AERA.
Allen, J., & Sconing, J. (2005). Using ACT Assessment scores to set benchmarks for college readiness (ACT Research Report No. 2005‐3). Iowa City, IA: ACT, Inc. http://www.act.org/research/researchers/reports/
Anastasi, A., & Urbina, S. (1997). Psychological Testing (7th ed.). Upper Saddle River, NJ: Prentice Hall.
Camara, W. J. (2015). Employing empirical data in judgmental processes. Presentation at the National Conference on Student Assessment, San Diego, CA.
Cizek, G. J., Rosenberg, S. L., & Koons, H. H. (2008). Sources of validity evidence for educational and psychological tests. Educational and Psychological Measurement, 68, 397–412. doi:10.1177/0013164407310130
Conley, D. T. (2003). Understanding University Success: A Report from Standards for Success – A Project of the Association of American Universities and The Pew Charitable Trusts. Eugene, OR: Center for Educational Policy Research. http://www.epiconline.org/publications/understanding‐university‐success
Geisinger, K. F. (2002, August). Anne Anastasi’s Continuum of Experiential Specificity for Tests of Developed Ability and the Current SAT Controversy. Symposium presented at the annual meeting of the American Psychological Association, Chicago, IL.
Gong, B., & DePascale, C. (2013). Different but the same: Assessment “comparability” in the era of the Common Core State Standards. White paper developed for the Council of Chief State School Officers, pre‐publication version. http://www.nciea.org/publication_PDFs/Assessment%20Comparabiliy%20CD062013.pdf
Holland, P. W. (2007). A framework and history for score linking. In N. J. Dorans, M. Pommerich, & P. W. Holland (Eds.), Linking and aligning scores and scales. New York: Springer.
Jonson, J. L., & Plake, B. S. (1998). A historical comparison of validity standards and validity practices. Educational and Psychological Measurement, 58, 736–753. doi:10.1177/0013164498058005002
Kobrin, J. L., Patterson, B. F., Shaw, E. J., Mattern, K. D., & Barbuti, S. M. (2008). Validity of the SAT for predicting first‐year college grade point average (Research Report No. 2008‐5). New York: The College Board.
Kobrin, J. L., Patterson, B. F., Wiley, A., & Mattern, K. D. (2012). A Standard‐Setting Study to Establish College Success Criteria to Inform the SAT College and Career Readiness Benchmark (Research Report 2012‐3). New York: The College Board. http://research.collegeboard.org/sites/default/files/publications/2012/7/researchreport‐2012‐3‐standard‐setting‐study‐sat‐college‐career‐benchmark.pdf
National Center on Education and the Economy (2013a). What Does It Really Mean to be College and Work Ready? The English Literacy Required of First Year Community College Students. Washington, DC: Author. http://www.ncee.org/college‐and‐work‐ready/
National Center on Education and the Economy (2013b). What Does It Really Mean to be College and Work Ready? The Mathematics Required of First Year Community College Students. Washington, DC: Author. http://www.ncee.org/college‐and‐work‐ready/
Ohio Higher Ed (2012). 2012 Uniform Statewide Standards for Remediation‐Free Status. https://www.ohiohighered.org/sites/ohiohighered.org/files/uploads/data/reports/hs‐to‐college/2012_UNIFORM_STATEWIDE_REMEDIATION_FREE_STANDARDS%28010913%29.pdf
Patelis, T. (2001). The increased demands placed on achievement tests. Presentation at the annual meeting of the National Council on Measurement in Education, Seattle, WA.
Patelis, T. (2002). Educational selection. Presentation at a symposium celebrating the legacy of Anne Anastasi, Fordham University and the New York Regional SPSSI Group, New York, NY.
Patelis, T., Behuniak, P., & Tucker, C. (2001). The Connecticut Academic Performance Test and College Board Examinations. Presentation at the National Conference on Large‐Scale Assessment, CCSSO, Houston, TX.
Patelis, T., Camara, W. J., & Wiley, A. (2009). Efforts to define college readiness in the United States. Presentation at the 11th International Conference on Education, Athens Institute for Education and Research, Athens, Greece.
Perfetto, G. (Ed.) (1999). Toward a Taxonomy of the Admissions Decision‐Making Process. New York: College Entrance Examination Board.
Shen, W., Sackett, P. R., Kuncel, N. R., Beatty, A. S., Rigson, J. L., & Kiger, T. B. (2012). All validities are not created equal: Determinants of variation in SAT validity across schools. Applied Measurement in Education, 25, 197–219.
Texas Education Agency (2013). State of Texas Assessments of Academic Readiness (STAAR) Assessments Standard Setting Technical Report. Austin, TX: Author. http://www.tea.state.tx.us/student.assessment/staar/performance‐standards/
Texas Higher Education Coordinating Board and the Texas Education Agency (2009). Texas College and Career Readiness Standards. Austin, TX: Author. www.thecb.state.tx.us/collegereadiness/crs.pdf
Thacker, A. (2015). Policy capturing approaches to setting cut scores for college readiness. Presentation at the National Conference on Student Assessment, San Diego, CA.
Virginia Department of Education (2011). Joint Agreement on Virginia’s College and Career Ready Mathematics and English Performance Expectations. http://www.doe.virginia.gov/instruction/college_career_readiness/expectations/joint_agreement.pdf
Wiley, A., Wyatt, J., & Camara, W. J. (2010). The Development of a Multidimensional College Readiness Index (Research Report 2010‐3). New York: The College Board. http://research.collegeboard.org/publications/content/2012/05/development‐multidimensional‐college‐readiness‐index
Willingham, W. W., Lewis, C., Morgan, R., & Ramist, L. (1990). Predicting College Grades: An Analysis of Institutional Trends Over Two Decades. Princeton, NJ: Educational Testing Service.
Young, J. W. (2001). Differential validity, differential prediction, and college admission testing: A comprehensive review and analysis (Research Report No. 2001‐06). New York: College Board.

SLIDE 26

For more information: Center for Assessment www.nciea.org

Thanos Patelis tpatelis@nciea.org
