
Student Learning Data & My Evaluation For Instructional Personnel (PowerPoint presentation)



  1. Student Learning Data & My Evaluation For Instructional Personnel Spring, 2012

  2. Presenters & Questions
     • Presenters: Boyd Karns, Jason Wysong, Brandon McKelvey
     • Submit questions today or online
     • Call/email us!
       – Boyd 5-0198
       – Jason 5-0212
     • Talk with your administrator

  3. Today’s Focus
     • Requirements of Senate Bill 736
     • How Florida will measure student learning
       – Concept
       – Example
     • SCPS Business Rules for 2011-12
     • Plan for 2012-13 and beyond

  4. Disclaimers
     • SCPS did not create the value-added model
     • FL value-added is different from other places
     • More flexibility for 2011-12 than 2012-13
     • Every district is using different rules for 2011-12

  5. Annual Evaluation Ratings
     • Highly effective
     • Effective
     • Needs Improvement/Developing*
       – Category I: first 3 years (developing)
       – Category II: 4+ years (needs improvement)
     • Unsatisfactory

  6. Underlying Philosophy
     • Teachers are the single most important variable in a student’s academic growth.
     • Teachers who effectively implement research-based practices will create student learning growth.

  7. State Process
     • Method for measuring student learning growth on FCAT must be established by the Florida DOE, with measures on other assessments to follow
     • DOE Student Growth Implementation Committee
       – Recommend a formula for learning growth measurement
       – Teachers, administrators, and university professors were appointed to this group
       – The DOE also contracted with AIR (American Institutes for Research) to provide technical assistance

  8. Key Points
     • Growth, not proficiency
     • Learning growth, not learning gain

  9. State Committee Recommendations
     • The Student Growth Implementation Committee recommended a covariate adjustment model
       – Covariates are also called variables and represent student characteristics which influence learning
       – This model yields a VALUE-ADDED score
     • This model establishes a personal learning growth expectation for all students in the state
     • If a student meets or exceeds their growth expectation, that student will positively impact their teacher’s value-added score
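A rough sketch of the covariate-adjustment idea, in Python with made-up data (this is not the DOE/AIR formula, and the covariates shown are only a subset): predict each student's current score from prior scores, then treat the gap between the actual and predicted score as that student's contribution to the teacher's value-added measure.

    # Illustrative sketch only -- NOT the DOE/AIR model. A covariate adjustment
    # model predicts each student's score from prior achievement and other
    # characteristics, then credits the teacher when students beat the prediction.
    import numpy as np

    # Toy data: two prior-year scores per student (other covariates omitted).
    priors = np.array([[300., 310.],
                       [280., 290.],
                       [320., 330.],
                       [260., 270.],
                       [305., 300.],
                       [290., 295.]])
    current = np.array([335., 300., 352., 281., 322., 301.])   # current-year scores

    X = np.column_stack([np.ones(len(priors)), priors])        # add an intercept
    coef, *_ = np.linalg.lstsq(X, current, rcond=None)         # fit the covariate model

    expected = X @ coef               # each student's personal growth expectation
    residual = current - expected     # above (+) or below (-) expectation
    print(residual.mean())            # simplified "teacher contribution"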

  10. Variables
      • A value-added model measures the impact of a teacher on student learning by accounting for other factors that may impact the learning process
      • The Student Growth Implementation Committee chose to include in the model the following student characteristics that may influence a student’s expected growth:
        – Number of subject-relevant courses
        – Two prior years of achievement scores
        – Student with Disabilities (SWD) status
        – English Language Learner (ELL) status
        – Gifted status
        – Student attendance
        – Mobility (number of transitions)
        – Difference from modal age in grade (retention)
        – Class size (number of students)
        – Homogeneity of student’s prior FCAT scores

  11. Variables Not in the Model
      • Gender
      • Race/Ethnicity
      • SES

  12. School Component
      • In addition to student-level scores, the model also calculates a ‘school component’
        – The ‘school component’ is actually a ‘grade-level, subject’ component
        – For example, all 5th grade reading teachers at a school will have the same ‘school component’ score
      • The school component is combined with the teacher calculation in the value-added model
        – The Student Growth Implementation Committee chose the school component because they believed that there were school-level and classroom-level factors that influence student learning
        – The committee thought that teachers should not be held completely responsible for student performance because some responsibility is held by the school as a whole

  13. School Component
      • Elementary: 4 school components
      • Middle: 6 school components
      • High: 2 school components, maybe 3
      • School components can vary significantly by grade level, subject, and from year to year
      • No direct link to school grades
      • No way to game the system

  14. Implications
      • The value-added model starts by comparing students to others around the state.
      • The teacher’s initial score is the average of these student-level comparisons across the state.
      • The teacher’s score is adjusted based on the average performance of other students in the same grade level at the school.
      • This model is designed to control for school effects (leadership, climate, etc.)

  15. Finding a Value-Added Score
      • There are two major components of the value-added score
        – Teacher score (how effective is the teacher)
        – School score (how effective is the school)
      • The difference between the school and teacher score is called the ‘teacher effect’
        – This is the difference between a teacher’s effectiveness and the effectiveness of other teachers in the same grade level and subject
      • In order to find the value-added score, half of the school score must be added back to the teacher effect
        – This is because the student growth committee chose to use only half of the school component score

  16. Simple Example
      • Teacher score: 20
      • School score: 10
      • Unique teacher effect: 10
      • Add ½ of school score back in: 15
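The arithmetic behind slides 15 and 16 can be written as one small function. This is just a restatement of the slide's worked example with hypothetical names, not the official state computation.

    # Restates the simple example: value-added = (teacher - school) + 0.5 * school
    def value_added(teacher_score, school_score):
        teacher_effect = teacher_score - school_score   # unique teacher effect
        return teacher_effect + 0.5 * school_score      # add half the school score back

    print(value_added(20, 10))   # 15, matching the simple example above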

  17. Standardizing & Aggregating Scores
      • Since the average FCAT growth rate is different at each grade level, scores must be STANDARDIZED so that teachers of different grade levels can be compared.
        – This is done by dividing each teacher’s score by the average amount of growth at that grade level
        – This ‘smoothes’ out differences in growth across grade levels
      • This accounts for grade-level differences in FCAT.
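One way to picture the standardization step; the grade-level averages below are invented, and the state's actual scaling may differ.

    # Hypothetical illustration of standardization: divide a teacher's score by the
    # average growth for that grade level so different grades are on a common scale.
    avg_growth_by_grade = {4: 12.0, 5: 10.0, 6: 8.0}   # assumed grade-level averages

    def standardize(score, grade):
        return score / avg_growth_by_grade[grade]

    print(standardize(15.0, 5))   # 1.5 "units of average growth" for a 5th-grade score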

  18. Standardizing & Aggregating Scores
      • Since most teachers have students in multiple grade levels, value-added scores must be AGGREGATED so that each teacher receives only one overall score.
        – This is done through weighted averaging, so that the proportion of students in each grade level is incorporated into the overall score
      • Standardization & aggregation allow for comparison of all teachers in the model regardless of subject, grade level, etc.
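A matching sketch of the aggregation step, again with assumed numbers: the overall score is a weighted average of the standardized grade-level scores, weighted by how many of the teacher's students are in each grade.

    # Hypothetical example: one teacher with standardized scores in two grade levels,
    # aggregated by the proportion of students in each grade.
    scores   = {5: 1.5, 6: 0.8}    # standardized score per grade level
    students = {5: 40, 6: 20}      # number of students per grade level

    total = sum(students.values())
    overall = sum(scores[g] * students[g] / total for g in scores)
    print(round(overall, 2))       # (1.5*40 + 0.8*20) / 60 = 1.27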

  19. Standard Errors
      • All statistical measures contain a degree of error
      • Value-added scores have a ‘standard error’
        – The standard errors are calculated by the DOE in conjunction with AIR
      • The standard error is a value that represents the amount of uncertainty that we have in a particular value
        – For our purposes, a higher standard error would suggest that we have less confidence in the score

  20. Why Does the Standard Error Matter?
      • If we did not use the standard error in placing teachers in categories, we would be ignoring an important piece of information about the data
      • No data are perfect, but we have methods for determining how likely it is that data are close to the ‘real’ value
        – Using data without this adjustment is not appropriate
        – Example: no one would sample just two people in a Presidential poll without mentioning the enormous margin of error

  21. Teacher Value-Added Scores at School X in 7th Grade
      [Chart: value-added scores for Teachers A, B, and C with 0.5 SE (38%) confidence intervals; horizontal axis from -30 to 30 with the state mean marked]
      • The dots above the teacher labels are teacher value-added scores.
      • The lines extending from the bars are confidence intervals at 0.5 standard errors (SE).
      • At 0.5 SE, Teacher B is lower and Teacher C is higher than the state mean.

  22. Teacher Value-Added Scores at School X in 7th Grade
      [Chart: the same three teachers with 1 SE (68%) confidence intervals; horizontal axis from -30 to 30 with the state mean marked]
      • At 1 SE, only Teacher C would be considered significantly higher or lower than the state mean.

  23. Teacher Value-Added Scores at School X in 7th Grade
      [Chart: the same three teachers with 2 SE (95%) confidence intervals; horizontal axis from -30 to 30 with the state mean marked]
      • At 2 SE, none of the three teachers would be significantly higher or lower than the state mean.
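The pattern in the three charts above comes down to a simple interval test: a teacher is called higher or lower than the state mean only when the interval score ± k standard errors excludes the mean. The scores and standard errors below are invented to reproduce the charts' pattern, not the actual School X data.

    # Interval test the charts illustrate: widen the interval (0.5, 1, or 2 SE)
    # and check whether it still excludes the state mean. Numbers are assumed.
    teachers = {"A": (3.0, 10.0), "B": (-8.0, 12.0), "C": (15.0, 10.0)}  # (score, SE)
    state_mean = 0.0

    for k in (0.5, 1.0, 2.0):
        for name, (score, se) in teachers.items():
            low, high = score - k * se, score + k * se
            if low > state_mean:
                verdict = "above the state mean"
            elif high < state_mean:
                verdict = "below the state mean"
            else:
                verdict = "not distinguishable from the state mean"
            print(f"{k} SE: Teacher {name} is {verdict}")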

  24. Standard Error Implications
      • When you account for the standard error, you are able to have more certainty concerning which evaluation category is most appropriate for a teacher
        – A 95% confidence interval is built by adjusting for approximately 2 standard errors
      • However, the more adjustment that is made for the standard error, the wider the range of possible scores for each teacher
        – This means that most teachers will fall around the mean in a single category

  25. Standard Error & Policy
      Use of the standard error makes it more difficult to clearly differentiate teacher performance levels, yet that differentiation is exactly what SB 736 requires.

  26. VAM Procedures for 2011-12

  27. SB 736 in 2011-12
      • SB 736 requires the State Board of Education to establish business rules and set cut points for personnel evaluations
      • The State Board will not set rules until 2012-13
      • DOE required districts to establish their own rules for 2011-12

  28. SCPS Process
      • Teacher Evaluation Committee
      • Teacher focus groups
      • Administrator Evaluation Committee
      • Dr. Vogel & District Administrators

  29. SCPS Decision #1
      Use only 2011-12 student data
      – This is year 1
      – No historical data
      – Reduces learning growth from 50% to 40%
      – Remaining 60% is based on supervisor evaluation

  30. SCPS Decision #2
      • Use 2 standard errors with all value-added scores
      • Adjusting for 2 standard errors greatly increases the range of scores that influence a teacher’s placement
      • This suggestion is supported by educational research and prior ‘best practices’ using value-added modeling
