SLIDE 22 Attrition
- A. If all of the treatment sections had administered the post-test (i.e., 0% cluster-level attrition within the treatment sections), and the control cluster-level attrition remained the same, would the cluster-level attrition be high or low per WWC guidelines?
- B. Given what we know about cluster-level attrition not counting against a study twice, would the student-level attrition be high or low?
- C. Based on the WWC attrition tables, if you are going to “lose” clusters and/or individuals from a study, which is preferable: individual-level attrition or cluster-level attrition?
- D. Why is it problematic to have small samples?
- E. Why is it problematic to have very low treatment attrition alongside moderate-to-high control attrition?
- F. If you “lose” 60% of both the treatment and control groups (assuming you still have enough data points to retain the statistical power to detect treatment effects), is that a problem for the study based on attrition, per WWC guidelines?

Department Unsupervised
- A. Why would not having full implementation fidelity and compliance data hinder ToT (as received) analysis?
- B. Why would not having full implementation fidelity and compliance data NOT hinder ITT (as assigned) analysis?
- C. How might a program craft its monitoring approach to maximize monitoring and tracking with a small project staff?

Compliance monitoring / implementation fidelity monitoring
Department Unsupervised offers 20 sections of UNSU-110 to the project. In all, 10 treatment and 10 control sections were randomly assigned. Due to project staffing constraints, only the treatment-assigned sections were monitored, and not as thoroughly as desired. Following assignment, 1 treatment-assigned instructor implemented the treatment as designed for the full term, 2 offered it for all but the first three weeks, 2 more offered half the treatment activities for the full term, 2 decided they were not ready and did not implement anything new, and no data are available for the remaining 3 treatment-assigned sections or any of the control-assigned sections. While this does not impact the ITT protocol, your evaluator claims that this hinders her efforts to provide you with formative assessment of how the treatment seems to be working.
There are 2 campuses participating in this grant-funded project. At campus 1 in the Fall term, 20 sections were included in the RCT, allowing for 10 sections each of treatment and control (Department Normal). However, many faculty misunderstood how an RCT works and were found to be in non-compliance, which undermined the data. At campus 2, therefore, only 4 classes were included in the RCT because the department did not want to risk non-compliance by having a single instructor assigned to both a treatment and a control condition, so only 1 section per instructor was included in the pool of eligible sections to be randomly assigned. These 4 instructors/sections, with their 15 students each, were then randomly assigned to the treatment (2 sections, 30 students) and control (2 sections, 30 students) conditions. All 4 of the faculty at campus 2 offered their sections in compliance. In the end, while campus 1 seemed to be better off with a larger sample, the evaluator claims that campus 2 had more meaningful data and found a treatment effect size of .7 SD, while campus 1 detected no treatment effect.

While the primary project outcomes of interest are course GPA, retention, and graduation, which do not lend themselves easily to attrition from the study (even a student who leaves has a data point and therefore remains in the study), another component of the analysis entails psycho-social surveys of students’ sense of belonging in STEM and self-confidence/self-efficacy in engaging in STEM coursework and pursuing STEM careers. These constructs are measured using validated survey instruments administered to treatment and control students before the Fall term starts and again at the end of the academic year through the study sections. Of the 50 sections (25 treatment and 25 control) that administered the pre-test and were supposed to administer the post-test survey, 12 did not (5 treatment, 7 control; 20% and 28% attrition, respectively, with an overall attrition of 24%). Each section had 20 students, and all completed the in-class pre-test, but only 52% of the students in the remaining treatment sections took the post-test, along with 48% of the students in the remaining control sections (50% overall attrition, 4% differential attrition). This means that from the original pre-test pool, we only have post-test data for 42% of treatment section students and 35% of control section students (62% overall attrition from the original pool, 7% differential attrition). No one understands why the evaluator is not freaking out about such low response rates and such high attrition.
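The attrition percentages quoted in the case above can be checked with a quick calculation. This is a sketch only; the section and student counts are taken directly from the narrative, and percentages are rounded as in the text.

```python
# Attrition arithmetic for the survey component of the case above.
# Counts from the narrative: 50 sections (25 treatment, 25 control),
# 20 students per section, all of whom took the pre-test.

tx_sections, ct_sections = 25, 25
students_per_section = 20
tx_dropped, ct_dropped = 5, 7  # sections that skipped the post-test

# Cluster-level attrition
tx_cluster_attr = tx_dropped / tx_sections                                      # 0.20
ct_cluster_attr = ct_dropped / ct_sections                                      # 0.28
overall_cluster_attr = (tx_dropped + ct_dropped) / (tx_sections + ct_sections)  # 0.24

# Student-level post-test counts within the remaining sections
tx_remaining = (tx_sections - tx_dropped) * students_per_section  # 400 students
ct_remaining = (ct_sections - ct_dropped) * students_per_section  # 360 students
tx_post = 0.52 * tx_remaining  # 52% of remaining treatment students post-tested
ct_post = 0.48 * ct_remaining  # 48% of remaining control students post-tested

# Attrition measured against the original pre-test pool (500 students per arm)
tx_pool = tx_sections * students_per_section
ct_pool = ct_sections * students_per_section
tx_total_attr = 1 - tx_post / tx_pool  # ~0.58, i.e., ~42% of treatment students retained
ct_total_attr = 1 - ct_post / ct_pool  # ~0.65, i.e., ~35% of control students retained
overall_total_attr = 1 - (tx_post + ct_post) / (tx_pool + ct_pool)  # ~0.62
differential = ct_total_attr - tx_total_attr                        # ~0.07

print(round(overall_cluster_attr, 2), round(overall_total_attr, 2), round(differential, 2))
```

The two ways of counting matter for question B above: measured from the original pool, attrition looks like 62%/7% differential, but once cluster-level attrition has been counted, WWC assesses student-level attrition only within the retained sections (roughly 50% overall, 4% differential).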
Implementation fidelity / Compliance
Sample Size v. Compliance
- A. If one of the four sections (treatment or control) had closed due to lack of enrollment, would it have impacted the validity of the data (i.e., introduced a confounding factor)? Why or why not?
- B. What strategies might work on your campus to increase the likelihood that treatment-assigned faculty will offer the treatment and control-assigned faculty will not?

Compliance monitoring / implementation fidelity monitoring
Case 2: Spring Term
Yay! Compliance! WWC-approved
You do not need a lot of statistical power to detect such a large effect, but you do need clean data. Smaller samples are very useful if they are in compliance.
WWC-approved measure
Distraction: this does NOT represent actual individual-level attrition, because cluster-level attrition would then count against us twice.
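The point that a large effect needs little statistical power can be illustrated with a back-of-the-envelope calculation. This sketch uses a simple normal approximation to a two-sample test and deliberately ignores the clustering in the campus 2 design (only two sections per arm), which would reduce effective power; it shows only how effect size drives the sample size required.

```python
from math import sqrt
from statistics import NormalDist

# Approximate power of a two-sample test to detect a standardized effect,
# normal approximation, two-sided alpha = 0.05, n students per arm.
d, n, alpha = 0.7, 30, 0.05          # campus 2: .7 SD effect, 30 students per arm
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
ncp = d * sqrt(n / 2)                # noncentrality parameter, ~2.71
power = 1 - NormalDist().cdf(z_crit - ncp)    # ~0.77

# Same sample, but a small effect of 0.2 SD: power collapses.
d_small = 0.2
power_small = 1 - NormalDist().cdf(z_crit - d_small * sqrt(n / 2))  # ~0.12

print(round(power, 2), round(power_small, 2))
```

Under these (individual-level) assumptions, 30 students per arm give roughly 77% power for a .7 SD effect but only about 12% power for a .2 SD effect; with only two clusters per arm the real power is lower still, so the figure is optimistic.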
Winter, K., Fernández, E., Avila, S., Johnson, P., & Valad, J. (2018). Working with What Works Clearinghouse Standards to evaluate designs for broadening participation. 2018 Transforming STEM Higher Education, AAC&U Network for Academic Renewal. https://goo.gl/WXXQpJ