Concerns with the MDRC study on small schools released today

MDRC released a study today that, the NY Times writes, “appeared to validate the Bloomberg administration’s decade-long push to create small schools to replace larger, failing high schools.” The report mentions the current controversy over the massive number of school closings, here in NYC and across the country, and thus there may be a political element in the timing of its release:

MDRC’s findings about SSCs are relevant to current federal policy on high school reform, particularly the U. S. Department of Education’s School Improvement Grants (SIGs) for failing schools. Reforms funded by SIGs include school transformation, school restart, school closing, and school turnaround. SSCs straddle several of these categories since they are typically replacements for schools that have closed and they operate as regular public schools.

This is the second MDRC study to conclude that students who attended the new small schools had significantly improved outcomes.  The first MDRC study, released in 2010, looked at students who entered these schools in 2005; this one adds students who entered in 2006 to that group.

I have a lot of reservations about using this study or the previous study to justify the small schools initiative and especially to justify the current massive round of school closings.  I am no expert in statistics, but my concerns revolve around these issues: 

1-      Though the study points out that the small schools were supposedly “unscreened” and evaluates their results by comparing the outcomes of students who won admissions lotteries to those who lost, it ignores that the students who attended these small schools were far less high-needs on average, as evidenced by their lower rates of English language learners and students with special needs, as shown by this Annenberg study by Pallas and Jennings.  In fact, the schools were openly allowed to exclude special needs students during their first two years.  Thus even with a “lottery” for admissions, there are substantial peer effects for students who are grouped with higher-achieving students, which this study does not mention.  (This is also a problem with many of the charter school studies, like this one, which tend to ignore peer effects.)

2-      The results in terms of higher graduation rates and college readiness (based on Regents scores and credit accumulation) ignore how, in NYC, teachers and principals are able to manipulate these measures in ways that do not reflect real learning (especially as teachers grade the Regents exams of students in their own schools, and in these schools, of their own students!)  It has also been alleged that the small schools pioneered the now widespread and largely discredited practice of “credit recovery.”  Since many of the small schools were staffed with newer teachers, who had no memory of or grounding in earlier practices, it may have been easier to pressure them into employing such methods.

3-      The study ignores that the small schools on average were allowed to have smaller classes and were far less overcrowded than the large high schools, which legitimately could have led to better results.  Class sizes at the small schools during these years ranged from 13 to 20 students, according to the PSA first-year report, compared to 30 or more at the larger schools.  If the higher-needs students in the larger schools had been provided with smaller classes, very likely their chances of success would have improved substantially as well.

4-      Yet the study doesn’t examine how the opening of the small schools had a negative impact on the system as a whole, by flooding nearby large high schools with the most disadvantaged and academically challenged students, leading to even more overcrowding, larger class sizes, and damage to their opportunity to learn, as reported by many observers and confirmed by the New School’s report, The New Marketplace.

5-      The MDRC study deals with only a subsection of the small schools that were oversubscribed and required a lottery for admissions, so like the charter school studies which use a similar methodology, their success rate may not be representative of small schools overall.

6-      The study ignores the reality that these so-called random “lotteries” may be far from actually random.  The MDRC study compares the baseline characteristics of students who “won” the lotteries to those who lost, in Supplemental Table 1, which purports to show that both groups were “virtually identical”; but the comparison does not include students who required collaborative team-teaching or self-contained classes; does not include prior attendance rates, a key factor that principals often examine when selecting students; and does not differentiate between free and reduced-price lunch students.  The table also does not include data on previous rates of suspension.  According to an earlier evaluation done by Policy Studies Associates, the ninth-graders who entered the small schools had far better attendance records (91% compared to 81%), and were less likely to have been suspended as 8th graders, compared to students at the schools they replaced.

Even in the categories the MDRC study does compare, there are greater numbers of higher-achieving students among the lottery winners, though the differences are not statistically significant according to the model used.  Most strangely, the table compares the baseline data for four cohorts of entering ninth-graders, from 2004-2005 through 2007-2008, while the report compares outcomes for only the first two of these cohorts, students who entered these schools in 2005-6 and 2006-7.  This is despite the fact that the need level of the students increased significantly after that point – and likely, the challenges faced by these schools as well.  See the above chart, for example, from the Annenberg study by Jennings and Pallas.

I have no idea why the MDRC study lumped together all these cohorts to examine their baseline characteristics, even as it compared outcomes for only the first two, but this may considerably bias its conclusions.  Indeed, more than half of the middle and high schools being closed by the DOE this year for poor results are small schools that were founded after 2003.

7- There are several ways in which, especially in the early years of the small schools initiative, principals were able to manipulate the admissions process to get the students who were more likely to succeed, even though by definition these schools were supposedly “unscreened” and used “random” lotteries for admissions.  See this excerpt from a published study by Jennifer Jennings, who embedded herself with three small schools between March 2004 and September 2005: 

My observations revealed that many schools used applications, mandatory information sessions, and much stronger language to deter unwanted applicants. For example, 12 unscreened schools shared a similar application requiring that students provide the most recent report card and two letters of recommendation, one from an eighth-grade teacher and one from a guidance counselor, assistant principal, or principal. The application also asked for the student’s test scores, retention history, and involvement in advanced courses during the eighth grade. Finally, the application included additional questions requiring a narrative response….
The district’s application system provided opportunities for unscreened schools to choose higher achieving students. Through this computer system, each school received a list of students applying to the school, although the school did not know whether the student ranked it, for example, 1st or 12th. This data file included each student’s English-language-learner and special education classification, reading and math test scores, absences, grades, address, and junior high school. Schools were told to identify students who made an ‘‘informed choice’’ by assigning them a 1, while students who did not make an informed choice but the school was willing to accept were assigned a 2. If the school did not fill all of its seats with students making an informed choice, additional seats would be filled by students in the second category.  The Department of Education prohibits unscreened schools from using student performance data to select students. Nonetheless, both Marlena and Anna [pseudonyms for two principals of small schools] learned through their relationships with other principals that such regulations were loosely enforced….
In addition to the English language learners and full-time special education students whom new schools had a waiver to eliminate, Renaissance [pseudonym for one of these small schools] eliminated part-time special education students and chose only those with 90 percent or higher attendance. Excel eliminated full- and part-time special education students and chose students with attendance rates of 93 percent or higher.
There are many more revealing insights in the Jennings study about how the small schools were able to deflect over-the-counter students and counsel out low performers to achieve better results, despite their claims to be non-selective.  (Update: I have removed the link to the paper at the author's request; the abstract is here.)
All of these concerns should raise serious reservations about the validity of the MDRC study, and especially about its apparent endorsement of the mayor's policies.