Will the Regents allow ideology to trump research and good sense?

UPDATE: see the May 15 letter to the Regents strongly opposing the proposed reg that would allow up to 40% of a teacher's evaluation to be based upon state test scores, written by noted experts on education and testing. 

UPDATE 2:  Despite all the research and all the warnings from experts, the Regents approved this proposal today.

The NY State Regents are voting tomorrow on whether to allow student scores on state tests, filtered through value-added formulas, to account for 40 percent of a teacher’s evaluation. The new law – rushed through the Legislature to qualify for Race to the Top funds – said that the state tests would account for only 20 percent, with another 20 percent to be devised by districts using locally agreed-upon assessments – not necessarily standardized tests.
According to the NY Times, “The new system is scheduled to be used in the coming school year for English and math teachers in Grades 4 through 8, and then for all teachers the following year.” The article also says there will be new “state tests by 2012-13, for middle school science and social studies, and ninth- and 10th-grade English, where none exist now.”
And then we are supposed to get a whole new raft of standardized tests for the Common Core by 2014? 
All this will be hugely expensive, at a time of rampant budget cuts, when parents are already up in arms because of their children’s steady diet of standardized tests and test prep, all to evaluate teacher performance by means of a dysfunctional method that few experts believe in.
Governor Cuomo has entered the controversy, signing a letter supporting this notion, as well as the even more draconian proposition that a teacher who receives an outstanding evaluation from his or her principal but has a one-year drop in student test scores should be rated as inadequate.  He is also planning to use the “Race to the Top” trick to try to bribe districts into adopting this system by next fall, and only those districts “that put the evaluation system into effect would be eligible for money from $500 million set aside in the state budget to reward school performance.” How did the Legislature let this Cuomo slush fund go through – while at the same time cutting aid to schools by nearly a billion dollars?
On NY1, Regents head Merryl Tisch claimed that parents were included in the 63-member teacher evaluation task force.  This is untrue: there was not a single parent on the task force, even though we posted a petition, which garnered more than four hundred signatures, asking that parents be included.
She also argued that test score-based teacher evaluation is more “objective,” and thus that a teacher with low test scores for one year should be dismissed even if rated highly on other measures.  This ignores the fact that the state tests were never designed for this purpose – and are technically unable to provide accurate assessments of student “progress.”
There has also been a wealth of evidence in recent weeks about the unreliability of value-added models, their unfairness, and their potentially damaging effects on the teaching profession and on kids.
Here are some of these studies:  
Eva L. Baker, Paul E. Barton, Linda Darling-Hammond, Edward Haertel, Helen F. Ladd, Robert L. Linn, Diane Ravitch, Richard Rothstein, Richard J. Shavelson, and Lorrie A. Shepard, Problems with the Use of Student Test Scores to Evaluate Teachers, Economic Policy Institute, Aug. 2010.
“If new laws or policies specifically require that teachers be fired if their students’ test scores do not rise by a certain amount, then more teachers might well be terminated than is now the case. But there is not strong evidence to indicate either that the departing teachers would actually be the weakest teachers, or that the departing teachers would be replaced by more effective ones… there is broad agreement among statisticians, psychometricians, and economists that student test scores alone are not sufficiently reliable and valid indicators of teacher effectiveness to be used in high-stakes personnel decisions, even when the most sophisticated statistical applications such as value-added modeling are employed.
“VAM estimates have proven to be unstable across statistical models, years, and classes that teachers teach. One study found that across five large urban districts, among teachers who were ranked in the top 20% of effectiveness in the first year, fewer than a third were in that top group the next year, and another third moved all the way down to the bottom 40%. Another found that teachers’ effectiveness ratings in one year could only predict from 4% to 16% of the variation in such ratings in the following year. Thus, a teacher who appears to be very ineffective in one year might have a dramatically different result the following year… This raises questions about whether what is measured is largely a “teacher effect” or the effect of a wide variety of other factors.”
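The churn the EPI researchers describe is exactly what one would expect when year-to-year ratings are only weakly correlated. As a rough illustration (a simulation of my own, not taken from any of the studies, and assuming a persistent teacher effect plus large yearly measurement noise – both assumptions, with the noise level chosen so that one year explains only about 4% of the variance in the next, the low end of the reported range):

```python
import random

random.seed(0)
N = 100_000  # simulated teachers

# Assumed model: a stable "true" teacher effect plus yearly noise.
# Noise SD of 2.0 makes the year-to-year rating correlation ~0.2,
# so one year predicts only ~4% of the variance in the next.
true_effect = [random.gauss(0, 1) for _ in range(N)]
year1 = [t + random.gauss(0, 2.0) for t in true_effect]
year2 = [t + random.gauss(0, 2.0) for t in true_effect]

def percentile_ranks(xs):
    """Return each teacher's percentile rank (0.0 to 1.0)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for pos, i in enumerate(order):
        ranks[i] = pos / len(xs)
    return ranks

r1, r2 = percentile_ranks(year1), percentile_ranks(year2)

# Follow the teachers ranked in the top 20% in year 1.
top1 = [i for i in range(N) if r1[i] >= 0.80]
stayed_top = sum(1 for i in top1 if r2[i] >= 0.80) / len(top1)
fell_bottom = sum(1 for i in top1 if r2[i] < 0.40) / len(top1)

print(f"Of year-1 top 20%: {stayed_top:.0%} stay in the top 20%, "
      f"{fell_bottom:.0%} fall into the bottom 40% in year 2")
```

Under these assumptions the simulation reproduces the pattern the studies report: only about a quarter to a third of year-one “top” teachers stay on top, and roughly a third fall into the bottom 40% – even though every teacher’s true effectiveness was held constant by construction.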
Henry Braun, Naomi Chudowsky, and Judith Koenig, eds., Getting Value Out of Value-Added: Report of a Workshop, National Research Council, Washington, DC, 2010.
“Value-added methods involve complex statistical models applied to test data of varying quality. Accordingly, there are many technical challenges to ascertaining the degree to which the output of these models provides the desired estimates. Despite a substantial amount of research over the last decade and a half, overcoming these challenges has proven to be very difficult, and many questions remain unanswered…”
“Given one year of test score gains, it is impossible to distinguish between teacher effects and classroom-level factors. In a given year, a class of students may perform particularly well or particularly poorly for reasons that have nothing to do with instruction.”
“As expected, the level of uncertainty is higher when only one year of test results is used… as against three years of data… But in both cases, the average range of value-added estimates is very wide… For all teachers and years, the average confidence interval width is 44 points… schools with high levels of mobility, test exemption, and absenteeism tend to have fewer students contributing to value-added estimates. And fewer student observations introduce a greater level of uncertainty associated with these estimates… For example, the widest confidence intervals are found in the Bronx – whose schools serve many disadvantaged students – at 37 percentile points in math and 47 points in ELA (both based on up to three years of data)…”
“...value-added measurement[s] … are simply too narrow to be relied upon as a meaningful representation of the range of skills, knowledge, and habits we expect teachers and schools to cultivate in their students.”
Sean P. Corcoran, Jennifer L. Jennings, Andrew A. Beveridge, Teacher effectiveness on high- and low-stakes tests, April 10, 2011.
“To summarize, were teachers to be rewarded for their classroom's performance on the state test or, alternatively, sanctioned for low performance, many of these teachers would have demonstrated quite different results on a low-stakes test of the same subject.  Importantly, these differences need not be due to real differences in long-run skill acquisition…
That is, teachers deemed top performers on the high-stakes test are quite frequently average or even low performers on the low-stakes test. Only in a minority of cases are teachers consistently high or low performers across all metrics… Our results… highlight the need for additional research on the impact that high-stakes accountability has on the validity of inferences about teacher quality.”
John Ewing, Mathematical Intimidation: Driven by the Data, May 2011.
Ewing, the former executive director of the American Mathematical Society, points out that value-added evaluations are highly unreliable and that using such measures will corrupt the educational process:
“…making policy decisions on the basis of value-added models has the potential to do even more harm than browbeating teachers… we almost surely will end up making bad decisions that affect education for decades to come… Of course we should hold teachers accountable, but this does not mean we have to pretend that mathematical models can do something they cannot…
Shouldn’t we focus on how well students are prepared to learn in the future, not merely what they learned in the past year? Shouldn’t we try to distinguish teachers who inspire their students, not merely the ones who are competent? When we accept value-added as an “imperfect” substitute for all these things because it is conveniently at hand, we are not raising our expectations of teachers, we are lowering them….Why must we use value-added even with its imperfections? … the only apparent reason for its superiority is that value-added is based on data. Here is mathematical intimidation in its purest form—in this case, in the hands of economists, sociologists, and education policy experts.

And if we drive away the best teachers by using a flawed process, are we really putting our students first?”