Teacher growth score “capricious” and “arbitrary”, judge rules
Holy crap, peoples! I’m feeling extremely corroborated this week, what with the ProPublica report on Monday and also the recent judge’s ruling on a teacher’s growth score. OK, so technically the article actually came out last week (hat tip Chris Wiggins), but I only found out about it yesterday.
Growth scores are in the same class of models as value-added models, and I’ve complained about them at great length in this blog as well as in my upcoming book.
Here’s what happened. A teacher named Sheri Lederman in Great Neck, New York got a growth score of 14 one year and 1 the next, even though her students did pretty well on state tests in both years.
Lederman decided to sue New York State for her “ineffective rating”, saying it was a problem of the scoring system, not her teaching. Albany Supreme Court justice Roger McDonough got the case and ruled last week.
McDonough decided to vacate her score, describing it as “arbitrary and capricious”. Here are more details on the ruling, taken from the article:
In his ruling, McDonough cited evidence that the statistical method unfairly penalizes teachers with either very high-performing students or very low-performing students. He found that Lederman’s small class size made the growth model less reliable.
He found that high-performing students are unable to show the same growth on the current tests as lower-performing students.
He was troubled by the state’s inability to explain the wide swing in Lederman’s score from year to year, even though her students performed at similar levels.
He was perplexed that the growth model rules define a fixed percentage of teachers as ineffective each year, regardless of whether student performance across the state rose or fell.
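That last point is easy to demonstrate: a rating rule that flags a fixed bottom percentile will label the same number of teachers “ineffective” no matter how well everyone does. Here’s a minimal sketch in Python; the 7% cutoff and the score distributions are made-up assumptions for illustration, not New York’s actual rules:

```python
import random

def rate_teachers(scores, ineffective_frac=0.07):
    # Rank teachers by score and label the bottom fraction "ineffective",
    # regardless of absolute performance.
    cutoff_rank = int(len(scores) * ineffective_frac)
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    return set(ranked[:cutoff_rank])

random.seed(0)
low_year = [random.gauss(60, 10) for _ in range(1000)]   # weak statewide results
high_year = [s + 20 for s in low_year]                   # everyone improves by 20 points

# Exactly the same teachers get flagged either way, because only rank matters.
assert rate_teachers(low_year) == rate_teachers(high_year)
```

Even when every student in the state improves, the model manufactures the same quota of “ineffective” teachers.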
This is a great start; hopefully we’ll see fewer growth models being used in the future.
Update: here’s the text of the decision.
Coincidentally, this arrived in my reader at the same time as Graham Fletcher’s post about student performance incentives for multiplication facts.
Whether applied to students or teachers, incentives are the lazy administrator’s way of managing. They sound logical and are easy to implement. However, practical experience shows that they don’t work: they are impossible to design perfectly, and flawed designs have serious unintended consequences. Perhaps even more important, they distract attention from all of the other things that could/should be done to support achievement.
And yet the reforms being pushed by the behemoths in the Gates Foundation and Lumina Foundation (and downstream recipients like CCRC at Teachers College, Columbia U., etc.) are frequently very much about performance incentives and other market-style solutions.
The judge had the greatest difficulty accepting the idea that student-achievement from year to year can be plotted on a “bell curve.”
Interesting — but also disturbing for those Galtonians and administrators whose legacy and whose livelihoods depend on the “truth” of “bell curve” distributions. Legions of American educators have staked their collective reputations on this idea, and on the claim that intelligence can be quantified and plotted on a normal distribution curve. Test makers and publishers (a whole industry, in fact) sell their products predicated on the reality of this distribution.
For almost one hundred years, the apparent ease of producing such bell-shaped data has provided administrators and bureaucrats with a powerful justification for their testing regimes — even though all standardized tests are designed to produce this kind of distribution. In a curious kind of circular reasoning, the testing means are justified based on the obvious truth of the underlying distribution.
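The circularity is mechanical, and you can see it in a few lines of code: rank-based scaling turns any raw score distribution into a bell curve by construction. A minimal sketch (the skewed input distribution is just an illustrative assumption, not real test data):

```python
import random
from statistics import NormalDist

def normalize_to_bell(raw_scores):
    # Rank-based normalization: map each raw score to the normal quantile
    # of its rank. Whatever shape the raw scores had, the output is
    # bell-shaped by construction.
    n = len(raw_scores)
    rank_of = {i: r for r, i in enumerate(sorted(range(n), key=lambda i: raw_scores[i]))}
    return [NormalDist().inv_cdf((rank_of[i] + 1) / (n + 1)) for i in range(n)]

random.seed(1)
skewed = [random.expovariate(1.0) for _ in range(5000)]  # heavily skewed raw scores
scaled = normalize_to_bell(skewed)
# The scaled scores come out symmetric around 0, even though the raw
# scores were nothing like a bell curve.
```

The “normal distribution” in the scaled scores tells you nothing about students; it was baked into the scaling step.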
The judge is — in this instance — reversing this accepted wisdom. But standardized testing will continue.