The LA Times has obtained standardized test results from the Los Angeles school district and is using that information to publish ratings of individual teachers. There’s little doubt that their methodology has flaws, but that’s an argument for better metrics and analysis, not shutting down this line of inquiry. I am a huge believer in public education — it’s probably the most successful government program ever launched — but there’s a bit of a black hole when it comes to accountability. There’s some understanding of which school districts and schools are better than others, but very little information on which teachers are good at their jobs and which ones aren’t.
A lot of people are complaining already that the teachers are being judged on the basis of performance on standardized tests and that there’s more to teaching than improving test performance. I’d agree, but judging them on that basis puts them in the same boat as their students. Students are judged based on their performance on standardized tests starting at an early age and ending when they apply to graduate school. If it’s not fair to judge a teacher based on how their students do on achievement tests, how is it fair to choose which kids get to go to magnet schools based on the results of those same tests?
It’ll be interesting to see what happens next.
August 20, 2010 at 5:27 pm
I’m somewhat surprised at your comments. Yes, you did mention the possibility that the metric was bad, but didn’t really consider the implications.
As a software developer, you’ve been made aware of, if not fallen victim to, phony metrics. The best example in our profession is lines of code (LOC).
Now let’s apply the LA Times logic about “value added” to the LOC metric. Their argument is basically that even though the measurement may be flawed, it still shows the difference between good teachers and bad teachers.
However, as we both know, the LOC metric certainly doesn’t do that with software developers. In fact, those who excel at churning out more lines may be the WORST people to have on your staff!
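To make that concrete, here’s a minimal sketch in Python (a toy example of my own invention, not drawn from any real codebase): two functions with identical behavior, where the padded one looks several times more “productive” if all you count is lines.

# A hypothetical sketch of why raw LOC is a misleading productivity measure:
# both functions below do exactly the same job, but the padded one "scores"
# several times higher if you reward lines of code.

def total_concise(prices):
    """Sum a list of prices in a single expression."""
    return sum(prices)

def total_padded(prices):
    """Sum the same list, padded out to inflate the line count."""
    result = 0
    index = 0
    count = len(prices)
    while index < count:
        value = prices[index]
        result = result + value
        index = index + 1
    return result

def count_loc(source):
    """Count non-blank lines, the naive 'productivity' measure."""
    return sum(1 for line in source.splitlines() if line.strip())

if __name__ == "__main__":
    import inspect
    prices = [3, 5, 7]
    assert total_concise(prices) == total_padded(prices) == 15
    print("concise version LOC:", count_loc(inspect.getsource(total_concise)))
    print("padded version LOC: ", count_loc(inspect.getsource(total_padded)))

By the line-count measure, the developer who wrote the padded version looks like the better performer, which is exactly backwards.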
I’ve been a software developer since 1980, and my wife has been a public school teacher for the past 10 years. The havoc wrought by No Child Left Behind and its reliance on bubble tests is no different from the imposition of the LOC metric on our profession. The difference is that the general public doesn’t fancy itself capable of programming, whereas everyone thinks teaching is something anyone with a brain can do without training. As a result, professionals in software development were able to shoot down the LOC metric before it destroyed the profession. Teachers, on the other hand, have had public “experts” tell them that this was the way to “accountability.”
Surely you’ve heard the phrase, “What gets measured gets done.” The imposition of the LOC metric resulted in bloated code bases. The imposition of NCLB testing results in teachers spending more and more time preparing kids for answers that fit into a bubble.
Is that really the kind of education we want? We wonder why this country seems to be heading downward. The so-called reform of NCLB is one of the reasons why. We are creating a generation of data entry clerks, not thinkers.
August 21, 2010 at 12:50 am
I actually agree with all of that. When people are judged by imperfect metrics (and of course the standardized tests that schools rely on right now are rather imperfect), they have an incentive to come up with better metrics. I hope that’s what we see on the horizon with teaching. We need better teachers and better ways to figure out who the good teachers are. I feel like the LA Times project is a step in that direction.
August 21, 2010 at 8:35 pm
I’ll second Rafe’s thoughts. It seems like getting any accountability from public schools has been extraordinarily difficult for far too long. I’m well aware No Child Left Behind is flawed, but I definitely appreciate the Obama Administration’s Race to the Top initiative for getting states and unions to find and try something new.
For developers, a big part of getting out from under LOC as a measurement has been identifying the core question being asked: “How do we know who’s effective?” and using their expertise to help steer that conversation, instead of putting up a wall and resisting any assessment, or worse, silently going along.
Another aspect is that developers themselves know who the effective developers are, and there’s little incentive to hide the deadweight on a development team. I can’t imagine a worse frustration than being a new, aspiring teacher (or anything else) and seeing coworkers who are absolutely uninterested in getting better. Great development shops help people get better, and help themselves by steering out the ones who won’t.