Teacher Evaluation: If not Value-added, then what?

While education reformer Rick Hess thinks “would-be reformers [are] getting waaaay ahead of themselves” when it comes to implementing “primitive systems to measure everything they can, or to validate everything else (observations, student feedback, etc.)” under the mantle of value-added analysis, he also doesn’t dismiss it out of hand as a way to evaluate teachers. Why? Because he doesn’t see any alternative evaluation tool being offered that is appreciably better. Assuming that we all think teachers–like other employees–should be evaluated, he offers five alternatives:

There’s principal evaluation. This asks accountable supervisors to take responsibility for their employees, though research has shown that principals usually punt, rating 99 percent of their teachers as terrific, come hell or high water. In principle, this is an attractive tool. Of course, the same folks who denounce value-added also tend to reject this one, arguing that supervisors may be dumb, biased, subjective, or eager to fill the teacher’s job with a friend or relative.
There’s level-based accountability, upon which value-added attempts to improve. As critics of NCLB have noted, the problem with holding teachers or schools responsible for achievement levels is that the result is only a faint reflection of their work; it also captures everything else in that child’s life, and all previous years of schooling. Level-based accountability has the virtue of helping us see how kids are faring, but it’s a troubling tool when used to evaluate individual teachers or schools.
There’s student and/or parental feedback, which presumes that student surveys or parental information can provide valuable insight into teacher or school performance. The Gates Foundation’s “Measures of Effective Teaching” project, for instance, has been working intensively with student surveys. These seem like potentially useful tools, though there are serious questions about reliability, validity, how these data are collected, and the rest.
There’s choice-based accountability, where one leans less on supervisor judgment or on measuring particular dimensions of performance and more on the marketplace. This is accountability that results from families making choices about where they want to send their kids, with those dollars following along. This is decentralized, market-driven accountability. It is the least driven by bosses, policymakers, or simple measures, and in that sense it is particularly democratic. But questions immediately arise about whether families value the “right” things or are making informed choices, whether this can work in rural communities or those with a dearth of good schools, whether there are unintended consequences (like social fragmentation), and so on.
Finally, there’s peer review, which many self-styled teacher advocates tend to like. They see it as sensible, professional, and fair. And I’m all for peer review, so long as it’s identifying excellence and mediocrity and providing tough-minded accountability. However, few peer review efforts have lived up to their billing. For instance, as Steven Brill has reported, the lauded Toledo peer review program–which has been credited with aggressively weeding out bad teachers–turned out, when studied for The New Teacher Project’s “Widget Effect,” to have removed just one tenured teacher (in a fair-sized, low-performing system) during the two years studied. If peer review is providing toothsome accountability, then it’s a swell option. But if teachers engage in peer review and nothing much happens, that doesn’t cut it. This means that those who think peer review is the answer need to explain how parents and taxpayers know when peer review is really working and what happens when it’s not.
My experience is that the same folks who lash out at value-added also pooh-pooh each of the alternatives (except a weak sauce version of peer review). Rather than recognizing that each approach has strengths and weaknesses, and that smart accountability is designed accordingly, they attend only to the potential flaws–and use those to reject each in turn. The result? What they’re ultimately rejecting is not just the tool of value-added but the notion that public educators who are paid with public funds to serve the public’s children ought to be responsible for how well they do their jobs. And I, along with the “reform” community, find that an unacceptable stance.
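For readers wondering what “value-added” actually computes, here is a deliberately simplified sketch in Python. It is not any district’s actual model (real systems use multiple years of scores, additional controls, and statistical shrinkage), and the student records and teacher labels below are invented purely for illustration. The idea, per the contrast Hess draws between level-based and value-added accountability, is to predict each student’s score from prior achievement and credit the teacher with the average amount by which their students beat or miss that prediction, rather than with the raw level the students reach.

```python
# A deliberately simplified "value-added" sketch. All data are invented for
# illustration; real models use more years of scores, more controls, and shrinkage.
from statistics import mean

# (prior_score, current_score, teacher) -- hypothetical student records
students = [
    (50, 58, "A"), (60, 68, "A"), (70, 77, "A"),  # teacher A: lower-scoring class, strong growth
    (70, 70, "B"), (80, 79, "B"), (90, 90, "B"),  # teacher B: higher-scoring class, flat growth
]

prior = [p for p, _, _ in students]
current = [c for _, c, _ in students]

# Ordinary least squares fit of current on prior: current ≈ a + b * prior
b = (mean([p * c for p, c in zip(prior, current)]) - mean(prior) * mean(current)) / (
    mean([p * p for p in prior]) - mean(prior) ** 2
)
a = mean(current) - b * mean(prior)

for t in sorted({t for _, _, t in students}):
    scores = [c for _, c, tt in students if tt == t]
    residuals = [c - (a + b * p) for p, c, tt in students if tt == t]
    # "Levels" view: average score reached. "Value-added" view: average surprise
    # relative to what prior achievement alone would predict.
    print(f"Teacher {t}: mean score {mean(scores):.1f}, "
          f"value-added estimate {mean(residuals):+.1f} points")
```

With these made-up numbers, the teacher assigned the higher-scoring class looks better on levels, while the teacher whose students outgrew their predictions looks better on value-added, which is the gap Hess is pointing at.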

7 Comments
Russ
12 years ago

“…he also doesn’t dismiss it out of hand as a way to evaluate teachers. Why? Because he doesn’t see any alternative evaluation tool being offered that is appreciably better.”
Yes, but if you are doing something that is demonstrably harmful, simply stopping is an improvement.
Process improvement expert Peter Scholtes said this is like being told to stop banging your head against a wall and replying, “Yes, but if we remove the wall, what will I bang my head against?”
pscholtes.com/performance/
http://www.amazon.com/exec/obidos/ISBN=0070580286/worldwidedemingw
Appraisals or process improvement, pick one.

Dan
12 years ago

Our performance appraisals address several different categories of performance according to metrics, which we are provided with at the beginning of each year. Some are more objective, some are more subjective. Coworkers and other parties are consulted as necessary. I have the opportunity to contribute to and comment on the appraisal, and there is a reasonable appeal process in place. It works absolutely fine, just like it works for every other profession on the planet. It’s really not that difficult once you get the unions out of the room; for self-interested reasons, they prefer to see 2012 teachers herded like 1920s industrial workers.

Russ
12 years ago

“It works absolutely fine, just like it works for every other profession on the planet.”
That’s certainly at odds with my own experience. I find that most managers’ and employees’ views on it range from a necessary evil to an absolute waste of time. I’ve yet to encounter anyone outside of senior management who finds it all that beneficial. My own view is that it is used as a way to avoid lawsuits when firing employees, by showing that employees are treated in a similar fashion rather than on the basis of arbitrary or discriminatory practices.
Deming famously included the Red Bead Experiment in his seminars to illustrate what’s wrong with focusing on workers instead of process. Among the lessons:
http://www.maaw.info/DemingsRedbeads.htm

Empirical evidence (i.e., observations of facts, as opposed to secondhand information, or information further removed from fact such as opinion) is never complete. There are always a large number of variables that affect any set of performance results, many of which are unknown and unknowable.

I’ll agree that it is possible to evaluate teachers; however, to do so would require observation over long periods of time, which is very likely cost-prohibitive on the scale suggested.
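For what it’s worth, the statistical point of the Red Bead Experiment is easy to reproduce. The short Python sketch below uses assumed, illustrative parameters (a box that is 20 percent red beads, a 50-bead paddle, six workers, four rounds), not Deming’s exact setup: every worker follows the identical procedure on the identical box, yet their red-bead counts differ, and a “best” and a “worst” worker emerge purely from sampling variation in the system.

```python
# A minimal simulation of Deming's Red Bead Experiment. The parameters below
# (20% red beads, a 50-bead paddle, 6 workers, 4 rounds) are assumed for
# illustration, not taken from Deming's exact setup.
import random

RED_FRACTION = 0.20   # share of red ("defective") beads in the box
PADDLE_SIZE = 50      # beads drawn per worker per round
WORKERS = ["W1", "W2", "W3", "W4", "W5", "W6"]
ROUNDS = 4

random.seed(42)       # fixed seed so the run is reproducible

def draw_red_beads() -> int:
    """One paddle draw: how many beads come up red purely by chance."""
    return sum(random.random() < RED_FRACTION for _ in range(PADDLE_SIZE))

# Every worker follows the identical procedure with the identical box of beads.
results = {w: [draw_red_beads() for _ in range(ROUNDS)] for w in WORKERS}

for worker, reds in results.items():
    print(f"{worker}: red beads per round {reds}, total {sum(reds)}")

best = min(results, key=lambda w: sum(results[w]))
worst = max(results, key=lambda w: sum(results[w]))
print(f"'Best' worker: {best}; 'worst' worker: {worst}. "
      "The spread comes from the process, not the people.")
```

Ranking the workers on these counts rewards and punishes noise they do not control.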

Marc
12 years ago

“I’ll agree that it is possible to evaluate teachers; however, to do so would require observation over long periods of time, which is very likely cost-prohibitive on the scale suggested.”
Russ, actually, no. Warwick is piloting such a program right now. It combines classroom review with some value-added/metric-based (i.e., test-score) tools. Obviously, kinks are still being worked out, but an attempt is being made.
http://www.warwickonline.com/stories/New-regulations-link-teach-certification-to-educator-evaluations,64532

Dan
12 years ago

“I’ll agree that it is possible to evaluate teachers; however, to do so would require observation over long periods of time, which is very likely cost-prohibitive on the scale suggested.”
I have an immediate family member whose job for two decades has been to evaluate teachers. Anecdotal, I know, but I trust his opinion and he has told me that an experienced administrator can easily tell an effective teacher from an ineffective teacher through a couple of days of observation, meeting with them, reviewing their lesson plans, and speaking with their students and colleagues. This seems very reasonable to me. He asserts that everyone in a school knows who the “bad teachers” are anyway. This I cannot speak to, but all of the teachers I have asked have concurred with that statement.

Bucket Chick
12 years ago

If teachers-to-be can be evaluated and graded during their practicums and student-teaching periods, then how is it that once they are professionals they are so hard to evaluate? While it is perhaps difficult to gauge effectiveness from students’ test scores alone (because of variables such as home life and other enrichment that students might receive outside of school), it can certainly be evaluated through observation, preparedness, lesson plans, etc.

Dan
12 years ago

“I’ve yet to encounter anyone outside of senior management who finds it all that beneficial.”
I find it beneficial. I work hard at my job and produce good results while other people surf eBay, sleep, or go to union meetings. I get good performance appraisals and they get poor performance appraisals as a result, so it motivates me to keep working hard. They should be fired and replaced with good employees, but they are union, so it’s impossible. If we were all appraised and rewarded equally, there would be no motivation to work hard and no work would get done.
The red bead “experiment” assumes that the workers have little or no control over the incidence of red beads. Therefore it is applicable to some workplaces, like manufacturing plants, but not to others where workers have a large degree of control over quality, such as my work environment.
