In my recent paper, The Evergreen Effect, I show that nearly every employee in Washington school districts—including all teachers, principals, superintendents, and school support staff like janitors and librarians—is given a satisfactory performance evaluation. The problems with this seem self-evident to me and, as I articulate in the piece, if an employer can’t differentiate among its employees, it is likely to treat them all as interchangeable widgets when deciding how to help them improve, how much to pay them, or which ones to retain.
Yet, if there’s one strain of criticism of this argument, it comes from hypothetical questions about how many ineffective employees we should expect schools to identify. Recent pieces from Aaron Pallas, Matthew Di Carlo, and The New York Times’ Jenny Anderson explore this issue.
I have three basic responses. One, I mostly think this question is just an abstraction at this point. School districts across the country still rely primarily on binary evaluation systems in which every employee is rated either satisfactory or unsatisfactory. And even the places that have implemented new evaluation systems, like Florida and Tennessee, still identify 97-98 percent of teachers as satisfactory. Unless you think 1-2 percent of employees is the right number of low performers (which American Federation of Teachers President Randi Weingarten implies in the Times piece), we have work to do.
Two, why aren’t Pallas and Di Carlo asking the opposite question about how many excellent teachers there are? Do they think 98-99 percent of teachers are truly exceptional? How many teachers and principals should receive extra compensation, be protected from layoffs, be given additional responsibilities, and be encouraged to stay on the job? There are two ends to every distribution, but Pallas and Di Carlo seem unconcerned with the positive side.
Three, I do agree with their point that there is no “right” answer. It should ultimately be decided by value judgments made by local communities, which should reflect their unique needs. If student performance were low and flat in certain schools, especially compared to similar students in other schools, that community might want to hold more adults accountable. If students at a particular school achieve at high levels and show strong growth, that school probably doesn’t have the same urgency around identifying poor performers.
Dana Goldstein points out that New York City has had this particular fight before, but in most districts the distribution of evaluation ratings isn’t public information, so communities by and large haven’t had this discussion yet. Until they do, and until we start seeing something approaching real differentiation, the question about the “right” number is premature.