Yesterday, the Shanker Blog posted a piece by Doug Harris proposing that states and districts incorporate student growth into teacher and principal evaluations through an initial “screening” process, under which potentially ineffective teachers would be identified for further examination and support. Sherman Dorn is right to call this a common-sense idea, albeit with some kinks to work out. But what both Harris and Dorn miss is that nobody is forcing state hands on this, and a handful of states are already experimenting with a variation of it.
The Obama administration’s education initiatives have called for states and districts to adopt teacher and principal evaluation systems that are based, in significant part, on student growth and other measures. Race to the Top (RTTT), the School Improvement Grant program, the ESEA waivers*, and other efforts have all used some variation of this language. Harris writes of RTTT that the administration “encouraged—or required, depending on your vantage point—states to lump value-added or other growth model estimates together with other measures.” While it’s true that the administration required both student growth and other measures, nowhere did it require states to lump them together in some sort of percentage-based weighting. That’s just what most states have done.
Most states have come up with some sort of rubric that looks like this:
This is one way to ensure that student growth is a “significant part” of teacher and principal evaluations. But again, this is not what’s required of any of the Obama administration initiatives.
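To make the percentage-based weighting concrete, here is a minimal sketch in Python. The component names, weights, and 0-100 scale below are hypothetical illustrations, not any actual state’s rubric:

```python
# Hypothetical percentage-weighted evaluation rubric.
# These weights are illustrative only, not any state's actual policy.
WEIGHTS = {
    "student_growth": 0.40,
    "observations": 0.40,
    "other_measures": 0.20,
}

def overall_score(components):
    """Combine component scores (each on a 0-100 scale) into one overall rating."""
    return sum(WEIGHTS[name] * score for name, score in components.items())

# 70*0.4 + 85*0.4 + 90*0.2 = 80.0
print(overall_score({"student_growth": 70, "observations": 85, "other_measures": 90}))
```

Under this kind of scheme, the “significant part” requirement is satisfied by the size of the student-growth weight itself.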
Other states, like Massachusetts, which won a Race to the Top grant and earned an ESEA waiver, use a model that looks like a matrix. They put student growth on one axis and teacher practice on another, and each box in the table corresponds to some consequences for the teacher or principal. Here’s what an overly simplified “look-up table” looks like:
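A minimal sketch of such a look-up table in Python. The two-level ratings and the consequence in each cell are hypothetical placeholders, not Massachusetts’ actual categories:

```python
# Hypothetical matrix-style "look-up table": one axis for student growth,
# one for observed teacher practice. Ratings and consequences are
# illustrative only.
LOOKUP = {
    ("low", "low"): "improvement plan",
    ("low", "high"): "closer review of growth data",
    ("high", "low"): "closer review of practice ratings",
    ("high", "high"): "eligible for recognition",
}

def consequence(growth, practice):
    """Map a (growth, practice) rating pair to its cell in the matrix."""
    return LOOKUP[(growth, practice)]

print(consequence("low", "high"))  # closer review of growth data
```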
A third approach is similar to the matrix model but instead uses low student growth ratings to “trigger” some pre-defined consequences, such as lowering a teacher’s overall evaluation rating or requiring them to draft a personalized growth plan. Failure to make progress on the growth plan can lead to dismissal.
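The trigger approach can be sketched the same way. The threshold, rating scale, and cap below are my own hypothetical choices for illustration:

```python
# Hypothetical "trigger" model: a low growth rating fires pre-defined
# consequences. The 1-5 scale, threshold, and cap are illustrative only.
GROWTH_TRIGGER_THRESHOLD = 2  # growth ratings at or below this fire the trigger

def apply_triggers(growth_rating, overall_rating):
    """Return (adjusted overall rating, list of triggered actions)."""
    actions = []
    if growth_rating <= GROWTH_TRIGGER_THRESHOLD:
        # Cap the overall rating so low growth pulls the final score down.
        overall_rating = min(overall_rating, growth_rating + 1)
        actions.append("draft personalized growth plan")
    return overall_rating, actions

print(apply_triggers(2, 5))  # (3, ['draft personalized growth plan'])
```

The key design choice is that the consequences are defined in advance, so a low growth score cannot simply be absorbed by high marks elsewhere.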
Harris’ conceptual vision for student growth as a “screen” works the same as the “trigger” model but in the opposite order. He would have student growth come first: a low rating could lead to closer inspection of a teacher’s classroom practice. But there are two main problems with doing it this way. One is timing: student growth scores often arrive much later than observation results. The other is that most teacher evaluation systems historically have not done a good job of differentiating teachers and providing them the feedback they need to improve. If low student growth just leads to the same high evaluation scores, it’s hard to say student growth played a “significant part” in a teacher’s overall rating.
States have choices about how they make this work. I wish more states and districts would experiment with different ways to accomplish the same goals, and Harris’ proposal is one option worth considering.
*Until recently I worked on these issues for the Department of Education. Everything in this post references material that is publicly available.
Photo Credit: Daily Press