It’s no secret that 206 teachers were fired last Friday in Washington, DC, including 65 teachers who earned a rating of “ineffective” on the District’s controversial evaluation system, IMPACT. DCPS fired an additional 141 teachers who scored “minimally effective” for a second consecutive year.
While it’s commendable that 55% of teachers rated “minimally effective” last year improved to “effective,” I wonder about other trends between this year and last. What percentage of “effective” teachers moved to the highest category? And what about the 65 teachers rated least effective? Who were they? Veterans or rookies? For those who were teaching in DCPS last year, what were their IMPACT scores? Did some of these teachers change schools or grades in the last year? Did any teacher move from an “effective” rating to “ineffective”?
DCPS has provided some of this information in its IMPACT report, but more is needed. For example, 663 teachers were rated “highly effective” last year, and 290 of those received the rating again. However, it’s not clear whether the remaining 373 teachers are no longer teaching or received a lower rating this year. If they did receive a lower rating, why?
In any discussion of teacher evaluations, the terms “validity” and “reliability” are bound to pop up, particularly in the nerdiest of circles. These two words form the gold standard for educational assessments. Validity means that an assessment accurately measures the concept it was designed to evaluate, while reliability means that an assessment produces consistent results.
And this is why DCPS should release more information about two-year trends in IMPACT scores. While it’s understandable – and encouraging – that a teacher could move from “minimally effective” to “effective” as they gain knowledge of IMPACT through professional development and coaching, a teacher moving from “minimally effective” to “highly effective” – and vice versa – should raise eyebrows.
And that’s just what happened. 3% of teachers rated “minimally effective” last year were evaluated as “highly effective” this year. While 3% is a very small number, the performance of these teachers should be further analyzed – either to replicate such remarkable improvement across the District, or to determine whether there was potential error in the observations or special circumstances in the teachers’ classrooms that affected their evaluations.
Further, DCPS should provide information comparing how teachers are evaluated by principals and master educators throughout the year. Each teacher is observed five times – three times by an administrator and twice by an outside master educator. Do master educators and principals observe and evaluate the same teacher differently? Is the one announced principal visit scored differently from the unannounced visits?
In a high-stakes system like IMPACT, these questions deserve answers – both for the teacher whose job depends on the results and for the student whose learning depends on their teacher’s effectiveness. No evaluation system will ever be perfect, but thinking about these kinds of questions can help improve IMPACT moving forward.