Teachers are inundated with data: data about their students' performance, data that shows whether their students are making progress, and data that can help identify students for targeted remediation. But data makes no difference if it isn't interpreted correctly and used to inform instruction. Where should teachers learn these skills? In their training, of course. Yet a new report out today says the majority of educators never learn how to use data in their university preparation programs.
Of a sample of 180 undergraduate and graduate programs, only six were found to adequately prepare teachers-in-training to collect, analyze, and use assessment data, according to the report by the National Council on Teacher Quality (NCTQ). Only 12 percent of programs in the sample assigned class work or homework that involved analyzing data from student assessments. Even more alarming: to be deemed "adequate" in the NCTQ analysis, a program needed to include just one objective or lecture addressing assessment data, even though continued, repeated practice in this area is clearly warranted. "The bar to earn a passing rating in this study was set low," authors Julie Greenberg and Kate Walsh write. "But it also means that our margin of error is so substantial that there should be little doubt that a program designated as inadequate is in fact inadequate."
The report makes a series of recommendations, including federal incentives for teacher preparation programs to include relevant coursework and greater accountability from states.
Annual state report cards on teacher preparation programs, released in March, show that accountability measures vary widely from state to state. States set their own performance targets and their own criteria for identifying low-performing programs, so it's not surprising that fewer than 2 percent of programs receive that designation. Of the 38 programs identified as low-performing or at-risk, 13 are in Texas. But look at the state's performance criteria, which are included in every state report card: Texas judges its programs based, in part, on the achievement data of students taught by program graduates. Linking student achievement to a teacher preparation program's performance is one of several indicators we recommended in a report last year, but the vast majority of states do not use it, relying instead on one-time site visits or a simple review of standards.
Without more targeted and responsive state oversight, teacher preparation programs will continue to operate in this largely unexamined space, churning out graduates as they see fit. But in today's world of data-driven instruction, schools and districts need more than that; they need educators who can use data to diagnose problems and redirect their energies to turn our schools around more effectively, and more quickly.