The response to the Department of Education’s release of early results from the School Improvement Grant program has run the gamut from “the data are terrible and we’ve wasted billions of dollars” to “the data tell us nothing,” with a lot of hedging in between. The data suggest that 65 percent of SIG schools had higher proficiency rates in math and 64 percent had higher rates in reading in their first year of implementation. Here’s how various voices responded to the results:
- EdWeek turned in a balanced take on the numbers, though the story also quotes Senator Harkin saying the results showed a need to “explore other possible models” in addition to the four options already available.
- Andy Smarick* and Bryan Hassel debate whether the results are disappointing and predictable or not that bad and too early to tell.
- Matthew Di Carlo at the ShankerBlog reminds us that changes in proficiency rates aren’t a very good metric in the first place. The proficiency cut points are arbitrary, scores tend to jump around year-to-year, and we don’t know if a change is due to improvement or just a change in the students enrolled in the school. Anne Hyslop has similar arguments and brings up good questions about the data.
- The Department’s press release does a good job of explaining the nuance and reminding us that a more rigorous evaluation of the SIG program is underway. But its use of the term “dramatic change” implies dramatic improvement. If the results turn out negative over the long term, that dramatic change just becomes a lot of churn.
- The Washington Post wins the prize for missing the biggest opportunity. They say that “locally, schools receiving the funds include four in Prince George’s County, 13 public schools and one public charter school in the District and one high school in Alexandria,” but they don’t bother to look up the results and tell us how those local schools are faring. That information is all public and available to anyone who wants to find which schools are on the lists and how they’re doing.
From my perspective, the Hassel change-is-hard, wait-and-see approach seems like the right one. While Di Carlo and Hyslop’s warnings are important to keep in mind, the early data are nonetheless promising, although far from actionable. The results of the full Institute of Education Sciences study likely won’t be available for several years, so we’re going to have to deal with early sneak peeks like this for the time being. We’d be better off as a field if we could take them for what they are, continue to monitor implementation, find and tell local stories, and wait for more rigorous evaluations before drawing any sweeping conclusions.
*Andy is my colleague and a Partner at Bellwether Education Partners.