One of the best things states can do to improve their measures of high school effectiveness, and, importantly, to provide valuable information to educators, has nothing to do with a test. Feedback reports for high schools, containing information on students’ actual college and career outcomes, can reduce the enormous burden on assessment systems to accurately measure all of the qualities that go into “college readiness.” That’s a win-win: we get much better information for high schools and, at the same time, set much more realistic and attainable goals for high school assessments.
Why does this matter? Think about the variety of things that research shows are necessary for “college readiness.” Then consider this partial list of what the administration requires from the assessment systems funded in the “Race to the Test” competition:
- Produce student achievement data and student growth data (both as defined in this notice) that can be used to determine whether individual students are college- and career-ready (as defined in this notice) or on track to being college- and career-ready (as defined in this notice);
- Assess all students, including English learners (as defined in this notice) and students with disabilities (as defined in this notice); and
- Produce data, including student achievement data and student growth data, that can be used to inform—
  - Determinations of school effectiveness for purposes of accountability under Title I of the ESEA;
  - Determinations of individual principal and teacher effectiveness for purposes of evaluation;
  - Determinations of principal and teacher professional development and support needs; and
  - Teaching, learning, and program improvement.
College- and Career-Ready is the new mantra. But measuring progress toward that goal is a challenge, and most accountability systems fail to capture how well high schools are actually preparing students for future success. My colleague Chad Aldeman’s research shows that many state data systems are already collecting key indicators, such as remediation rates, first-year college credit accumulation rates, wage and employment information, and more. Using data from several states, he demonstrates that AYP is an unreliable indicator of actual success in college, and he constructs a much more stable indicator by combining college and career outcomes data with assessment data.
We should not develop assessment tools in isolation from college outcome measures. Thinking beyond the test will yield better solutions and, at the same time, improve our tests. Just as an effective mass transit system is an important way to reduce road congestion in metro areas, college outcomes data are a critical complement to our assessment systems.
[This is the last of three posts on the administration's $350 million initiative to improve assessments. See earlier posts with my in-progress grades for the administration's initiative and a look at what's to like and what's to fear in the "Race to the Test" guidelines.]