- Use Assessment to Drive Support for “Fewer, Clearer, and Higher” Standards: Integrating the assessment conversation can strengthen both educator and political support for the common core. There is widespread agreement on the need to improve assessment, so the connection between improving assessment and “fewer, clearer, higher” should be explicit. If you want to assess more deeply and at a higher level of cognitive challenge, you’ll likely need more extended, performance-like tasks (such as those in NAEP Science 2009 or PISA). These take more time to assess and can be expensive — in other words, you need fewer standards. Clearer is also critical — if the standards cannot be clearly defined within the curriculum, then we end up with generic tests and weaker instruments.
Grade: A — Even skeptics agree that the standards are off to a promising start. The key for assessment is for consortia to align on a common curriculum framework* so that assessment instruments are clear about what they are measuring.
- Open Platforms and Shared Infrastructure: These are essential to drive down costs, enable scaling, and allow new ideas to penetrate from the edges. (See more in my post about standards.)
Grade: B+ — The guidelines offer a really important start. They require consortia to “develop assessment items and produce student data in a manner that is consistent with standards for interoperability” and “make all assessment content…freely available.” As the notice inviting applications (NIA) explains, the goal is to allow non-participating states to use content and for commercial organizations to be able to “further develop, extend, and incorporate the content…and enable technology providers to compete for States’ business.” Applicants must also address these issues in their applications (even though the point value is low).
Yet, as anybody who has been frustrated by supposedly “interoperable” software or computer devices knows, the definition of “interoperable” is open to interpretation. To really make this work, the Department and others, such as CCSSO and vendors’ associations, must ensure that consortia tools meet open standards all along the way (not just five years from now). This includes not just the content or items themselves, but the approach, templates, and documentation behind the assessments. In other words, they must share not just the product, but the recipe. I’m almost certain applicants will tout their adherence to evidence-centered design and universal design for learning. If they are actually following these approaches, then this requirement should not be a burden.
- Don’t Lock in Current Practice: My greatest fear is that we’ll get these shiny new standards and then race to develop RFPs for a national common assessment. Any plan that invests heavily in the current, deeply embedded assessment tools and practices will yield no more than very modest improvements.
- Be Smart About Where You Start: The earlier-stage the idea, the more it needs to be tried in a low-risk, but still consequential, environment. If we hold every new idea to the current lowest-common-denominator technology or to the strictest technical and process constraints — especially high-stakes testing constraints — the ideas will not be very innovative. That said, every pilot needs to take a universal design approach and contemplate how it could work for all students (the open platforms will help here).
Grade: Incomplete — Change will not be easy. I’m worried that consortia with 30+ states will be constrained in a thousand small ways by both the current system’s inertia and the need to find commonality among a large number of states in very different starting places. Tom Vander Ark worries about this too.
- Enable Both Sustaining and Disruptive Improvements: We need a 5-7 year plan to significantly improve student assessment, with investments all along the pipeline from crazy new idea to modest, low-risk improvement. And we need an intentional plan to evaluate and scale these along the way. This implies a series of pilots at various scales, along with incentives to build demand so that successful ideas progress along the pipeline.
Grade: Incomplete — With the emphasis on open standards, you can begin to see some of the administration’s theory of action. I was also happy to see that they are trying to align development of alternate assessments and innovative approaches to meet the needs of both English language learners and students with disabilities. But, while the NIA asks applicants for a theory of action, the overall theory of change from the administration’s standpoint is still incomplete. The administration needs to be clear about its overall theory and assumptions — including how it hopes to use i3 — so that we can better evaluate progress and correct course along the way.
Overall Grade: B — A good start, but it’s really up to the states and the various consortia to make this work.
* Please note the word framework — this does not mean everybody has to teach the exact same thing in the exact same way — it just means that the standards are clear enough to enable assessments to collect and report on evidence of student learning in more than a vague way.
[This is the second of three posts on the administration's $350 million initiative to improve assessments. See what's to like and what's to fear in the "Race to the Test" guidelines and learn why college outcomes data is critical for effective assessments and measurements.]