On Wednesday, the Fordham Institute released “Quality Control in K-12 Digital Learning: Three (Imperfect) Approaches,” the first in a series of six papers exploring critical issues in digital learning. Written by Rick Hess, the paper recognizes the importance of efforts to ensure that new digital learning endeavors meet a high bar for quality. It also offers a helpful framework, outlining the pros and cons of three quality control approaches: regulation of inputs, outcome-based accountability, and market-based mechanisms. Perhaps the best part of the paper is its realistic recognition that there is no magic recipe for ensuring quality. We need a blend of strategies and a willingness to adopt better tools as they become available, not only for digital learning, but also for traditional, place-based learning and all of the blended options in between.
Five additional ideas to spur further dialogue:
- Process measures, such as indicators of student engagement, are an additional quality control option that could complement an overall balanced approach. Distinct from both inputs and outcomes, process measures surface information that students and families care about and connect outcomes to the things providers actually do to achieve them.
- In the paper, Hess describes a number of challenges to assessing outcomes, in particular the difficulty of developing assessments that measure learning at both the course and sub-course level. This is a place where we can and should be much more creative. Rather than trying to develop new assessments for every subject, topic, and learning module, we could define indicators of quality problems that then trigger closer scrutiny. For instance, with better data, we could look at patterns of performance across providers in domains with a sequential progression (why did all of provider X’s Algebra I students perform poorly in Algebra II?). Audits are another tool for ensuring the fidelity of outcomes, particularly since student work should be easy to capture in digital form. While some of these tools should be used very sparingly, wide knowledge of their availability would serve as a deterrent to bad behavior. Sure, there will always be bad actors, but smart, cost-effective deterrents matter. (Once investigations into the Atlanta cheating scandal began, the number of suspected erasure cases in Atlanta declined dramatically from 2009 to 2010. Widespread knowledge that all tests would undergo an erasure analysis surely changed behavior.)
- The paper tends to assume that micro-level choice (choice at the course or even the learning-objective level) will be the dominant paradigm. For a variety of reasons, I like having these options available to students and families. But we should also recognize that students and families are highly likely to choose, either actively or passively, to have other entities or institutions aggregate and guide these choices. As an analogy, owners remodeling a house could hire each individual contractor themselves and make dozens of separate decisions about the various aspects of construction. Or they could ask a general contractor to handle the entire job. Most choose a general contractor for expert guidance and oversight. Many students and families want the same. Put another way: Zainab Oni, a bright rising 10th-grade student on my panel at Wednesday’s TASC Digital Learning Forum, expressed the need for adults to help guide and connect students to different learning options and opportunities, both online and place-based. In this new world of many options, we’ll need new entities, and new roles for educators and youth-focused organizations, to help students and families navigate. We’ll also need ways to help traditional schools and other providers excel in the general contractor role: bundling together powerful learning opportunities that span the online world and also integrate applied experiences, such as California’s “Linked Learning” approach.
- I agree on the need for transparency so that information and reputational effects can help create a better market. And by a better market, we need to think of a national market (or at the very least a statewide one). One of the biggest problems with very local, district- and charter-level markets is the huge information asymmetry, on both performance and price, between large national providers and generally unsophisticated buyers. Every deal is a one-off for the buyers, and there is very little learning across them. Moreover, given the number of deals in which the provider sells directly to the district rather than to the end student or family, it becomes almost impossible to assess performance: there is little sense of who is contributing what (it’s not transparent who is providing the service). These and many other issues around market dynamics are underexamined.
- In his discussion of the paper, Tom Vander Ark offers a vision of how embedded assessment can overcome a number of these issues. But the bridge to that future has not yet been built. To build it, we need to think about who owns the large data sets generated by digital learning and how that data can move across both proprietary institutions and geographic areas. We should value this data in three ways: a) data that gets at the quality and integrity of the learning; b) data that informs the student’s record and future learning; and c) data that helps us improve the learning process, not unlike the research value of large medical data sets. The data that providers collect does not have to be dictated from on high. But it can’t be a black box; there has to be transparency and portability. And we should not be naïve: a subset of both providers and institutions will resist this level of transparency, or will claim to embrace it while really just wanting the power to grade themselves.
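The sequential-progression idea above (flagging providers whose Algebra I students go on to struggle in Algebra II) could be sketched roughly as follows. This is a minimal illustration, not a real analysis: the provider names, scores, and the 10-point threshold are all hypothetical, and a flag would only trigger closer review, never serve as a quality verdict on its own.

```python
# Hypothetical sketch: flag providers whose Algebra I students later
# underperform in Algebra II relative to the overall cohort.
# All provider names, scores, and the threshold are made up.
from statistics import mean

# (provider, algebra_ii_score) for students who took Algebra I with that provider
records = [
    ("Provider X", 58), ("Provider X", 61), ("Provider X", 55),
    ("Provider Y", 82), ("Provider Y", 79), ("Provider Y", 85),
    ("Provider Z", 74), ("Provider Z", 77),
]

overall = mean(score for _, score in records)

def flag_underperformers(records, threshold=10):
    """Return providers whose cohort mean trails the overall mean by
    more than `threshold` points -- a trigger for closer scrutiny,
    not a verdict on quality by itself."""
    by_provider = {}
    for provider, score in records:
        by_provider.setdefault(provider, []).append(score)
    return sorted(
        p for p, scores in by_provider.items()
        if overall - mean(scores) > threshold
    )

print(flag_underperformers(records))
```

The same pattern generalizes to any sequential domain: group follow-on outcomes by prior provider, compare each cohort to the population, and surface outliers for audit.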
Thanks to Fordham and Rick Hess for a good kick-off to a much-needed conversation.