On Monday, the American Institutes for Research (AIR) launched a new website, CollegeMeasures.org, which combines several existing data sets to calculate seven different outcome measures for over 1,500 colleges. Easily the most controversial of these data points is a new calculation that estimates the cost of students who drop out after their first year–what the site calls attrition.
The overall figures aren't pretty. According to an accompanying paper (PDF) written by Mark Schneider, an AIR vice president and the former commissioner of the National Center for Education Statistics, first-year attrition cost over $9 billion from 2003 through 2008, or about $1.8 billion for each of the five academic years covered. Those staggering figures and the corresponding state-specific totals have touched off newspaper articles and other media coverage across the country.
The attrition cost estimates have generated significant pushback. Inside Higher Ed had a nice summary of the problems with the estimates from Cliff Adelman, who works at the Institute for Higher Education Policy and used to be a researcher at the U.S. Department of Education:
According to Adelman, the Education Department’s Integrated Postsecondary Education Data System [IPEDS], which Schneider used to determine the dropout rates, is incapable of providing accurate information to produce those numbers, because it covers only a narrow group of students and tracks them only to the extent they remain enrolled at their original institutions.
“You get only first-time, full-time students who enrolled in the fall semester (not winter or spring) who showed up at the same school (not somewhere else) as full-time students (not part-time) in the next fall semester (not winter or spring),” Adelman said via e-mail. “This data story, already distant from the realities of student attendance patterns by three galaxies, is further compounded by a state level analysis which pretends that students never, never cross state lines to attend a second school.”
The limited cohort of students tracked in IPEDS is a critique that pops up constantly in discussions of graduation and retention rates. Counting only full-time students enrolling for the first time captures just a small share of the enrollment at less selective four-year universities, community colleges, and for-profit institutions. In addition, the federal graduation rate treats students who transfer to another institution as dropouts and gives a school no credit for students who transfer in and go on to graduate. Add all that up and a convincing case can be made that we're relying on an incomplete measure–one that judges colleges on a small subset of their students.
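To make that narrowness concrete, here is a minimal sketch of the kind of filter the federal rate applies. The field names and sample records are invented for illustration–they are not the actual IPEDS schema or its real reporting logic:

```python
# Hypothetical student records–the field names are illustrative,
# not the actual IPEDS reporting schema.
students = [
    {"id": "A", "first_time": True,  "full_time": True,  "entry": "fall",   "outcome": "graduated here"},
    {"id": "B", "first_time": True,  "full_time": False, "entry": "fall",   "outcome": "graduated here"},
    {"id": "C", "first_time": False, "full_time": True,  "entry": "fall",   "outcome": "graduated here"},
    {"id": "D", "first_time": True,  "full_time": True,  "entry": "spring", "outcome": "graduated here"},
    {"id": "E", "first_time": True,  "full_time": True,  "entry": "fall",   "outcome": "transferred out"},
]

# The federal cohort: first-time, full-time, fall entrants only.
cohort = [s for s in students
          if s["first_time"] and s["full_time"] and s["entry"] == "fall"]

# Only a degree from the original institution counts; a student who
# transfers and finishes elsewhere is treated exactly like a dropout.
grads = [s for s in cohort if s["outcome"] == "graduated here"]

print(f"federal-style rate: {len(grads)} of {len(cohort)}")
# Prints "federal-style rate: 1 of 2": B, C, and D never enter the
# calculation at all, and E counts against the school.
```

Three of the five students in this toy example are invisible to the calculation, and the one who transferred drags the rate down–exactly the distortions the critics describe.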
But if the federal graduation rate is so flawed/limited/satanic, then why isn't there a substantive movement underway to improve it?* The U.S. Congress reauthorized the Higher Education Act just over two years ago–a perfect opportunity to change the definition–and the law passed with no action on that front. In fact, some of the biggest higher ed lobbying associations even came out against changes that would have improved the graduation rate calculation.
And that's ultimately why the constant arguments about the federal graduation rate, while accurate on the merits, aren't made in good faith. If the problem is so bad, lobby for a change that produces graduation rates for part-time or transfer-in students.
This pattern of complaints without solutions provides perfect cover for colleges to explain away bad graduation rates. Chicago State University, for example, objected to the article I co-authored that labeled it a dropout factory, citing all the standard critiques about excluding part-time students and failing to account for transfers. But nowhere did its response present an alternative graduation rate that addressed those issues.
The reluctance to provide better data isn't surprising. It's safer to hide behind the flawed figures than to present a more complete picture that includes the part-time and older students who may well graduate at an even worse rate. After all, if you can't graduate a homegrown full-time student, what are the odds you're doing any better with part-time students, who are more likely to be balancing work and family responsibilities on top of academics?
All that said, it's worth noting that a couple hundred schools have addressed some of the federal graduation rate's flaws by reporting information to the Voluntary System of Accountability in a way that includes students who transferred out and either enrolled elsewhere or earned a degree at another college. Why not expand the membership and then have that data reported to IPEDS?
While this small group of colleges has found a way to improve its graduation rate reporting, many other colleges and trade associations have actively fought attempts to make the federal calculation better. The best way to track outcomes for groups like transfer students is to establish a student unit record system–a national database of individual-level information. But that database is arguably the most hated policy idea among the trade associations.** It's so despised that colleges actually got a provision inserted into the reauthorized Higher Education Act banning the creation of such a system. (Even more ridiculous, early versions of the student loan reconciliation bill included provisions saying that money for statewide college access plans could not be used to create a student unit record system.)
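For illustration, here is a rough sketch of why individual-level records change the picture: once enrollment spells at different institutions can be linked by a shared student identifier, a student who "disappears" from one campus can be recognized as a transfer rather than a dropout. The schema below is invented for this example, not a real or proposed federal design:

```python
from collections import defaultdict

# Invented enrollment records keyed by a shared student ID; a real unit
# record system's schema would differ.
records = [
    {"student_id": 1, "institution": "State U",    "earned_degree": False},
    {"student_id": 1, "institution": "Flagship U", "earned_degree": True},
    {"student_id": 2, "institution": "State U",    "earned_degree": False},
]

# Institution-level data alone would call both students dropouts from
# State U. Linking records on student_id tells a different story.
histories = defaultdict(list)
for r in records:
    histories[r["student_id"]].append(r)

for sid, spells in histories.items():
    if any(s["earned_degree"] for s in spells):
        print(f"student {sid}: completed a degree somewhere in the system")
    elif len(spells) > 1:
        print(f"student {sid}: transferred; outcome still pending")
    else:
        print(f"student {sid}: no degree, enrolled at a single institution")
```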
The failed push for a unit record system and for graduation rate reform leaves us with the worst of both worlds. The federal graduation rate is less than ideal and is constantly cited by colleges as indicative of nothing. But colleges have also blocked the avenues for improving that calculation and have shown no interest in self-reporting a better measure in significant numbers on their own. And so the higher education establishment has a free hand to tear down the rate with no expectation that it will replace the calculation with anything better.
To be fair, there are some legitimate objections to a more nuanced rate–it would certainly take more work, tracking the students would be a bit more difficult, and so on. But if the rate is really as bad as its critics say, surely it would be worth the headache to fix? If not, then it's time to stop complaining and to stop using the federal limitations as a convenient scapegoat for low rates.
*In fairness to Adelman, I’ll note that he has argued for changes in the past.
**The American Association of State Colleges and Universities and a couple of other trade associations did sign on to a unit record system, so not all the higher ed lobbying groups fought this proposal.