Our new study, In Debt and In the Dark: It’s Time for Better Information on Student Loan Defaults, makes several points concerning student loan default rates—that is, the percentage of a college’s federal student loan borrowers who go on to default.
Default rates are already used for accountability purposes. The details get a bit complicated (see page 3 here), but among other things, if a college’s default rate stays above 30 percent for long enough, it will face punishment—namely, its students will no longer be eligible to take out federal student loans.
This 30-percent cutoff applies to every college, which sounds fair enough, but does not take into account the fact that some colleges educate more at-risk students than other colleges. For example, since coming from low-income backgrounds puts students at greater risk of default, a college where 80 percent of students are eligible for Pell grants may have a higher default rate than a college that has few Pell-eligible students—for reasons that have nothing to do with the quality of the colleges. The end result is that a universal cutoff holds colleges with more at-risk students to a higher standard than other colleges.
One solution is to use input-adjusted, or predicted, default rates instead of a universal cutoff. These rates take into account the types of students each college educates and would only punish a college if its actual default rate was higher than its predicted default rate. The thinking behind this proposal can be seen in the charts below. Both show the actual default rates for four-year colleges on the vertical axis. The red dots in the first chart show which colleges are in danger of being punished for having a high default rate under current policy (higher than 30 percent, though these sanctions don’t kick in right away).
While it is certainly reasonable that colleges where nearly one out of three student borrowers defaults would be subject to sanctions, such a cutoff will miss many colleges whose default rates are higher than they should be given the students they enroll.
Rather than applying the same 30-percent cutoff to every college, the second chart instead asks what each college’s predicted default rate is based on the percentage of students receiving Pell grants and the percentage of students who are part time. It then compares this predicted default rate to the college’s actual default rate. Since a college with more Pell students will have a higher predicted default rate, this approach avoids punishing colleges for educating at-risk students.
When a college’s actual default rate is lower than its predicted default rate (the green dots below the line), students at the college are defaulting less often than expected, indicating that the college is doing a better job of preparing students for life after school.
But when a college’s actual default rate is higher than its predicted default rate (the red dots above the line), students are defaulting at a higher-than-expected rate, and these colleges should be subject to sanctions. The severity of the sanctions could depend on how far above the line they are (I would even exempt those colleges that are above the line but within a reasonable confidence interval), with the most severe consequences (loss of federal aid eligibility) reserved for the worst cases.
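The predicted-rate comparison described above can be sketched as a simple linear regression. The data, the two-predictor model, and the two-standard-error tolerance band below are illustrative assumptions for the sketch, not the study’s actual data or model:

```python
import numpy as np

# Hypothetical data: one row per college (all values are percentages, 0-100).
pell_pct      = np.array([80.0, 15.0, 60.0, 40.0, 25.0, 70.0, 50.0, 35.0])
part_time_pct = np.array([30.0, 10.0, 45.0, 20.0, 15.0, 50.0, 25.0, 18.0])
actual_rate   = np.array([28.0,  6.0, 22.0, 12.0,  9.0, 35.0, 14.0, 10.0])

# Fit a linear model: predicted rate = b0 + b1 * pell + b2 * part_time.
X = np.column_stack([np.ones_like(pell_pct), pell_pct, part_time_pct])
coef, *_ = np.linalg.lstsq(X, actual_rate, rcond=None)
predicted = X @ coef

# Residual: how far each college's actual rate sits above (or below) its
# prediction -- the vertical distance from the line in the chart.
residual = actual_rate - predicted

# Flag only colleges whose actual rate exceeds the prediction by more than a
# tolerance band (a crude stand-in for the confidence-interval exemption).
tolerance = 2 * residual.std(ddof=X.shape[1])
flagged = residual > tolerance

for i in range(len(actual_rate)):
    status = "FLAG" if flagged[i] else "ok"
    print(f"college {i}: actual={actual_rate[i]:.0f}, "
          f"predicted={predicted[i]:.1f}, {status}")
```

With an intercept in the model, the residuals sum to zero by construction, so the flagged colleges are exactly those sitting unusually far above the fitted line, regardless of how many at-risk students they enroll.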
In short, when it comes to holding institutions accountable for default rates, a universal cutoff can unfairly punish colleges that educate the most at-risk students. A better approach would take into account the types of students each college educates to create an input-adjusted, or predicted, default rate, which can be compared to the college’s actual default rate for accountability purposes. Such a change would be a welcome improvement over the current policy.