At the Post’s College Inc. blog, Dan de Vise writes about the Washington Monthly ranking of America’s best community colleges, which I created. It’s always nice to see them mentioned, but this is incorrect:
Once you get past the top 10, though, many of the schools on the Washington Monthly list perform barely above the national average on most of the survey metrics. Those schools appear to have been ranked high mostly by dint of high graduation rates well above the national average, which is put at anywhere from about 25 to 40 percent.
As the ranking methodology clearly notes, only 15 percent of each college’s ranking score is based on graduation rates. So the top-ranked schools were not ranked high “mostly” because of that measure.

It’s also wrong to say that many of the top colleges perform “barely” above the national average. CCSSE benchmarks are placed on a 0-to-100 scale, but they’re not like a test where you can get between 0 and 100 percent correct. They represent the statistical norming of students’ responses to scores of questions, some of which have possible values like “Sometimes” and “Always” while others ask about the number of papers assigned and books read. Among the hundreds of community colleges that have CCSSE benchmarks, there are no scores of 10 or 20 and no scores of 80 or 90. The distribution is concentrated in the 40 to 60 range. So the judgment of how far above the norm a given benchmark lies (whether “barely” so or otherwise) is a function of how far it deviates from the mean within that population, i.e., how many standard deviations, or some other appropriate measure of dispersion. This is Statistics 101, which (not to pick on Dan, because he’s one of the Post’s better education reporters) a lot of journalists have never taken, but should.
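To make the point concrete, here’s a quick sketch of the arithmetic. The mean and standard deviation below are hypothetical, chosen only to illustrate a distribution concentrated in the 40-to-60 range; they are not actual CCSSE figures.

```python
# Whether a benchmark score is "barely" above average depends on how many
# standard deviations it lies from the mean, not on raw points out of 100.

def z_score(score, mean, sd):
    """Number of standard deviations a score lies from the mean."""
    return (score - mean) / sd

# Hypothetical population values, for illustration only (not CCSSE data):
# college-level benchmark scores clustered in the 40-to-60 range.
population_mean = 50.0
population_sd = 4.0

for score in (52.0, 56.0, 60.0):
    z = z_score(score, population_mean, population_sd)
    print(f"Benchmark {score}: {z:+.1f} standard deviations above the mean")
```

On these assumed numbers, a 60 is two and a half standard deviations above the norm, which is anything but “barely” above average, even though it looks like a middling score on a 0-to-100 scale.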