College metrics of success
There’s a really interesting article over at the Wall Street Journal today, written by Andrea Fuller and entitled The Watchdogs of College Education Rarely Bite. The article discusses the accreditation system for colleges, and how it is more or less dysfunctional. Here’s an example from the article of how they are failing to do a good job:
At Bluefield State College in West Virginia, accreditors from the Higher Learning Commission suggested in 2011 that new electronic signs on campus might be difficult for students to read while driving, according to a copy of the report. The report didn’t mention the college’s graduation rate of 25% or less since 2006.
There is troubling evidence presented in the article that we should definitely pay attention to. It’s quite possible that the accreditors are being paid off, or at least have insufficient reason to come down hard on terribly performing schools. I hope we spend time rethinking the whole system.
However, I think it’s interesting to think about the metrics of success that were used in the article. It’s also an important step towards designing a more “data-driven” accreditation approach.
So, for the most part, the article described things in terms of graduation rates and student loan defaults. Not a bad start if you wanted to measure a school: you want high graduation rates, and you want low student loan default rates. They also did a good thing, namely comparing these numbers to a baseline – in this case, the average for schools that have lost accreditation since 2000. Here’s their plot:
Again, these are important metrics, but the logic of the above chart seems to be that if a school has a lower graduation rate or a higher default rate than these baseline numbers, or both, then it should also lose its accreditation.
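To make that implicit two-metric rule concrete, here’s a minimal sketch of it in Python. The baseline values and school numbers below are invented for illustration – they are not the WSJ’s actual figures.

```python
# Hypothetical two-metric accreditation rule: flag any school whose
# graduation rate falls below the baseline or whose loan default rate
# rises above it. All numbers here are made up for illustration.

BASELINE_GRAD_RATE = 0.35     # assumed baseline, not the article's figure
BASELINE_DEFAULT_RATE = 0.15  # assumed baseline, not the article's figure

schools = {
    "School A": {"grad_rate": 0.25, "default_rate": 0.20},
    "School B": {"grad_rate": 0.60, "default_rate": 0.05},
    "School C": {"grad_rate": 0.30, "default_rate": 0.10},
}

def flag_school(grad_rate, default_rate):
    """Return True if the school trips either threshold."""
    return grad_rate < BASELINE_GRAD_RATE or default_rate > BASELINE_DEFAULT_RATE

flagged = {name: flag_school(**m) for name, m in schools.items()}
print(flagged)  # School A and School C trip the rule; School B does not
```

Note that School C gets flagged on graduation rate alone, even with a low default rate – which is exactly the kind of one-dimensional judgment the rest of this post worries about.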
And by the way, I’m not really disagreeing – there are too many bad schools out there, and this seems like a pretty good way of finding truly terrible outliers. Even so, as a data nerd, I need to make the argument that these statistics are highly misleading, or can be.
Say you are trying to compare two schools, and one has a higher graduation rate than the other. Do you conclude that the one with the higher graduation rate is better? Well, no. It could have a high rate because it pushes people through its classes without really teaching them anything. Or the other school’s rate could be lower because it takes a chance on more students. In other words, a graduation rate can be lower or higher for good or bad reasons, and taken alone it is not a great indicator. Moreover, lots of community colleges are set up as transfer schools: students deliberately start there and then transfer to 4-year colleges, which lowers the overall graduation rate. It’s a good thing that such schools exist, and we wouldn’t want to close them all down.
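The transfer-school point is easy to see with a toy calculation (all numbers invented): a community college where most students succeed can still post a dismal raw graduation rate, because students who transfer out count as non-graduates.

```python
# Toy example (numbers invented): a community college designed as a
# transfer school looks bad on raw graduation rate even when most
# students succeed, because transfers-out count as non-graduates.

students = 1000
graduated_here = 300
transferred_to_4yr = 500   # succeeded, but left before graduating here
dropped_out = 200

raw_grad_rate = graduated_here / students
success_rate = (graduated_here + transferred_to_4yr) / students

print(f"raw graduation rate: {raw_grad_rate:.0%}")      # 30%
print(f"graduated or transferred: {success_rate:.0%}")  # 80%
```

A 30% graduation rate and an 80% success rate describe the same school; which number you report changes the story entirely.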
Similarly, higher default rates on student loans could be an artifact of a school taking chances on students who otherwise have fewer options, or of a bad economy, or even just of the type of education that is offered. Engineering schools tend to graduate students who find jobs quickly and easily, but that doesn’t mean every school should become an engineering school. So I wouldn’t compare the default rates of two colleges and conclude that the college with the low default rate is necessarily better.
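The program-mix point can also be made with a toy calculation (again, all figures invented): two hypothetical schools with identical default rates within each program type can still show very different overall default rates, purely because of what they teach.

```python
# Toy illustration (all numbers invented): both schools have the same
# default rate within each program type, but different program mixes,
# so their overall default rates diverge.

ENGINEERING_DEFAULT = 0.05  # assumed per-program default rate
HUMANITIES_DEFAULT = 0.20   # assumed per-program default rate

def overall_default(engineering_share):
    """Blend the per-program default rates by enrollment mix."""
    humanities_share = 1.0 - engineering_share
    return (ENGINEERING_DEFAULT * engineering_share
            + HUMANITIES_DEFAULT * humanities_share)

tech_school = overall_default(engineering_share=0.80)   # mostly engineering
liberal_arts = overall_default(engineering_share=0.10)  # mostly humanities

print(f"tech school: {tech_school:.3f}")    # 0.080
print(f"liberal arts: {liberal_arts:.3f}")  # 0.185
```

Neither school is doing a better job with any given student; the gap in the headline number is entirely composition.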
What I’m coming to is that deciding whether a given college has become a failure is actually pretty tricky. We can complain – and apparently should complain – about the current system of accreditation, but we can’t claim that it’s as simple as looking at two metrics and choosing a cut-off.
Or rather, we could do something like that, but then it might have weird effects. If we closed all the schools that don’t keep graduation rates high and default rates low, we might see non-engineering students pushed out of the system, we might see schools create partnerships with corporations and become federal aid-funded corporate training centers, or we might just see (even more) widespread fraud in the reporting of these numbers.