The 2015 edition of the Maclean’s University Rankings marks the 20th anniversary of the publication. Although it is subject to derision by the institutions it features, most of these protests have subsided into occasional whimpers. Really, there’s not much the universities can do. As Maclean’s states in the methodology for the study, the data they pull is publicly available, or generated through their own research; they don’t rely on the universities to get it.
The Maclean’s University Rankings drive me crazy — in part because they are so very, very badly done, and more deeply because they play a significant part in generating and legitimizing a toxic culture of pointless competition in our higher education system. Yet the damn things continue to fly off the shelves. Why do we buy in?
The problems with the methodology are too numerous to mention, but I’ll hit a few highlights here:
- The categories used to arrive at the rankings are weighted in a seemingly arbitrary fashion. Why count library spending and acquisitions at 15% and reputational ranking at 20%? Your guess is as good as mine.
- The number of students winning awards may have more to do with the academic calibre of the students attracted to and admitted into programs than the given university’s ability to nurture academic talent. But it’s implied that academic awards are the direct products of good teaching. This is simply not the case.
- The “best professors” ranking is based on a system of awards that is itself deeply flawed. Although amazing professors deserve recognition, aspects of the award system and individual awards are dubious if not outright self-serving. For those who don’t know, professors are in some cases directed to apply for awards to pad their CVs. And some awards are corporate sponsored. You get the idea. But the worst part of this aspect of the rankings is that it fails to recognize how many of our undergraduates are taught by graduate students and contract faculty who can’t even apply for these awards.
- Rankings based on expenditures like library and student services in no way reflect the effectiveness of the programs they are funding.
- The 20% reputational ranking, already a very soft measure by any account, is based on a lame response rate of 9.3% (2015).
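To see how much those seemingly arbitrary weights matter, here is a toy sketch with two invented universities and two invented category scores (nothing below comes from Maclean's actual data): simply swapping the 15% and 20% weights flips the rank order.

```python
# Toy illustration: the same two (hypothetical) universities swap places
# under two equally defensible weighting schemes. All numbers are invented.
scores = {
    "University A": {"library": 90, "reputation": 60},
    "University B": {"library": 60, "reputation": 85},
}

def weighted_ranking(weights):
    # Compute each university's weighted total, then sort best-first.
    totals = {u: sum(weights[cat] * v for cat, v in s.items())
              for u, s in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

print(weighted_ranking({"library": 0.15, "reputation": 0.20}))
# → ['University B', 'University A']
print(weighted_ranking({"library": 0.20, "reputation": 0.15}))
# → ['University A', 'University B']
```

Neither weighting is more defensible than the other, yet each produces a different "winner" — which is exactly the problem with picking the percentages out of thin air.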
In other words, the rankings are composed of junk data. Further, the rankings themselves, once composed, are meaningless because there is no metric showing the difference between, say, ranking #4 versus ranking #5. What is the difference between a 4 and a 5? Does it mean the same as the difference between #10 and #11? Who knows?
I could go on. But picking on the methodology is almost too easy. What I’d really like to focus on – and it is my deeper point – is how these annual rankings foster a damaging competitiveness among our institutions of higher education, the students who attend them, and the parents who fret over their kids’ futures. I’m not about to propose that education solely “for its own sake” is feasible; ambling through a general degree with no worries about its outcomes is a luxury few young adults can afford. Anxiety around the right educational “investment” is high, and there are legitimate reasons for this.
But things get kind of nasty when this anxiety is channeled into a competitive pursuit of status and prestige. And I don’t mean a mob-in-the-streets kind of nasty. Nope, it’s more genteel than that. It’s a kind of unapologetic “I gotta look out for me and my own” that seems increasingly to characterize the way we approach all of our social institutions. Schools at all levels, I argue, are increasingly training camps for this kind of individualism. Tests, rankings, awards, and other sources of academic “score keeping” get us focused on how we and our own kids are doing at the expense of an overall concern for the wellbeing of our wider communities.
In 2004, Dave Marshall argued that an historic virtue of the Canadian post-secondary education system has been its overall equity. Students across the country could attend any university, and be assured of a high quality and universally recognized degree. In the same article, Marshall notes the many ways in which this system has been replaced by one which is increasingly fragmented, complex, and competitive.
This is sad because it reduces education to a series of heats toward the rat race of life. We lose sight of the ways in which we all benefit, as a society, from education. People of all ages grow when they learn, and they don’t always grow in ways we can attach salaries and degrees to. The competition fuelled by rankings overshadows this idea of our collective betterment as a society in favour of a toxic mix of fear and status-desire.
Thus we might ask whether, collectively, we want Maclean’s and its ilk recasting what Marshall identified as a fundamentally equitable system into a highly competitive one that encourages us to bestow pedigrees upon our young adults. Because rankings have everything to do with these institutional pedigrees, and very little to do with the quality of education offered. Most certainly the rankings don’t do a damn thing to help our universities make us better people, or a better society.
See Cramer and Page (2007), who state that the Maclean’s rankings exploit a broad “naivete” on the part of the public about statistics. They state, “Many measurement limitations of rank-based data do not allow for assessment of how ‘good’ or ‘bad’ the top and bottom universities are, nor of how or how much they might differ in relation to others or to each other” (p. 6). In other words, we have no real knowledge of how much better a “1” ranking is than a “2”.
This is Statistics 101, folks. Ordinal rankings tell you the order of things, but nothing about the distances between them. You can think of it this way: say I poll all my friends on their ice-cream flavour preferences. Fifteen say chocolate is their favourite, 13 say they like vanilla best, and five go for strawberry. So we “rank” the responses as 1, 2 and 3. Do you start to see the problems here? Is there really any important difference between chocolate and vanilla? Strawberry, on the other hand, is way back in the pack. Does that mean something different than, say, if 12 people liked strawberry? (This is a pretty dumbed-down example, but honestly it isn’t much dumber than the rankings you see in Maclean’s.)
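The ice-cream poll above can be sketched in a few lines of Python (a toy illustration; the counts are the made-up ones from the example). The point is that two very different polls can collapse into identical rankings.

```python
# Toy illustration of how ranking discards magnitude information.
# Two hypothetical polls of ice-cream preferences (counts are made up).
poll_a = {"chocolate": 15, "vanilla": 13, "strawberry": 5}
poll_b = {"chocolate": 15, "vanilla": 14, "strawberry": 12}

def ranks(poll):
    # Sort flavours by descending vote count and assign ranks 1, 2, 3, ...
    ordered = sorted(poll, key=poll.get, reverse=True)
    return {flavour: i + 1 for i, flavour in enumerate(ordered)}

print(ranks(poll_a))  # → {'chocolate': 1, 'vanilla': 2, 'strawberry': 3}
print(ranks(poll_b))  # → {'chocolate': 1, 'vanilla': 2, 'strawberry': 3}
# Identical rankings, even though in poll_b the three flavours are nearly
# tied while in poll_a strawberry trails badly. Ranks preserve order, not
# distance — which is exactly what Cramer and Page complain about.
```

Swap in any counts you like: as long as the order is the same, the ranking is the same, and all the information about *how far apart* the items are is gone.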
There’s lots more to this story. The rising tuition rates students have experienced over the last 30 years have been justified to a great extent by positioning education as a “private good” more than a “public good” (in economics speak). Putting number values on these benefits is challenging, and also very political, because it feeds into how much of education is funded by taxpayers en masse versus by individual tuition-payers. The “private benefit” is your job – the lifetime earnings you gain by getting a degree. The public benefit is much harder to quantify, though there are accessible defenses and explanations of what the public benefits of education are generally held to be.
I should clarify that it is a bit unfair just to pick on Maclean’s here. The Maclean’s University Rankings are prominent in the Canadian context, but obsessions with institutional rankings are a global phenomenon.