In my California town, it’s hard to go for ice cream without being assaulted by artisanal flavors like Strawberry Balsamic, Pear & Blue Cheese, and Blue Corn Tortilla. (Hey, my kids want ice cream, not a salad!) Back in the day, the town center featured a Baskin-Robbins and 31 normal flavors, but it’s now an inordinately expensive clothing store within a festival mall – no doubt outfitting people who prefer Pear & Blue Cheese ice cream. So whenever we go anywhere, especially out of state, my kids are on the lookout for a Dairy Queen.
In a world of Smoked Mac & Cheese and Arbequina Olive Oil ice cream, Warren Buffett and the now trillion-dollar Berkshire Hathaway – Dairy Queen’s owner since 1998 – have held the line on an American treasure. On our early DQ visits, I’d opt for a classic (my kids said boring) DQ Sandwich while they ordered Blizzards and competed to see who could flip theirs the longest. But after finishing a few uneaten Blizzards – and two sad occasions when an upside-down Blizzard met an early demise on the floor – I realized I should be ordering an insurance Blizzard (solely for risk management purposes, I told myself). The question was which one? What was the best Blizzard? And so this summer, pulling out my phone in the most American of ice cream shops, I happened upon the most American of headlines:
I Ate All 29 Blizzards at Dairy Queen and Ranked the Flavors from Worst to Best
The Business Insider ranking took so long to read that my kids finished their Blizzards first. While the reporter made no attempt to construct a methodology, and while it’s hard to map some comments to rankings (#28 M&M Blizzard is “a simple yet satisfying option,” while for #4 Raspberry Fudge Bliss Blizzard, “once I got a quarter of the way in, all that I tasted was vanilla and chocolate”), any errors of omission or commission were trivial compared to the informative close-up photos of the Choco Brownie Extreme Blizzard, Oreo Brookie Blizzard, Peanut Butter Puppy Chow Blizzard, and 26 more.
It occurred to me that most rankings would be best supplanted by similar images. While rankings are all about establishing a methodology with multiple inputs and weights in order to come across as authoritative – and therefore click- and time-worthy – the selection of said inputs and weights is inherently arbitrary and leads to results about as helpful as photos of the subjects – or in the case of Blizzards, less helpful.
I’ve suspected as much since college, when my brother spent a summer working for U.S. News & World Report, calling admissions offices to harass them for information they hadn’t sent in. Yes, U.S. News college rankings rely on self-reported data, which has proved problematic. In 2017, Austin Peay State, Dakota Wesleyan, Drury, Hampton, Oklahoma City, Saint Louis, Saint Martin’s, and Randolph College were caught submitting false information like graduation rates, average faculty salaries, and data as basic as total enrollment: Drury University reported 1,611 students vs. actual enrollment of 3,571, which inflated spending per student. None admitted fault; some claimed unintentional errors and several blamed U.S. News. But it can’t be a coincidence that all the “errors” helped colleges rather than hurt them. Perhaps the bigger scandal was that U.S. News waited to announce the fraud so that the eight schools were listed as “unranked” for all of fourteen days, then given a clean slate for the next year’s rankings.
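To see why the enrollment “error” pays off, the arithmetic is simple. A minimal sketch – the spending figure is invented for illustration, and only the two enrollment numbers come from the reporting above:

```python
# Hypothetical: only the two enrollment figures below come from the story above.
total_spending = 100_000_000           # invented annual spending, in dollars
reported, actual = 1_611, 3_571        # Drury's reported vs. actual enrollment

print(round(total_spending / reported))  # ~62,073 per "student" as reported
print(round(total_spending / actual))    # ~28,003 per student in reality
```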
Unsurprisingly, the cheating continued. Temple’s business school decided that if it was going to get caught for one thing, it might as well go the Full Monty. So Temple reported higher applicant GPAs, underreported the number of admissions offers, overreported test scores, provided false information about graduate debt, and counted academic coaches as faculty members in its faculty-student ratio. When U.S. News started asking questions, Temple dissembled further, prompting U.S. News to drop the matter. That is, until real journalists began asking questions, leading to the criminal prosecution, conviction, and imprisonment of the dean.
Next up was the Ivy League. A year before Columbia’s annus horribilis, Columbia University math professor Michael Thaddeus published an analysis dissecting Columbia’s dizzying climb to the #2 spot and alleging that its data was “inaccurate, dubious, or highly misleading.” According to Thaddeus, Columbia told U.S. News that 83% of its classes had fewer than twenty students when its own class directory indicated the correct number was below 67%. Columbia tried to ignore him at first, but then began an official review. Subsequently, U.S. News announced its standard slap-on-the-wrist penalty for Columbia, Villanova, and eight other schools found to have submitted false data: removal from the rankings for a short period until the launch of the 2023 rankings. Then, after Columbia declared it wouldn’t be able to provide accurate data in time, U.S. News came up with “competitive set values” to keep Columbia on the list, albeit at #18 – a fall worse than an upside-down Blizzard dropping onto the dirty Dairy Queen floor.
***
As U.S. News releases its 2025 rankings this month, keep an eye on what happens to poor Columbia. But the biggest problem isn’t that colleges are cheating; it’s that the entire exercise measures inputs, not outcomes. In its 2024 methodology (U.S. News keeps changing it in order to mix up the top schools – not unlike a Blizzard), 44% is based on inputs like financial resources per student, faculty salaries, standardized test scores, and the all-important peer assessment (20% of the total). Only 31% measures outcomes such as graduation rates for at-risk students, student loan debt, and income (borrowers earning more than the median high school graduate four years after graduating, weighted at a meager 5%). From a student perspective, the remaining 25% is a blend: overall graduation and retention rates (higher for wealthier, more prepared students) and research productivity (unclear correlation with undergraduate outcomes). Splitting the difference – allocating half of that 25% to each side – yields 56.5% inputs and 43.5% outcomes: a big improvement on U.S. News’ old input-only methodology, but still input-heavy, and the higher education equivalent of Business Insider ranking principally by quality of candy mix-ins rather than doing the hard work of scarfing down 29 Blizzards.
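If you want to check the split yourself, here’s a minimal sketch – the category weights are the 2024 figures cited above, but splitting the 25% blend evenly between the two buckets is my assumption, not U.S. News’:

```python
# U.S. News 2024 weights cited above; the even split of the 25% "blend" is my assumption.
inputs = 44.0     # financial resources, faculty salaries, test scores, peer assessment
outcomes = 31.0   # at-risk graduation rates, student debt, earnings vs. high school grads
blend = 25.0      # overall graduation/retention, research productivity

input_share = inputs + blend / 2
outcome_share = outcomes + blend / 2
print(f"inputs: {input_share}%, outcomes: {outcome_share}%")  # inputs: 56.5%, outcomes: 43.5%
```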
U.S. News’ continued reliance on inputs has made college less affordable. Over decades of input-only rankings, there was little argument against increasing spending per student (and tuition and fees accordingly). Input-only rankings also made it prohibitively expensive to move up: one study found that pushing the University of Rochester from the mid-30s into the top 20 would cost an additional $112M annually in faculty salaries and spending per student. And reliance on inputs has kept out low-income students and exacerbated inequality; in 2011, when Georgia State adopted a new strategic plan to graduate more low-income students, it dropped 30 places. As Brit Kirwan, former chancellor of the University System of Maryland, recognized: “If some foreign power wanted to diminish higher education in America, they would have created the U.S. News and World Report rankings.”
U.S. News is no longer the only ranking in town, and it’s likely that younger siblings like Washington Monthly have pushed Big Brother to begin incorporating outcomes. But even Washington Monthly’s lauded list (new rankings released last week) isn’t based principally on the economic outcomes students care about most – or, in the words of Northern Arizona University President José Luis Cruz Rivera (quoted in Washington Monthly with what was likely unintentional irony), “Are they getting good, high-paying jobs that will provide them with family-sustaining wages? Are they positioned for success in their careers? Are they getting into graduate schools that will allow them to meet their full potential?” Washington Monthly’s income metric (actual earnings vs. predicted earnings) is weighted at less than 7% of the total, and overall social mobility counts for only 1/3; the other 2/3 comprises research and community service metrics – for most students, nice-to-have but not must-have.
What’s going on here? The only available income data comes from the Department of Education’s College Scorecard, which matches individual financial aid recipients with IRS tax records, then maps back to the institution(s) attended. But this data has a few limitations:
More important, median income 4-6 years after graduation isn’t the be-all and end-all of outcome metrics. It could be a product of geography: are former students working in higher-income/cost-of-living metropolitan areas? (Fordham beats University of Michigan.) It’s also possible that colleges enrolling students from wealthier families (e.g., High Point University – see When The College Of Last Resort Is A Resort) do better on this metric than first-gen schools like Cal State San Bernardino regardless of income/cost of living (25% higher in San Berdoo than Winston-Salem, but High Point still tops CSSB on median income).
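For anyone who wants to poke at the numbers themselves, the Scorecard’s public institution-level download makes comparisons like the ones above easy – here’s a rough sketch. The file name and column names (e.g., MD_EARN_WNE_P10 for median earnings of former students roughly ten years after entry) are my assumptions based on the published data dictionary, not anything the rankers themselves use:

```python
import pandas as pd

# Rough sketch: pull median earnings for a few schools from the public College
# Scorecard institution-level file. File and column names are assumptions drawn
# from the Scorecard data dictionary; adjust them to the download you actually have.
df = pd.read_csv(
    "Most-Recent-Cohorts-Institution.csv",
    usecols=["INSTNM", "MD_EARN_WNE_P10"],
)
df["MD_EARN_WNE_P10"] = pd.to_numeric(df["MD_EARN_WNE_P10"], errors="coerce")

schools = ["Fordham University", "University of Michigan-Ann Arbor",
           "High Point University", "California State University-San Bernardino"]
print(df[df["INSTNM"].isin(schools)].sort_values("MD_EARN_WNE_P10", ascending=False))
```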
Any useful attempt to measure, compare, or rank educational institutions must be based on the value added by the school. And that means evaluating outcomes for similarly situated students, as hospital rankings do (i.e., survival rates for patients with the same condition). But hospital rankings benefit from a federal Centers for Medicare & Medicaid Services with a predilection for data that the Department of Education doesn’t share.
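What would a value-added comparison even look like? In the spirit of Washington Monthly’s actual-vs.-predicted earnings measure, something like the sketch below: predict graduate earnings from the characteristics students arrive with, then credit each school only for the earnings its inputs can’t explain. The data and column names are invented for illustration; no ranker uses this exact model:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Toy value-added estimate. All numbers and column names are invented for illustration.
df = pd.DataFrame({
    "school":               ["A", "B", "C", "D", "E", "F"],
    "median_family_income": [145_000, 60_000, 95_000, 40_000, 120_000, 75_000],
    "avg_test_score":       [1400, 1050, 1250, 990, 1350, 1150],
    "median_earnings":      [82_000, 58_000, 67_000, 52_000, 74_000, 66_000],
})

# Predict earnings from what students bring with them...
X, y = df[["median_family_income", "avg_test_score"]], df["median_earnings"]
df["predicted_earnings"] = LinearRegression().fit(X, y).predict(X)

# ...then rank schools by how far actual earnings exceed that prediction.
df["value_added"] = df["median_earnings"] - df["predicted_earnings"]
print(df.sort_values("value_added", ascending=False)[["school", "value_added"]])
```

Controlling for geography and family income the way hospital rankings control for patient condition is harder than this toy suggests, but it’s the direction any serious ranking would have to go.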
Developing value-added metrics would be hard even if colleges tracked placement and employment outcomes. But they don’t, because they’re not forced to: not by the Department of Education, not by accreditors, and not by their own trustees. As a result, beyond self-selected responses to email surveys and blandishments like “85% of our students report being employed or in graduate school nine months after graduation,” American colleges and universities have no earthly idea what happens to their graduates (and even less of an idea what happens to dropouts). So all we’ve got is a flawed median income number, which even well-meaning rankers like Washington Monthly are loath to weight too heavily.
U.S. News never was, and never will be, an authoritative Consumer Reports for higher education. (Consumer Reports hasn’t attempted to rate colleges, for good reason.) The U.S. News rankings are simply the sole surviving vestige of a third-rate news magazine taking advantage of higher education’s data desert, as well as those of us who feel the need to be rankings-justified before investing in either college or frozen dessert.
***
The silver lining is that when U.S. News releases its new rankings, fewer people than ever will be paying attention. This is because education rankings have jumped the shark. In just the last month, in addition to Washington Monthly, I’ve been on the receiving end of the following new rankings:
Ridiculous rankings extend far beyond education. In order to appear authoritative and attract attention, everyone is ranking everything. This includes U.S. presidents, every season of Big Brother, the dumbest dog breeds, the sandworms in Dune, Carvel ice cream cakes (Fudgie the Whale only #4? C’mon!), the 10 worst decisions made by Jurassic Park characters, and the most amped-up ranking of all time: I Tried 21 Energy Drink Brands & Ranked Them From Best To Worst. With rampant rankings pollution, no ranking can be taken too seriously.
It’s high time we recognized college rankings for what they are: clickbait, no better than those slideshows advertised with alluring photos at the bottom of less-than-reputable news sites, or Business Insider’s gluttonous Blizzard bonanza. So don’t let the officious numbers fool you. In the absence of a serious attempt to measure value added, for the sake of reducing both misinformation and college crime, U.S. News should replace its rankings and pseudo-scientific methodology with a lookbook of photos of beautiful college campuses. Or better yet, switch to ranking Blizzards.
By the way, Business Insider’s Blizzard “ranking” turned out to be about as useful as a clickbait slideshow: only four of the 29 ranked Blizzards were on the menu; the rest were seasonal or perhaps fictional (Wonder Woman Cookie Collision Blizzard?). Fortunately, unlike higher education, there was no need to measure value added with Blizzards – only delicious outcomes. I ended up ordering the Peanut Butter Puppy Chow and can confirm it ranks much higher than the salad that passes for ice cream in my town.