What College Rankings Actually Do
American colleges and universities rise and fall with their positions in a handful of rankings. This is true to a significant extent of their finances, reputations, and general welfare: as goes its rank, so goes the university. U.S. News and World Report’s is the most influential. Forbes, The Princeton Review, and others offer alternatives with slightly different methodologies and surprisingly similar results. All purport to demonstrate which colleges and universities have the best academic programs. All of them fail to do so. By measuring many things unrelated to quality of education, they incentivize diverting resources away from educating students.
The established rankings show institutional wealth, both directly and indirectly. They also show an institution’s popularity among prospective students, enrolled students, and faculty at peer institutions. Many also emphasize how well incoming students performed in high school, as shown by grades and standardized test scores. The rankings show little else. But by providing the illusion of demonstrating educational quality, they incentivize many things other than the actual quality of education at any given institution.
U.S. News and World Report’s “America’s Best Colleges”
The U.S. News ranking considers the following “indicators of academic excellence” from their website:
22.5% “Undergraduate academic reputation.” This is simply a subjective assessment by upper-level administrators at peer institutions who fill out a survey. A school will rank high if people think that it should. Alas, perception of quality does not demonstrate quality.
22.5% Retention. While keeping the students it admits may show that a school is providing quality education, it may instead show that the school has succeeded in attracting the kinds of students who are likely to stay or that the school gives the students what they want, not necessarily what they need.
20% “Faculty resources.” A high score demonstrates a large number of small classes (6% of total final score), a small number of large classes (2%), and a low student-faculty ratio (1%). Admittedly, these all set students and faculty up to succeed. But they are the fruit of an institution’s financial resources, and having this recipe for success does not demonstrate that the success has been realized. Also included in this category are the percentage of faculty with terminal degrees (3%), the level of faculty pay (7%), and the percentage of faculty who are full-time (1%). These three factors demonstrate the institution’s commitment to hiring and supporting well-trained faculty. Unfortunately, they count for only a small slice of the final score, and they do not show how faculty perform. This category measures the institutional resources made available for faculty; quantity of pay does not demonstrate quality of service.
12.5% “Student selectivity.” This is defined by the SAT and ACT scores of incoming first-year students (8.125% of total final score), the percentage of incoming first-year students who graduated high school at the top of their class (3.125%), and the smallness of the percentage of applicants accepted (1.25%). This category shows how good an institution’s students were in high school, not how well the institution facilitates their education once they have been admitted. Test scores and high school academic performance can also serve as indirect indicators of students’ familial wealth. High-scoring colleges are those that can attract well-prepared, often well-funded students. No matter how well a school does at educating under-prepared, under-funded students, it will do poorly in this category.
7.5% Six-year “graduation rate performance.” This measures how much better or worse than expected a school did, controlling for test scores and the number of students on federally subsidized, need-based grants. These controls are helpful, since students with greater financial need often struggle more to graduate. Even so, the ability to graduate students does not demonstrate the ability to effectively foster their education.
10% “Financial resources.” This consists of average spending per student, including research and teaching (faculty pay again), and excluding athletics, housing, and medical buildings.
5% “Alumni giving rate.” This is certainly an expression of alumni gratitude and wealth, but it penalizes schools that have fewer graduates able to serve as donors, even if they are nonetheless providing a quality education.
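The methodology above amounts to a weighted sum of category subscores. A minimal sketch of the arithmetic, using the weights from the list; the subscore values and the scoring scale (0–100) are hypothetical, invented here for illustration:

```python
# Weighted composite in the style of the U.S. News methodology described above.
# Weights are copied from the list; the example subscores are invented.
WEIGHTS = {
    "reputation": 0.225,
    "retention": 0.225,
    "faculty_resources": 0.20,
    "selectivity": 0.125,
    "graduation_performance": 0.075,
    "financial_resources": 0.10,
    "alumni_giving": 0.05,
}

def composite_score(subscores: dict[str, float]) -> float:
    """Weighted sum of 0-100 subscores; the weights sum to 1.0."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

# Hypothetical school: strong on wealth-linked metrics, weaker elsewhere.
example = {
    "reputation": 90, "retention": 95, "faculty_resources": 85,
    "selectivity": 80, "graduation_performance": 60,
    "financial_resources": 88, "alumni_giving": 70,
}
print(composite_score(example))
```

Note how the structure itself makes the essay’s point: wealth-linked categories (reputation, faculty resources, selectivity, financial resources, alumni giving) carry roughly 60% of the final score before any measure of learning enters the picture.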
In short, the “best colleges” of the U.S. News ranking are those that are the wealthiest and that have attracted the best-performing high school students. If you are an elite high school senior, this ranking is useful in telling you which schools have the most students like you. However, regardless of what kind of prospective college student you might be, it does nothing to tell you which colleges will do the most to help you learn.
Forbes and the Center for College Affordability and Productivity’s “America’s Top Colleges”
The U.S. News ranking’s chief competitor does little better. In principle, Forbes and the Center for College Affordability and Productivity (CCAP) seek to demonstrate college value in their collaborative ranking. In practice, they primarily account for the wealth of institutions’ graduates. According to its website, the Forbes/CCAP ranking accounts for:
25% “Student satisfaction.” This represents data from the less-than-objective RateMyProfessor.com (10% of total score; minus hotness scores, at least) and a comparison of first- to second-year student retention rates, both actual (12.5%) and predicted (2.5%). Again, keeping students may show that the school is educating them well; or it may show that it gives them what they want, which may or may not be educational, or that it has enrolled students who are inclined to stay.
32.5% “Post-graduate success.” This includes alumni salaries (10%). This, of course, penalizes institutions whose alumni make less money, despite being successful in other ways. Some students succeed financially against the odds, while others succeed financially with the benefit of significant family financial support; the ranking does not distinguish between the two. Neither does the ranking recognize non-financial forms of success. In particular, Washington University in St. Louis has a significant number of alumni who are leaders in the not-for-profit sector; the stellar institution consistently has less than stellar performance on the Forbes ranking for this reason. The category also includes (22.5%) the number of alumni on CCAP’s “America’s Leaders List,” which draws on the various Forbes lists (Power Women, 30 Under 30, CEOs on the Global 2000), and the number of alumni who hold a Nobel Prize, a Pulitzer Prize, a Guggenheim or MacArthur Fellowship, membership in the National Academy of Sciences, an Oscar, an Emmy, a Tony, or a Grammy.
25% Relative absence of student debt. While this is a helpful category, it reveals little unless it also accounts for graduate debt relative to incoming students’ financial need. It should surprise no one that students who have little need for financial aid graduate with little debt.
7.5% Graduation rate. This includes actual (5%) and actual compared to predicted (2.5%).
10% “Academic success.” This is defined by the number of students who win prestigious graduate scholarships and fellowships (7.5%) and the number who earn Ph.D.s (2.5%).
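For reference, the category weights above decompose exactly into their published sub-weights. A quick sketch verifying the arithmetic, with the values copied from the list above (the dictionary keys are labels of my own choosing):

```python
# Forbes/CCAP category weights and their sub-weights, as described above.
# Check that each category's sub-weights add up to the category total,
# and that the category totals add up to 100%.
categories = {
    "student_satisfaction": (25.0, [10.0, 12.5, 2.5]),   # RMP; actual/predicted retention
    "post_graduate_success": (32.5, [10.0, 22.5]),       # salaries; leaders/prize winners
    "student_debt": (25.0, [25.0]),
    "graduation_rate": (7.5, [5.0, 2.5]),                # actual; actual vs. predicted
    "academic_success": (10.0, [7.5, 2.5]),              # scholarships; Ph.D.s
}

for name, (total, parts) in categories.items():
    assert abs(sum(parts) - total) < 1e-9, name
assert abs(sum(total for total, _ in categories.values()) - 100.0) < 1e-9
print("weights consistent")
```

The arithmetic checks out; what the weights reward is the essay’s complaint, not how they sum.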
Because it directly and indirectly measures similar things—incoming student wealth, institutional wealth, alumni wealth, and how pleased the students have been with their experience—the Forbes/CCAP ranking produces results similar to those of U.S. News, both in what it actually reveals and in which schools perform well, with equally scant attention to the actual quality of education.
Other Rankings
Many of the other rankings rely on metrics similar to those of U.S. News and Forbes, but differently proportioned. Money gives affordability, financial career outcomes, and educational quality equal weight. It bases this last category on graduation rates (real and real vs. expected), incoming students’ standardized test scores, student-faculty ratios, the percentage of admitted students who enrolled, and professors’ scores on RateMyProfessor.com. The same ingredients as Forbes, in slightly different proportions. The Fiske Guide also takes an approach defined by value and affordability.
There are some novel rankings. Business Insider surveys professionals who make hiring decisions and asks them which institutions produce graduates who are best prepared to do quality work. Parchment aggregates data to determine which schools are most likely to have the students they admit actually enroll. The Princeton Review and Niche both offer a wide variety of highly specialized rankings geared toward specific categories to help prospective students find the best-fit school. Global Language Monitor sometimes tracks which schools generate the most internet buzz. The Faculty Scholarly Productivity Index monitors the number of publications that a university’s faculty produce; but this is a measure of research quantity, not teaching quality. The Daily Beast ranking accounts for students’ average future earnings, campus diversity, quality of campus nightlife, student graduation rates, affordability, campus quality, student life, athletics, and academics, in this case based on data from the National Center for Education Statistics. Their methodology is otherwise opaque.
There are a few standouts in terms of demonstrating possible indicators of educational effectiveness. The Social Mobility Index compares graduates’ average financial success to incoming students’ financial backgrounds, alongside affordability, graduation rate, and the size of the school’s endowment. This is a significant improvement upon Forbes and Money, for it evaluates not merely economic success but economic progress for students. While this is not a stand-alone measure of academic success, it is a significant result. Washington Monthly’s “Worst Colleges” highlights schools with exorbitant student debt and abysmal graduation rates, among other red flags. What Will They Learn? ranks schools based on the number of courses they require in English composition, literature, foreign language, U.S. government or history, economics, mathematics, and natural or physical science. But having a large number of requirements neither demonstrates quality of education nor endears students to one’s academic programs.
Why This Is a Problem
Educational quality is difficult to evaluate and often impossible to measure in any meaningful, quantifiable way. The rankings gravitate toward quantifiable variables. Unfortunately, what is measurable is not always what is meaningful. Too many rankings pass themselves off as demonstrating what they do not: educational quality. They may show which schools have the most resources and the best students in terms of high school grades and standardized test scores, but that is not the same as providing the best education or being the best overall. In some cases, the rankings present evidence of quality education, as in the case of student-faculty ratios. But these pieces of evidence fall short of proof and are lost amid a sea of other variables.
Nonetheless, college and university leaders find themselves asking how they can lead their respective institutions to rise in the rankings. Whether consciously or not, this is primarily another way of asking how they can attract better performing high school students and how they can become wealthier as institutions.
Rankings are a zero-sum game, in which there can be only one #1 (or two, in a tie). None of the schools need actually be doing well at educating their students, for the rankings emphasize only where schools stand relative to each other.
All colleges and universities should be concerned first and foremost with educating the students that they already have. Most institutions should be more worried about how to reach prospective students whom the top schools do not reach, rather than about how to snag the next batch of top students. Providing a college education is a service to society and a calling. The business culture that has come to define many administrations is antithetical to this.
Professors may receive more pay at more prestigious institutions, but prestige is problematic. Maybe it is deserved. Maybe it is not. The afterglow of yesteryear’s accolades does not demonstrate the quality of education at a given institution today. The schools that attract the top students have, in many cases, attracted the students who are highly driven and likely to educate themselves. Because of or in spite of their professors, these students do well because that is what they do. This should surprise no one. What is surprising, a tragedy and a travesty on a number of levels, is that the prevailing rankings do nothing to reward schools for educating those who struggle to learn.
Schools game the system. Some have asymmetrical teaching loads, with all professors teaching heavier loads in the fall, when average class sizes are calculated, so that the reported class sizes are smaller; never mind that this does not reflect any improvement in the quality of education. Some universities seek to rise in the rankings by “pivoting”: becoming less selective in the short run, in order to collect more tuition, and then, a few years later, becoming more selective by cutting off the lower end of the performance range of incoming freshmen. This is a clever tactic. But it is aimed at rising in the rankings, rather than at educating well the students the university already has.
America’s elite universities are fighting to stay elite. America’s middling universities are fighting to become elite. All are fighting to attract the highest performing and wealthiest high school students. What they should be doing is working to educate the students that they already have. No rankings exist to demonstrate which universities do this, regardless of the level of performance, high, middle, or low, of a school’s incoming students. One recent study sought to evaluate undergraduate learning at a variety of universities and found that most of the students it surveyed had learned little.
Plausible and Implausible Solutions
The liberal arts purport to be fields of study that liberate the mind. They ask the hard questions. They seek answers. When pursued earnestly, they cultivate intellectual curiosity, creativity, and compassion. Such are the things that lives and civilizations are made of. Yet such things can scarcely be evaluated, much less measured. The liberal arts are the soul of the college experience. American higher education is in danger of losing that soul.
College is too expensive. In the push to make it more affordable, there is a danger that metrics such as those employed by the prevailing college rankings will serve as the measure of success. Colleges need to find ways to prioritize the education of their students, to recognize and reward success, and to recognize and effectively respond to failure. And yet, the things that matter most are some of the hardest to see. The best professors sow ideas and ideals, many of which do not take root or blossom in their students for many years.
Standardized tests are not the answer. They have already crushed much joy out of K-12 education in the U.S. They cannot measure the stuff of life. Yes, they can test for some valuable skills; but they can neither reveal nor cultivate virtue.
Intellectual virtue is the end, to become the kinds of people who pursue truth and spur others to do the same. If that is the kind of person you want to be, you will know kindred spirits when you meet them. The pursuit of truth can be uncomfortable. Paths of growth and becoming always are.
The siren song of comfort, whether to pursue it or to provide it, can be too great. It is simpler to manicure lawns like those of a country club or to build dorms like a Mediterranean resort than it is to train minds to hunger for what is true, right, and good. Simpler, it is true, but far more costly in every possible way.