A critical reflection on global university rankings: Power, perception and prestige

By Prof Linda du Plessis

We live in a world obsessed with comparison. From “top performers” to “best universities,” rankings give the illusion of order in complexity. Yet, when it comes to higher education, these lists often conceal more than they reveal. Not everything fits neatly on a list, least of all something as subjective and context-dependent as the value of a university education. The origin of university rankings is debated: some see them as tools for transparency and accountability, while others, like Joshua Gross, argue that media companies created them for profit rather than to guide students. Over time, they have become both prominent and controversial.

Originally intended to help students compare institutions, global rankings have become a tool for global competition, pushing universities to focus on research and visibility in the chase for “world-class” status. Several ranking schemes, including Times Higher Education (THE), the Academic Ranking of World Universities (ARWU), the Centre for World University Rankings (CWUR) and the QS World University Rankings, use different methodologies and are widely cited by the media to shape perceptions of university quality. Universities, especially in developing countries like South Africa, face pressure to appear in them. While rankings provide standardised comparisons, they are often overly quantitative, favour wealthier institutions, and ignore diverse socio-economic contexts, raising questions about their impact on strategic priorities and missions, particularly in the Global South.

The rankings game: Who really benefits?

What began as a transparency tool has evolved into a high-stakes business. The main ranking bodies are for-profit enterprises that not only use institutional data but also sell consultancy services to help universities improve their scores. The irony is striking: universities must pay to understand and climb a ladder they did not design. Meanwhile, media headlines celebrate the “world’s best universities,” rarely questioning how these rankings are produced or what they truly measure.

In media reports, the focus is not on the metrics but on which universities made it to the “top” or deserve the label “best universities in the world”, as headlines such as “How to find the best universities in the world for you” and “Top 10 Universities In South Africa - Best University Choices” show. “Top university” and “best university” have become loaded terms.

Are rankings truly objective? 

Global rankings claim objectivity through metrics like publications, citations, and reputation scores. Yet their indicators, weightings, and data sources reflect Western academic norms. QS and THE rely on subjective reputation surveys favouring Global North institutions, while ARWU’s focus on Nobel Prizes and top journals disadvantages those in emerging economies.

The chosen metrics are changed by the agencies from time to time, without consultation on the impact for national or regional contexts. Institutions delivering quality education despite limited resources will score lower in the rankings, since the criteria focus, among other things, on spending per student and student-to-staff ratios. In this way the criteria define a narrow and competitive version of “top institutions”. When an institution performs best in the country according to a national professional body (e.g. SAICA), this has no bearing on its ranking.

What rankings do not reflect

Rankings overlook key missions, especially in the Global South, such as access, justice, local knowledge, and cultural diversity. Teaching is measured through weak proxies like staff ratios or employer surveys. Achievements like the NWU’s national teaching award go unrecognised. Rankings ignore historical disadvantage, resource gaps, national development goals, multilingualism, curriculum decolonisation, and efforts to tackle societal challenges such as inequality or public health.

Rankings should be considered carefully, noting both what they reveal and what they omit. Here are a few criticisms of the ranking methodologies. Disciplinary bias is inherent when citations are used as a measure, because citation practices vary widely between disciplines; institutions strong in under-cited fields may appear weaker despite high-quality research. Ranking agencies also show a bias toward institutions with strong research funding, high-profile alumni, and global branding power.

Because rankings depend on research output and citations, quantity receives more attention than quality. A high citation count may reflect popularity, controversy, or large collaborative papers rather than academic rigour. Self-citation has become common. And since citations take time to accumulate, recent but high-quality research is undervalued.

Research funding is often viewed as a marker of quality, but this can be misleading. A recent example is the announcement of significant funding cuts by President Donald Trump, which has resulted in universities across South Africa losing millions in research grants. While less funding negatively impacts the ranking of an affected institution, I want to argue that these institutions continue to provide quality education despite external financial constraints beyond their control.

Can universities choose to participate in a ranking or not?

Ranking methodologies vary, shaping how institutions participate. Webometrics, for instance, ranks universities independently using public web data, indexed research, and online visibility - institutions cannot opt out. Other rankings depend on submitted data; if institutions stop providing it, they may be excluded, though some agencies still use public information to rank them. In South Africa, Rhodes University has taken a public stand against international rankings, arguing that they undermine transformation and contextual relevance, yet it still appears on some lists, illustrating how little control institutions have once they are drawn into the ranking machinery.

In 2022, leading law schools - including Yale and Harvard - began withdrawing from U.S. News & World Report’s rankings, citing concerns that the rankings undermine core values by discouraging public interest work, need-based aid, and socioeconomic diversity. In the same year, three major Chinese universities quit international rankings, opting to forge their own strategies suited to their social context rather than chasing “world-class” status. In 2023 Utrecht University withdrew, following concerns about the rankings’ emphasis on scoring and competition. Recently, one of the oldest universities in Paris, Sorbonne University, known globally as a symbol of education, science and culture, announced that it will stop submitting data to THE, arguing that “the data used to assess each university’s performance is not open or transparent” and “the reproducibility of the results produced cannot be guaranteed”.

Should we participate?

Rankings can offer useful benchmarking, insights, and visibility, but problems arise when they dictate identity, values, and strategy. Institutions should prioritise what is meaningful, not merely measurable. While total withdrawal risks reduced recognition, a balanced, selective engagement without treating rankings as the main measure of success is more sustainable.

Can universities cheat the rankings?

Yes, and several institutions have made headlines when their manipulation was exposed. Because some rankings are metric-driven, they can be gamed. For example, some institutions inflate international staff ratios, selectively report data, form citation cartels, or even hire highly cited researchers as adjuncts with little actual engagement. The QS and THE rankings rely heavily on reputation surveys, which are inherently subjective and vulnerable to lobbying or targeted marketing campaigns.

If not rankings, what then?

Rather than relying on rankings as measures of performance, universities and the Department of Higher Education and Training could develop contextual, purpose-driven evaluations - such as SDG-linked impact rankings or national quality systems. Qualitative assessments and stronger industry and alumni feedback can also provide meaningful insights into the relevance and impact of qualifications.

University rankings merit recognition but not idolisation. Each uses its own metrics, often misaligned with local missions. In contexts like South Africa, universities must prioritise transparency, accountability, and societal impact over ranking success. True excellence should be contextual, measured against an institution’s mission, history, and contribution to social development. When universities fail to show their value, commercial rankings fill the gap. Instead of chasing global prestige, universities should focus on community service, responsible data use, innovation, and ensuring higher education remains a public good.

As such, every institution should critically consider a ranking scheme and ask: Is this the lens through which we want to be seen? If not, the real challenge lies in reshaping the narrative - not to chase rankings, but to reflect authentic impact.

Prof Linda du Plessis