University Rankings in Singapore: A Need for Critical Reflection

Last year, when the Times Higher Education World University Rankings (THE) results were announced, I received an email from a friend who worked at the University of British Columbia, in Canada. I must be delighted, he wrote, that NUS had come out just ahead of his university. Never mind, just wait for next year!

My first reaction was amusement. Most of my colleagues and I are sceptical about such ranking systems, but I think most of us also feel a fleeting satisfaction when our institutions do well. NUS’s and NTU’s rise in the rankings reflects something more than simply a job well done by a team of which we are members. It also potentially represents a reconfiguration of academic power. Singapore’s university intellectuals have looked, and often still look, elsewhere for inspiration: to the United Kingdom, China, or the United States. In this story, Asia’s economic rise may in future result in a fairer distribution of intellectual capital, with Asia resuming its position as a key world centre of learning, and Singapore serving as a key node or regional hub.

Yet, on reflection, the ranking system is disquieting. Can a university, in a single year, decline greatly in quality in comparison to its peers? What is being measured? And does a narrow focus on rankings stop us from thinking about the larger role of our universities in society, and from making informed choices about what Singaporean universities might be?

What exactly are we measuring?

Comparing universities is notoriously difficult. Tertiary institutions throughout the world vary greatly in size, teaching methodologies, research activity, and their role in society. In addition, in most societies there is considerable debate about the purpose of universities. Some may see higher education as an industry like any other; others may view universities as a training ground for the professions. A powerful strand of thinking about universities has emphasised the institutions’ social role in promoting critical inquiry. Universities provide undergraduates with a chance for reflection, for standing back from the social world and asking questions of it, in a process of learning that will enable them to be better engaged in society in the future. The university also provides a place for reflection for intellectuals drawn not only from the ranks of its faculty but from wider society. In the humanities and social sciences, it is a space where new and controversial ideas can be exchanged and debated freely; in the sciences, it is a place where research that is not yet, and may never be, commercially viable can be carried out.

When we attempt to compare the performance of 15-year-old students in a subject such as Mathematics, we can have students take an international standardised test, such as the Organisation for Economic Co-operation and Development’s Programme for International Student Assessment (PISA). Yet such a test is impossible for universities. There is no controlled environment in which we can compare institutions, and we do not even agree on what we are measuring.

The problem with proxies

Faced with this problem, the major international university ranking systems first narrow their criteria. Any attempt to measure or compare the social role of different universities, which would be impossible to quantify, is excluded. The two ranking systems that receive the most attention in Singapore, THE and the Quacquarelli Symonds (QS) World University Rankings, focus on two major factors: research excellence and teaching quality. They also consider, to a lesser extent, international influence, income generation, and the employability of graduates.

Even these narrowed criteria, however, present problems. Research excellence and teaching quality are nebulous concepts that have subjective elements and are situated in a wider social context. They are thus difficult to compare. Take teaching quality, for instance. One undergraduate student may respond best to a university experience in which she attends lectures and tutorials but is largely left alone to explore ideas in a self-motivated fashion; another may respond better to intense, small-group teaching and continuous assessment. Which of the two represents better teaching quality?

Both QS and THE use a university’s staff-to-student ratio as a proxy: a quantifiable indicator that, they argue, permits objective comparison between universities. Two questionable assumptions underlie the use of such a ratio: first, that a high staff-to-student ratio will result in smaller classes, and second, that small classes promote a better learning environment. The second assumption is not unreasonable. Yet more staff and fewer students do not necessarily mean smaller classes. At a major research-intensive university, faculty have much lighter teaching loads and spend far more time on research, so individual classes may be no smaller: a university with a lecturer for every ten students, each teaching a single course, can easily run larger classes than one with a lecturer for every twenty students, each teaching four. Indeed, many faculty may be less motivated to teach well than their peers at an institution where the staff-to-student ratio is lower but classes are smaller, because teaching loads are higher and teaching is at the centre of the institution’s mission.

The limitations in measuring reputation

Aware of the limitations of proxies, both QS and THE have designed reputational surveys, in which large numbers of academics from institutions worldwide are surveyed online and asked their opinion of research excellence, and, in the case of THE, teaching excellence, at different institutions regionally and globally. Both rankings organisations have been careful in recent years to revise their methodologies to take into account possible biases in favour of Anglophone universities and universities located in Europe or North America. Yet reputational surveys, which now constitute over 30% of the total weightage in both the QS and THE rankings, come with their own set of limitations.

THE awards 15% of total ranking points to the results of a survey of academics worldwide regarding teaching quality. Academics are asked first to indicate which region they are most familiar with, and then to name up to fifteen institutions within that region that have the highest teaching quality in their subject area. But regions are not equal in terms of interconnectivity. The vast majority of teaching in North American institutions, with the exception of Quebec and Mexico, is conducted in English. A faculty member who claims familiarity with a subject area in North America is part of a larger academic community of which they can reasonably attempt to gain an overview. Southeast Asia, in contrast, is fragmented into a number of different national higher education systems in which teaching occurs in different languages. I know of no one who knows all the languages of instruction used in ASEAN tertiary education institutions well enough to have the regional view that comes so easily in North America.

Even if an academic knows a university system well, they may find it difficult to judge teaching quality within it. NUS, where I teach, has an elaborate system of evaluating teaching through student feedback, peer review of classes, and the review of teaching dossiers. While the system might be criticised for a bias against eccentric or maverick but stimulating teachers, it is, I think, reasonably effective in highlighting the university’s best teachers and in identifying good practices that help others improve. NTU, SMU, and other universities in Singapore have similar systems. Yet it would be impossible for any respondent, unless they specialise in educational research, to say which of these universities has the best teaching quality. NUS might argue that the focus it has given to teaching over the last five years, as witnessed by the foundation of its Teaching Academy, makes it a superior teaching institution. SMU might reply that its focus on seminar-style classrooms enables better and more student-centred learning. And if we move outside Singapore, the situation becomes even hazier. What is likely is that, in the majority of cases, respondents simply list as “top” institutions those that already figure prominently in the rankings.

The downside of rankings-induced competition

One might argue, of course, that the ranking systems, while clearly flawed, are harmless. The problem, however, is how the rankings are used. Annual rankings are widely publicised in the media, and are now supplemented by releases of regional and subject-area rankings at different times of the year. Rankings are one of the few comparative and seemingly objective measures of a university’s achievements available to the general public, who are important stakeholders in any state university system. Politicians and policymakers, many of whom have little knowledge of the complexities of a tertiary education system, may also view the rankings as evidence of the success or failure of state-funded higher education systems, and pressurise university administrations to improve ranking scores.

Pressure from both the general public and the political leadership thus results in what social scientists characterise as “perverse incentives”: success comes to be seen as improvement in an institution’s performance on proxy indicators, rather than as the pursuit of a wider vision of what a university might be. While there have been a few cases in North America of universities consciously trying to “game” ranking systems by inflating some statistics and suppressing others, most university leaderships are conscientious enough to firmly reject such measures. Yet proxies also cast a long shadow over the institution, producing a culture in which notions of individual and collective excellence are narrowed, and relations with society neglected. Aware of the importance of citation factors and publication in major academic journals, junior faculty not yet assured of tenure may decide to focus entirely on academic writing, neglecting their role as public intellectuals.

Both the QS and THE ranking systems give points for the percentage of international faculty and students at an institution. There is thus an incentive to hire a smaller percentage of local faculty and to increase international student intake, even when these actions do not accord with a public university’s social role. The power of proxies and rankings is particularly strongly felt in institutions in developing countries that are trying to increase their international visibility and to become centres of research and teaching excellence. Universiti Putra Malaysia, for instance, in its School of Graduate Studies Roadmap 2014-2020, explicitly lists increasing “graduate student enrolment in order to be ranked in the QS Top 200 by 2020” as a strategy to improve the quality of its postgraduate programmes. Increasing the ratio of graduate students to undergraduates will certainly improve its performance on a key proxy, but will not in itself improve the quality of education. Indeed, if the number of graduate students is simply increased without any other institutional changes, the quality of their educational experience is likely to decline.

Going beyond rankings in Singapore

Two elements of Singapore’s history over the last fifty years make us uniquely vulnerable to misusing the ranking systems, and to ascribing too great an importance to the performance of our tertiary institutions in them. The first we might call a globalised notion of meritocracy. In this story, Singapore’s economic success relative to other nations, achieved despite an absence of natural resources, is a meritocratic one, arising from both good policy choices and the enterprising spirit of its people. The country’s growth and development relative to others has often been measured in quantitative terms: GDP per capita and World Health Organisation indicators, for instance. Educational achievements have been expressed through international comparative tests such as PISA. Such statistics have provided a means of evaluating Singapore’s standing in the world, and of measuring achievements of which Singaporeans, and indeed all those who have contributed to Singapore’s success, may be justly proud.

The second is the desire to eliminate corruption by privileging quantitative data, seen as objective and resistant to manipulation, over subjective qualitative assessments. This faith in quantitative measurement has grown in tandem with the development of what the accounting scholar Michael Power has termed an “audit society” globally over the past two decades, in which rituals of measurement and verification, often quantitative in nature, have replaced older trust-based networks. In the university, the growth of quantitative measurements of research and teaching productivity has certainly made staff more productive: the faculty member who published nothing after receiving tenure and made a minimal effort in teaching is now very much a thing of the past. Yet such measurements have also eroded an atmosphere of cooperation and trust, and resulted in a decline in affective bonds between colleagues.

Within Singapore in the last few years, however, there has been a growing sense that such statistical measures do not adequately account for social well-being. Increases in per capita GDP have been accompanied by rising social inequality that existing redistributive mechanisms seem powerless to overcome. At the same time, many meritocratic structures have evolved into what Singaporean economist Donald Low has called an “arms race” phenomenon, in which individuals compete against each other for relative position rather than against an absolute standard, thus eroding social trust. Rankings of all kinds are prime examples of such an arms race.

Singapore’s universities have reached an important juncture at which it is vital for them to move beyond a narrow focus on teaching and research excellence. What Singapore needs above all from its universities is for them to be sites of intellectual debate, in which public intellectuals make use of the space of the university to ask difficult and wide-ranging questions. In the last few years a number of prominent public intellectuals have left Singapore universities: their departure has, to a large degree, been precipitated by a university environment whose narrow criteria of excellence cannot accommodate the contributions of public intellectuals. These criteria do not originate with global ranking systems, but they are fostered by undue attention to them.

Ranking systems will not go away, but there is surely a case for all of us to pay less attention to them. Journalists might do more research into what the rankings mean; the public might reflect more deeply on what they want from the universities. Above all, we might remind ourselves that it is precisely those things that rankings cannot measure – universities’ engagement with the inner experiences of individuals, and their larger social role – that are in fact central to university life.

Philip Holden is a professor of English at the National University of Singapore. He has worked in Singapore academia for 20 years. A shorter version of this piece appeared in TODAY on 17 September 2014.

Photo credit: NUS

