Analyzing the Disciplinary Focus of Universities: Can Rankings Be a One-Size-Fits-All?

Nicolas Robinson-Garcia, Evaristo Jiménez-Contreras
Copyright: © 2017 | Pages: 25
DOI: 10.4018/978-1-5225-0819-9.ch009

Abstract

The phenomenon of rankings is intimately related to government interest in auditing the research outputs of universities. New forms of managerialism have been introduced into the higher education system, leading to growing interest from funding bodies in developing external evaluation tools to allocate funds. Rankings rely heavily on bibliometric indicators, yet bibliometricians have been highly critical of their use. Among other issues, they have pointed out the oversimplified view rankings offer when analyzing the research output of universities, treating institutions as homogeneous and ignoring disciplinary differences. Although many university rankings now include league tables by field, reducing the complex framework of universities' research activity to a single dimension leads to poor judgment and decision making, partly because of the influence disciplinary specialization has on research evaluation. This chapter analyzes, from a methodological perspective, how rankings suppress the disciplinary differences that are key to interpreting them correctly.
Chapter Preview

Introduction

In the last five years, we have observed a rapid transformation in the way research policymakers use university rankings. These instruments have quickly been adopted as a new support tool on which to base decisions. They have reshaped the higher education landscape at a global level and become common elements in the discourse of politicians and university managers (Hazelkorn, 2011). Not only have they become key external factors for attracting talent and funds, but they are also used as support tools alongside bibliometric techniques and other methodologies based on publication and citation data (Narin, 1976). Their heavy reliance on bibliographic data has stirred the research community as a whole, raising serious concerns about the suitability of such data as a means to measure the ‘overall quality’ of universities (Marginson & Wende, 2007). At the same time, university rankings have caught bibliometricians off guard. Although bibliometricians use rankings quite often (e.g., journal rankings), they have traditionally disregarded them for institutional evaluation, focusing instead on more sophisticated techniques and indicators (Moed et al., 1985). University rankings, for their part, have traditionally been based on survey data and did not consider the use of bibliometric indicators until recently. Moreover, despite their success in the United States, they have had little presence in the European research policy scenario (Nedeva, Barker & Osman, 2014).

The launch of the Shanghai Ranking in 2003 not only marked the starting point of the globalization of the higher education landscape, but also introduced bibliometric-based measures to rank universities. Surprisingly, neither the Shanghai Ranking nor the Times Higher Education World University Rankings and QS Top Universities Rankings were produced by bibliometricians, or even by practitioners. From the beginning, this caught the interest of the bibliometric community, which rapidly positioned itself against the use of these tools. Such strong opposition is summarized in the correspondence between Professor van Raan of Leiden University and the creators of the Shanghai Ranking (Liu, Cheng & Liu, 2005; van Raan, 2005a; van Raan, 2005b). There, van Raan (2005a) highlights serious methodological and technical concerns which were later emphasized by others (e.g., Billaut, Bouyssou & Vincke, 2009). Such shortcomings have to do with the careless use these rankings make of bibliometric data, neglecting many of the limitations of bibliometric databases and offering composite indicators of dubious meaning that purport to summarize the global position of universities.
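
To make this criticism concrete, here is a minimal sketch of how a weighted composite score collapses very different institutional profiles into a single number. The indicator names, weights, and scores are hypothetical and chosen purely for illustration; they do not reproduce the methodology of the Shanghai Ranking or any other league table.

```python
# Illustrative sketch: a weighted composite score of the kind global
# rankings publish. Indicator names, weights, and scores are hypothetical.

WEIGHTS = {"publications": 0.4, "citations": 0.4, "awards": 0.2}

def composite_score(profile):
    """Collapse several normalized indicators (0-100) into one number."""
    return sum(WEIGHTS[name] * value for name, value in profile.items())

# Two hypothetical universities with very different research profiles...
university_a = {"publications": 90.0, "citations": 50.0, "awards": 50.0}
university_b = {"publications": 50.0, "citations": 90.0, "awards": 50.0}

# ...end up indistinguishable once collapsed into a single dimension.
print(composite_score(university_a))  # 66.0
print(composite_score(university_b))  # 66.0
```

Whatever weights are chosen, the collapse to one dimension discards exactly the profile information the weights were supposed to balance.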

Rankings have evolved from marketing tools with a great impact on the image of universities and their capacity to attract talent and funds (Bastedo & Bowman, 2010) into research evaluation tools used strategically by research policymakers to shape their political agenda (Pusser & Marginson, 2013). However, their strong focus on research and their reliance on bibliometric data entail important threats and misinterpretation issues, which may:

1. Endanger the institutional diversity of universities, and

2. Misinform policymakers on the performance of universities or national higher education systems (a minimal numerical sketch follows this list).
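
As a minimal illustration of the second point, the sketch below contrasts raw citation averages with a field-normalized view, in the spirit of the normalized indicators long used in bibliometrics. Citation densities differ sharply across disciplines, so a university specialized in a low-citation field can look weak on raw counts even when it performs above its field's world average. All figures are hypothetical.

```python
# Hypothetical sketch: raw citation counts penalize low-citation fields.
# All figures below are invented for illustration only.

# Assumed world-average citations per paper in each field.
world_avg = {"biomedicine": 20.0, "mathematics": 4.0}

# Each (hypothetical) university's dominant field and its average
# citations per paper in that field.
universities = {
    "Univ. X (biomedicine-focused)": ("biomedicine", 18.0),
    "Univ. Y (mathematics-focused)": ("mathematics", 6.0),
}

for name, (field, cites) in universities.items():
    # Field-normalized impact: observed citation rate over the field's
    # world average (values above 1.0 mean above-average performance).
    normalized = cites / world_avg[field]
    print(f"{name}: raw = {cites:.1f}, normalized = {normalized:.2f}")

# Raw averages rank X (18.0) far above Y (6.0); normalization reverses
# the picture: X sits below its field average (0.90), Y above it (1.50).
```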
