Introduction

This chapter focuses on the research function of higher education and on how current policy discourses and initiatives may be reshaping research processes and outcomes within higher education institutions. The specific focus is upon the UK, where important changes are being introduced which will affect both the funding and the conduct of research. The aim of the chapter is to discuss whether research evaluation systems lead to the transformation of processes of research production within higher education institutions or whether they are more likely to reinforce existing practices and traditions. The research function of universities, along with the rest of university activities, has become subject to the imperative of the ‘new managerialism’ and of neo-liberal ideologies supporting growing competitiveness and consumerism. Academics are increasingly accountable for what they do. Targets are set and outputs measured against published criteria. In research, this can lead to a distinction between ‘research active’ and ‘research inactive’ staff. But it can also shape the knowledge produced – its nature and focus, the audience to which it is addressed, and its quantity and form of dissemination. This chapter represents an attempt to consider the potential implications of research evaluation systems for research processes within universities. We will do so with particular reference to the new Research Excellence Framework (REF) being introduced in the UK as a basis for the rating and funding of research undertaken by UK academics and universities.

In the first part of the chapter, we will examine the construction of the REF, focusing in particular on the role assigned to bibliometrics and impact. The REF draws on several influences, such as the peer review system, output measures and the wider impact of research. It is part of a discourse which emphasizes rankings of research and, as a result, may contribute to a growing competitiveness between researchers and between institutions. Our empirical work will be based on a critical analysis of the REF. Although the REF is still in the process of implementation, a pilot exercise has been undertaken and we will draw upon the progress reports made so far by the Higher Education Funding Council for England (HEFCE), as well as on reactions from several sources, such as the media, the websites of unions and higher education institutions, and interviews with individual academics conducted in 2008 and 2009 in the context of a research project on the transformation of modes of knowledge production in England.

In the second part of the chapter, we will adopt a theoretical perspective which draws upon theorizing on the transformation of the modes of knowledge production, in particular the Mode-1 and Mode-2 typologies (Gibbons et al. 1994; Nowotny et al. 2004), the emergence of new science regimes of ‘reliable’ and ‘post-academic’ science (Ziman 1994) and the issue of epistemic cultures (Knorr-Cetina 1999). We will discuss how the discourses promoted by evaluation systems such as the REF, with their growing focus on ‘assessment’, ‘quality’ and ‘impact’, are transforming (or not) research production in higher education institutions, and whether the REF can be seen as a truly ‘new’ discourse or rather as a reinforcement of certain existing ones. We will discuss the interests which such discourses represent and whether these influences can constitute a coherent framework for research or whether they instead constitute a field of tensions that will create new contradictions concerning the kinds of research privileged by higher education institutions. From that perspective, it will be relevant to note and understand the effects of disciplinary influences and how far some disciplines are being ‘excluded’ or ‘disadvantaged’ (or not) by the criteria introduced by the REF. The implications for more applied, interdisciplinary research will also be explored and the effects of differences associated with the institutional settings for research will be considered.

We conclude by considering the kinds of research likely to be privileged by the REF and the implications for future research and knowledge production within higher education systems subject to such evaluations.

Research Assessment: The Case of the UK Research Excellence Framework

An Evolving Policy for Assessing Research

Research assessment in the UK has been associated, from 1986 until 2008, with a regular Research Assessment Exercise (RAE). The RAE was undertaken on behalf of the four UK higher education funding councils, the Higher Education Funding Council for England (HEFCE), the Scottish Further and Higher Education Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning in Northern Ireland (DELNI). Although we will focus only on the actions of HEFCE, our discussion can be applied to the policies of all four councils.

According to HEFCE, the RAE consisted of an explicit and formalized assessment of the quality of research and was the principal means by which institutions assured themselves of the quality of the research undertaken in the higher education sector. Its results were also the basis of the research funding decisions made by HEFCE, as well as carrying significant reputational weight in the steeply stratified UK higher education system. The 2008 RAE, like previous RAEs, used the main principles of peer assessment. The RAE-related budget was relatively minor compared with HEFCE’s teaching budget and with research funding from other sources, and it is fair to say that the RAE was more about reputation than money in the eyes of most academics and institutions. It should also be mentioned that much of the public funding for research in the UK comes via subject-based research councils which operate independently of the above-mentioned higher education funding councils. The research councils mainly fund projects and studentships, whereas universities have substantial discretion about how to use their RAE funding. However, one similarity between the two funding streams is the growing emphasis on research impact and how it can best be achieved. This reflects national economic strategies and the role envisaged by government for universities in achieving them. As such, it represents an important argument in making the case for substantial public funding of universities. As mentioned above, the RAE has provided a measure of research reputation in UK higher education which has been at least as important as the funding it brings. It would not be an exaggeration to say that obtaining a high RAE score has been the major objective of research strategies in many UK universities.

There have been several criticisms of the operation of the RAE, reflecting the importance attached to it within the academic world. The criticisms take a number of forms. The lack of attention to diversity (of institutions, disciplines and what constitutes research) seems to be one of the reasons for harsh criticism of the RAE (Sharp and Coleman 2005). Elton (2006) identifies as a long-term consequence the competitive, adversarial and punitive spirit among academics evoked by the RAE. In ‘playing’ the RAE game, many institutions have been very selective about the academics they ‘entered’ for the RAE, with career, identity and reputational implications for those academics who were not ‘entered’. It may well be that these effects of the RAE were not the intentions of the policy bodies who introduced and managed it, but they do reflect the ways in which policies tend to be ‘recontextualized’ when they hit different organizational levels and contexts. Outcomes are rarely the same as intentions.

The Research Excellence Framework (REF) is replacing the RAE. The REF was proposed by HEFCE as the new system for assessing the quality of research in UK higher education institutions. The first REF exercise is due to be completed (with the publication of outcomes) in December 2014. While the full details of the new methodology are still not clear and are likely to differ to a degree between subject areas, the main changes from the RAE appear to be the greater use of output metrics, a lessening of the administrative load of the exercise and greater attention to the ‘impact’ of research.

When we conducted interviews with some English higher education key actors in 2008, one interviewee argued that the replacement of the RAE by the REF was intended to reduce the burden on universities and that the ‘metric’ discourse that initially characterized the REF would evolve into a hybrid of metrics and peer review:

The changing name [from Research Assessment Exercise to Research Excellence Framework] is not significant. (…) So the changing of the name is to create a sort of waterline and a break from the old system. The changing purpose is to make it, to some extent, to reduce the burden particularly the areas where there is enough numeric data available, probably reduce the amount of effort and work which goes into it. But that is the underlying logic… (…) I think what you will find is that by the time it is launched in 2011 or 2012, so we hope, you will have a mix of peer review as well as metrics and that will be true in all subjects. (Extract from an interview with an English higher education key actor)

The REF, according to HEFCE, will focus on three elements, which together reflect the key characteristics of research excellence:

(a) Outputs – the primary focus of the REF will be to identify excellent research of all kinds. This will be assessed through a process of expert review, informed by citation information in subjects where robust data are available (for example, in medicine and science).

(b) Impact – significant additional recognition will be given where researchers build on excellent research to deliver demonstrable benefits to the economy, society, public policy, culture and quality of life. Impacts will be assessed through a case-study approach that has been tested in a pilot exercise.

(c) Environment – the REF will take into account the quality of the institutional research environment in supporting a continual flow of excellent research and its effective dissemination and application.

As we have already observed, policies become recontextualized when they hit different organizational levels. Thus, the aims and dimensions of the REF from the perspectives of national policy become recontextualized into concerns about reputational rankings, income and the amount of internal institutional administrative load generated to achieve optimum outcomes. The analysis of these different aspects will produce different answers in different types of institutions as well as within different parts of the same institution. The ‘game’ is likely to be played according to different rules in different places, reflecting different agendas, strengths and objectives.

According to the University and College Union (UCU), the selection of particular academic staff for inclusion and non-inclusion in the research assessment represents a continuity between the RAE and the REF: “We continue to have major reservations about a research assessment process based on universities selecting particular academic staff for inclusion or non-inclusion. The 2008 RAE resulted in a significant amount of unfair and punitive treatment of academic staff and we fear that similar practices will occur in the 2013 REF” (UCU 2009).

The REF, according to the UCU, risks basing research assessment on the changing policies of governments or the perceived needs of business rather than on a peer review system: “The biggest problem with the HEFCE consultation document is the proposal to base 25 % of the REF on an assessment of the ‘economic and social impact’ of research. (…) Academics are concerned that the proposals will: undermine support for basic research across all disciplines as well as disproportionately disadvantaging research in the arts and humanities, lead to the further commercialization, and therefore narrowing, of the research agenda” (UCU 2009).

While these comments are understandable coming from the academics’ representative body, they of course fail to reflect the arguments made within the political sphere for the funding of university research in the first place – in competition with the claims of health, transport, defence and the like. Outside the academic community, a case based on impact and public benefit is likely to carry the greatest weight.

Whereas the RAE emphasized the peer review system per se, the REF combines peer review with a focus on metrics. In that sense, bibliometrics (or citation information) and impact receive greater emphasis in the REF. While still important, peer review is complemented by methods which may be felt to be more objective, less demanding of time and resources, and more attentive to the wider public benefits of research.

A Growing Emphasis on Research Impact

While the focus on impact is understandable from a public policy viewpoint, it meets mixed reactions when it reaches the academic community. An academic from an English university, interviewed in 2009, emphasized the fuzziness of the concept:

The government turn out saying ‘well we are happy that people study and research things, we want impact, we want to have impact’, ok? And everybody says ‘what do you mean by impact?’ And of course that the game is we are trying to find out what impact means… Clearly there is gradually more pressure to work along particular lines. (Extract from an interview with an academic from an English university)

According to the official HEFCE website, the REF aims to identify and reward the impact that excellent research has had on society and the economy. The pilot exercise that ran during 2010 aimed to test the feasibility of assessing research impact.

The report to the UK higher education funding bodies by the chairs of the impact pilot panels sets out findings and recommendations that are relevant here (we will exclude those referring exclusively to the case study methodology). Overall, according to the report, the pilot showed that it is possible to assess impacts arising from research in the disciplines covered – Clinical Medicine, Physics, Earth Systems and Environmental Sciences, Social Work and Social Policy, and English Language and Literature.

One key finding is related to the variety of impacts: “higher education institutions in the pilot provided evidence of a wide variety of impacts arising from their research. This provided a unique collection of evidence that made explicit the social and economic benefits of research from each of these disciplines” (HEFCE 2010: 2). Another key finding concerned the methodology of the exercise: the report argued that further development was needed to achieve greater robustness. A further key finding concerns disciplinary differences: “Although the pilot covered five disciplines with very different kinds of impacts, the broad findings in terms of the feasibility and method of assessing impact were similar. A common broad approach for all disciplines based on case studies should be possible, with generic criteria and the same weighting for impact. Within this common approach REF panels should develop guidance as appropriate to the nature of impacts arising from research in their discipline” (HEFCE 2010: 3).

A final key finding regards the weight that will be conferred to impact by the REF: “A robust assessment of impact should carry a weighting in the REF sufficient to ensure it is taken seriously by all stakeholders. A lot has been learned from the pilot exercise about how to assess impact robustly, but the assessment in the first full REF will still be developmental, and it will be important to carry the confidence of the academic community. In light of this the weighting of impact in the REF should be considered carefully. One option would be for impact to have a lower weighting than 25 % for the 2014 REF, with a clear intention to increase this for future exercises as the method beds down” (HEFCE 2010: 3).

Additionally, the report made a number of recommendations regarding three themes: the definition of research impact (a broad definition, but excluding impact purely within academia); the evidence of impact provided by institutions (the construction of a narrative combining case studies and indicators); and the assessment of impact by the REF panels (disciplinary specifics and robustness). The preference for a case study approach to the assessment of impact is an indication of the perceived lack of credible hard indicators of impact and of the reliance on a mainly narrative style of evidence. Thus, a narrative and case study approach to the difficult question of impact assessment appears to be the compromise solution most likely to gain acceptance among the different interest groups. Whether this removes or accentuates the concerns about ‘fuzziness’ expressed above is a different matter.

In March 2011 the funding bodies announced their decisions on the weighting and assessment of impact within the REF. They decided, in line with the key findings mentioned above, that: “a) In the REF there will be an explicit element to assess the ‘impact’ arising from excellent research, alongside the ‘outputs’ and ‘environment’ elements. b) The assessment of impact will be based on expert review of case studies submitted by higher education institutions. (…) c) A weighting of 25 per cent for impact would give recognition to the economic and social benefits of excellent research. However, given that the impact assessment in the 2014 REF will still be developmental, the weighting of impact in the first exercise will be reduced to 20 per cent, with the intention of increasing in subsequent exercises. d) The assessment of research outputs will account for 65 per cent, and environment will account for 15 per cent, of the overall assessment outcomes in the 2014 REF. These weightings will apply to all units of assessment” (HEFCE 2011).
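Read arithmetically, these decisions amount to a simple weighted combination of the three elements. The sketch below is purely illustrative – the notation is ours rather than HEFCE’s, and the actual REF aggregation is expressed in terms of graded quality profiles rather than single scores:

$$\text{Overall}_{\text{2014 REF}} = 0.65\,\text{Outputs} + 0.20\,\text{Impact} + 0.15\,\text{Environment}$$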

Thus, assessments of research according to the above criteria will be the basis of both funding and reputational differentiation of UK higher education after 2014 with consequences both for individual institutions and the academics working within them. The assessments leave significant room for ‘recontextualization’ both within different subject peer review panels as well as within different higher education institutions. In the short term at least, this may be one of its strengths.

Using Bibliometrics to Assess Research

Regarding bibliometrics, and according to the HEFCE website, responses to the 2009 consultation on the REF showed support for the use of citation information, while raising concerns about the costs involved and the potential implications for equality. According to the report on the pilot exercise to develop bibliometric indicators for the Research Excellence Framework, there are clear limits to the application of bibliometrics in the REF: “Bibliometrics are not sufficiently robust at this stage to be used formulaically or to replace expert review in the REF. However there is considerable scope for citation information to be used to inform expert review. The robustness of the bibliometrics varies across the fields of research covered by the pilot, lower levels of coverage decreasing the representativeness of the citation information. In areas where publication in journals is the main method of scholarly communication, bibliometrics are more representative of the research undertaken” (HEFCE 2009: 3).

According to the HEFCE website, each sub-panel will be invited to decide whether it wishes to use citation information to inform its review of outputs. HEFCE will reconsider whether the benefits of incorporating citation information into the REF outweigh the costs if only a small minority of panels request citation information, if the costs are high, or if the equality implications cannot be effectively mitigated.

Regarding the metric discourse, an English higher education key actor argued that such a system would be of lower cost and would involve fewer people:

I think [REF] will change [things] because the experience proves that what gets measured, gets done can drive behavior so in other funding allocations were aware that people have incentives to record what they do in a way to deliver higher funding. It might be a more rational system and I think initially the departments, a couple of years ago, were looking to introduce some much more metric based systems (…) before they move fully towards a metricated system. Constantly universities complain about the administrative bureaucracy of having to be part of a peer review panel, read lot of papers and compare the results and if we had metrics that would perhaps simplify that but it would remove the element of a sort of human interaction and influence. Ultimately the decision being made by a group of people might have more legitimacy than a metric but that perhaps is a personal opinion. But there is a close correlation, I think, between those universities that are very successful in winning public funding and also those that have lots of business income and business research. So it would be a lower cost system involving fewer people. (Extract of an interview with an English higher education key actor)

Hence, the concern with low costs and limited use of human resources may be in tension with the apparent flexibility over the use of bibliometrics in the REF. Additionally, as bibliometrics tend to reinforce the dominant discourse focused on publications and research (Sousa 2011) – a discourse embraced consensually by society and the economy – the use of citation information has a high probability of becoming the dominant manner of assessing research within the REF. A related issue here may be the relationship – and possibly tension – between quality and productivity. One of the effects of the RAE has been to increase substantially the publication productivity of UK academics. The impact on quality is less clear-cut and may represent a triumph of quantity over quality, as well as having long-term implications for the capacity of academics to digest the results of new research outputs entering their research environments on a massive scale.

Transforming (or Not) Research Production

National policies on the research function of universities need to be set within an appreciation of the changing nature of that function. The 1994 book ‘The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies’, by Michael Gibbons, Camille Limoges, Helga Nowotny, Simon Schwartzman, Peter Scott and Martin Trow, is a major reference work in this field, given its impact on subsequent discussion of the transformation of modes of knowledge production. According to their argument, knowledge production is changing from Mode-1 to Mode-2 (Table 4.1).

Table 4.1 Differences between the two modes of knowledge production (Magalhães 2001: 156)

Mode-1 is defined as “A form of knowledge production – a complex of ideas, methods, values, norms – that has grown up to control the diffusion of the Newtonian model to more and more fields of enquiry and ensure its compliance with what is considered sound scientific practice. Mode 1 is meant to summarize in a single phrase the cognitive and social norms which must be followed in the production, legitimation and diffusion of knowledge of this kind” (Gibbons et al. 1994: 2). Mode-1 represents the classic perspective on production of knowledge. Mode-2 refers to an emerging form of knowledge production focused on application: “[Mode-2] operates within a context of application in that problems are not set within a disciplinary framework. It is transdisciplinary rather than mono- or multi-disciplinary. It is carried out in non-hierarchical, heterogeneously organized forms which are essentially transient. It is not being institutionalized primarily within university structures. Mode 2 involves the close interaction of many actors throughout the process of knowledge production and this means that knowledge production is becoming more socially accountable. One consequence of these changes is that Mode 2 makes use of a wider range of criteria in judging quality control. Overall, the process of knowledge production is becoming more reflexive and affects at the deepest levels what shall count as ‘good science’” (Gibbons et al. 1994: preface).

Such an emergence is debatable, since Mode-1 and Mode-2 forms of knowledge production have always coexisted. However, if we interpret the claim less literally, we can see that the emergence of Mode-2 reflects a changing balance between Mode-1 and Mode-2, with new developments and forms occurring at the Mode-2 end of the spectrum.

In Mode-1, research and the quest for knowledge per se frame knowledge production. Mode-1 is contextualized by the ideal of academic knowledge as a contribution to human emancipation, of seeking after ‘truth’. In Mode-2, the key word is ‘application’. There is a shift from pure and fundamental research to ‘strategic science’. Again, the aim may be to benefit society but the ways of so doing are pluralistic and collaborative with other social groups and interests.

Regarding the role of impact in the REF, we can consider the hypothesis of a symmetrical coexistence of Mode-2 (referring to “all stakeholders”) and Mode-1 (referring to the “academic community”). However, we argue that this does not seem to be the case, as the REF excludes impact purely within academia from its definition of impact. Knowledge for its own sake, or pure science, is therefore excluded from the definition of impact sustained by the REF. This, in turn, contributes to the consolidation of a Mode-2 discourse as far as impact is concerned. The focus on evidence of the benefits of research is also in line with Mode-2 discourse. The need to make the impact of research visible and clear to society and/or the economy is a Mode-2 issue related to accountability, and it marks a difference from the RAE. At the same time, of course, we recognize that ‘impact’ only accounts for 20 % of the REF score and arguably it will be Mode-1 criteria which will tend to dominate the more ‘quality’-oriented elements of the rest of the REF.

When it comes to bibliometrics, we can also identify some differences from the RAE. Although the REF recognizes that bibliometrics cannot replace peer review, it also argues that they can be used to inform peer review. At first glance, this reinforces the Mode-1 discourse centered on the academic community and on the RAE. Arguably, both the REF and the RAE are Mode-1 focused, excluding large amounts of applied research which might never end up as journal articles. However, as already mentioned, it is up to each sub-panel whether to use citation information to inform its review of outputs, and HEFCE will reconsider the approach if only a small minority of panels request citation information, if the costs are high, or if the equality implications cannot be effectively mitigated. In this sense HEFCE introduces a potential tension between two extreme situations in the REF: the use of bibliometrics by all sub-panels and its use by none. This, along with the fact that bibliometrics privilege a specific kind of research production based on journal papers (rather than books or conference papers) and on specific databases such as Web of Science and Scopus, creates a gap between the RAE (more centered on traditional peer review) and the REF (more focused on peer review combined with other quality criteria).

In our view, this may strengthen the boundaries between research and teaching, and we increasingly see the creation of new research centers and institutes within universities which remove responsibilities for research from traditional teaching departments. Thus, the teaching/research boundaries may actually be getting stronger. There are several reasons for this. First, playing the ‘REF game’ may distract attention away from teaching. Second, the research function of a university may need to be organized separately from the teaching function. For example, one might have a predominantly Mode-1 disciplinary focus while the other may have a more interdisciplinary Mode-2 focus. When this occurs, there may be less potential for knowledge ‘transfer’ between research and teaching.

Ziman (1994), in ‘Prometheus Bound: Science in a Dynamic Steady State’, published in the same year as the work of Gibbons et al. (1994), argued that “science is reaching its ‘limits to growth’” (Ziman 1994: vii) and is at risk due to major changes related to the managerial discourse, such as accountability and assessment. Ziman introduced the concept of ‘academic science’ (also called ‘real science’ or ‘reliable science’) as “the systematic pursuit of scientific research in institutions of higher education” (Ziman 1994: 133). He argues that some explicit principles of a ‘post-academic science’ are replacing the tacit demands of CUDOS (i.e., the Mertonian norms of ‘communalism’, ‘universalism’, ‘disinterestedness’, ‘originality’ and ‘skepticism’). Ziman (1994: 178) suggested the acronym PLACE (‘proprietary’, ‘local’, ‘authoritarian’, ‘commissioned’ and ‘expert’) to characterize the work of the newly emerging environment. ‘Post-academic science’ implies a deep entanglement “in networks of practice” (Ziman 2000: 173) and an evolution to “foster (…) [the] enlarged research agenda by taking it out of the ‘invisible hands’ of research communities and putting it under the thumbs of policy and profit”. ‘Reliable science’ and ‘real science’ are threatened by ‘post-academic science’ through the duality drawn between collective and individual science. Related to real science, reliable science and the Mertonian ethos is a concept of individualism “that is clearly inconsistent with the corporate spirit of non-academic Research & Development” (Ziman 2000: 173).

Within the framework of Ziman’s work, the REF is closer to a post-academic science than to an academic science. Although the peer review system is constantly mentioned in most of the documents and reports related to the REF, it can be argued that it appears much more as a legitimation of the introduction of changes than as an unquestionable characteristic of the knowledge to be promoted by the REF. This is made clear when peer review is seen as not being enough on its own, needing other forms of accountability more focused on impact and environment. This change of focus – from the interior of the academy to the exterior of the academy – although fashionable and appealing, must be interpreted with caution.

The RAE was not so different in this respect. Although we can identify aspects of both continuity and change between the two research exercises, they both introduce an accountability dimension external to the academic community, one legitimized much more by policy and economic considerations. Where there is arguably a difference in emphasis is that, while the RAE was primarily a drive towards greater productivity in terms of fairly traditional academic outputs, the REF is moving in a direction which places more emphasis on relevance and socio-economic return. However, it remains to be seen whether the implementation of the REF will fully reflect this change across different subject fields and different kinds of higher education institution.

Although there is still much debate about what the REF will be in practice, it seems likely to promote greater emphasis on knowledge directed outside the academy (focusing on impact and environment) than the RAE had done. There is an argument that this will benefit the academy by strengthening its claims on the public purse. But there may also be costs. There is a current argument that, for all the focus on diverse indicators – ‘impact’, ‘environment’, ‘quality’, ‘assessment’, … – these still refer to “good science”, and “good science” will still be defined in terms of peer-reviewed publications. This argument is in line with Mode-1 and its focus on peer review, which may be diluted to some extent in the proposals for the REF, since peer review no longer occupies exclusive center stage in the assessment of academic work. Academic work which is assessed on the basis of ‘impact’, for instance, might be ‘good’ according to its application or relevance but not, necessarily, according to academic and scientific standards.

Economy and society are present in all progress reports regarding the REF. They appear as if they were the same, representing common goals and consequences for knowledge. But the contributions of knowledge to society and to the economy are two different things that should be analyzed within different frameworks. Both society and the economy comprise different interest groups, and some may gain greater benefit from, as well as access to, the knowledge produced by the academy. Notions of the ‘public good’ have to confront the reality of different ‘publics’. The contribution of knowledge to the economy and society can take many different forms. If contribution to society is likely to be more connected with emancipation and the construction of citizens (though there may be other more negative outcomes related to social control and inequalities), a more economic perspective will point towards business-value oriented research. Citizens of course may still be ultimate beneficiaries, though this will depend on many factors beyond the control of academe.

The Agora represents the social dimension of Mode-2. The contemporary Agora is seen as consisting “of a highly articulate, well educated population, the product of an enlightened educational system” (Nowotny et al. 2004: 204) and is “populated by a diversity of individuals who combine the roles of ‘citizen’ and ‘consumer’” (Nowotny et al. 2004: 206). The increasing demand for participation in the Agora is the result of two processes: democratization and the success of science. The “shift towards socially robust knowledge is sometimes described as a shift from a culture of scientific autonomy to a culture of scientific accountability” (Nowotny et al. 2004: 210).

We argue that Nowotny et al.’s perspective tends to be quite optimistic when it presents the Agora as a future and probable scenario of knowledge production and accountability. In our perspective, there is another scenario to be considered, related to the business value of research, which to some extent can nullify the Agora or, at least, reconfigure the scenario proposed by the authors. Scientific accountability seems to be responding to economic values much more than to societal values, at least if current political discourses are to be believed.

Some Consequences: Winners and Losers Among Different Fields of Study?

Research assessments such as the RAE and the REF have to embrace a range of very different disciplinary areas with different characteristics and patterns in the modes of knowledge produced. The hard sciences, for instance, have a tradition of publishing papers in scientific journals, whereas the humanities place more value on book publications. Although the REF argues that disciplinary specificities should be considered, it is also argued that the same weighting for impact – 20 % in the 2014 REF – should apply to all disciplines. We agree with Cronin (2003) when he argues that the competence of the humanities is no less than that found in the ‘objective’ sciences; rather, they are contextualized within different epistemic cultures.

Following Karin Knorr Cetina (1999), we would argue that epistemic cultures have major importance for the ‘making’ of knowledge. Considering that, according to the REF, impact purely within academia appears to be valued less than impact outside academia, this might put at risk pure and fundamental natural and social sciences and the diversity of epistemic cultures, although arguably these disciplines may benefit from the other quality measures within the REF.

Knorr Cetina (1999) advances an argument about the fragmentation of contemporary science through the diversity of epistemic cultures: “Epistemic cultures are cultures that create and warrant knowledge, and the premier knowledge institution throughout the world is, still, science” (Knorr Cetina 1999: 1). Replacing notions such as discipline or speciality with that of an epistemic culture, she argues that “The differentiating terms we have used in the past were not designed to make visible the complex texture of knowledge as practiced in the deep social spaces of modern institutions. To bring out this texture, one needs to magnify the space of knowledge in action, rather than simply observe disciplines or specialities as organizing structures” (Knorr Cetina 1999: 2, 3).

The central element, when dealing with epistemic cultures, is the construction of the machineries of knowledge production and not knowledge production itself. What we intend to underline about epistemic cultures is the argument of the disunity of the sciences: “It displays different architectures of empirical approaches, specific constructions of the referent, particular ontologies of instruments, and different social machines. In other words, it brings out the diversity of epistemic cultures. This disunifies the sciences” (Knorr Cetina 1999: 3).

This disunity of science has led to the subsequent thesis that there is not just one kind of knowledge production in science. Such a thesis has been sustained in the past in the realm of social sciences, an argument that has been made by authors such as Geertz (1973) and Giddens (1974). The same claim has been made regarding natural science by authors such as Suppes (1984) and Dupré (1993). It has been argued that “The image of a unified natural science still informs the social sciences and contributes to their dominant theoretical and methodological orientation. The debates raging over realist, pragmatist, skepticist, or perspectival interpretations of science all tend to assume science is a unitary enterprise to which epistemic labels can be applied across the board. The enterprise, however, has a geography of its own. In fact, it is not one enterprise but many, a whole landscape – or market – of independent epistemic monopolies producing vastly different products” (Knorr Cetina 1999: 3, 4).

Another issue regarding disciplinary areas and assessment exercises such as the REF is how to assess interdisciplinary research. Citation indicators are very appealing, due to their apparent clarity and ease of reading, when it comes to assessing which disciplinary areas are interacting with each other. However, this can lead to misreadings, as bibliometric indicators in some cases conflate the disciplinary areas of the works an author cites with the disciplinary area of the paper itself. Taking the present chapter as an example, if it were scrutinized bibliometrically, the output could suggest that physics is one of our disciplinary areas, since we cite an author (John Ziman) who is a physicist who also works in the epistemology of science. With this we do not wish to oversimplify bibliometrics but to emphasize that metrics have disadvantages that might work against the assessment of “good science”. And when we look at peer review processes independent of the use of bibliometrics, we have to contend with the ‘tribal’ characteristics of the academic community and the knowledge and values which are dependent on one’s tribal membership.

The introduction of the UK REF and similar assessment exercises almost inevitably leads to distortions of the processes that they seek to administer and support. A policy of ‘anything goes’ in assessing the outcomes of complex social processes such as university research is hardly likely to appeal to any of the interested parties, within or beyond the boundaries of higher education. But a recognition that ‘different things need to go’, i.e. deserve encouragement and support, is needed. Higher education – its practices and the institutions which provide them – is increasingly diverse and differentiated, and this presents a challenge to policy communities and the discourses that underpin policy. Different ‘players’ will benefit from the application of different rules in the research evaluation ‘game’. Recontextualizations of policies will inevitably occur at all levels, reflecting local circumstances and contexts. We would argue that such recontextualizations are necessary and should be welcomed.

Conclusion

As with the previous UK Research Assessment Exercises, the Research Excellence Framework reinforces existing practices and traditions, such as the focus on discipline-based peer review. At the same time, the transformation of research production is accorded more importance in the Research Excellence Framework as it moves towards Mode-2 and a post-academic form of knowledge. The Research Excellence Framework attempts to accord greater recognition to a notion of research characterized by social and economic impact outside academia, together with peer review informed by citation information.

The Research Excellence Framework is also a part of a discourse which emphasizes rankings of research and is part of a growing competitiveness between researchers. And this brings risks to research production:

There are some negative impacts of [research assessments such as the RAE and the REF], the riskier research disappears in favor of research that will be very likely to lead to results in medium terms, safer research. All these exercises are artificial ways of trying to introduce competition into the academic sector because of the ideology that has come in… In management theory, recently, competition will always improve everything… Which is not true. If you make things like universities compete they will become very good at whatever you are measuring, make universities compete over money they will become very good at making or saving money, not necessarily mean that they will be good at giving a good education to students. Make universities compete in the RAE they will become very good at fulfilling the criteria of the RAE which doesn't necessarily mean that they will do better research. (Extract from an interview with an academic from an English university)

What does this mean for future research and knowledge production within higher education systems subject to such evaluations? Can the Research Excellence Framework contribute to the construction of a new discourse around knowledge production? Although it might be too soon to answer such questions definitively, we argue that some indicators might lead us towards answering them in the affirmative.

The importance of peer review as a common element in both the Research Assessment Exercise and the Research Excellence Framework must be discussed in relation to each of the research evaluation systems and their characteristics. If we agree that both of them use peer review as a criterion, its use in the Research Excellence Framework may prove to be more residual than in the Research Assessment Exercise. Peer review seems to be losing some of its weight and strength in the accountability of research as the Research Excellence Framework is constructed. Nevertheless, the Research Excellence Framework constitutes a field of tensions that will create new contradictions concerning the kinds of research which should be privileged by higher education institutions, oscillating between Mode-1 (no, very long-term or indirect impact) and Mode-2 (explicit, short-term impact).

As with any policy initiative, its implementation and consequences may not accord with the intentions of the policy makers. It is likely to remain the case that the financial and reputational rewards of research assessment – to both individual academics and to institutions – will shape much of the research effort of UK universities in the years to come. Whether successfully ‘playing the REF game’ will necessarily increase the output of high quality and socially useful research remains to be seen. And whether the ‘REF game’ will contribute to or distract from the provision of high quality teaching in universities is another question that only time will answer. Academics and the departments and institutions they work for will be applying their own perspectives and interests to the implementation of the REF. But few will be ignoring it. Within the complex but expanding roles of universities in ‘knowledge societies’, it remains the case that initiatives such as the REF may work mainly to legitimize existing hierarchies and fairly conservative practices within the academic profession. However, at least in some places, they may also work as stimulants of innovation and change.