Introduction

In mid-March 2010 a rather surprising exhibition took place at London’s Royal College of Art (RCA). Entitled IMPACT!, the event was sponsored by the UK’s Engineering and Physical Sciences Research Council (EPSRC), the National Endowment for Science, Technology and the Arts (NESTA) and the RCA. Over the course of a week a dark wing of the RCA, sandwiched between London’s Imperial College and the Royal Albert Hall, was transformed into something of a cross between a science-fiction novel and a traditional science museum. In total the exhibition included sixteen collaborative projects curiously arranged on tabletops, draped from the ceiling and hung on each gallery wall. Developed by teams of designers and artists from the RCA, working with EPSRC-funded research scientists, IMPACT! showcased a wide range of contemporary scientific fields, from synthetic biology to nuclear and cosmological research. The iridescent blue exhibition programme outlined its ambition as “offer[ing] an alternative view of how science could influence our future. … Not to offer prediction but to inspire debate about the human consequences of different technological futures” (EPSRC et al. 2010, p. 9). The title of the exhibition and its timing were significant. It coincided with a period of intense policy debate about the economic impact and value of public research funding (Campaign for Science and Engineering 2009; Royal Society 2010; Council for Science and Technology 2010) and the political negotiations that surrounded the allocation of the 2010 UK science budget (Department for Business 2010). Initiatives such as the IMPACT! exhibition might be seen as part of a broad political struggle over the meaning and definition of science and over attempts to measure and quantify the productivity of public research funding. The exhibition is also indicative of the ways in which research funding bodies have engaged in these debates, by offering ‘alternative’ conceptions of the relationship between science and society and seeking to defend the speculative potential of basic research against calls for strategic selectivity.

Though debates about the impact of public research funding are an enduring feature of relations between science and political authority, in recent years science and research policy has been characterised by a renewed emphasis on measuring the ‘impact’ of public research funding independently of indicators of national research expenditure. The design and implementation of national research assessment exercises has also been characterised by a range of competing proposals for measuring and assessing research quality and the impact of public research funding (Geuna and Martin 2003; King 2004). Set in the context of political discourses concerning the transparency and accountability of public financing (Power 1997), and the use of benchmarking and target setting as the characteristic political technologies of neoliberal government (Bruno 2009), strategies to measure, quantify and demonstrate the impact of public research investment have become the focal point for both technical innovation and political debate. However, as indicated by the IMPACT! exhibition, these technical debates – how to capture and measure the diverse impacts of research – are also indicative of a wider set of questions concerning the cultural values and social purposes of science and research. While in formal terms national research assessment processes are increasingly framed by questions of research ‘quality’ and ‘impact’, a range of ‘intermediary organisations’, acting on behalf of the collective scientific community, have been active in seeking to redefine the meaning of these terms, arguing instead for the enduring impacts of ‘basic science’.

In this paper the central question we address is how science and research are rendered valuable in contemporary science and research policy. At the heart of debates about the impact of public research funding are a set of interlocking notions of the value and worth of science – and a set of institutional arguments concerning the role of research councils and funding bodies in maximising the ‘returns’ on public research investments. The template of these political arguments mirrors the familiar oppositions between ‘basic’ and ‘applied’ research, and between the values of scientific autonomy and independence and those emphasising the importance of scientific innovation for national economic strategies. These opposing accounts of the cultural and economic value of science have a lengthy history and have operated as foundational resources for justifying public research investment in, and patronage of, the scientific enterprise (see, for example, Calvert 2006; Slaughter 1993). They form part of the underlying terminology of what Guston and Keniston (1994) describe as the ‘social contract’ for science whereby “government promises to fund the basic science that peer reviewers find most worthy of support, and scientists promise to provide a steady stream of discoveries that can be translated into new products, medicines or weapons” (p. 2). Though the terms of this contract have been substantially challenged over the last fifty years, the values of basic and ‘curiosity driven’ science continue to provide a repertoire of concepts deployed in defending a political mandate for the public patronage of a broadly autonomous body of scientific researchers.

Following approaches developed in what has been termed the ‘pragmatist turn’ in economic sociology (Boltanski and Thévenot [1991] 2006; Stark 2009), we argue that political and policy debates about the economic value of public research funding might also be read as invoking a set of contested cultural values concerning the role of science in contemporary social life. This approach suggests that what is at stake in recent political debates concerning the economic and strategic value of public research funding – and the innovations in techniques for measuring and quantifying the impacts of research programmes – is a “historic contestation over the standards by which the principles of creativity, autonomy, and diversity are to be judged” (Rudy et al. 2007, p. 11). Further, this approach suggests that frameworks and principles used in the assessment of research quality and economic productivity necessarily entail appeals to notions of the common good, typically couched in what Boltanski and Thévenot term ‘grammars of justification’. In what follows we explore the ways in which UK research councils have sought to recast contemporary UK policy discourse, principally by developing new techniques for measuring and demonstrating research impact. Drawing on research interviews and ethnographic observation conducted at the UK’s Engineering and Physical Sciences Research Council (EPSRC) between 2009 and 2010, we explore how the EPSRC has navigated these overarching discursive shifts in contemporary science policy.

We outline three strategies the EPSRC has adopted in defending the cultural norms of basic and curiosity driven science against the perceived threats of recent initiatives to quantify the economic impact of public research funding. Firstly, the council has sought to redefine research excellence in economic terms, by suggesting that it is only ‘cutting edge’ research that will lead to technological breakthroughs. Critically this strategy has entailed an overt shift in the terms of the council’s institutional position, where it has attempted to alter its relationship with the research community, moving from being a ‘research funder’ to a ‘research sponsor’ with an active stake in shaping the development of research capacity in areas of strategic importance. Secondly, this discursive strategy has been accompanied by an attempt to widen the notion of research impacts beyond the purely economic, and the development of new metrological devices to capture these broader meanings of research impacts. Thirdly, the council has also begun to adopt a more qualitative conception of research impacts, augmenting the collation of research metrics with case studies and narratives about the impact of council funded research. These qualitative devices – and in particular the ‘case study’ – are used in illustrating and ‘bringing to life’ an alternative conception of the value of publicly funded research. We argue that accounting for the performativity of these devices offers a significant corrective to recent theories of ‘boundary work’ (Gieryn 1999) and ‘boundary organisations’ (Guston 2000), which have tended to focus on the defensive manoeuvres engaged in ‘protecting’ the cultural integrity of science. In what follows we suggest that, in defending traditional notions of ‘basic’ and ‘curiosity driven’ science, the EPSRC has acted rather more proactively in developing a range of alternative frameworks for measuring and illustrating the enduring value of basic research. Rather than simply stabilise the boundary between science and politics, the council has worked to develop a new institutional identity as both a guardian of research excellence and a proactive shaper of research capacity.

The Politics of Value

Defined by the twin discourses of ‘productivity’ and ‘accountability’, recent political debates concerning the economic value of public research funding – together with structural shifts in national research assessment processes – represent a culmination of a range of contemporary political rationalities. Over the last twenty years a class of new political technologies – standards, benchmarks, league tables and quantitative auditing techniques – has displaced traditional forms of political intervention, occupying an increasingly central position in the governance of public institutions (Power 1997). In particular, the quantitative measurement of national research expenditure has become a dominant policy tool deployed in national research assessment processes. In the context of these reforms the concept of value is typically associated with norms of ‘transparency’ and ‘accountability’ and takes on an overlapping set of social, cultural and economic meanings. Whereas research funding has historically been understood as a form of public patronage, over the last quarter century research policy has been broadly reframed, emphasising notions of ‘value for money’, democratic oversight and accountability. Public research funding is also increasingly understood as a strategic investment, where state economic and regulatory strategies are oriented toward maximising returns on these investments. Notions of value are therefore invoked to indicate the economic returns anticipated from research outlays in the context of broader appeals to notions of ‘value for money’ as a democratic norm. While the use of budgets, benchmarks and audits is a characteristic feature of the technologies of neoliberal government (Bruno 2009), the notions of value invoked in these reforms also entail appeals to notions of social and cultural worth that extend beyond the purely economic. Waldby (2002) develops this notion of the multifaceted connotation of value – and specifically of ‘biovalue’ – in her analysis of the development of stem-cell research. Waldby defines biovalue in both economic and ontological terms, as political strategies that aim to increase “the yield of vitality produced by the biotechnical reformulation of living processes. Biotechnology tries to gain traction in living processes, to induce them to increase or change their productivity along specified lines, intensify their self-reproducing and self-maintaining capacities” (p. 310). The development of policy tools designed to quantify the economic and social impacts of public research funding reflects similar political logics in an attempt to realise the ‘returns’ on public research investment in economic, social and cultural terms.

In this way, accounts of the value of public research funding perform a range of social and political functions. In his analysis of ‘accounts of worth in economic life’ Stark (2009) notes that “[t]he polysemic character of the term – worth – signals concern with fundamental problems of value while recognising that all economies have a moral component” (p. 7). In order to attend to the multivalent connotations of value and worth, Stark develops a performative ‘sociology of worth’ that “rather than the static fixtures of value and values, focuses instead on ongoing processes of valuation” (p. 7, emphasis in original). In conceptualising the increasing prevalence of questions of worth and value in contemporary economic and political affairs, Stark outlines a critique of ‘Parsons’s pact’ – the disciplinary distinction between the study of value in economics and the analysis of social relations and cultural values in sociology. Stark’s argument is that accounts of economic value do not simply stand for themselves. Rather they are sustained by a set of institutionalised evaluative practices and, more broadly, by appeals to what Boltanski and Thévenot ([1991] 2006) term culturally sanctioned ‘orders of worth’. In this sense, economic value is both produced and sustained performatively. Developing Stark’s argument here, we suggest that questions of the economic value of science and innovation – which are typically addressed through quantitative measurements of public research expenditure and bibliometric indicators – are part of a broader struggle over the cultural values associated with scientific and technological development.

Stark’s critique of notions of the ‘social embeddedness’ of economic value addresses a similar conceptual problem to recent work on the ‘sociology of markets’, which has introduced a conception of the performative function of socio-technical ‘market devices’ (Callon 1998; MacKenzie et al. 2007). For Muniesa et al. (2007) such devices function by “rendering things more ‘economic’ or, more precisely, at enacting particular versions of what it is to be ‘economic’” (p. 4). The conceptual issue at stake here is the problem of abstraction that has haunted contemporary economic and sociological theorisation. Whereas neoclassical models conceptualise economic products as abstract qualities, new economic sociology largely insists on the conceptual impossibility of abstraction – that economic goods are always produced in social and historical contexts. Muniesa et al. (2007) turn this problem around, insisting that “abstraction needs to be considered an action (performed by an agencement) rather than an adjective (that qualifies an entity)” (p. 4). Institutionalised techniques used in contemporary research evaluation and assessment – and the deceptively simple political argument that the quantification and measurement of the impacts of public research funding constitutes an important form of research governance – are informed by a similar logic of abstraction: how to capture the impact of research as an independent and abstract variable, over and above raw measures of research expenditure. The sheer proliferation of methodologies and techniques for tracking research performance and the quality of research outputs is indicative of the technical challenge of measuring research impacts and of this underlying problem of abstraction (Van Noorden 2010). However, the effect of Muniesa et al.’s conception of the performativity of economic calculation is to suggest that the socio-technical devices developed to account for the impacts of public research funding are intimately tied to the performative enactment of alternative accounts of the value of contemporary research and innovation.

What is striking about recent debates concerning the impact of public research funding is the degree to which initiatives aimed at quantifying the economic value of research are represented as antithetical to the cultural norms associated with scientific practice and notions of speculative and serendipitous discovery (Rudy et al. 2007). In light of political arguments concerning the strategic value of public research investment, scientific institutions have responded by highlighting the technical difficulty of measuring the impacts of research whilst at the same time arguing for the enduring economic and social values associated with ‘basic science’. These recent developments mirror an historic struggle over the meaning and definitions of scientific practice where the terminology of ‘basic research’ has “provided a flexible repertoire of characteristics that can be drawn on selectively by scientists and policy makers in a variety of contexts to protect their interests and their scientific ideals” (Calvert 2006, p. 199). Though the concept of basic research is typically underscored by an appeal to a set of cultural values – autonomy, independence, creativity and discovery – in light of political challenges scientists and policy-makers have also tended to adopt more strategic arguments in order to defend the public funding of basic research. Though the terminology of basic science has been critical to the political ‘boundary work’ engaged in defining notions of scientific autonomy and independence, the flexibility of these arguments is suggestive of the performative enactment of these values.

The concept of performativity is an important rejoinder to contemporary theories of boundary work, particularly the tendency to understand appeals to the norms of basic science in largely rhetorical or strategic terms. For example, Gieryn’s concept of ‘boundary work’ – introduced as a way of conceptualising the problem of demarcation and the cultural policing of the distinction between science and non-science – might be understood in these terms. For Gieryn (1983) the distinction between science and non-science is a social, cultural and political one, involving matters of judgement that extend beyond those of scientific and technical merit. Gieryn (1983) characterises boundary work as a “rhetorical style common in ‘public science’, in which scientists describe science for the public and its political authorities, sometimes hoping to enlarge the material and symbolic resources of scientists or to defend professional autonomy” (p. 782). Drawing on this concept of boundary work, analysts have typically read political interventions that appeal to a conception of science as socially autonomous as protectionist and defensive, masking a range of underlying structural interests.

The principal weakness in Gieryn’s conception of boundary work lies in its overemphasis on rhetoric, which tends to render boundary work as a cultural and political strategy engaged in both defending the credibility of science and in increasing the ‘resources of scientists’. Indeed, in later research Gieryn (1995) characterises boundary work as a ‘rhetorical game’, as if the demarcation of science from non-science were simply strategic, driven by a range of underlying social interests. In a recent study of Dutch scientific advisory processes, Bijker et al. (2009) develop a critique of these rhetorical notions of boundary work, suggesting that “because of this emphasis on boundary work as a strategic activity, Gieryn has little attention for the more structural aspects of the science/non-science relationship” (p. 145). In their subsequent analysis, Bijker et al. argue that for boundary organisations engaged in demarcating between science and politics this boundary work is not simply accomplished through powerful rhetoric or persuasive argumentation. Rather, they suggest that a scientific advisory agency “does not have unlimited freedom to position itself with respect to its surroundings. … To maintain its authoritative position, the [agency] also has to attune its views and activities toward its audience, toward the problems in policy and professional practices, as well as toward the issues raised in public debates” (p. 146). In place of rhetorical notions of boundary work, based on a conception of the political strategies of boundary organisations as the “instrumental manipulation of language and arguments to mobilise support” (Moody and Thévenot 2000, p. 274), the approach that Bijker et al. develop emphasises the relational and performative nature of boundary work. For example, Bijker et al.’s conception of the structural conditions that influence the capacity of intermediary organisations to engage in boundary work is based on an awareness of the ‘audiences’ that such institutions address. In his study of expert advisory processes Hilgartner (2000) draws a similar conclusion. Developing a series of theatrical metaphors, Hilgartner suggests a boundary organisation “does more than review scientific evidence and develop recommendations; it also presents – even creates – itself as a character” (p. 13). In this sense the capacity for organisations to engage in successful boundary work is tied up with a conception of their institutional agency, identity and position. Hilgartner stresses this relational and performative institutional context, suggesting, “advisors, like all performers, envision the audience their work will eventually encounter and, at least to some extent, tailor their presentations accordingly” (p. 7).

In what follows we argue that strategies adopted in developing new frameworks for assessing research performance and demonstrating the impact of public research funding might be reconceptualised in performative terms, as entailing the proactive enactment of alternative notions of value and worth. We explore the ‘impact debate’ in UK science policy – and focus on the way the EPSRC has developed new methodologies for measuring research impacts. Rather than simply ‘defend’ the cultural norms of basic science against notions of economic valuation, EPSRC strategy is characterised by an attempt to reframe the ‘research base’ as a strategic national capacity. We also argue that these performative enactments are institutionally constrained, and have necessitated a reinterpretation of the council’s institutional mandate and an attempt to move from being a ‘research funder’ to being a ‘research sponsor’ with an active stake in “developing capability in national areas of importance” (EPSRC 2011, p. 3). As the council is committed to “deliver greater impact than ever before” (p. 2), this strategy has entailed a form of proactive boundary management where the council is engaged in the performative enactment of the value of basic science while, at the same time, becoming more explicit in its leadership of the research community.

Streamlining Impacts

Though the ‘impact agenda’ in UK science policy is premised on the proliferation of a range of new policy tools for measuring research impact, these policy tools have been developed in the context of a political debate about ‘streamlining’ the role of the research councils in the UK ‘innovation ecosystem’ (Lord Sainsbury of Turville 2007; HM Treasury 2003). Notions that research councils operate ‘at arm’s length’ from government – and a tradition where research councils are cast as ‘guardians’ of the overall health of the UK ‘research base’ – have been challenged by proposals for a more integrated relationship between research council funding programmes and strategies oriented toward commercial innovation. A programme director at EPSRC explained the nature of this debate:

There’s a sort of interface discussion about what the proper role of the research councils [is] to help in [research for innovation, higher education and skills]. It’s not a direct requirement because the Haldane principle is one where there is a broad policy how they can, within their proper remits, contribute to it. […] The long arm of Haldane slightly shrinks and then expands and shrinks and expands but is never actually gone away. So where we have, for example new articulations of big national challenges … in those policies in central government there is beginning to be a research element drawn up alongside it. (Interview with Research Director, EPSRC, 3 July 2009)

This extract refers to the Haldane principle, a ‘creation narrative’ detailing the historic purpose of research councils as non-departmental government bodies, based on a broadly understood dualism between scientific autonomy and government priority setting (Edgerton 2009). Originally articulated during the First World War – based on a distinction between science of general and specific utility (Ministry of Reconstruction 1918) – the Haldane principle ties the institutional identities of UK research councils to cultural norms associated with the value of ‘basic’ science and the notion that scientific excellence is necessarily defined within the research community. The Haldane principle also ties research council funding to a more general concern for the health of the UK research base – traditionally conceived through notions of scientific manpower and concerns regarding the supply of skilled researchers (Shattock 1989; Salter and Tapper 1993). While the Haldane principle has typically been invoked as an expression of norms of scientific autonomy and independence – and critically that the UK research council system should operate as an embodiment and defence of these norms – in recent political debates this received interpretation has been directly challenged. On the basis of Edgerton’s (2009) historical analysis – which argues that the Haldane principle is an ‘invented tradition’ invoked in political struggles over the role of government in scientific affairs – recent policy interventions have explicitly called for a more limited interpretation of its contemporary relevance, suggesting that the “Haldane Principle is useful as a basis for discussion, but should be replaced with a principle that can accommodate regional science policy, the full range of research funding streams, mission driven research, and the rationalisation of detailed and strategic funding decisions” (House of Commons Innovation Universities Science and Skills Committee 2009, p. 3).

Though typically couched in technical terms, particularly concerning prioritisation in research programme formulation, the re-articulation of the Haldane principle in light of proposals to streamline the role of research councils is indicative of the discursive terms that underpin UK science policy, in which the contemporary drive to reinforce the relationship between public research funding and economic performance is typically cast as antithetical to the serendipitous nature of scientific practice (Campaign for Science and Engineering 2009). In practice, however, the EPSRC has acted entrepreneurially in seeking to maintain traditional role definitions whilst also seeking to demonstrate the economic and social impacts of the council’s funding programmes. The core strategy of the EPSRC has been to conflate the terms ‘excellence’ and ‘impact’, and in the process to define a new institutional role as proactively shaping the development of ‘research excellence’ toward areas of social need or political priority. Indeed, a recent report published by RCUK, entitled Excellence with Impact (2007), attempted to achieve this discursive shift. Noting the challenge of measuring the impact of public research funding, it highlighted a ‘cultural shift’ in contemporary academic science. It stated:

Whilst UK researchers have been producing such impacts for decades, the last few years have witnessed a dramatic change with more academics engaged and interested than ever before in how their research helps society and the economy. The Research Councils have been highly active in this cultural transformation, vigorously encouraging the researchers they fund to produce both excellent research and greater economic impact. (p. 3)

The effect here is to recast the previous opposition between ‘basic science’ and ‘economic impacts’, suggesting that it is only ‘excellent’ science that is capable of producing the transformative breakthroughs that will sustain long-term economic prosperity. This strategy is indicative of the interpretive flexibility of the concepts of ‘basic science’ and ‘research excellence’, which are represented both as cultural norms and as a strategic national socio-technical capacity (Calvert 2006). Notions of independence and autonomy – particularly the argument that the development of detailed research priorities is best accomplished without direct political interference – are therefore justified both as ends in themselves and as the most efficient mechanisms for realising anticipated economic returns. The nuance in the EPSRC’s strategy is the argument that a broad portfolio of ‘curiosity driven’ research needs to be maintained in order to sustain this underpinning capacity. Notions of strategic selectivity are therefore cast as indicative of short-term political expediency, and as a threat to this capacity. In this sense, EPSRC strategy has been characterised by an attempt to reframe notions of impact by associating them implicitly with discourses of excellence. Here the goal is to maintain and reinforce a broad portfolio of research activity whilst fostering a distinctive institutional role in the shaping of research capacity.

This strategy, whereby the council has both adopted and sought to reframe political arguments about the impacts of research funding, might therefore be understood as a form of boundary work, where the terminology of impacts is conflated with that of research excellence. However, as Bijker et al. (2009) suggest, when viewed in institutional terms this form of boundary work is typically engaged in light of an awareness of an organisation’s constituent ‘audiences’ and an attempt to “adapt to changing social circumstances and contexts, precisely by defining another role for itself” (p. 140). Though notions of research excellence are typically invoked in a defensive fashion, the conflation of the terminology of excellence with that of impacts has conversely required the EPSRC to adopt a more proactive definition of its institutional role and of its relations with both government and the research community. In order to substantiate its arguments about the value of a broadly defined research base, the council has been explicit in taking on a new role in shaping national research capacity toward policy priorities, commercial innovation and economic impact. This is particularly evident in recent changes to the council’s strategic goals – where it has sought to move from being a ‘research funder’ to being a ‘research sponsor’, “where our investments act as a national resource focused on outcomes for the UK good and where we more proactively partner with the researchers we support” (EPSRC 2011, p. 2). The Chief Executive of EPSRC explained this approach using a colourful metaphor:

In one of his first meetings that he chaired at the council [the chair of EPSRC Council] said… ‘As far as I can see the research councils are essentially a slot machine. And as far as the academics are concerned you are their slot machine. They come in and they put their penny, the research grant application, they pull the lever, and if they are lucky three oranges come up and they get their grant. And the slot machine plays no role other than to process the grant, to be the mechanical processor’. What it ought to be doing is actually deciding on what sort of players and what sort of application – it ought to be playing a more active role rather than a passive one. And, we were moving in that direction … if we want to actually change the research that is being done, or the balance, then you can’t do that if you’re a slot machine. (Interview with CEO, EPSRC, 8 March 2010)

The image of the slot machine evokes the intermediary position of research councils. Where research councils have traditionally been conceived as simply a vehicle enabling the responsive generation of scientific excellence, the quotation indicates that the EPSRC aims to take a more proactive role in shaping national research capacity. It is important to note here the terms in which this proactive role is couched. For example, the recently published EPSRC Strategic Plan (2010) outlines the EPSRC’s three core strategic goals:

  1. Delivering Impact: ‘EPSRC will ensure excellent research and talented people deliver[ing] maximum impact for the health, prosperity and sustainability of the UK’;

  2. Shaping Capability: ‘EPSRC will shape the research base to ensure it delivers high quality research for the UK’ with a ‘research portfolio focused on the strategic needs of the nation’; and

  3. Developing Leaders: EPSRC ‘will commit greater support to the world-leading individuals who are developing the highest quality research to meet the UK and global priorities’ (pp. 4–5).

Where earlier strategic plans spoke of ‘stimulating creativity’, ‘nurturing the most talented people’ and ‘building collaborations’ between the research base and industry (EPSRC 2006), the 2010 plan develops a more explicit leadership role for the council in shaping science and facilitating impact for national well-being. However, whilst contemporary policy framings of ‘impact’ and ‘strategic priorities’ are adopted, they are reframed through enduring tropes of research excellence and quality, linked to normative understandings of the nature of scientific practice. What is distinctive about this shift is that it represents a response to a policy drive aimed at streamlining the role of research funding in delivering strategic impacts, whilst seeking to carve out and protect a distinctive institutional role.

Beyond Economic Impacts

These political debates concerning the role of research councils in the UK’s ‘innovation ecosystem’ have been accompanied by the development of a range of frameworks for measuring research performance and a formal requirement for research councils to quantify the impact of their research funding against an established framework. In practical terms, the performance of each of the research councils is measured through the submission of two complementary annual reports: a Baseline Reporting Framework, which “reports in a general qualitative way on outcomes of research council activity”, and an Economic Impact Reporting Framework (EIRF), which is “more quantitative in character and seeks, where possible, to develop data on inputs as well as outputs” (Science and Innovation Analysis 2010, p. 4). In particular, the EIRF provides a framework for measuring and demonstrating impacts – supported by evidence regarding the quantity of research funding distributed – against which the annual performance of each of the councils is measured and benchmarked. In the context of these formal obligations the EPSRC’s discursive boundary work – where it has sought to emphasise the underpinning value of a broadly defined research base – has also been accompanied by the development of new methodologies and frameworks for compiling indicators of research impacts against these benchmarks. This strategy aims to fulfil existing formal obligations whilst also enacting an alternative mode of evaluating the values of science and innovation. For example, responding to recent policy discourse, Research Councils UK issued the following statement:

The Research Councils have been challenged to ‘make strenuous efforts to demonstrate more clearly the impact they already achieve from their investments’. … That said, it is also widely accepted that ‘it is difficult to measure the economic impact of innovations which may be delayed in time and indirect in consequence’. Indeed the consensus in the economics literature is that measuring the economic impacts of science, innovation and research funding is highly problematic. (RCUK 2007, p. 5)

As the statement implies, the approach here is to use accounting techniques and economic research to reinforce a policy argument regarding the metrological challenge posed by measuring research impact, and to argue for a widening of the notion of impacts. Following Callon (1998), we see here that techniques for what he terms the ‘measurement of properties’ become sites of political contention. The overarching strategy of the EPSRC, in response to proposals to measure and quantify the impact of public research expenditure, has been to develop new methodologies for capturing research impacts using ‘non-economic’ indices. A senior manager at EPSRC explained the council’s strategy in broadening the current policy debate:

We have tended to have stopped calling it Economic Impact, because even though the government’s definition includes society and everything else, in calling it ‘Economic Impact’ people think about the economics, naturally. So in terms of now, when you apply for a research grant you have to put in an impact plan as part of that. The majority of people are still thinking about it in terms of purely economic impact. Some people are starting to think of it a little bit more in terms of beneficiaries and broadening that out, and within the guidance it does talk about the public, and it should be included within that in terms of there should be public engagement as part of it. It’s only recently come into place and it’s something that we’re going to need to do more work with in terms of actually. (Interview with a senior manager, EPSRC, 20 November 2009)

In practical terms, this attempt to augment the notion of economic impacts with measurements of wider societal and policy impacts is evident in changes to the EPSRC’s impact reporting framework. For example, the 2005/06 EPSRC Output Framework required metrics to be developed under two broad headings – ‘a healthy UK science and engineering base’ and ‘better exploitation’. The framework outlined a range of data sources for these targets, including largely quantitative measures of total EPSRC research expenditure, the number of post-doctoral degrees awarded and collaborative research undertaken with business and public service resource. In comparison, the 2007/08 Economic Impact Reporting Framework (EPSRC 2008) sought to widen the framework of impact measurement significantly. In addition to pre-existing measurements of ‘investments in the research base’ and ‘knowledge generation’ – outlined against measures such as ‘net research grant expenditure’, ‘estimated total number of PhDs supported’ and bibliometric indicators of ‘UK Share of world citations’ – the framework added indicators of societal impacts, public engagement and financial sustainability. A range of qualitative and quantitative measures were designed to capture these impacts – including indicators of expenditure on public engagement initiatives and evidence drawn from public attitude surveys on scientific issues. The development of this broader framework – and indeed the publication of metrics that substantiate this alternative account of the impact of the council’s research funding strategies – might therefore be read as an attempt to place EPSRC research in a wider social and political context, and to develop indicators of its wider impacts in these domains.

Attempts to widen the meaning of research impact have also been accompanied by the development of a range of methods for capturing the qualitative impacts of research funding. Though at this stage no consensus has been reached on a definitive framework for measuring research impacts in this expanded frame, the collective effect of these trials has been a shift from quantitative to broadly qualitative techniques (Donovan 2007a, 2007b). For the EPSRC, this has entailed the use of case studies to emblematise and illustrate a wider set of non-economic impacts. A representative of the EPSRC performance evaluation team explained the strategy in the following terms:

Case studies have been used as a way of illustrating and reinforcing the messages of impact and generally speaking I think we use case studies in broadly speaking two ways. One is illustrating, but it brings it to life in a way and helps to actually explain how things happen. So in quite a lot of our – again going back to advocacy documents and even in our reporting documents – we would use very brief case studies as a way of really giving the messages a bit more colour and a bit more impact. …Very often a well crafted case study can just bring it to life and give it a lot more impact and if you think about the audience that we’re aiming these at, you know, ministers and senior people will remember the case studies, they won’t remember the figures but the case studies will have a lot more impact with them as well. So that’s one way in which we use case studies as a kind of bring it to life and to illustrate and to reinforce. But then the other thing is that case studies are also an important way of actually developing a better understanding of what’s actually happening. So in order to, again going back to this, understand the various ways in which impact has occurred. … But what case studies won’t give you is they won’t give you a means of extrapolating so you can’t use half a dozen case studies and say on the basis of these we can extrapolate our funds say. … So they do have a role but they need to be used with care. (Interview with Head of EPSRC Performance Evaluation Team, 2 December 2009)

Though the council is constitutionally obliged to quantify research expenditure and impacts using established frameworks, the publication of these qualitative devices aims to ‘illustrate’ and ‘bring to life’, rather than simply measure, research impacts. The discursive boundary work engaged in by the council in defending a notion of a broadly defined research base and carving out a new institutional role has therefore required two significant socio-technical innovations. Firstly, the council’s political strategy has been accompanied by the development of new metrics to substantiate a broader evaluative framework for assessing the impact of EPSRC research funding. Secondly, the council has augmented these metrics with a range of qualitative devices – case studies, public dissemination activities and the utilisation of ‘public scientists’ to ‘make the case for science’. In attempting to protect notions of scientific autonomy and independence, the council’s strategy has required a proactive enactment of an alternative set of social, cultural and economic values.

Pathways to Impact

In addition to the development of new frameworks for measuring the impact of research funding, the council has also introduced the notion of ‘pathways to impact’ through a set of socio-technical devices that structure the ways in which potential research impacts are articulated in research grant proposals. A manager at EPSRC explained this discursive shift:

So when people apply for research funding they now have to include information about what possible pathways to impact are there for their research. So it’s not about predicting what the impact’s going to be, it’s about saying ‘if my research is successful, who might do something with that? Where might it have an impact? And how am I going to actually make sure that happens?’ So for some disciplines it will be on another discipline. For some disciplines the impact will be on society in some way, on the health of the nation in some way. Some of them it might be on policy. So making sure they’re talking to the government departments that need to know about what they’re doing. Or it might be on business. But it’s getting people to think that through. (Interview with head of research programme, EPSRC, 10 March 2010)

This shift in framing, from ‘impacts’ to ‘pathways to impact’, reflects an alternative theorisation of the relationship between research and socio-economic impact in which basic science is cast as ‘underpinning’ long-term social impacts and an attempt to generate new metrics that quantify the cumulative and non-linear effects of a broad portfolio of publicly resourced research. The practical effect of this discursive shift is a requirement for researchers to complete a ‘pathways to impact’ statement on all research proposals, in place of the pre-existing ‘impact statement’. Individual research councils and the umbrella organisation Research Councils UK (RCUK) have published a range of diagrammatic tools and websites aimed at helping researchers complete these statements, by graphically outlining expected impact pathways.

These tools serve a practical purpose as an aid for researchers in completing ‘pathways to impact’ statements. In place of single statements regarding the beneficiaries or potential ‘users’ of research, current research council application forms require applicants to distinguish between the ‘impact’ and the ‘beneficiaries’ of proposed programmes of research. The distinction between ‘impacts’ and ‘pathways to impacts’ therefore represents a relaxation of this requirement, allowing researchers to indicate longer term or speculative impacts. These devices therefore provide researchers with a set of discursive storylines in which to place their work when completing these formal requirements. For example, the EPSRC ‘pathways to impact’ diagram (Fig. 2) disaggregates four areas of possible impact – ‘society’, ‘knowledge’, ‘people’ and ‘economy’. Similarly, the guidance published by RCUK (Fig. 1) offers a matrix of possible impact pathways, between academic outputs on the one side and ‘economic and societal impacts’ on the other. These diagrams function by providing a set of phrases – such as ‘wealth creation’, ‘scientific advances’ and ‘international development’ – that researchers might use in research proposals. A manager at EPSRC explained this approach, suggesting:

The fact is we’ve got more sophisticated in our understanding of how we have impact. That was part of that [the council] understanding how we have impact then, and then sharing that with the academic community and getting them to think about what that means for them and their research, really. (Interview with head of research programme, EPSRC, 10 March 2010)

These tools have a performative function, enabling the development of a body of evidence to substantiate an alternative theorisation of the relationship between research and real-world impacts. By regularising and standardising the kinds of impacts promised in research proposals these diagrammatic tools are designed to produce a preformatted portfolio of research metrics that correspond to established storylines about the potential economic and social impacts of research.

Fig. 1 RCUK ‘Pathways to Impact’ diagram, listing a variety of related impact pathways under the categories ‘Academic Impact’ and ‘Societal and Economic Impact’ (http://impacts.rcuk.ac.uk/content/impactmeans.htm)

Fig. 2 EPSRC ‘Pathways to Impact’ website, inviting the engineering and physical sciences community to consider impact pathways under the categories of ‘Knowledge’, ‘Society’, ‘People’ and ‘Economy’ (http://www.epsrc.ac.uk/funding/apprev/preparing/Pages/economicimpact.aspx)

Conclusion

Innovations in the economic evaluation of public research funding, driven by broader struggles over the meaning of science and innovation, might therefore be viewed in performative terms as modes of rendering science valuable. Though strategies designed to capture and measure the impact of public research funding rest on the basic proposition that the value and quality of research can be accounted for through the compilation of indices of research expenditure and performance, the innovation of new assessment frameworks is indicative of what Barry (2002) terms the “fragility of ‘metrological regimes’” (p. 268). Suggesting that measurement is inventive and performative, Barry argues that “measurement and calculation can have the effect of disrupting the frame of politics, and creating a conduit for the cross-contamination of the economic and the political” (p. 268). For Barry, measurement, quantification and assessment are political practices. The evaluation and assessment of research is therefore part of a broader set of political struggles concerning the place of science in public life and the institutional relations between the research community and government. As we have argued above, the ‘politics of impact’ are primarily negotiated by ‘boundary organisations’ in defending notions of the normative and cultural values of science, and in direct political negotiations over budgetary settlements and policy priorities. In this case, however, Barry’s notion of the inventiveness of measurement, and the way that calculation and quantification are invoked in the politics of public demonstration, necessitates a more explicitly performative account of the ongoing ‘management’ of the boundary between science and politics (Miller 2001).

We opened this paper with an account of the IMPACT! exhibition, suggesting that we might interpret such initiatives as embodying competing and contested theorisations of the relationship between scientific practice and strategic priorities. Initiatives such as the IMPACT! exhibition are indicative of the institutional positioning adopted by boundary organisations in seeking both to affirm and to reframe contemporary science policy discourse so as to maintain a historically mandated and institutionally distinctive position. However, initiatives to publicly demonstrate the impact of research council funding programmes are also indicative of the performativity inherent in the evaluation of public research funding. These strategies function by enacting an alternative valuation of ‘curiosity driven’ science, by conflating the terms ‘impact’ and ‘excellence’ and by highlighting the long-term and cumulative effects of basic science. In developing this notion of the performativity of valuation, there are two conceptual issues to consider. The first concerns the degree to which conceptualisations of boundary work – and particularly ‘boundary organisation theory’ – tend toward accounts of the stabilisation of boundaries between science and politics, resulting in a theorisation of the political strategies of such organisations in rhetorical and instrumental terms. The second concerns what we might term the ‘social life of data’, or the ‘embeddedness’ of economic calculation in social relations (Appadurai 1986; Granovetter and Swedberg 1992). In this concluding section we consider each of these issues in turn.

For Guston (1999, 2000), the figure of the boundary organisation serves as a collective theorisation of a range of organisations at the interface between scientific research, policy development and public relations. Such organisations function to demarcate matters of scientific authority from those of political judgement and to defend and maintain the cultural credibility and authority of science in contemporary public life. Developing a synthetic account, Guston outlines three characteristic features of boundary work – “the creation of a space for the creation and use of boundary objects or standardised packages, or a combined ‘scientific and social order’; the collaborative participation of principals and agents, or scientists and non-scientists; and the mooring to mutual interests and distinct lines of accountability” (1999, p. 105). For Guston, boundary organisations function to stabilise the science-politics boundary by successfully performing an intermediary role whereby both scientists and policy-makers have an “opportunity to construct the boundary between their enterprises in a way favourable to their own perspectives” (p. 106).

More recent research has begun to parse out a set of weaknesses in this conceptual terminology. Miller (2001) summarises these weaknesses in three ways, suggesting that boundary organisation theory tends to: over-universalise both science and politics; overlook the diverse institutional forms engaged in mediating relations between science and politics; and rely on a static view of science and politics in the context of the increasingly global governance of science. Van Egmond and Bal (2011) highlight a similar set of drawbacks, suggesting that boundary organisation theory “presupposes a strict boundary between science and policy, as well as a unidirectional movement from fundamental to applied knowledge” (p. 110). In response to these conceptual weaknesses, this research has begun to develop notions of ‘boundary management’ associated with what Jasanoff (1990) terms ‘hybrid’ science policy decision making, characterised by the combination of both scientific and political considerations. In order to “maintain productive and dynamic relationships, boundary organisations need to be able to manage hybrids – that is, to put scientific and political elements together, take them apart, establish and maintain boundaries between different forms of life, and coordinate activities taking place in multiple domains” (Miller 2001, p. 487). This critique centres on Guston’s notion of stabilisation – in part derived from a conceptualisation of the production of ‘standardised packages’ that enable the resolution of disputed scientific facts (Star and Griesemer 1989; Godin 2007) – and the degree to which it implies that boundary organisations are engaged simply in defensive political manoeuvres, protecting science from external political intervention. As we noted above, this conceptual tendency is also evident in Gieryn’s notion of boundary work, which does not adequately address the structural limitations on organisations engaged in this form of protective work (Bijker et al. 2009). Both Miller and van Egmond and Bal develop a conception of the ways in which boundary work is sustained by the hybrid ‘management’ and ‘configuration’ of distinctions between science and politics. Rather than implying a strict dichotomy between these terms, this model suggests that the distinction between science and politics is utilised as a resource in both sustaining an institutional identity and developing political strategies. In turn, this requires the kinds of institutional flexibility displayed by the EPSRC in their capacity to switch between political arguments concerning the strategic and cultural value of a broadly defined research base.

Recent research on the sociology of markets and ‘calculative behaviour’ identifies a similar conceptual issue, particularly connected to notions of abstraction and detachment invoked in theories of economic valuation. For example, Callon and Muniesa (2005) distinguish between accounts of economic calculation that rely on neo-classical notions of calculation as an abstract and largely formal process and sociological conceptions that theorise economic evaluation as reflecting underlying social relations. For Callon and Muniesa, neither model “is particularly satisfactory. The former fails to do justice to the diversity of practices observed and the forms of calculation applied in markets. The latter denies any particularities in economic behaviours” (p. 1230). In particular, they suggest that sociological theories of economic valuation – and particularly notions of the social ‘embeddedness’ of economic calculation – are marked by a conceptual tendency whereby the production of value is regarded as simply a matter of ‘pure judgement’. Mirroring Callon’s (1982) earlier critique of ‘interests’, Callon and Muniesa develop a new definition of calculation. They suggest that “in order to be calculated, the entities taken into account have to be detached”; that “entities are associated with one another and subject to manipulations”; and that “a result has to be extracted. A new entity must be produced that corresponds precisely to the manipulations effected in the calculative space” (p. 1231). This definition suggests that calculative market devices function to render goods ‘calculable’.

Drawing on both of these conceptual insights, we suggest that frameworks for measuring and quantifying the impact of public research funding function as socio-technical devices that render science and research valuable. Indeed the strategies adopted by the EPSRC in reframing notions of research excellence mirror the flexible terms in which the values of basic science are articulated and defended in contemporary science policy debate. Alongside a range of principled arguments about the cultural values of science as an alternative – and broadly autonomous – source of moral and epistemological authority sits a range of more instrumental arguments about the nature of scientific practice. In response to a set of political logics concerning the impacts of public research funding, the proper role of research councils in the UK innovation system and the accountability of scientists to forms of democratic governance, the strategy of the EPSRC has been to adopt an alternative theorisation of research impacts and thereby to develop new frameworks for measuring and quantifying them, pursuing the three distinct strategies outlined above. Rather than being defined as a set of rhetorical political manoeuvres in defence of notions of basic science and institutional autonomy, the EPSRC’s strategy has therefore entailed the performative enactment of notions of value and worth.

Using the terminology of boundary work, we suggest that this strategy has necessitated the production of a set of ‘standardised packages’ as indicators of the productivity and value of research funding initiatives. For the EPSRC these packages take a number of forms – and include quantitative metrics of research expenditure, case studies of research impacts and narrative devices embedded in a set of enduring storylines. What is common to these packages is their mobility – their capacity to translate research into movable traces of research performance. Of course, the distinction between quantitative indicators and qualitative devices is largely pragmatic – based on the political spaces in which these tools are used. Whereas quantitative indicators are developed against formal targets and benchmarks, qualitative devices are utilised in a broader discursive strategy aimed at widening the meaning of research impacts. In each case, however, these mobile packages function to produce a valuation of research through the three steps that Callon and Muniesa indicate – detachment, manipulation and extraction. Research metrics, illustrative case studies and initiatives such as the IMPACT! exhibition function by enabling research to be extracted from its context, translated and moved into new contexts. Though current political discourse concerning the impacts of public research funding is characterised by a distinction between accounts of the economic worth of science and the normative values of basic research, the boundary work engaged in defending these values necessitates the performative enactment of these alternative accounts of the value of science.