Introduction

The pursuit of evidence-informed policy is based on the premise that policy outcomes (and so the extent of positive change they can deliver) will be improved if the decision-making process is aided by knowledge that is both of high quality and pertinent to the issue at hand. This premise is explicated through the work of advocates such as Oakley, who argues that evidence-informed approaches ensure that “those who intervene in other people’s lives do so with the utmost benefit and least harm” (2000: 3), and Alton-Lee (2012), who describes the “first do no harm” principle. Oakley thus contends that there exists a moral imperative for policy-makers only to make decisions, or to take action, when armed with the best available evidence. In other words, that:

we [all] share an interest in being able to live our lives as well as we can, free from ill informed intervention and in the best knowledge we can gather of what is likely to make all of us most healthy, most productive, most happy and most able to contribute to the common good (2000: 323).

Failing to employ available evidence can also lead to situations where public money is wasted and members of society (often those who are vulnerable or socially excluded) are not offered treatments or interventions at points in their lives where doing so might provide the most benefit. This is typified in the work of Scott et al. (2001) who, in their analysis of the financial cost of social exclusion, note that acts of antisocial behaviour (ASB) at the age of 10 are accurate predictors of the cost of public services consumed by a given group at age 28, typically a cost 10 times that of those with no ASB issues. Yet such costs could be avoided by policy-makers adopting effective and timely intervention strategies (see Lee et al.’s 2012 analysis for the Washington State Institute for Public Policy).Footnote 1

Initiatives designed to enhance evidence use by government

Numerous initiatives have been instigated by governments and other stakeholders (both in the UK and internationally) in an attempt to improve the links between evidence and education policy. Gough et al. (2011), as part of the Evidence Informed Policy in Education in Europe (EIPEE) project, identified 269 instances of such linking activity in a survey of 30 European countries. Gough et al. (2011: 4) suggest that: “the findings from the survey [indicate] a high level of activity across Europe and demonstrate that a wide variety of approaches [have] been taken to try to improve the use of research evidence in policy settings.” A catalogue of UK government initiatives, meanwhile, is set out in Brown (2013). The most recent of these include the 2012 Civil Service Reform Plan,Footnote 2 which commits government both to broader engagement with external ‘experts’ and to investigating the feasibility of setting up an independent ‘what works’ centre for social policy. In addition, in 2013 the UK government announced the launch of the What Works Network Footnote 3: six independent ‘evidence centres’ responsible for producing and disseminating research to decision-makers on areas such as crime reduction, active and independent ageing, early intervention, educational attainment and local economic growth. The aim of the centres is to support local decision-makers in investing in services that deliver the best outcomes for citizens and value for money for taxpayers. This is to be achieved through the collation, assessment and synthesis of published evidence on the effectiveness of interventions, assessing these using a common ‘currency’, publishing clear synthesis reports and sharing findings in an accessible way (see Cabinet Office 2013).

Whilst such initiatives abound, it is also argued that their impact to date has been limited (Brown 2013). For example, while a number of studies suggest that capacity to understand and consider evidence does, to an extent, exist at the level of the individual policy-maker (see Campbell et al. 2007; Brown 2009), Nutley et al. (2007) argue that the effects of those initiatives designed to improve evidence use across the civil service generally have not been fully evaluated and are restricted to case studies and simple anecdotes of success. Similarly, although the Government Office for Science (2010) observes that within the Department for Education (DfE) “there is strong and active leadership support for evidence-based policy-making” (2010: 22), exemplified by the Making Policy Happen programme (designed to shift policy-makers’ behaviours towards a better consideration and use of evidence as part of their decision making), it also notes that Making Policy Happen is yet to be fully embedded and will require continued senior-level support to become so. At the time of writing, however,Footnote 4 Making Policy Happen does not feature as an active initiative on the DfE’s website. It should also be noted that, between the publication of the Government Office for Science’s Science and Analysis Review and now, a number of the examples of best practice spotlighted by the Government Office for Science appear to have been discontinued. For instance, the department’s annual Research Conference is no longer referenced on www.dfe.gov.uk. Gone too are the annual Analysis and Evidence Strategies: the DfE’s annual statements of evidence requirements and priorities (e.g. see Department for Children Schools and Families 2008, 2009).

There have been few attempts in the UK to assess quantitatively the uptake of research by policy-makers. Such approaches have been undertaken elsewhere, however. For example, in Canada, Landry et al. (2003) undertook a survey of 833 government officials from Canadian and provincial public administrations in order to examine the extent to which they employed academic research as part of the policy process. Landry et al. based their measure of research use on the Knott and Wildavsky (1980) scale, which ranges from ‘reception’ (“I received the university research pertinent to my work”) to ‘influence’ (“university research results influenced decisions in my administrative unit”) via the stages of ‘cognition’, ‘discussion’, ‘reference’ and ‘effort’ (“I made efforts to favour the use of university research results”). As Webber (1992: 21) observes, the points along the Knott and Wildavsky scale are “meant not only to capture the extent to which information is processed cognitively by the policy-makers but also its consequence in the policy process.” The scale can be considered cumulative in the sense that cognition builds on reception, discussion on cognition, reference on discussion, effort on reference, and influence on effort. Since research rarely provides an immediate solution to a policy problem (Weiss 1986), policy-makers were asked to consider an elongated time period: “Concerning the use of university work, please indicate your experience in relation to the six following aspects [of the Knott and Wildavsky scale]. Drawing on your experience of the last 5 years, check a single box using the following scale, where 1 = never; 2 = rarely; 3 = sometimes; 4 = often; and 5 = always” (Landry et al. 2003: 196: my emphasis) (Table 1).

Table 1 Frequency of knowledge use (from Landry et al. 2003: n = 833)

Landry et al.’s results indicate that nearly 12 % of policy-makers report they often or always received academic research pertinent to their work; 39 % of respondents, however, rarely or never did. Moving through the six stages from ‘reception’ to ‘influence’, it can be seen that there is an increase in university research that is rarely or never used and, conversely, a decrease in university research that is often or always used. Ultimately, only 8 % of respondents reported that the academic research they received ‘often’ influenced decisions, and slightly less than 1 % indicated that academic research always influenced decisions in their departments; 41 % reported that research rarely or never affected decisions (with 46 % rarely or never making efforts to favour the use of academic research). The analysis undertaken by Landry et al. (2003) again suggests that attempts to enhance the uptake of research have not been as successful as they might be.
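To make the structure of this kind of analysis concrete, the sketch below illustrates how responses on the six cumulative Knott and Wildavsky stages, each rated from 1 (never) to 5 (always), might be tallied into the ‘rarely or never’ and ‘often or always’ proportions discussed above. It is a minimal, hypothetical illustration rather than Landry et al.’s actual procedure: the stage labels are taken from the scale as described here, but the function name and the respondent data are invented.

```python
from collections import Counter

# The six cumulative Knott and Wildavsky stages, in order.
STAGES = ["reception", "cognition", "discussion", "reference", "effort", "influence"]

def summarise(responses):
    """For each stage, return the share of respondents answering 1-2
    (rarely or never) and 4-5 (often or always) on the five-point scale."""
    summary = {}
    for stage in STAGES:
        ratings = Counter(r[stage] for r in responses)
        n = sum(ratings.values())
        summary[stage] = {
            "rarely_or_never": (ratings[1] + ratings[2]) / n,
            "often_or_always": (ratings[4] + ratings[5]) / n,
        }
    return summary

# Two invented respondents, purely for illustration.
example = [
    {"reception": 4, "cognition": 3, "discussion": 3,
     "reference": 2, "effort": 2, "influence": 1},
    {"reception": 2, "cognition": 2, "discussion": 1,
     "reference": 1, "effort": 1, "influence": 1},
]
print(summarise(example))
```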

Current assumptions underpinning evidence-informed policy making

Whilst it is clear that current attempts to improve the use of evidence are not as impactful as they might be, little analysis to date has sought to investigate why this might be the case. I suggest that there are a number of reasons why such initiatives are yet to take hold, particularly within educational policy making. These are set out below as assumptions that are often held, yet have also been problematised in extant literature (e.g. see Brown 2011), and were arrived at after undertaking an extensive review of literature. The aim of the review was to provide an overview of existing theory and an understanding of the type of empirical studies previously undertaken in this area. Literature was initially sourced in two ways: (1) a search of four prominent databasesFootnote 5 using search terms synonymous with ‘evidence-informed policy making’ and ‘knowledge adoption’Footnote 6; and (2) recommendations on seminal literature sought from (and provided by) colleagues, authors identified in the search above, and experts in the fields of evidence-informed policy making and knowledge adoption. The references cited by the authors of these studies were then also reviewed. This approach to sourcing literature, combined with the screening criteria, resulted in a total of 228 papers, studies, reports and books being reviewed over a one-and-a-half-year period. Although the assumptions were compiled in this way, they should not be thought of as all carrying equal weight, and not all will be held simultaneously by policy-makers or by researchers who have undertaken work in this area. Nonetheless, at least some will ring true with individual readers familiar with this subject or with those who actively develop policy:

  • Assumption 1: That the use of evidence in policy making is inherently rational in nature: i.e. that of primary concern to policy-makers is a desire to develop policies that are optimal in terms of their efficacy, equity and value for money. As such, policy-makers not only systematically seek out evidence to aid their decision making but also pursue all pertinent evidence with regards to a particular issue (e.g. see Trowler 2003).

  • Assumption 2: That there exists a process for developing policy and that this process has both broadly definable stages and also tends to operate in a broadly linear or sequential order. Correspondingly that there are specific roles for research within this process; i.e. that research should be considered at specific points, with a view to it then being used to aid specific decisions. For example, by aiding the identification of a problem, by helping to create, form or steer the public agenda, or by aiding (or inspiring) policy directorates in the development of their initiatives (e.g. Nutley et al. 2007; Perry et al. 2010).

  • Assumption 3: That the provision of knowledge can, by itself, deliver or lead to expertise with regards to the topic in question, as well as to expertise in terms of social actors being able to ‘use’ evidence generally (e.g. Hackley 1999; Stewart and Liabo 2012). In a similar vein is the notion that the more simply evidence is presented, the easier it is for policy-makers to make a decision and the more effective that decision will be (e.g. Hillage et al. 1998; Brown 2009; Cherney et al. 2012).

  • Assumption 4: That the voices of researchers carry equal weight both with those of others operating within the policy ‘sphere of influence’ (such as think tanks) and with those of the policy-makers who are responsible for developing a given policy (e.g. Habermas and Cooke 1999). In other words, it is the quality of the argument that matters, rather than who it is that is making the argument. Correspondingly, research use is thought to transcend fashions and fads concerning, for example, the topic in question or the author of the studies (e.g. Brown 2012).

  • Assumption 5: That instances of evidence use have not materialised in greater numbers because the process of educational research and its underpinning epistemological/ontological assumptions principally serve the interests of academics, rather than those of policy-makers. As a result, there is generally a mismatch between the research and policy-making cycles; the ‘quality’ of research is poor; and researchers are unable to express their conclusions in ways that make them usable: for example, by failing to provide detail on ‘what works’ or by not providing definitive ‘facts’ about the social world in its actual (i.e. unequivocal) state (e.g. Hargreaves 1996; Hillage et al. 1998; Tooley and Darby 1998; Davies 2006). This links to assumption 3, since the same critics also feel that the communication of research has traditionally been through language and via means that policy-makers find inaccessible.

That these assumptions can be problematised and shown not to hold true in a number of empirical studies (e.g. Landry et al. 2003; Coburn et al. 2009; Brown 2009, 2011) raises a simple question: if policy development does not occur in the ways currently envisaged, and if these blockages to evidence use truly exist, then why continue to try to facilitate evidence-informed policy making through approaches that do not effectively address or account for them? My alternative is to posit the adoption of expertise in evidence use, which has its basis in the work of Bent Flyvbjerg; in particular in his 2001 thesis, which examines the role of the social sciences and how they might be best harnessed to deliver “enlightened political, economic, and cultural development” in society (2001: 3). Flyvbjerg posits that this is best achieved by employing a contemporary interpretation of Aristotle’s notion of phronesis, often translated as ‘practical wisdom’. I now use the remainder of this paper to examine in detail Flyvbjerg’s notion of phronetic expertise and how it might apply to the use of evidence by educational policy makers. Correspondingly, therefore, I assess if and how a better understanding and harnessing of expertise in relation to evidence use might provide an alternative approach to that outlined by the assumptions described above.

Expertise

Flyvbjerg employs the Dreyfus model of learning to illustrate what he means by expertise. The Dreyfus model sets out five levels of human learning, ranging from novice to expert,Footnote 7 with each level comprising recognisably different behaviours in relation to performance at a given skill. A novice, for example, is new to particular situations; during instruction they will learn about the facts and other characteristics pertaining to the situation, and so are taught, or develop, ‘rules for action’. Flyvbjerg suggests that for the novice:

Facts, characteristics, and rules are defined so clearly and objectively… that they can be recognised without reference to the concrete situations in which they occur. On the contrary, the rules can be generalised to all similar situations, which the novice might conceivably confront. At the novice level, facts, characteristics, and rules are not dependent on context: they are context independent (2001: 11).

This is illustrated through the example of learning to drive, which comprises theoretical rules for action (the rules of the road) along with more practical instructions: e.g. what constitutes the biting point of the clutch; the process of moving through gear changes and so on. Both can be learnt independently of any concrete situation. Over time, however, as the driver becomes more familiar with instances in which they change gear, this process becomes more intuitive. Flyvbjerg argues that as learners advance from ‘novice’ through the levels of ‘advanced beginner’, ‘competent performer’ and ‘proficient performer’, a number of things occur to facilitate the normalisation of this instinctual/intuitive behaviour: firstly, instances of performing in real-life situations increase, and correspondingly the number of ‘cases’ that the learner encounters and tackles also increases; secondly, recognition of different situations accumulates, as does recognition of the context in which those situations occur; thirdly, dependency on specific ‘rules for action’ diminishes as learners become able to interpret and judge how to perform optimally in any given situation (for example, when the noise of the engine, rather than rules of thumb concerning speed, indicates that it is time to change gear). Genuine expertise, however, only occurs as the result of a ‘quantum leap’ in behaviour and perception: from that of an analytical problem solver to someone who “[exhibits] thinking and behaviour that is rapid, intuitive, holistic, interpretive… [expertise] has no immediate similarity to the slow, analytical reasoning which characterises rational problem-solving and the first three levels of the learning process” (Flyvbjerg 2001: 14). In other words, experts immediately perceive a situation (the problem that is presented, the goal that must be achieved and the actions that will address this) without needing to divide this process into distinct phases. This, Flyvbjerg argues, is “the level of true human expertise. Experts are characterised by a flowing, effortless performance, unhindered by analytical deliberations” (2001: 21).

Expertise in policy development

In the UK, the social actors directly responsible for creating central government policy are ministers and civil servants. The role of ministers is described by Riddell et al. (2011), who suggest that in relation to policy, ministerial responsibility exists both in terms of: (1) parliamentary duties (for example with regards to making statements about or in defence of policy decisions); and (2) executive and policy related responsibilities (developing policy objectives, approving decisions and providing leadership for both senior officials and their department more widely). Regarding civil servants, Ribbins and Sherratt (2012) argue that there has been a paucity of empirical studies detailing their role in policy development. Nonetheless, it is possible to set out a conceptual or theoretical position regarding what their ‘performances’ might comprise: in particular, that it is the responsibility of civil servants to serve apolitically and implement the policies of the elected government of the day (ibid). I note in Brown (2011), for example, that from the perspective of the civil servants I have previously interviewed regarding policy development, the policies they work on or develop originate from the pre-conceived ideas, the commitments and the overall narrative of the ministers or the political party (or coalition of parties) currently in power. Policies are typically developed by teams rather than individuals, and each member of the team will possess a greater or lesser general understanding of how the policy process works (depending on their time in post). Whilst those responsible for the development of policy texts (such as Green and White papers) are likely to be ‘generalists’, they will also draw on expertise from other areas (such as legal or economic advice) as and when required (Brown 2013). In terms of the policy process then, ‘ultimate’ expertise from a Flyvbjergian perspective (the achievement of phronetic, virtuoso expertise) may be envisaged as a state in which individual civil servants can, almost without thinking, interpret and respond to a policy request in a way that meets the requirements of the politicians requesting it, whilst also attending to the contextual nuances that might affect the successful enactment of the policy.

Invariably, however, each policy request will differ in terms of the ideologies of those requesting it, their output or impact requirements, the setting involved and the resources available. This means that, unlike other acts within the education sphere (such as teaching), developing solutions to ever-changing requests cannot be viewed as an act of ‘performance’ that can be practised and perfected, nor something that can ever be universally or effectively judged against a fixed or definable standard: success simply relates to how well the needs and likely responses of those requesting the policy have been anticipated and met. As such, unlike more realist notions of expertise (for example, the often-quoted 10,000 hours required to achieve certain recognised levels of performance; e.g. see Lemov et al. 2013), possession of expertise in the policy setting can be viewed as more constructivist and so temporally and context specific. For example, an individual may get on well with one Minister and be able quickly to ascertain and meet their needs, but may initially jar with the personality of another; this may then affect their ability, at first, to provide solutions that both meet the Minister’s requirements and can be effectively enacted. Alternatively, an official may move departments and so have to learn about the new department’s recent policy history and the aims and successes of its policies; or an exogenous event (for example an act of legislation, or an economic, social or natural disaster) may mean that the policy context changes, requiring a new understanding of what is achievable. As such, the development of policy expertise is likely to be gradual and interrupted rather than linear, and to stem from constant immersion both in the policy process and in the focus, remit and personalities of those running the department in which officials operate. Not only this, but given the gradual turnover of staff in any organisation, at any one time government departments will be populated with officials at different levels of competence. As a consequence, policy texts and documents will be constructed by (and also reflect) the range of proficiency and understanding that exists. Nonetheless, the general trend, despite any disruptions which occur en route, should be seen as one which heads towards competence (e.g. see Dowling 2010a, b) as policy makers engage more and more with specific policy cases and instances.

Expertise as learning

Notions of expertise from a Flyvbjergian perspective derive from the learning that accrues from experience: i.e. expertise is explicitly related by Flyvbjerg to the number of cases an individual interacts with. This approach is thus congruent with more constructivist/socio-cultural perspectives on learning, which consider the mental models learners employ when responding to new information and which reflect the notion that knowledge itself emerges from participation in cultural practices (see Paavola et al. 2004). Important, too, is the posited notion of ‘distributed cognition’ (i.e. that aspects of knowledge will be distributed amongst individuals), which implies that collaborative problem solving can be more productive than the efforts of individuals since it will bring together a myriad of perspectives. As such, it would appear that, through their day-to-day actions, interactions and engagement, policy makers learn through these constructivist and sociocultural modes, and it is this learning that leads them to develop expertise in policy development.

When it comes to evidence use as currently conceived, however, civil servants would appear both to be positioned, and to act, at the level of the novice. For example, I have noted above the linear perspectives regarding how evidence is used, which posit that there will be fixed points at which evidence will or should be consulted in order to inform policy. In other words, evidence is not regarded as something to be considered continuously or holistically but separately, as part of a defined, rationalised sequence of events. This disrupted engagement can occur either directly with regards to research texts or in terms of those who might provide them: for example, where they exist, government departments often separate specialists (those holding experience with regards to social research, economic knowledge, legal knowledge and so on) from more ‘generalist’ civil servants, rather than bringing them together within policy teams. Such separation can vary from teams being located on different floors of an office to their being situated in different cities (the Department for Education, for example, has much of its research and statistical activity based outside of London, including in offices in Sheffield and Darlington). Separation does, however, mean that specialists are often called in at fixed points to discuss an issue rather than providing constant input into the policy development process. This, as a result, serves to limit the number of instances or cases of evidence that ‘generalist’ policy-makers are exposed to.

This notion is reaffirmed when we examine the kinds of evidence requested and privileged by civil servants, which are often akin to the knowledge that may be found in an instruction manual: for example, evidence that details ‘what works’, i.e. that provides generic, context-independent rules of thumb (which can be applied to any situation: e.g. to all classrooms or schools), or evidence that is ‘policy ready’ (Brown 2013). In both cases, this type of evidence can be directly applied against potential plans for action or used to pinpoint solutions in themselves. In addition, this type of evidence is easily digested and thus frequently does not require more than cursory engagement or interpretation in order for it to be employed. In fact, the more esculent the evidence, the more likely its ‘use’ can be limited to a simple acceptance/rejection of the recommendations or solutions presented (Cherney et al. 2012). For example, should it point to a solution that is not cost effective or that is impractical to implement, then this type of evidence is likely to be dismissed out of hand. A more valuable engagement, however, would see policy makers taking into account the basic or underlying principles of the message in question, with these then becoming intertwined with other situational/contextual variables in order to produce a solution (also see Ball 1994 and Pollard and Newman 2010). Such engagement is typically referred to as being ‘reflective’ in nature (Hannay and Earl 2012), and I suggest that the non-reflective use of research would appear to stem from how research is currently recontextualised.

Recontextualisation

Dowling describes the process of ‘recontextualisation’ as “any action that views one practice from the perspective of another” (Dowling 2010a, b: 3). For the purposes of evidence-informed policy making, therefore, recontextualisation refers to the viewing of the practice and outputs of research from the perspective of the practice and desired outputs of policy making. In Dowling (2013), the following schema is provided for the process of recontextualisation (Table 2).

Table 2 Modes of recontextualisation (from Dowling 2013)

The axes of the schema may be defined in the following way: firstly, whether the ‘practice’ being recontextualised exhibits strong or weak discursive saturation (i.e. the extent to which the strategies associated with the practice tend to render its principles explicit). For example, the process of undertaking academic research (issues with epistemological paradigms aside) may be regarded as explicit, since ‘best practice’ is described in research textbooks and, for students, examined in a formalised way. Secondly, whether the practice which engages in the recontextualisation activity does so via formalised and explicit strategies or via those which are more ad hoc in nature: for example, in the case of policy-making, whether there are distinctive rules or procedures for action in creating government policy (specifically, as relate to the use of evidence). This then provides four possible modes of recontextualisation, of which I am concerned with two: re-principling and de-principling. This is because, as noted above, I suggest that the process of creating research may be regarded as exhibiting strong discursive saturation, with extant notions regarding ‘best practice’ set out in terms of strategies to ascertain validity and reliability (strategies that stretch across disciplines and paradigms).

In order to ascertain which of these modes (re- or de-principling) currently dominates, however, we have to understand the present practices of utilising research within policy development. Empirical analysis would appear to suggest that these are more context specific or even ad hoc in nature (e.g. see Brown 2013) and so are tacitly principled. For example, current strategies that might be employed by researchers (or intermediaries) in order that evidence might be used within policy development are set out in Brown (2011) and relate to researchers: (1) providing outputs which attempt to meet policy-makers’ specific requirements from research (‘policy ready’ strategies); (2) seeking to communicate effectively and/or use effective techniques or channels to promote their research (‘promotional’ strategies); (3) engaging in ‘traditional’ academic behaviour (‘traditional’ strategies); and (4) attempting to shift their relative position with regards to how ‘privileged’ they are by policy-makers (which affects the ease with which they can access or influence them), or how policy-makers perceive the policy context to which their research pertains (‘policy preference’ strategies).

To the extent that there appear to be no explicit, formalised strategies for policy-makers’ engagement with evidence, other than to request that it is transformed by researchers (who can only do this by essentially ‘second guessing’ what needs to be done, and without guarantee that their efforts will be successful), we might argue that policy-makers are seeking what Dowling (2010a) describes as ‘instruction’: pedagogic action in relation to research that simply concerns one-off performances and occurs without any general desire or requirement to develop more enduring competence.Footnote 8 As a result, their current approach to recontextualisation is one of de-principling: of not seeking to fully engage with or learn from research, but rather to use it to bolster policy arguments.

Whilst the novice-level use of de-principled evidence might be appropriate in solving well-specified tasks, where the solution is equally well defined and understood, it is less well suited to situations in which the task is complex, where the solution needs to incorporate a multitude of factors (including the economic and ideological) and where any solution proposed will be scrutinised by the media and critiqued by those with vested interests in its outcome; in other words, the process of policy making as it actually occurs. True expertise in evidence use, on the other hand, provides a vision of policy makers as social actors who intuitively develop responses to situations: an intuitive, holistic reading based on an amalgamation of the formal knowledge they have adopted to date, an understanding of the specific case they are dealing with and their understanding of the other environmental factors that might influence the policy decision (e.g. the amount of money that is available; the ideological or personal perspectives of the ministers initiating the policy; how the press/public might respond; who might try to block its implementation; the stakeholders who might need to be courted; the capacity of available delivery mechanisms to ensure that the policy is implemented on the ground, and so on). This moves conceptions of how policy makers should engage with evidence away from something separate and temporally specific, towards something which is fully and continuously integrated.

Moving to expertise in evidence use

My proposed way forward for the notion of evidence-informed policy making is therefore to suggest that policy makers, as an essential element of their role, move to more continuous engagement with research and researchers. As well as illustrating what might be, however, I argue that the phronetic approach actually represents a more realistic conception of evidence use as it currently stands. For example, it is ordinarily (implicitly) assumed that the mind of the policy maker must be ‘empty’ of knowledge in relation to a given issue until they have been provided with evidence about it. Patently, this cannot be true: policy makers will have considered opinions, are likely to have an understanding of the wider policy environment and may have already digested research on a given issue before they are specifically required to tackle a given problem. As such, adopting a phronetic approach illustrates the fallacy of conceiving of evidence use as something separate from policy development: instead, we must recognise that policy-makers and their decisions will already be (explicitly or implicitly) informed by every facet that has shaped their perspective/reality to date, including the evidence and knowledge they have already adopted.

I also argue that my proposed approach presents a more effective way of accounting for some of the issues (the problematised assumptions) I outline above, which serve to prevent increased instances of evidence use. For example, continuous engagement with research means that policy decisions will not be contingent on evidence being considered at a fixed point in time in order for them to be considered ‘evidence-informed’. In addition, this position provides a much-needed ‘constructivist’ alternative to the dominant ‘realist positivist’ perspective: rather than relying on or awaiting tranches of evidence to provide direction, policy makers will instead develop their own understanding of the evidence base and draw their own implications (in terms of ‘what works’) from it. At the same time, how evidence is interpreted and whether it is adopted, rather than being solely a function of ‘rational’ engagement, will be driven by the perspectives developed by policy makers over time and the realities that they inhabit. This means that, rather than assuming concepts of quality and the methodology employed will be the key determinants of whether evidence will be used, these will instead sit alongside notions such as how well the story ‘resonates’ with policy makers (something alluded to by Huberman 1990). In addition, as policy makers develop a picture of the evidence base over time, they will be less likely to be faced with situations where they are required to accept or reject the findings of a particular study in relation to a policy decision. Instead their decisions will be steeped in a rich bank of evidence, some of which (depending upon their time in post) will have its origins in a multitude of past ‘policy agoras’ (Brown 2011) corresponding to a wealth of ideological positions held by previous governments: civil servants will thus be aware of potential solutions residing outside of the agora and the ways in which these solutions might be brought into the fold.

What kind of engagement?

So what format should such continuous engagement take? Collins and Evans (2007) argue that developing expertise requires deep immersion amongst those considered to be experts. Learning communities are a form of capacity building which embraces this approach, and have been described by Stoll (2008: 107) as a means through which to build “learning [in order] to support educational improvement”. Learning communities comprise “inclusive, reflective, mutually supportive and collaborative groups of people who find ways, inside and outside their immediate community, to investigate and learn more about their practice” (ibid). The notion of such communities thus encapsulates instances where policy-makers and researchers might conjoin in order to facilitate learning about and from formalised/academic knowledge.

A key benefit of the learning communities approach may be attributed to the nature of the learning that takes place within them, which is encapsulated by the process of knowledge ‘creation’: described by Stoll (2008) as one where the producers and users of formal knowledge, who are, respectively, also the users and holders of ‘practical’ knowledge, come together to create ‘new’ knowledge (Stoll, however, uses the term ‘animation’). Nonaka and Takeuchi (1995) conceptualise this process of creation as one which arises from the interactions between tacit and explicit (or informal and formal) knowledge: in particular, a ‘spiralling’ that accrues from the occurrence of four sequential types of knowledge conversion: (1) the conversion of tacit knowledge to tacit knowledge (labelled ‘socialisation’); (2) the ‘externalisation’ of tacit knowledge (i.e. its explication); (3) the conversion of explicit knowledge to explicit knowledge (which represents the ‘combination’ of explicit knowledges); and (4) the ‘internalisation’ of knowledge (i.e. its move from being explicit to being tacit). The first stage, ‘socialisation’, should be regarded as representing baseline behaviour, i.e. the normal state of affairs which governs policy-makers’ interaction with one another, since it may be assumed that those operating in a given policy area will all have some commonality or overlap with regards to their perceptions and understandings of a given issue. The second stage (‘externalisation’) represents a situation in which policy-makers and researchers are cataloguing ‘what is known’ (both in terms of ‘formal’ and praxis-based knowledge); the third stage represents creation, as the totality of ‘what is known’ is combined and shaped into new solutions to issues. In the sense that I have been discussing throughout this paper, such creation results in ‘policy ready’ knowledge, i.e. evidence directed at policy problems. The final stage then represents policy-makers porting this new knowledge and intuitively drawing upon it as part of the day-to-day process of developing policy solutions.

I note above that creation will lead to new knowledge that is ‘policy ready’. Unlike the current conception, however (in which I have critiqued the notion of ‘policy ready’ because of policy-makers’ expectation that these outputs should comprise a solo enterprise by researchers), through knowledge creation policy-makers build their capacity to use research (i.e. by engaging with researchers who can assist with explanations as to method and meaning), whilst simultaneously merging formal and informal/tacit knowledge in a dynamic way to address real and current policy concerns. In other words, policy-makers engage to construct their own ‘policy ready’ knowledge-based understanding, rather than acting as passive recipients of research knowledge. It is this creation (and the journey towards it) which thus develops policy makers’ phronetic expertise with regards to evidence use.

How might such engagement be facilitated and enforced?

The final consideration then must be how to facilitate and enforce continuous engagement via the development of policy learning communities. This is likely to need thought and effort both at the level of the individual and at the level of departments/organisations. At an individual level, if the general remit or expected requirements of policy-makers are expanded so as to include an active engagement with evidence, then expectations with regards to their ability to do so must be altered too. The competency framework for central government officials in the UK is set out within Professional Skills for Government (PSG). The PSG provides four thematic groupings for Civil Service competencies: (1) leadership; (2) core skills; (3) professional skills; and (4) broader experience.Footnote 9 Within these, two statements of ‘ability’ encapsulate what is currently required in terms of evidence use: in ‘leadership’, for example, there is the requirement that civil servants “build capacity for the organisation to address current and future challenges”; under ‘core skills’ is the requirement that civil servants are able to engage in “analysis and use of evidence”.

I suggest that these are neither sufficiently descriptive nor sufficiently prescriptive to encourage the development of capacity for, and instances of, evidence use, nor the regular participation of officials in learning communities (or similar activities). Beginning with the former, for example, Hannay and Earl (2012: 313) argue that the skills required for the twenty-first century include: “collaboration, problem framing, critical thinking, ‘thinking outside of the box’, innovation and creativity,” all of which are functions or outputs associated with being able to employ evidence. Pierson et al. (2012), examining the health sector in Canada, also suggest that core competencies for policy-makers working in this area should include ‘proficiency’ in evidence-informed decision making (which implies developing ‘experience of’ rather than simply having an ‘ability to’). In a similar vein, Pollard and Newman argue that:

The ability of teachers to recognise the type of knowledge required to address a particular practice issue, to find such knowledge, to appraise its quality and relevance, and to interpret it for their own practice environment are… key features of professional teaching practice. This process and the set of skills/knowledge required to apply it are what is referred to as evidence-informed practice (2010: 264).

I argue that the same might apply to policy-makers seeking to find innovative and effective solutions to problems (whether in one department or across departments, or even across jurisdictions where specific problems have multiple roots or drivers: e.g. also see Hargreaves 1999, 2010). As such, it is clear that current descriptive requirements in this area need to be revised so that policy-makers are not only required to engage with evidence, but it is also obvious what they are expected to be able to achieve once they have done so.

In addition to competency, however, is the mechanism of enforcement: i.e. prescription with regards to how the descriptive expectation noted above is to be realised in practice. In recent years UK governments have sought to establish various ways of improving the competence and capacity of the education workforce. This approach is encapsulated by the notion of the ‘self-improving’ school system and its corresponding four forces of ‘top-down performance management’; the development of ‘capability and capacity’; market ‘incentives’ to improve efficiency and quality; and customers/users being able to dictate and shape services (Ball 2008: 103). Ball (2008) argues that, hand in hand with the emergence of this system, has been the use of a number of key terms to signify what is required from reform, including notions such as ‘transformation’, ‘enterprise’, ‘modernisation’, ‘innovation’, ‘creativity’, ‘competition’ and ‘dynamism’. Within the self-improving system, specific policies in relation to the governance and modernisation of the school workforce have included, for example, both ‘standards’ for teachers and the National Professional Qualification for Headship (NPQH) for school leaders. These serve not only to set out expected behaviour, against which performance can be judged, but have also, in recent years, acted as frameworks for progression within the teaching profession (with the NPQH, for example, essentially acting as a benchmark entry requirement for headship in UK schools). Given the notion of such a system, of which policy-makers must surely be part, it is argued that the reforming gaze of government may also be required to look inwards and apply to itself some of the transformative and modernising approaches used elsewhere: putting in place mechanisms of performativity (such as standards) to ensure that engagement with evidence and participation in activities such as knowledge creation are essential rather than perfunctory, and form a vital and integral part of any policy-maker’s career progression.

Organisational culture, too, will need to accommodate learning community activity. Talbert (2010), for instance, notes that, whilst learning communities rely on collaboration and access to a variety of resources, and are directed towards establishing mutual accountability and responsibility for reaching effective outcomes, bureaucracies rely on creating checks and balances. In other words, bureaucracies operate by limiting the power of individuals or groups of individuals to act. As a consequence, Talbert argues that bureaucratic resources must be used to facilitate learning strategies: i.e. that aspects such as collaboration, mutual trust, participation and accountability must be allowed to develop within a context of rules, check-up and accountability.

Conclusion

Within this paper I have argued that current conceptions of evidence-informed policy making are dominated by a number of assumptions which fail either to meaningfully characterise the policy process or to fully account for the role of evidence within it. Instead they serve to hinder efforts to marry evidence to decision-making (by grounding such efforts falsely) and in doing so perpetuate what has been previously described as the ‘evidence-dilemma’: an intuitive understanding that evidence should have a substantive influence on policy, combined with a disappointed resignation to the fact that it invariably will not (Brown 2013). My alternative has been to engage with Flyvbjerg’s notion of expertise and to show how the learning that accrues from engagement with multiple cases will, in the long term, lead to competency. As a result, I have firstly proposed educational change by suggesting that policy-makers should seek to engage with evidence in a continuous rather than sporadic way throughout the policy process. My proposed approach to this is the establishment of learning communities and the instigation of processes of knowledge creation within them, as well as the introduction of mechanisms for ensuring policy-makers are required, but also supported, to participate within such communities. I have also, however, sought to put forward suggestions for ways of facilitating more effective educational change in terms of the development of educational policy. This is because I argue that it is only by unleashing the type of expertise that will accrue from such activity that we might see evidence use increasing the probability of policy being more effective, equitable and efficient in terms of its value for money (Oxman et al. 2009).