1 Introduction

Policies are all around us and, directly or indirectly, they influence many aspects of our life. Quite often, people ask themselves how such policies were conceived, why politicians decided to implement one policy and not another, why it was implemented in that precise way and with particular resources, how these resources were used, and so on. As decision analysts we are often confronted with “clients”, whether public agencies or stakeholders, involved in public decision processes, to whom we are expected to provide useful knowledge for such processes. But what is useful knowledge in such a context? International agreements, laws and norms, a macroeconomic plan, some politicians’ interests, national statistics, surveys and polls? This paper aims to discuss “evidence-based policy making”, a recent attempt to summarise such useful knowledge as “evidence” which should guide policies. It provides a critical perspective on this approach and then proposes a new concept for supporting policy making: policy analytics.

We start by considering policies as shapeless objects, modelled by politics, where a set of interrelated actions is aimed at achieving a set of multiple and interrelated goals within a period of time. In a public policy context the “decision process” or “policy cycle” (Lasswell 1956) (we will use the terms interchangeably) can be characterised by several relevant features. First, the policy cycle consists of a set of interrelated decision processes linked by goals, resources, areas of interest or involved stakeholders. Second, in a public context, once we start considering laws, rights and governance principles, it is difficult to identify the person(s) who will have the power to decide: who is(are) the policy maker(s). Third, issues may be ill defined, goals may be unclear, and the stakeholders are usually many and difficult to identify. Fourth, actions and policies are interrelated, and so are their consequences, although sometimes seemingly very distant and disconnected. Fifth, the time factor must be introduced. On the one hand, in order to be effective and solve problems in a comprehensive and organic manner, public policies need a strategic approach establishing long term agendas; these may conflict with the short-term agendas policy makers may have. On the other hand, the longer the time horizon of a policy, the more we need to consider different and unforeseeable risks and uncertainties (Beck 1992).

In the last few years this context has become more complex: participation and “bottom up” actions have become frequent and are often required by law. Citizens, if and when directly affected by some policy, are (over)informed and becoming active in the policy-making process. They do not wait for obscure decisions to fall from the top down; they want to know and be informed about government decisions and actions. They want to receive explanations before accepting decisions. They are not subjects but agents of democracy and, in this sense, participatory processes become crucial both to ensure a democratic process and to avoid opposition phenomena such as “not in my backyard”. That is why policy makers now more than ever are expected to be accountable, and policy making to be “evidence-based” rather than based on unsupported opinions that are difficult to defend. A transparent relation between decision makers and stakeholders becomes fundamental in order to attain and preserve consensus. Consensus is a very important resource in every public policy process. Most actions policy makers or other relevant stakeholders undertake are guided by consensus seeking, and most resources committed within a policy cycle become important in relation to their ability to be transformed into consensus (Dente 2011). Thus, policies become instruments “to exercise power and shape the world” (Goodin et al. 2006, p. 3).

In 1997 the concept of evidence-based policy making (EBPM) was introduced, in its modern form, by the Blair government (Blair 1994). The idea of creating policies on the basis of available knowledge and research on the specific topic is not new and is generally accepted. However, we need to look more deeply into the concept and its peculiarities in order to understand exactly what it means. First of all, what is evidence? According to the Oxford English Dictionary, evidence is the “available body of facts or information indicating whether a belief or proposition is true or valid”. However, this definition is far from precise and somewhat ambiguous. Why is “evidence” once again highlighted as a support for deciding about policies? The idea of using some form of evidence in order to conceive a policy is not really new. In what sense is its contribution now different?

EBPM has been defined as the method or approach that “helps people make well informed decisions about policies, programmes and projects by putting the best available evidence from research at the heart of policy development and implementation” (Davies 1999). It is important to point out that the scope of EBPM is to help and “inform the policy process, rather than aiming directly to affect the eventual goals of the policy” (Sutcliffe and Court 2005). We could say that EBPM essentially consists of “the integration of experience, judgement and expertise with the best available external evidence from systematic research” (Davies 2004). Evidence should include all data from past experiences, as well as information and good practices from literature reviews. This is certainly important and necessary, but is it also sufficient to make a good policy? Dente (2011) claims that in order to understand and assess a policy, what is really important is the policy making process which led to it, rather than the policy itself. Thus, in order to support such a decision process, we need information and knowledge that consider the policy cycle as a whole and are able to support accountability requirements.

Davies (1999, 2004) and Gray (1997) claim that the introduction of EBPM produces a shift away from opinion-based decision making towards evidence-based decision making. This shift is far from easy. On one hand, EBPM seems to be viewed as an objective method of deciding, distant from political ideology. On the other, this claim is controversial and must be examined more closely. Although EBPM tries to base policies on “facts” and “evidence” rather than on bureaucracies or political ideology, it is important to underline to the stakeholders (technical and otherwise) that such “facts” and “evidence” do not provide an unambiguous guide to decision-making. In fact, we know that data can be manipulated, that interpretations are subjective and that good practices are strictly linked to a specific framework. In other words, constructing evidence does not end the analyst’s work or the need for the stakeholders’ critical intelligence. It is not a way to delegate decisions, because values, preferences and decisions should remain a political act.

The aim of this paper is to review the literature about policy making and evidence-based policy making (and related issues), to highlight its origins and understand the criticisms and controversies, while looking for a new perspective which we shall call “policy analytics”. Our main claims are:

  1.

    The policy making process or “policy cycle” is a long term decision process characterised by:

    • the specific nature of public policies;

    • the requirements of legitimation, accountability and deliberation;

    • the existence of multiple public decision processes within the same policy cycle.

  2.

    Supporting the policy cycle cannot be reduced to producing just “evidence” (in terms of data, knowledge, expertise etc.). The analytics providing evidence to support general decision making processes are necessary, but not sufficient, in the case of public policy making. Constructing evidence should be seen as a specific, purpose-built type of decision aiding process and, as such, should be methodologically well founded.

  3.

    We need a new and richer concept accounting for all decision aiding activities that aim at supporting the policy cycle: we call it “policy analytics”, and we briefly introduce some of its main features in this paper.

The paper is organised as follows. We start by outlining the meaning of some important terms and concepts (Sect. 2). Next, we briefly review the EBPM state of the art (Sect. 3). After that we discuss criticisms involving both the policy making process and the introduction of evidence within it (Sect. 4). We then introduce and sketch out the concept of “Policy Analytics” as a new term grouping the activities and knowledge created to support policy making throughout the whole policy cycle (Sect. 5). A concluding section, including future challenges, ends the paper.

2 Terms and concepts

2.1 Public policy

To start with, it is important to understand that the concept of public policy (PP) should be wide and abstract enough to adapt itself to various applications and contexts. For such reasons, over the past 50 years many definitions of PPs have been coined. They have different meanings, as their authors bring into focus different aspects such as processes, stakeholders, objects and decision levels (Anderson 1975; Dente 2011; Dunn 1981; Dye 1972; Hill 1997; Jenkins 1978; Kraft and Furlong 2007). From this literature we can identify six main characteristics of PPs:

  • the power relations between different stakeholders,

  • the different institutional levels,

  • the duration over time,

  • the use of public resources,

  • the act of deciding (including deciding not to decide),

  • the impacts of decisions.

However, according to different contexts and goals, different types of policies combine the above characteristics at different levels: long term city waste management is a low institutional level policy with a long time horizon, implying a moderate use of public resources and potentially involving a small number of stakeholders, while locating a regional landfill often results in a very conflict-ridden situation (many stakeholders with strong commitments), although over a short time, potentially involving several institutional levels.

First of all we need to understand the term “public”. Intuitively, public is any issue concerning the community, something which in a direct or indirect way affects all citizens. In our specific field we shall emphasise that every PP is a process that implies a set of public decisions; thus, it is a public decision process. It is developed over a relatively long period of time and involves different decisional levels, interacting according to a set of determined rules. The process and the interactions it entails are developed in order to solve a problem referring to a public issue, or rather a problem in which resources and rationality are public. The concept of “public issue” is not always clear: the issue that the policy will address is an object which conveys a meaning. Naming a public policy is the action of defining such a meaning, and it implies the legitimation of this meaning. However, every subject affected by the policy (policy makers, experts, citizens, stakeholders) makes up his or her own meaning of the policy, legitimated by its name and definition. Practically speaking, stakeholders interpret a policy according to their own needs and/or commitment of resources, and accordingly exercise their legitimation. Seen from a resources point of view, a public decision is a public choice and it implies an allocation of public resources. Even no-action in a given field is considered a policy, because it implies the public choice of maintaining the same resource allocation as before. Speaking about public resources, governments and public agencies have to make understandable how and why they use public resources in order to tackle specific issues. The public decision process is required to be accountable to the citizens, despite the complexity of the entire process. Thus, we need an operational definition able to summarise the characteristics introduced:

Definition 2.1

We consider a public policy as a public agreement, allocating public resources to a portfolio of actions aiming at achieving a number of objectives set by the public decision maker, considered as an organisation. Such an agreement can be interpreted in many different ways depending on who is concerned by the policy.

Through this definition of public policy we want to highlight that a policy has a meaning for the stakeholders affected by the policy itself and for the citizens in general, but that such meaning may be neither shared nor consensual. The policy may achieve both knowledge sharing and consensus about the resource distribution, but this is only a potential outcome. A policy does not only pursue quantifiable objectives; it generates a legitimation space, thus producing inclusion and/or exclusion. A legitimation space is an abstract space where the stakeholders reveal (at least partially) their concerns, preferences, values and goals, where they commit and look for resources, and where they are able to seek and create legitimation, namely agreement on decisions and actions, through relations and discussions (see Ostrom’s “action arena”, Ostrom and Ostrom 1971, 2004, and Ostanello and Tsoukiàs’ “interaction space”, Ostanello and Tsoukiàs 1993). This is a crucial difference with respect to the generic policies a private business will typically conceive.
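Purely as an illustration of Definition 2.1, the following minimal sketch renders a public policy as a structured object; all names and fields are our own choices for exposition, not a standard model:

```python
# Illustrative sketch of Definition 2.1 (names and fields are ours, chosen
# for exposition only). A policy allocates public resources to a portfolio
# of actions pursuing objectives, and carries one interpretation per
# stakeholder concerned by it.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    allocated_resources: dict  # e.g. {"budget_keur": 900, "land_ha": 2}

@dataclass
class PublicPolicy:
    objectives: list[str]             # set by the public decision maker
    portfolio: list[Action]           # the actions the agreement funds
    interpretations: dict[str, str] = field(default_factory=dict)

policy = PublicPolicy(
    objectives=["reduce risk to the population to an acceptable level"],
    portfolio=[Action("divert the railroad", {"budget_keur": 900})],
)
# The same agreement is read differently by those concerned:
policy.interpretations["local residents"] = "protection of our homes"
policy.interpretations["transport agency"] = "costly disruption of service"
```

The interpretations field captures the point of the definition: the agreement is a single object, but its meaning is stakeholder-dependent and may be neither shared nor consensual.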

Example 2.1

In what follows we introduce a running example in order to allow the reader to understand a number of concepts introduced in the paper. The example is taken from a real case study on the design of risk reduction measures for urban and transportation infrastructure (for more information see Mazri et al. 2012, 2014).

The case concerns the broader issue of designing risk reduction measures around hazardous industrial plants, namely those covered by the so-called “Seveso directives”. In France, law 699/2003 obliges the “préfet” (the government representative at local level) to produce, for each hazardous plant in the territory under his jurisdiction, a “Technological Risk Reduction Plan” (PPRT in French) aiming at reducing to an acceptable level the risks the population may have to face should a major accident occur in that plant. Such measures concern urban planning actions as well as actions addressing the transportation infrastructure present within a reasonable distance of the plant. Examples of such measures include establishing areas (around the plant) where houses need to be demolished, or deciding to divert a railroad.

We consider such PPRTs as public policies:

  • they affect multiple stakeholders;

  • they use and redistribute public resources (land, money, authority etc.);

  • their design results in de facto, but also de jure, participatory decision processes;

  • there is at least one moment in which the plan is deliberated;

  • such plans affect a “public issue”, that is citizens’ safety, a controversial issue allowing for multiple interpretations (different stakeholders, such as the local politicians, the scientific advisors, the citizens individually etc., will likely have different interpretations of what safety means and how risks can be reduced).

2.2 Policy making process

Since the 1950s, policy making has been interpreted as a process, that is, a sequence of interactive stages or phases. Under such a perspective, the policy making process can be considered as developing in time and space, merging actions and intentions, decisions and also the absence of decision-making, impacting both society and the political system itself.

The idea of modelling the policy making process (or cycle) in terms of stages was first put forward by Lasswell (1956). He introduced a model of the process divided into seven stages: intelligence, promotion, prescription, invocation, application, termination, and appraisal. This set of stages has been contested and criticised, but the model itself has been successful as a framework for subsequent work in policy science and policy analysis (Jann and Wegrich 2007). During the 1960s and 1970s, a number of different process typologies were developed and used to organise and systematise the growing research (for such typologies see Anderson 1975; Brewer and Leon 1983; Brewer 1974; Jenkins 1978; May and Wildavsky 1978). Today the most conventional description of the chronology of a policy making process comprises (Hill 1997; Jann and Wegrich 2007):

  • agenda setting,

  • policy formulation,

  • decision making,

  • implementation,

  • evaluation.

This kind of policy making process, as presented by Lasswell and others, has been designed as a problem-solving model, according to the rational model of decision-making developed in organisation theory and public administration (Jann and Wegrich 2007). Simon (1947) pointed out that the real world does not follow such stages. However, this kind of process still counts as a major reference (other models used in public policies include the incremental model, Dror 1964; Etzioni 1967; Lindblom 1959, and the garbage can model, Kingdon 1984; March and Olsen 1976, 1989). This tradition of studying policy making as sequential stages will be our basis for interpreting the policy making process as a proper decision-making process.

We need to emphasise that a policy cycle goes beyond the public organisations concerned: it is not a process internal to them. A public decision process involves multiple and different organisations and/or individuals. Thus, there is no single rationality to simply follow. The process is characterised by several rationalities, which may conflict, and it generates a legitimation space in which these rationalities interact (possibly following what Habermas 1984 calls communicative rationality).

Example 2.2

We continue with the example about the PPRT. Establishing such a plan implies entering a policy cycle:

  • agenda setting: typically the political authority (the préfet) establishes the issues to be considered critical as far as citizens’ safety is concerned (areas to be analysed, infrastructures to be considered, types of accidents to be taken into account, budget allocated and timing of the process, to mention just some of the issues typically introduced in the cycle);

  • policy formulation: different stakeholders, such as field experts, process experts, local focus groups, and representatives of other actors and social groups, need to establish what is going to be used, how and why, in order to assess risks, mitigation measures, their effectiveness etc.;

  • decision making: a number of meetings, discussion forums and consultation actions, concerning both the whole set of involved stakeholders and each group individually, are scheduled, aiming at addressing the issues introduced into the agenda and formulated as elements composing the policy (characterising different areas through the level of the incumbent risk, measuring the potential impact of each single action potentially involved in the policy etc.); the process ends when the plan is officially deliberated by the préfet;

  • implementation: the plan is enforced through legal actions, specific projects are scheduled (moving households considered to be at extreme risk, building protections, implementing risk management procedures etc.), actions communicating the plan contents are performed etc.;

  • evaluation: at a given time the results of the plan are assessed, possibly against benchmarks and/or targets, besides observing unforeseen consequences.

N.B. We do not wish to suggest that these activities necessarily occur in a linear fashion.

2.3 Public deliberation, legitimation and accountability

A feature which helps to distinguish a public decision process from other decision processes is “public deliberation”, namely the legislative act. The outcome of the decision is a public issue: the public authority must communicate it, and the citizens must know about it. The publication of an official document which defines and explains the policy is the act that produces the wanted (and unwanted) outcomes and reactions to the decision. The intermediate and final acts explain the motivations and causes of the policy. Public deliberation is also expected to establish the accountability of the public decision process and of the public authority itself and, to some extent, their legitimation in the eyes of the general public or the stakeholders.

Linked with deliberation we find two concepts recently used in the field of PP: “legitimation” and “accountability”. Both are important in order to understand the relations established between stakeholders. They are fundamental to the creation, evolution and maintenance of a conceptual social space that we call a legitimation space, where stakeholders interact, creating relationships, goods and services, but above all where they develop and define their rationality (Habermas 1984).

In relation to legitimation, we refer to authorisation and consensus. The need for legitimation stems from the relative dimension of power, which makes it frail in terms of collective acknowledgement. Political power does not derive identification and legitimation from a transcendent order; thus the recognition of its value, and so its collective acknowledgement, becomes an immanent issue: legitimation is obtained through authority and consensus. On one hand, the legitimacy of public action and the related decisions comes from the law, which gives the elected the authority to decide and manage public resources. On the other hand, consensus is obtained when action claims to be rational: more generally speaking, when the decision process itself claims to be “rational” (whatever rationality model we may consider). Actually, the fact that different rationalities co-exist within a policy cycle allows us to shift our attention to these claims of rationality: policy decisions and actions claim to be rational and look for logical frames allowing them to be perceived as such.

However, before proceeding, we need to make a brief clarification. When speaking about rationality in this context, we are not referring to rational planning, a concept used in urban and regional planning (Faludi 1973; Friedman 1987) (criticised in Alexander 2000; Hostovsky 2006). For us, a policy maker has a different role from a “rational planner”. A rational planner adopts a precise model of rationality. Policy makers are able to legitimate their decisions and actions because at any time they adopt the most suitable rationality model. Rational planners feel legitimate because they act “rationally” (according to some model). Policy makers feel rational because they act according to different legitimate models of rationality.

How can rationality be legitimising in the field of PP? The concept of rationality is neither straightforward nor trivial. In research, many forms of rationality have been identified (the idea of multiple rationalities was first introduced by Weber 1922), all of them aiming at some validity claims (Habermas 1984). We distinguish three different approaches to establishing such validity:

  • economic rationality (Hammond 1997; Harsanyi 1955; Robbins 1932); policies should maximise the utility of a society seen as the aggregation of the consumers within it;

  • bounded rationality (Simon 1955); policies should satisfy some subjectively defined decision maker’s requirements for action;

  • communicative rationality (Habermas 1984); policies should result as consensual artifacts through four validity dimensions:

    • truth;

    • scientific support;

    • normative rightness;

    • sincerity.

When talking about accountability we refer to openness and transparency: public administrations can, should and sometimes must show and justify the reasons that support a decision, a policy or any allocation of public resources. The 2008 EVALSED guide of the European Union (Regional Policy Inforegio 2011) defined it as:

Obligation, for the actors participating in the introduction or implementation of a public intervention, to provide political authorities and the general public with information and explanations on the expected and actual results of an intervention, with regard to the sound use of public resources. From a democratic perspective, accountability is an important dimension of evaluation. Public authorities are progressively increasing their requirements for transparency vis-a-vis tax payers, as to the sound use of funds they manage. In this spirit, evaluation should help to explain where public money was spent, what effects it produced and how the spending was justified. Those benefiting from this type of evaluation are political authorities and ultimately citizens.

We want to highlight that the EU definition emphasises accountability as an unavoidable dimension of evaluation (from a democratic point of view). This statement leads us back to the concept of legitimation and lets us understand that these two concepts are complementary (see also the discussion emphasising the difference between “new public management” and “new public governance” in Almquist et al. 2012). The definition also emphasises that evaluation should support the accountability of policy makers for the use of public resources, the effects of the implemented policies, and the reasons for choosing a specific alternative. In order to improve legitimation (and thus the acceptance of policies) it is important that the stakeholders feel some ownership of the result (understand it, understand its consequences, realise that different points of view have been analysed and compared). Ownership is associated with justified knowledge and beliefs: stakeholders and policy makers need to be able to explain and justify to their communities why and what they do.

Example 2.3

We conclude the presentation of the PPRT case by explaining the three concepts introduced above.

  1.

    Public Deliberation The process constructing the PPRT is de facto participatory (besides participation being enforced by law). Moreover, such participation is officially recognised through the establishment of a committee (named CLIC: Comité Local d’Information et de Concertation) which is expected to act as the place where all issues are discussed. Each single stakeholder, the CLIC itself, as well as the decision maker (the préfet) perform a number of public acts: releasing a document containing a risk analysis, publishing the minutes of a CLIC meeting, releasing an intermediate or the final version of the PPRT. All such actions are public deliberations which characterise the policy cycle and allow the policy making process to be observed.

  2.

    Legitimation The issue here is not just to reach an agreement among the stakeholders about the plan to be deliberated. First of all, it is crucial that the whole process is considered legitimate (relevant stakeholders have been invited, the agenda has been discussed and agreed, unforeseen issues raised by some of them have been seriously considered etc.). Controversial conclusions might nevertheless be accepted or approved (despite opposition) if the process is considered legitimate. Then the result itself should be legitimate, in the sense that the redistribution of public resources resulting from adopting a specific PPRT needs to address the expectations of the involved stakeholders (although perhaps not exactly as they had envisaged). To give a more precise example: if the local transportation agency was expecting to improve safety for the users of its services travelling through the area considered “risky”, then resources should be allocated somewhere to take this issue into account; perhaps not the ones the agency was originally considering (building a concrete protection), but a reasonable alternative (installing a warning system). In the PPRT case it has been shown that being able to demonstrate to each stakeholder that the actions included in the plan address the concerns they voiced during the discussions allows the stakeholders to feel that the plan “takes care of them”: in other terms, that the use, reallocation and distribution of public resources is done for “public purposes” and not to satisfy opposing private ones.

  3.

    Accountability Accountability is related to the process through which the involved stakeholders become able to understand, justify and explain the policy cycle and its outcomes. In the PPRT case it is important for the stakeholders to be able to show how certain information (a risk level), combined with a value structure (fairness in cost distribution), leads to a certain decision (covering part of a railway, paid for by the hazardous plant generating the risk). It is equally important to show that the way some stakeholders perceive risks has been integrated in the procedures where risk is quantified. Last, but not least, it is important to communicate about the potential risks and their consequences in a way that ensures different stakeholders understand at least one common core message (for instance: “we work for your safety”).

3 Evidence-based policy making: state of the art

3.1 Premises

“Evidence-based policy making” (EBPM) is a “new” topic that has pervaded the last decade of social science debates. However, there is nothing new in the idea of using “evidence” to support decisions. Aristotle (1990) claims that decisions should be informed by knowledge. Later, this way of thinking and acting gave rise to several philosophical movements around “positivism” (see Comte 1853, 1865; De Saint-Simon 1976; Giddens 1974; Hanfling 1981; Zammito 2004; up to “constructivism”, Watzlawick et al. 1967). Interest in the use of knowledge as rational and logical reasoning grew until the second half of the 20th century, when rationality was understood both in terms of cause and effect (Dryzek 2006; Sanderson 2002) and as the ability to rank all known available alternatives (Ostrom and Ostrom 1971; see also Bouyssou et al. 2000, 2006). In the beginning, the concept of rational decision was central both for the economic dimension of problem solving and for the scientific management of enterprises (Tsoukiàs 2008).

In this context it was considered possible to use the scientific method to improve policy making. An important figure in this field was Harold Lasswell, committed to the idea of a “policy sciences of democracy” (Lasswell 1948, 1965; Lerner and Lasswell 1951). In 1963 Buchanan and Tullock organised a conference whose shared interest was the application of “economic reasoning” (commonly considered a good example of rationality) to collective, political or social decision-making. In 1967, the term “public choice” was adopted to distinguish this area (Ostrom and Ostrom 1971). The public choice approach is related to the theoretical tradition in public administration formulated by Wilson (1887), later criticised by Herbert Simon. Wilson’s major thesis was that “the principles of good administration are much the same in any system of government” (Ostrom and Ostrom 1971, p. 203), and that “Efficiency is attained by perfection in hierarchical ordering of a professionally trained public service” (Ostrom and Ostrom 1971, p. 204). Wilson also gave a strongly economic conceptualisation of the term efficiency, speaking of “the utmost possible efficiency and at the least possible cost of either money or of energy” (Wilson 1887, p. 197, cited in Ostrom and Ostrom 1971, p. 204; see also White 1926). Under such a perspective, “policy problems were technical questions, resolvable by the systematic application of technical expertise” (Goodin et al. 2006, p. 4). However, as early as the 1940s, Simon (1947, 1955, 1959, 1962, 1964, 1969, 1979) strongly criticised the theory implicit in the traditional study of public administration, because there is no reason to believe in an “omniscient and benevolent despot”.

In spite of such criticism, as Dryzek (2006, p. 191) said:

these dreams may be long dead, and positivism long rejected even by philosophers of natural science, but the terms “positivist” and “post-positivist” still animate disputes in policy fields. And the idea that policy analysis is about control of cause and effect lives on in optimising techniques drawn from welfare economics and elsewhere, and policy evaluation that seeks only to identify the causal impact of policies.

Dryzek’s quote suggests that “positivism” and “post-positivism” are still alive, and indeed we claim that the promotion of EBPM has been a return to such approaches.

3.2 Evolution

3.2.1 From medicine to social science

Evidence-based policy making was born from the roots of evidence-based medicine (EBM, Dowie 1996; Sackett et al. 1996) and evidence-based practice (EBP, Melnyk and Fineout-Overholt 2005; Mitchell 1999). Indeed, it is easy to trace such roots in EBPM’s logic and in its way of understanding problems and solutions. EBM and EBP are based on the simple concept of finding the best solution by integrating past experience into the problem-solving process. The practice of EBM requires integrating individual clinical expertise with the best available external clinical evidence from systematic research, in consultation with the patient, in order to understand which treatment suits the patient best. In this sense, we can say (with Solesbury 2001) that EBM and EBP have both an educational and a clinical function. In other words, this kind of evidence is based on a regular assessment, through a defined protocol, of the evidence coming from all the research. In order to respond to this need of EBM for systematic, up-to-date reviews, the Cochrane Collaboration, which collects all such information, was initiated in 1993.

Subsequently, given the good results obtained in medicine using such an approach, politicians became interested in using the same scientific method to support public decisions and legitimise policy making. Given the success of the Cochrane Collaboration in producing a “gold standard”, the Campbell Collaboration was established in 2000, aiming at systematically reviewing social science research in the fields of education, crime, justice and social welfare.

3.2.2 EBPM in the UK

In 1994, the Labour party termed itself “New Labour”, announcing a new era: “New Labour” was expected to be a party of ideas and ideals but not of outdated ideology. “What counts is what works”. The objectives were radical; the means would be “modern” (Blair 1994). In this first announcement it was possible to recognise the same roots and philosophy pervading EBM and EBP. Moreover, in 1997, when the Labour Party won the general election, it decided to adopt a new style of policy making. In order to organise and promote it, the government published the Modernising Government White Paper (Cabinet Office, London 1999), arguing that:

government must be willing constantly to re-evaluate what it is doing so as to produce policies that really deal with problems; that are forward-looking and shaped by the evidence rather than a response to short-term pressures; that tackle causes not symptoms; that are measured by results rather than activity; that are flexible and innovative rather than closed and bureaucratic; and that promote compliance rather than avoidance or fraud. To meet people’s rising expectations, policy making must also be a process of continuous learning and improvement. (p.15)

better focus on policies that will deliver long-term goals. (p.16)

Government should regard policy making as a continuous, learning process (...) We must make more use of pilot schemes to encourage innovations and test whether they work. (p.17)

encourage innovation and share good practice (p.37)

In this document the government describes its goals in changing the approach to public policy. This change implied the evolution of the evidence-based method and logic. We can consider the White Paper as the manifesto of the United Kingdom’s EBPM, where EBPM plays the same role as EBM, that is, to bring accountability to the field of policy. Such accountability is promoted by two main forms of evidence (Sanderson 2002):

  • the first refers to the goals, and thus to the effectiveness, of the government’s work;

  • the second refers to the actual results and, consequently, to knowledge of how well a policy works under different circumstances.

In practice, policy processes have been viewed as learning processes that have to be studied, analysed and monitored in order to obtain new evidence for building future policies. The expectation was a goal shift in the policy making process: from short term policy founded on ideology and non-scientific knowledge to long term policy founded on the identified causes of the social problems being faced. Under such a perspective, any component of the policy process based on non-scientific arguments is considered a deviation from the “truth/reality” of the problems. In fact, David Blunkett, in his speech in 2000 (Blunkett 2000), emphasised that:

This Government has given a clear commitment that we will be guided not by dogma but by an open-minded approach to understanding what works and why. This is central to our agenda for modernising government: using information and knowledge much more effectively and creatively at the heart of policy making and policy delivery.

“What works and why” became the UK slogan for EBPM promotion. The following government continued to emphasise EBPM, although with some differences. The shift was from policy learning to policy delivery, implying a move away from experimentation and a growing conviction that what matters most is hard quantitative data. In fact, in recent years EBPM has evolved from paying attention to any kind of scientific analysis to concentrating on quantitative and economic analysis (Cabinet Office, Performance and Innovation Unit, London 2001; HM Treasury, London 1997).

3.2.3 Other experiences

After being developed in the UK, EBPM extended its influence to other English speaking countries, mainly the USA and Australia. In the USA, the most representative event was the foundation in 2001 of the US Coalition for Evidence Based Policy, which aims at increasing government effectiveness through the use of rigorous evidence about what works. Evidence is again consciously borrowed from medicine, with the explicit goal of replicating in the field of social policies the effectiveness that produced many advances in human health. Evidence-based policy making was seen as an instrument of rationality that would let society avoid wasting resources on expensive but ineffective social policies. Evidence is thus a resource-rationing tool (Marston and Watts 2003), in the sense that it indicates the right way to address a social problem, making the country more efficient by focusing spending only on satisfactory policies.

In Australia there is no formal coalition and no explicit formal commitment to apply EBPM, but the language of evidence has spread to many fields of public policy: we can see examples in health and family services, community services, education and immigration. In these fields we find statements referring to evidence which imply that EBPM is being actively promoted in a specific way: “research helps to depoliticise educational reform” (Department of Education, Training and Youth 2000, p. 190) or, as Mark Latham, then leader of the Federal Parliamentary Australian Labor Party, put it in his speech on welfare reform (Latham 2001, p. 1):

My conclusion is that we should forget about the grand theories of sociology and the ideologies of the old politics and pursue an evidence based approach to welfare reform

Here, evidence is considered a solution far removed from political ideology and thus an apolitical one; Smith and Kulynych (2002, p. 163) state that “efficiency becomes the primary political value, replacing discussion of justice and interest”.

Is this true? Does the use of evidence mean avoiding ideology? Is efficiency improved by using evidence? In the next section we discuss whether the use of “evidence” can effectively replace “ideology” and “values”.

4 Criticism of EBPM

Within the EBPM debate, authors cast doubt on whether introducing evidence into the policy making process is actually innovative. If policy making could previously be described as a “swamp” (Schön 1979) characterised by complexity, uncertainty and ignorance, then EBPM aims to move onto firmer ground in which sound evidence, rather than political ideology or prejudice, can drive policy. The question is whether this confidence in the power of evidence is really a step forward, as EBPM can appear as a return to the old trust in instrumental rationality. In fact, Parson (2002, p. 44) states that:

EBPM must be understood as a project focused on enhancing the techniques of managing and controlling the policy making process as opposed to either improving the capacities of social science to influence the practices of democracy.

Sanderson (2002, p. 1) argues that: “the resurgence of evidence-based policy making might be seen as a reaffirmation of the modernist project, the enduring legacy of the Enlightenment, involving the improvement of the world through the application of reason”.

Actually, in UK EBPM the focus was on effectiveness, efficiency and value for money, and the experience is characterised by a managerial emphasis (Trinder 2000, p. 19). EBPM, in its effort to implement accountability, is linked with the instrumentalist managerial reforms that have infiltrated public administration practices in many western democracies over the past three decades (see Almquist et al. 2012). Despite claims to the contrary, these managerial reforms can be assimilated to the same technocratic logic, concerned with procedural competence rather than substantive output (Marston and Watts 2003).

In the following subsections we introduce the main issues for which EBPM has been criticised: the existence of multiple evidences; the multiplicity of factors influencing policy making; and the contingent character of evidence.

4.1 Multiple evidences

There are many typologies of evidence: the most used distinction is between hard/objective and soft/subjective evidence. The first kind includes primary quantitative data collected by researchers from experiments, secondary quantitative social and epidemiological data collected by government agencies, clinical trials, and interview or questionnaire-based social surveys. Other sources of evidence, typically devalued as “soft” (Marston and Watts 2003, p. 151), are photographs, literary texts, official files, autobiographical material such as diaries and letters, the files of a newspaper, and ethnographic and participant observer accounts. Davies (2004, p. 15) defines a scheme distinguishing seven kinds of evidence originating from scientific research: impact evidence, implementation evidence, descriptive analytical evidence, public attitudes and understanding, statistical modelling, economic evidence, and ethical evidence. Moreover, Davies states that in policy making “privileging any type of research evidence or research methodology, is generally inappropriate” (Davies 2004, p. 11). Thus, a balance between the different kinds of research methodology, and general competence across the full range of research methods, is required. By contrast, Sanderson (2002) argues for the need to develop impact evidence in particular (as defined by Davies) in order to build policies through long term impact evaluation.

Given the different opinions in the debate, and in order to avoid misunderstandings in practice, the UK Cabinet Office clarified the meaning of evidence in the White Paper on Modernising Government (Cabinet Office, London 1999), defining it as:

expert knowledge; published research; existing research; stakeholder consultations; previous policy evaluations; the Internet; outcomes from consultations; costings of policy options; output from economic and statistical modelling

From this definition it seems that this conception of evidence allows for both “conventional and unconventional scientific methods”. However, a hierarchy among these is implicitly established in practice. This kind of practice is far from neutral or objective. Indeed, selecting the “more appropriate” evidence, or the evidence with “greater weight”, necessarily limits what counts as valid knowledge. Building a hierarchy of knowledge means considering some forms of knowledge to be closer to reality/truth. While this could be considered correct in some cases, it is never neutral. The same happens each time we choose one theory or method rather than another. Every theory is based on hypotheses or interpretations of a complex reality, and these are not all-encompassing. In choosing what counts as valid knowledge for policies, policy makers implicitly state their interpretation of reality. For instance, observing the recommended and adopted forms of evidence in the UK experience, we can deduce that the underlying interpretation of reality can be characterised as post-positivist, in the sense that the stress is on cause-effect relationships. This claim is supported by the importance given to the concepts of effectiveness and efficiency. In UK policy evaluation these two concepts often became the first, if not the only, qualification a policy needed in order to be implemented (HM Treasury, London 2003).

While the discussion up to this point has highlighted the problem that around us there are multiple evidences (statistics, surveys, polls etc.) which are difficult to choose from and/or prioritise, we now need to focus on another aspect, often neglected or underestimated when talking about evidence: the multiple interpretations that the same “evidence” may carry. This is all the more important since evidence is expected to be used within a policy cycle where multiple stakeholders with multiple concerns are involved, who will naturally interpret the evidence differently. To make things more complicated, such multiple interpretations can be influenced by how the evidence is technically produced. We present two examples to make our point.

Example 4.1

(Air Quality) Consider the case of Air Quality (see Bouyssou et al. 2000). “Evidence” about Air Quality (in France) is expected to be provided by the ATMO index. This index takes into account four pollutants, measured (it does not matter here how) on a scale from 0 to 10, and chooses the maximum (the worst) among them. This way of constructing the index reflects the approximate knowledge we have about the health impact of these four pollutants: each of them is supposed to be equally unhealthy. Now consider three consecutive measurements: the first one, at time \(t_1\), is the current situation (at a certain location), while \(t_2\) and \(t_3\) refer to the situations observed after each of two policies, using the same budget, has been implemented. The case is summarised in Table 1.

Table 1 Three different measurements of Air Quality

Should we consider the ATMO index to be “evidence”, then the policy leading to situation \(t_3\) would have to be considered better than the policy leading to situation \(t_2\): for the ATMO index, the quality of the air did not improve from \(t_1\) to \(t_2\), while it did from \(t_1\) to \(t_3\). This is counter-intuitive whenever the policy leading to \(t_2\) substantially reduces several pollutants while leaving the worst one unchanged, whereas the policy leading to \(t_3\) only marginally reduces the worst pollutant (see the sketch below). One could claim that the disaggregate information should be used as “evidence”, but then are we sure that each of the four measures does not suffer from the same type of problem the ATMO index presents? We shall not discuss here what the more appropriate way to measure Air Quality would be. What we want to emphasise is that the ATMO index can be used as “evidence” about whether an observed situation is “healthy”, but cannot be used as “evidence” about the effectiveness of Air Quality improvement policies. In other terms, this index (like any other) allows multiple interpretations which are more or less suitable to the type of assessment we are interested in performing. Such interpretations are strongly related, among others, to how the index has been (technically) established.
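To make the aggregation effect concrete, here is a minimal sketch of a max-based index of this kind; the pollutant values are hypothetical figures of our own (they are not the actual data of Table 1), but the structure of the problem is the same:

```python
# Sketch of an ATMO-style "worst pollutant" index; the numbers below are
# hypothetical illustrations, not the actual data of Table 1.

def atmo(pollutants):
    """Max-based index: the worst of four sub-indices on a 0-10 scale."""
    return max(pollutants.values())

t1 = {"NO2": 5, "SO2": 5, "O3": 8, "PM10": 5}  # current situation
t2 = {"NO2": 1, "SO2": 1, "O3": 8, "PM10": 1}  # policy A: three pollutants greatly reduced
t3 = {"NO2": 5, "SO2": 5, "O3": 7, "PM10": 5}  # policy B: worst pollutant slightly reduced

print(atmo(t1), atmo(t2), atmo(t3))  # -> 8 8 7
# The index registers no improvement at t2 and an improvement at t3,
# although policy A arguably improves air quality far more than policy B.
```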

Example 4.2

(Statistics about Poverty) Some may claim that raw statistics reporting “facts” should be considered as “evidence”. But then consider the following fact: 95 % of rural households in country XXXX do not have tap water available. What should be inferred from that? Perhaps that connecting rural households to fresh water distribution is a national priority, requiring an appropriate policy (and corresponding investments).

Surprisingly, if we ask the household owners what they think about that, we may discover that this is not a priority for them. They might claim that they do not see the problem: they fetch water from the nearby water pools. And here is the problem. Typically, water is fetched by women, while the household owners are men. Certainly, men do not see the problem. What happens if we ask the women? Surprisingly, the women also claim that there is no problem! Indeed, a more thorough investigation reveals that going for water is one of the rare occasions they have to get out of the home. Even though fetching water is a hard task, it pays because it allows the women to have some social life.

The story, which is a simplified version of a real one, tells us that raw statistics do not automatically reveal any truth. Facts need to be interpreted in order to be used for any decision process and such interpretations are related to subjective values, constraints, customs, history or social norms, among others. The example tells us that “evidence” does not exist independently from the decision process for which it is expected to be used. Although “facts” exist, choosing the “facts that matter” is a subjective process and interpreting these “facts” is another subjective process.

Summarising both examples, we can claim that looking for evidence while considering how to solve a problem is certainly a sound attitude, and certainly preferable to a purely intuitive approach. However, contrary to the dominant idea that “evidence” should guide policy making, it seems that it is the policy that should guide us in looking for the appropriate “evidence”. We should thus consider questions of the following type:

  • Who needs this evidence?

  • Why does (s)he need this evidence (what is the problem)?

  • What is the purpose (how is the evidence going to be used)?

  • Who else is affected by such evidence and how?

  • What resources do we commit and what do we expect?

It turns out that such questions are practically the same ones we need to answer when trying to model a decision aiding process (see Tsoukiàs 2007). Under such a perspective, we can still follow a scientific approach in aiding policy makers involved in a policy cycle, but without denying subjectivity, political priorities, values or culture; instead, we place them at the centre of the methodology to be used.

The problem in policy making is not whether there is enough relevant information, but how to consider, construct and interpret it: “the danger is not that one uses no evidence at all, but that one uses simply the most readily available” (Perri 2002).

4.2 Policy making as a result of many factors

“Evidence” is not the only determinant of policy making; it is just one of several factors that influence policy makers in choosing and determining policies. Certainly, EBPM represents an evolution with respect to the traditional approach that largely considers power, people and politics to be the only policy making factors (Parson 2002). However, it is clear that we have to overcome the “naïve” conception of evidence-based policy making, in which research replaces policy and experts/technicians replace politicians. Policies are complex “objects” and the policy making process is influenced by several relevant factors. Davies (2004) indicated the following:

  • Experience, Expertise and Judgement Policy making involves several stakeholders, each carrying different types of knowledge such as grounded experience (of local groups, citizens, economic actors), expertise (of technical staff, scientists, experts) and judgements (public opinion, elected bodies, committees). Such knowledge is expected to be integrated in the policy making process (Nutley et al. 2003). It can play a significant role when the existing information is imperfect or non-existent (Grimshaw et al. 2003).

  • Resources Establishing a policy mobilises material and immaterial resources (knowledge, authority, capital, land, etc.) and results in an allocation of resources aimed at implementing a plan of actions. Such resources are bounded (and scarce). The result is a quest for efficiency, both in the policy making process and in its outcomes. This “economic” aspect of the policy making process is perhaps the most studied in terms of supporting methodologies and practices (Cabinet Office, London 2001, 2003; HM Treasury, London 2003; ODPM, Office of the Deputy Prime Minister, London 2000).

  • Values Values are the essence of policy making. They induce preferences, priorities and judgements, and justify actions. They have several different origins: ideology, culture, religion, beliefs, knowledge, discussion etc. It is unlikely that any policy making process can be legitimated without reference to some set of values. However, it should be noted that values evolve over time in unexpected directions (consider the value of the environment over the last 50 years, the value of women’s rights over the last 150 years, or the value of individual freedom over the last 250 years).

  • Habit and Tradition Political institutions have their own organisational inertia. The policy making process is characterised by procedures and patterns often rooted in culture and history, which nevertheless constrain the potential outcomes. Often such constraints appear in the form of fundamental laws (such as constitutions), but they can equally appear as socially constructed legitimation processes and outcomes.

  • Lobbyists, Pressure Groups and Consultants Any policy making process mobilises pressure groups, informal or organised lobbyists, as well as the opinion of experts. Such stakeholders are not always visible and their influence is less systematic. However, they play a key role in the participation process, allowing specific concerns, stakes and interests to find their way into the discussion.

  • Pragmatics and Contingencies Policy making, agendas and decisions are influenced by unanticipated contingencies and “emergency” procedures which do not necessarily fit with rational policy making. Policies are expected to take into account long term uncertainties as well as the aspirations of future generations. This can be in contradiction with a contingent, short term view of policy making (Phillips Inquiry, London 2001; Royal Society, London 2002).

The above list of factors, which are pragmatically considered when conceiving or evaluating a policy, shows that “evidence” needs to be understood in terms of knowledge produced within a decision aiding process and not as objective information revealing the truth.

4.3 Evidence as contingent knowledge

Young et al. (2002) identify five models in which the relation between research and policy can be shaped and defined, five ways through which knowledge inputs are managed throughout the policy cycle:

  • the knowledge-driven model,

  • the problem-solving model,

  • the interactive model,

  • the political/tactical model,

  • the enlightenment model.

These models are used to understand how evidence is thought to shape or inform policy, and to explore the assumptions underlying evidence-based policy making. The first two models are the extreme forms; they differ in the direction of influence in the relationship: in the first (knowledge-driven) model research leads policy in a sort of scientific inevitability, whereas in the second (problem-solving) model research priorities follow policy issues. In the interactive model there is no position of influence, and the relationship is characterised as mutual, subtle and complex. The political/tactical model sees research priorities as set by the political agenda; studies are used to support a political position. In the enlightenment model, finally, the benefits of research are indirect, because they contribute to understanding the context in which the policies will act.

These five models are steps on a ladder between the power of authority and the power of expertise. “Emphasising the role of power and authority at the expense of knowledge and expertise in public affairs seems cynical; emphasising the latter at the expense of the former seems naïve” (Solesbury 2001, p. 9). In the experience of the United Kingdom, none of these can be recognised as a dominant model (Wells 2007, p. 25). Sanderson (2002, p. 5) defines a set of variables that are implied in the definition of a model: “the nature of knowledge and evidence; the way in which social systems and policies work; the ways in which evaluation can provide the evidence needed; the basis upon which evaluation is applied in improving policy and practice”. Under such a perspective, the Labour government's announcement of a new era in which policy would be shaped by evidence, thereby implying that “the era of ideologically driven politics is over” (Nutley et al. 2003, p. 3), is controversial, to say the least. Evidence is neither neutral nor uncontested; it is a fundamentally ambiguous term.

The way in which EBPM has been perceived and practised reveals an idea of policy making based on a “cause-effect” principle. To simplify, social outcomes are seen as the result of how certain mechanisms work within a certain social context: once we know the mechanisms and the context, we can foresee the consequences. This approach has been criticised by constructivists, for whom the “knowledge of the social world is socially constructed and culturally and historically contingent” (Sanderson 2002, p. 6). Sanderson (2002) points out that research does not have the role of producing objectivity, or solutions for policy makers, concluding that constructivism needs to be reconciled with “practical requirements”.

5 Discussion

Let us summarise our claims.

  • Policies have a twofold impact:

    • they deliberate an allocation of resources aimed at pursuing some objectives (though not always measurable ones);

    • they generate a legitimation space, thus producing inclusion (or exclusion), inviting stakeholders to enter (or leave) it.

  • Policy making is a long term decision making process with specific characteristics:

    • it entails participation “de facto” (due to the legitimation associated with any policy);

    • there is at least one moment of public deliberation;

    • it is expected to be accountable, not only to the involved stakeholders, but also to citizens in general;

    • it is guided by the search for legitimation, both for the policy itself and for the policy makers.

  • Policy making should be viewed as a “policy cycle”: from the perception of a problem, to the design of policies, their legitimation, their implementation, their monitoring, their assessment, etc. Under such a perspective, a policy cycle:

    • requires knowledge aimed at supporting the processes within it;

    • produces knowledge used both within the cycle and beyond it.

  • Within a policy cycle several decision problems arise, such as: Which aspects of the problem should be considered more important? What information is relevant and should be used? Who are the principal stakeholders? Who else is affected by the situation and the possible policies? Which resources are allocated, where, how and when? What matters in terms of potential consequences? Decisions (of any type) result from combining factual information with subjective values, opinions and likelihoods. For this reason, decisions are synonymous with responsibility. Under such a perspective, there is nothing like an “objective decision”. Decisions are made by somebody, or a group of people, and reflect his/her/their standpoint within a decision process. The consequence is that there will never exist an “objectively defined policy”. Policies will always reflect what subjectively matters for those involved in the policy cycle (the sketch after this list illustrates the point).

  • What could be considered a legitimated source for the values, opinions and likelihoods to be considered by the policy makers in the presence of multiple stakeholders and multiple scenarios? The market? A referendum? A focus group? A public debate? A poll? Whatever we adopt, we should remember that:

    • it is a subjective choice to privilege any source of knowledge;

    • there exist different forms and levels of participation (see Daniell et al. 2010), and these do not necessarily improve the efficiency of the decision process (sometimes more participation may result in less efficiency);

    • the validity of any legitimation claim is a social construction, resulting from argumentation about facts, norms, values, sincerity and relevance.
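To make the point about subjectivity concrete, here is a minimal sketch in Python, with purely hypothetical options, criteria, scores and weights: the same factual scores yield different policy rankings once different stakeholders' value weights are applied. It is an illustration, not a prescription of any particular aggregation method.

```python
# Minimal sketch: the same criterion scores, two different value systems.
# All option names, criteria, scores and weights are hypothetical.

options = {
    "bypass road": {"cost_efficiency": 0.8, "environment": 0.3},
    "tram line":   {"cost_efficiency": 0.4, "environment": 0.9},
    "bus network": {"cost_efficiency": 0.6, "environment": 0.6},
}

stakeholder_weights = {
    "treasury":        {"cost_efficiency": 0.7, "environment": 0.3},
    "residents_group": {"cost_efficiency": 0.2, "environment": 0.8},
}

def ranking(weights):
    """Rank options by a simple weighted sum of their criterion scores."""
    score = lambda name: sum(weights[c] * v for c, v in options[name].items())
    return sorted(options, key=score, reverse=True)

for stakeholder, weights in stakeholder_weights.items():
    print(stakeholder, "->", ranking(weights))
# treasury        -> ['bypass road', 'bus network', 'tram line']
# residents_group -> ['tram line', 'bus network', 'bypass road']
```

Even the simplest aggregation rule (a weighted sum) makes the dependence on subjective weights explicit; any richer multi-criteria model inherits the same dependence.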

Our survey of the EBPM movement highlights a number of issues:

  • there has been a legitimate demand for allowing scientific knowledge, facts, statistics and other information sources to acquire a status within the policy cycle;

  • despite declarations to the contrary, there has been a clear trend towards considering such “evidence” as the driving force in the process of designing policies, thus claiming for such evidence the status of “objective knowledge”;

  • such a trend contradicts the nature of the policy cycle, both from a substantive point of view (knowledge is not objective; it is functional to some purpose) and from a procedural point of view (there exist many forms of evidence, with different possible interpretations, arbitrarily used by the stakeholders within the policy cycle);

  • arguing about policies needs legitimate knowledge; it also produces knowledge which in turn needs to become legitimated.

The above discussion leads us to consider the problem of how knowledge is produced (constructed) in order to support decision making. The construction of knowledge (or evidence) in order to design a business policy has already been addressed in what is today called “Business Analytics”.Footnote 2 Business analytics was initially developed mainly for the private sector (Davenport et al. 2010; Davenport and Harris 2007), although applications in other areas, for example health analytics and learning analytics, are also growing (see Buckingham Shum 2012; Fitzsimmons 2010). Seen from a very pragmatic point of view, “analytics” is an umbrella term under which many different methods and approaches converge: statistics, data mining, business intelligence, knowledge engineering and extraction, decision support systems and, to a larger extent, operational research and decision analysis. The key idea consists in developing methods through which it is possible to obtain useful information and knowledge for some purpose, typically conducting a decision process in some business application.

The distinctive feature in developing “analytics” has been to merge different techniques and methods in order to optimise both the learning dimension and its applicability in real decision processes. In recent years “analytics” has been associated with the term “big data”, to take into account the availability of large databases (and knowledge bases as well), possibly in open access (Open Data initiatives are now increasingly common). Such data come in very heterogeneous forms, and a key challenge has been to merge such different sources, in addition to solving the hard algorithmic problems presented by the huge volume of data available.
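As an illustration of this heterogeneity problem, the following sketch (with hypothetical sources, field names and units) shows the routine step of mapping two open-data records onto a common schema before they can be analysed together.

```python
# Minimal sketch: two open-data sources describe the same entities with
# different field names and units, and must be mapped to a common schema.
# All records, field names and values are hypothetical.

source_a = [{"municipality": "Lille", "pm10_ug_m3": 31.0}]   # e.g. a sensor feed
source_b = [{"city": "LILLE", "pm10_mg_m3": 0.029}]          # e.g. an annual report

def normalise_a(record):
    return {"city": record["municipality"].upper(),
            "pm10_ug_m3": record["pm10_ug_m3"]}

def normalise_b(record):
    # Convert mg/m3 to ug/m3 so both sources use the same unit.
    return {"city": record["city"].upper(),
            "pm10_ug_m3": record["pm10_mg_m3"] * 1000}

merged = [normalise_a(r) for r in source_a] + [normalise_b(r) for r in source_b]
print(merged)
# Schema reconciliation is the mechanical part; deciding which source to
# trust when the harmonised values disagree is already a value judgement.
```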

However, the mainstream approach developed in this domain is based on two restrictive hypotheses. The first is that the learning process is basically “data driven”, with little (if any) attention paid to the values which may also drive learning: both with regard to “what” matters and to “why” it matters, potentially incorporating considerations of the extent to which different stakeholder perspectives are valued or trusted. The second is that, in order to guarantee the efficiency of the learning process, seen from an algorithmic point of view as a process of pattern recognition, it is necessary to use “learning benchmarks” against which the accuracy and efficiency of the algorithms can be measured. While this perspective makes sense for many applications of machine learning, it is less clear how useful it is when learning concerns values, preferences, likelihoods, beliefs and other subjectively established information, which is potentially revisable as constructive learning takes place.
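The contrast can be made concrete with a minimal sketch (hypothetical labels and predictions): benchmark-based accuracy is well defined for pattern recognition, whereas no analogous ground truth exists for a stakeholder's revisable preferences.

```python
# Minimal sketch of the "learning benchmark" hypothesis discussed above.
# Labels and predictions are illustrative assumptions, not real benchmarks.

ground_truth = [1, 0, 1, 1, 0, 0, 1, 0]   # benchmark labels
model_output = [1, 0, 0, 1, 0, 1, 1, 0]   # classifier predictions

accuracy = sum(t == p for t, p in zip(ground_truth, model_output)) / len(ground_truth)
print(f"benchmark accuracy: {accuracy:.2f}")   # well defined: 0.75

# For value-driven learning there is no analogous benchmark: if a stakeholder
# revises their stated preference during the process, that revision is
# constructive learning, not an "error" to be penalised.
elicited_before = {"preferred_option": "A"}
elicited_after  = {"preferred_option": "B"}
# An "accuracy" of elicited_after against elicited_before has no meaning:
# neither answer is the ground truth against which the other could be scored.
```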

What does this tell us as decision analysts? We denote with this term the professionals/practitioners who are invited to enter the policy cycle as “experts” or as “technical staff”, with the explicit or implicit role of producing the decision support knowledge within the cycle. The type of demand decision analysts receive can be summarised as producing information and knowledge that help the stakeholders, the policy makers or the citizens to:

  • better understand the stakes and issues at play;

  • better understand the potential consequences of potential actions;

  • better foresee potential unexpected/unwanted outcomes;

  • better justify, explain, argue about options, decisions and strategies;

  • design/construct/conceive new options beyond the existing ones;

  • improve participation, inclusion and, ultimately, democracy.

In doing so, decision analysts need to use existing information (facts, science, grounded knowledge, best practices, etc.), need to constructively model values, opinions and likelihoods for the stakeholders, and need to do so in a meaningful, operational and legitimated way. We denote this set of skills by the term “policy analytics” (Tsoukiàs et al. 2013): to support policy makers in a way that is meaningful (in the sense of being relevant and adding value to the process), operational (in the sense of being practically feasible) and legitimating (in the sense of ensuring transparency and accountability), decision analysts need to draw on a wide range of existing data and knowledge (including factual information, scientific knowledge, and expert knowledge in its many forms) and to combine this with a constructive approach to surfacing, modelling and understanding the opinions, values and judgements of the range of relevant stakeholders. We use the term “Policy Analytics” to denote the development and application of such skills, methodologies, methods and technologies, which aim to support relevant stakeholders engaged at any stage of a policy cycle, with the aim of facilitating meaningful and informative hindsight, insight and foresight.

6 Conclusion

Designing, implementing and assessing public policies is a major challenge for our societies. Our paper was guided by a general underlying question: why is aiding the design, implementation and assessment of public policies so different from the decision aiding practised when the clients are business oriented and the problem context does not concern public issues?

The aim of this paper was twofold. On one hand, we tried to understand why public policies represent a specific challenge for decision aiding. On the other, we analysed the so-called “evidence-based policy making” literature since it represents the most recent attempt, originating from the clients’ side (the policy makers), to focus on the relation between the policy making process and the technical, scientific, expert, analytical support that such a process demands. The standpoint of our analysis has been clearly decision analytic. We do not underestimate the sociological or political dimension of policy modelling support. We have rather focused on the challenges policy making offers to our discipline and profession.

The analysis of the so-called policy cycle shows that policy making is a decision making process with precise characteristics: a long time horizon; the use, exchange and redistribution of public resources; a de facto participatory nature; deliberation; and a drive for accountability and legitimation. This is related to the specific nature of public policies which, besides being deliberations about resource allocation, create a legitimation space for stakeholders and citizens. If decision aiding is a process generating knowledge (possibly, but not only, in an analytic form) to be used in a decision process, then it is clear that the knowledge required to support policy making processes needs to address such characteristics (for instance, the problem of legitimate knowledge or of legitimated argumentation).

Under such a perspective, our paper suggests that the evidence-based policy making approach, although originating from a legitimate demand, fails to address these challenges. The reason is that evidence does not exist independently of policies and, once identified, does not “objectively” drive the policy cycle. The demand for using analytic information to support policy making remains valid, but it needs to be addressed differently. This new frame, briefly presented in the paper, we call “policy analytics”.