Introduction

Regulating technologies, innovations and risks is an activity that, as much as scientific research, needs proofs and evidence. As new technologies and products appeared over the twentieth century, regulatory bodies were created to oversee their safety, quality, efficacy, reliability or accessibility. Medicines, foods, chemical products and biological innovations, but also aircraft, ships and financial products, have been affected by this evolution. The regulatory organizations that control these technologies and markets—whether by delivering permits or patents, setting product specifications, performing risk assessments, or performing some other function—are inevitably knowledge-intensive. They are organized to gather and process information; they maintain large databases, execute or commission experiments and tests, and set standards for these tests and their interpretation. They employ large numbers of scientific staff, and negotiate close relationships with industrial and academic worlds to perform and/or interpret these tests.

This knowledge-intensity stems, first, from the complexity, uncertainty or ambiguity of technological properties and effects. Knowledge is indispensable to the task of regulating technologies and markets. It stems, second, from the cultural legitimacy of scientific knowledge in contemporary societies, the pervasiveness of the norms that define the nature of credible knowledge, as well as the fields of practice from which these norms emerge. Regulatory agencies, from this perspective, are “boundary organizations” (Guston 2001) that draw legitimacy from both the scientific and political spheres. Even though the knowledge they produce or mobilize to assess products and calculate risks or benefits is distinct from academic science, as conveyed by such expressions as “mandated science” (Salter 1988) or “regulatory science” (Jasanoff 1990), they still “claim the mores and authority” of conventional science (Wynne 1984). The power of regulatory agencies, in many ways, equates with and is measured by the credibility of the knowledge they produce. Knowledge is a primary resource for these organizations to assert their legitimacy, as are the authority and credibility of the scientific experts and professionals they recruit or consult through advisory committees.

Much of what experts do in or for regulatory agencies concerns the interpretation of regulatory science. Regulatory science covers the testing of technologies and their risks and the interpretation of test results in a mixed industrial, bureaucratic and academic environment, to legitimize the adoption of policy measures (marketing authorization, labelling, withdrawal, definition of thresholds for exposure, use conditions…) (Irwin et al. 1997; Borraz and Demortain 2015). In regulatory science, dedicated tests, measurements and all sorts of quantitative information are used to establish whether a technology may or may not be authorized, whether a component is hazardous and should therefore be banned or explicitly labelled, or whether use of and exposure to the technology should be limited. Relevant examples include clinical trials and the licensing of medicines; toxicological risk assessment and the authorization of chemicals; technology assessment and pollution control technologies; fault-tree analysis for nuclear reactor design; life-cycle analysis for elements of new transportation systems; and so on. These forms of knowledge are pivotal in the evaluation of technologies, innovations and risks. They are partly defined in de jure and de facto standards (guidelines), often internationally (Cambrosio et al. 2009). For the rest, they rest on professional conventions, semi-codified ways of evaluating problems, and the experience of the scientists and engineers who perform or interpret them. These forms of knowledge perform regulatory intervention. By their very content and form, and given the socio-technical protocols and networks that support them, they make it possible to take public, legally binding decisions about technologies. They may also be invoked to justify not intervening, letting technologies reach the market or stay in use.

Regulatory science is not a new subject in the social sciences, particularly in research on expertise and the use of science by regulatory agencies, or in the field of Science and Technology Studies (STS). This introduction to the special issue offers thoughts on why it seems relevant and timely to bring it back in, in the light of newer issues of regulation of innovation, regulatory failure and public problematics of industry bias, regulatory capture and experts’ conflicts of interest. Given the centrality of regulatory science, and the momentous decisions and stakes it commands, it becomes crucial to ask where its standards come from and gain credibility, but also what valuations of technology, and what appreciations of their risks or benefits, they embed. Who controls these standards? This paper introduces the four contributions comprising the special issue. It outlines a perspective from which to question the construction of regulatory science or, in the terminology adopted here, the authorization and standardization of regulatory knowledge, particularly the role of networks of scientific experts therein.

Controversies in Regulatory Science and Expertise

Today, most policies and regulations are, in discourse at least, founded on scientific assessments and quantitative evidence. In this context, the regulatory science of testing and evaluating is of primary importance. It is the channel through which regulatory acts and non-acts are negotiated, through which products diffuse or fail to, and through which policy objectives of protecting health and the environment, or of sustaining innovation, are pursued. Standards of regulatory science are as important a stake as the technology standards they make it possible to adopt. Regulatory agencies and the regulated industries know this well. They invest massive amounts of money in building the necessary capacities and competencies, and in what appears to be a vast economy of testing and evaluation, complete with its own infrastructures, industries and professionals. Being so central to policies for technological innovation and risks, knowledge requirements are also the object of much lobbying and corporate influence activity. Industries expend great effort trying to influence and negotiate these protocols, and interpreting the resulting information, rather than waiting to see what sort of measure regulators will eventually derive from them.

Not only are the results of regulatory science—the particular assessments of this or that risk or technology—controversial. Its constitutive standards, concepts and protocols are now frequently the object of critique. The standards of knowledge applied in a given regulatory area by experts, professionals and scientists are more and more frequently seen as a source of regulatory failures. They are seen as at least partly responsible for the inability to detect and avert uncertainties, sometimes major ones. This has emerged as a major source of questioning of regulatory science and experts, and their possible biases.

Many cases across diverse regulatory areas may illustrate this point. Sudden and unexpected discoveries of serious adverse drug reactions are a useful entry point. The general history of the surveillance of adverse drug reactions shows how difficult it is for official monitoring systems to channel signals and experiences of adverse reactions arising from patients and professionals who are not duly inscribed in these lines of communication, or who do not conform to its models of information and hierarchies of expertise. The recent scandal around benfluorex (Mediator) in France is precisely one such case. The deliberately faulty and misleading information provided by the firm to regulators over the years, and the incapacity of the latter to get closer to and control the industry as a site of knowledge production, meant that there was a high level of ignorance of the product and its effects within the regulatory system. This was compounded by the incapacity—both professional and epistemic—of the medical experts who populate the system of collection of signals of adverse drug reactions to set aside the standard of a statistically proven causal link between a medicine and an adverse event. As a result, the system ignored or minimized the experience reported by individual doctors, of individual patients allegedly suffering from an adverse reaction to the drug. The system of pharmacovigilance is designed to channel cases of individual adverse drug reactions up to the national agency for consideration. Once submitted, a report of a suspected adverse reaction undergoes several evaluations of the plausibility of the link made between the adverse reaction and the use of the drug. This work of “imputation” and causality assessment is guided by an algorithm, which is written into the law and defended by the clinical pharmacologists who built this pharmacovigilance system. The algorithm is a practical and decisive tool to organize and streamline the collective work of interpreting clinical information. In practice, however, it leads most case reports to be classified as “doubtful” (the different scores are: “incompatible,” “doubtful,” “possible,” “likely”). As the official inquiries revealed (Bensadon et al. 2011), it sustains a culture of clinical and statistical certainty, by which one needs a large array of converging case reports to issue a judgment of likely cause and move to a regulatory decision of withdrawal—at the expense of another practice, which would consider any individual case report as a possible signal, or alert, to be further investigated. Despite several convincing and legitimate proposals to reform this approach to pharmaceutical regulatory knowledge, the algorithm remains in use.
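To make the scoring logic concrete, here is a minimal, hypothetical sketch of an imputation algorithm of this general kind. The actual French imputability method combines chronological criteria (timing of intake, dechallenge, rechallenge) with semiological ones; the specific criteria, weights and cut-offs below are illustrative assumptions, not the legal algorithm.

```python
# Hypothetical sketch of a causality-assessment ("imputation") algorithm.
# The criteria, weights and cut-offs are invented for illustration; the
# legal French method is more detailed, but shares this overall structure:
# chronological and clinical criteria are combined into a category.

def imputation_score(onset_compatible: bool,
                     improves_on_withdrawal: bool,
                     recurs_on_rechallenge: bool,
                     alternative_cause: bool) -> str:
    """Classify a single case report into one of four categories."""
    if not onset_compatible:
        return "incompatible"   # e.g., reaction preceded exposure
    score = 1                   # compatible timing alone is a weak signal
    if improves_on_withdrawal:
        score += 1
    if recurs_on_rechallenge:
        score += 2              # rechallenge is the strongest criterion
    if alternative_cause:
        score -= 1              # a competing explanation weakens the link
    if score >= 3:
        return "likely"
    if score == 2:
        return "possible"
    return "doubtful"           # where most reports land in practice

# A suggestive report without rechallenge and with a possible alternative
# cause stays "doubtful" -- the pattern criticized in the Mediator case:
print(imputation_score(True, True, False, True))  # -> doubtful
```

A rule of this kind makes the collective work tractable, but it also hard-codes the conservatism described above: no single report, however alarming, can reach the “likely” category without rechallenge or replication.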

In chemicals regulation and risk assessment, a dominant standard of regulatory science is animal testing and the development of dose-response curves (NTP 2002). Controversies about the implications of this standard practice and culture of testing chemicals are increasingly frequent. In May 2012, for instance, the Food and Drug Administration (FDA) once more resisted calls to modify the limits of exposure to the chemical Bisphenol A and decided to continue permitting its use. The FDA was thus confirming a first assessment dating back to 2008, going against the calls of environmental and scientific groups to ban the substance—or at least to take into consideration the evidence that Bisphenol A has detrimental health consequences at low levels of exposure. Over the years, academic research has established plausible links between exposure to Bisphenol A, including at low doses, and a number of health problems such as breast cancer, low sperm count or developmental disorders. The vast majority of these studies, however, were left out of the corpus of data that the FDA considered, or were not given great weight in the overall set of studies.

One criterion was instrumental in selecting studies: compliance with the so-called “Good Laboratory Practices” (GLP), codified both by federal regulatory agencies in the US and now by the Organization for Economic Cooperation and Development (OECD). GLP requirements in no way define what a “valid” assay or study of a chemical risk is. GLP rules require, for instance, appointing a study director and a quality assurance unit, ensuring that all people involved have the appropriate qualifications to perform the test, that all raw data be kept for inspection, that sufficient room and storage capacity exist to separate groups of animals or test systems, and that equipment be routinely maintained and calibrated: an extensive list of requirements designed to ensure that tests are of sufficient quality and “integrity”—and to eliminate substandard private laboratories from the testing market. By applying that standard upfront, however, regulatory agencies mechanically cut off a large part of the relevant research from the science they consider, because many of the low-dose studies come from academic labs which do not apply GLP requirements. Conversely, the labs that are GLP-certified tend to restrict their testing work to standard, high-dosage protocols. By leaving out the studies that apply low dosages and do not follow GLPs, regulatory agencies form a more homogeneous body of research and knowledge, which concurs in showing that there actually is a safe dose of exposure, against those who think that following conventional regulatory methods of setting a safe dose will hardly protect against this non-dose-related, large public health threat (Myers et al. 2009a, b; Vogel 2013).
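A minimal sketch can make this selection mechanism explicit. The study records below are invented; only the filtering logic mirrors the process described above, in which screening on GLP compliance happens before any weighing of the evidence.

```python
# Illustration of how an upfront GLP filter homogenizes an evidence base.
# All study records are invented; only the selection logic is of interest.

studies = [
    {"lab": "academic", "glp": False, "dose": "low",  "adverse_effect": True},
    {"lab": "academic", "glp": False, "dose": "low",  "adverse_effect": True},
    {"lab": "contract", "glp": True,  "dose": "high", "adverse_effect": False},
    {"lab": "contract", "glp": True,  "dose": "high", "adverse_effect": False},
    {"lab": "academic", "glp": True,  "dose": "high", "adverse_effect": False},
]

# Step 1: the regulatory corpus retains only GLP-compliant studies.
corpus = [s for s in studies if s["glp"]]

# Step 2: low-dose, non-GLP findings never reach the weighing stage.
excluded = [s for s in studies if not s["glp"]]

print(f"{len(corpus)} studies retained, {len(excluded)} excluded")
# The retained corpus is homogeneous (high-dose, no adverse effect) and
# so mechanically supports the conclusion that a safe dose exists.
```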

The standardization of high-dose animal testing for regulatory purposes is not only at stake for BPA and endocrine disruptors. Similar controversies have erupted in other areas: that of genetically modified foods and pesticides, in which Gilles-Eric Séralini, a professor of molecular biology, has consistently engaged with what he sees as insufficient and biased animal testing for regulatory purposes by the agro-chemical industry; or that of the eco-toxicological impact of pesticides, where the tests routinely accepted by regulatory agencies to evaluate the danger of pesticides for birds and the environment are defined in private and expert circles. There too, they seem unable to capture certain effects, or to test safety hypotheses that NGOs push forward. For at least ten years now, the methods themselves, their incapacity to capture low-dose risks, and the possible pro-innovation biases of the experts who devised them have become an object of controversy, in this case with genetically modified organisms (Levidow et al. 2007), elsewhere with bee-harming insecticides (Suryanarayanan and Kleinman 2013).

These issues go beyond environment or health regulation and the kinds of knowledge that are routinely called “regulatory science” (a label that has indeed turned into a field of its own, concerning mainly the science of testing and evaluating pharmaceuticals, chemicals, pesticides and cosmetics; see FDA 2011; Moghissi et al. 2014; NAS 2016; Busquet and Hartung 2017). Going back to the financial crisis of 2008, for instance, it has become manifest that the models and standards of evaluation employed by financial analysts in banks and rating agencies were part of the problem (MacKenzie 2011, 2014; Dorn 2012). Not to mention probabilistic analyses and tests of nuclear reactor designs, which seem hardly affected by their inability to integrate the possibility of Fukushima-like events (Downer 2014).

The standards of regulatory science—the criteria against which objects are assessed, the models and laboratory settings in which representations of these objects are tested, the metrics and thresholds used to classify them as risky, sustainable, efficacious or not—resist change. It is difficult even to realize their particularities, and to become conscious of the particular paradigms or evidential cultures (Collins 1998; Böschen 2013) they are rooted in, unless a disaster or “epistemic accident” (Downer 2011) reveals their singularity. Where the perception of their insufficiency or biases becomes more acute, these standards can become the object of controversy and critique. And indeed, regulatory science and regulatory scientists seem to have a bad image nowadays. From drug disasters to the financial crisis, the same sorts of information shortages and regulatory failures are lamented. Regulators and inspectorates are frequently accused of missing signals of upcoming failures or disasters. The criticism extends to those who test or evaluate products, from the scientific experts and engineers consulted by regulatory authorities to the professionals who populate these organizations.

Regulatory science is often accused, publicly, of a broad and generalized bias towards technical innovation. Controversies surrounding the reform of clinical trials methodology (Carpenter 2016) or programs of toxicity biomonitoring (Daemmrich 2012) underline the fact that this science is mostly of industrial origin, and therefore by definition framed by an emphatic understanding of products’ qualities. This science, being produced and reviewed in closed settings, does not undergo the kind of open peer review that would bring it credibility and trust (Michaels and Wagner 2003). The persistence of uncertainties about the impact of products is ignored in risk or technology assessments that often result in reassuring numbers. Discrepancies between studies that take the same technologies or risks as their objects, or between assessments of these studies by different expert committees, combine to create doubt about the objectivity of regulatory science.

A pervasive theme, these days, is that of conflicts of interest: experts who evaluate technologies, benefits, risks, and so on, are said to be too close, organizationally or cognitively, to the industries they help regulate. They are allegedly unable to review products critically, and to produce negative decisions about them, such as a ban or a stringent reduction in their distribution. Regulatory agencies are affected by revolving-door phenomena, by which members of the industry they regulate come to work on the public side, and minimize interventions by limiting the circulation of negative information. By sustaining set methods and ways of (not) seeing risks, experts effectively act as agents of broader strategies of active production of doubt and ignorance (Michaels 2008; McGarity and Wagner 2008; McGoey 2016; Dedieu and Jouzel 2015). Regulatory tests, the argument goes, are designed to minimize or evacuate risks, and to make more effective the industry’s strategies of drawing attention to risks other than those posed by its own products. They disenfranchise environmental and health activists, even those who work to put their claims through in scientific terms. They disguise value choices (for technological innovation and market development) in scientific methods and measurements of the risks and benefits of technologies.

The notion of “capture” completes the point (Carpenter and Moss 2014): this regulatory and corporate science only produces data and measurements favorable to the product being evaluated. And given the cultural and social interactions among industries, experts and regulators (Kwak 2014), all align on the same way of seeing risks and technologies, and tend to believe in the proclaimed objectivity of these measurements. Regulators, in particular, thus lose the capacity to go against the regulated industry and the opinion of their experts. Regulatory science, in this sense, culturally disables agencies’ “directive” power (Carpenter 2010). It favors particular audiences—the regulated industry, and a neoliberal framework of market expansion (Abraham and Reed 2001, 2002; Abraham and Ballinger 2012), by staging closed negotiations around the validation and content of the evidence it produces. It also contributes to the exclusion of various non-expert publics from regulatory consultations, and prevents the experience and views of these publics of the technology under consideration from being taken into account.

These debates are not completely new, of course. Protests against regulatory science, such as methods for the risk assessment of chemicals, encompassing toxicological methods of dose characterization and statistical extrapolation techniques, emerged almost as rapidly as the techniques themselves stabilized and entered into use. Starting in the second half of the 1990s, public concern arose over the choice of methodologies and scientific criteria applied to environmental issues to define whether or not to intervene. Risk assessment was gradually linked to conservative interests, and to industry influence. The tobacco, chemical and oil industries have repeatedly pleaded in the US for the application of risk assessment and cost-benefit analysis methods, promoted as the standard of “sound science” (Wagner 1995). Even though risk assessment could have translated into more precautionary and protective methods, and was in fact at times advocated by environmental and public health groups, it was more and more seen in the 1990s as an instrument of the industry’s and conservatives’ deregulation agenda. The common opinion about risk assessment, on the side of public interest groups at least, came to be that risk experts buried pro-industry value judgments within arcane and reductive risk calculations (O’Brien 2000; Wynne 2002).

But the current situation may be new in that regulatory science has institutionalized, often on a global plane (Levidow and Murphy 2003; Winickoff and Bushey 2010; Winickoff 2015; Winickoff and Mondou 2017). Most methods in place decades ago are still applied. At the same time, the testing, monitoring or evaluation of technologies continues to be framed in a scientistic way, stressing the importance of quantification and predictivity. Thresholds of evidence (Collins 1998) have continued to increase, at least formally, as the industry and profession of regulatory science, testing and evaluation of products have organized. The above critiques of regulatory science may testify to the fact that it has emerged as a strong, well-structured economy of knowledge production, one that is harder to change or reorient. This is particularly problematic in the light of the emergence of new technologies, which require adapting these tests to evaluate the singular risks potentially associated with them. One only tests or assesses plausible, expectable aspects of these technologies. In the terminology of unknowns, regulatory science focuses on known unknowns, not unknown unknowns. This is indeed a defining feature of testing (Pinch 1993: 26). So, new technologies are sometimes tested like old ones, even if their properties and effects can be substantially different and not well known. This regulation-of-innovation problematic is pervasive, as the cases of new medicines (Faulkner 2012; Groves 2013) or nanotechnologies show (Laurent 2017).

Expanding Research on Regulatory Science

Regulatory science became an issue in the 1970s and early 1980s, when lawyers and public administration scholars started to worry about the conditions in which more and more scientific knowledge was channeled into the processes of risk and innovation regulation. The science that was used, and how it informed decisions by regulatory agencies, was the site of intense debates and controversies concerning the political and institutional factors that led to possible distortions or “mis-use” of science, and the legitimacy of regulatory agencies in shaping this knowledge (Regens et al. 1983; Schmandt 1984; Greenwood 1984a, b; Majone 1984). The study of the use of science in policy-making by public administration scholars has also shown that the shaping of an expertise based on external consultations with scientists or academic professionals is full of dilemmas for administrations (Ashford 1984; Greenwood 1984a, b; Rushefsky 1986).

In STS research, “regulatory science” denotes an intermediary or in-between domain of scientific practice, apart from both research and policy-making, which aims to fill gaps in the knowledge base relevant to regulation, to provide knowledge syntheses, and to make predictions (Jasanoff 1990). The notion emerged after or alongside other notions meant to capture the dilemmas of producing scientific-looking knowledge in politicized environments. Weinberg (1972) had spoken of trans-science, to denote questions that are formulated in the language of science but unanswerable by science alone, because of their ambiguity and inherent uncertainty. After “regulatory science,” Ravetz and Funtowicz spoke of post-normal science, meaning the science produced in relation to issues for which “facts [are] uncertain, values in dispute, stakes high and decisions urgent” (Funtowicz and Ravetz 1993). Regulatory science, like these other terms, denotes a practice of knowledge production and claim-making that does not adhere to the modes of knowledge validation that prevail in established scientific disciplines. It is a kind of scientific practice subject to constant negotiation of the criteria of facticity and truth; one in which the deconstruction of claims is systematic, and which leaves nearly no site of scientific authority unaffected. Jasanoff (1987, 1990, 1995) has amply documented how people involved in regulatory science engage in constant boundary-work, to separate the political from the scientific in their practice, and thus to preserve their chances of being perceived as objective (Jasanoff 2012). Credibilization work takes different forms in different sectors and countries – scientists follow local institutional logics to enhance their legitimacy. These practices are variegated. They are structured by very diverse cultures and modes of reasoning. They are “culturally situated, contested, and enacted at multiple sites and organizational levels” (Jasanoff 2012: 308), which is why regulatory science has different faces depending on the institutional settings in which it is practiced (Halffman 1995; Joly 2016).

Research in the field of STS not only documented these local, cultural determinations of what science for policy and regulation looks like. It also stressed (though perhaps with much less investigation) the fact that regulatory science is the product of a mutual construction of scientific paradigms and policy frameworks. Shackley and Wynne, based on a study of the dominant family of climate models (general circulation models), find that “the dominant agendas, commitments and goals of particular policy communities” are referred to in scientific communities as justifications to construct and adopt “particular scientific styles, practices and dispositions” (Shackley and Wynne 1995: 221). In return, these scientific commitments, once they appear uncontested and objective, cement the policy goals that motivated them initially. Shackley and Wynne observe a similar mutual construction of science and policy in the regulatory control of the marketing of chemicals. In toxicity testing, a common fabric of assumptions and claims unites experts and policy-makers, such as the notion that predictions of human safety can reliably be derived from tests on animals. In particular, it is assumed that a threshold of exposure (a dose below which no adverse effect materializes) exists, and that it is similar in animals and in humans (Shackley and Wynne 1995: 220). This assumption both strengthens toxicologists’ expertise, based on the performance of these animal tests and the computation of these thresholds, and secures the capacity for decision-makers to make an effective decision and regulate: the threshold can easily be translated into a risk management measure limiting the exposure of consumers to a certain amount of the substance.
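The translation the authors describe can be made concrete with the conventional form of this computation. The default uncertainty factors of 10 for animal-to-human extrapolation and 10 for intra-human variability are standard in toxicological risk assessment; the NOAEL figure below is an invented example:

$$\text{ADI} = \frac{\text{NOAEL}}{UF_{\text{interspecies}} \times UF_{\text{intraspecies}}} = \frac{5\ \text{mg/kg bw/day}}{10 \times 10} = 0.05\ \text{mg/kg bw/day}$$

The acceptable daily intake (ADI) derived in this way can then be compared with estimated consumer exposure, which is what makes the threshold so readily actionable as a risk management measure.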

This notion of the mutual construction of science and policy, continued in the language of the “coproduction” of the social order by science and law (Jasanoff 2004), illustrates the fact that regulatory science is not simply a site of complicated negotiations over the credibility and authority of scientific expertise. It also reflects the fact that science is the source of the rules, instruments and “animating ideas” that make up regulatory regimes (Hood et al. 2001). Regulatory science incorporates legal criteria of “facticity” and truth (Salter 1988; Wynne 1989; Irwin et al. 1997; Jasanoff 2004), and adapts to institutional regimes and cultures (Collingridge and Douglas 1984; Johnston 1984; Campbell 1985; Hamlin 1986; Alm 1997). In the terminology of Pickstone (1993), as adapted by Gaudillière (Gaudillière 2009; Gaudillière and Hess 2012), regulatory science embodies a particular way of knowing things, one that legitimizes a way of intervening on these very things and the associated markets. There is an intimate relation between the supposedly scientific activities of measuring, assessing or testing objectified phenomena such as the risks or benefits of a technology, and the legitimacy of deciding on its fate, of structuring a market for it.

Regulatory science, as an activity of testing, evaluating and monitoring technologies and their effects, entrenches a particular definition of the regulatory object (Cloatre and Pickersgill 2015). The methods of regulatory science, the very design of the tests that are employed and the rubrics of information produced, operate on a definition of the object. Clinical trials are designed to produce a simplified measure of the benefits and risks associated with a drug, where the benefit/risk ratio is the property of the drug on the basis of which a regulatory decision is made (Marks 1997). A genetically modified organism is uniquely identified by a “transformation event,” a concept inscribed in law which determines the methods of measurement and testing of each new genetically modified organism, and a field of practice for regulatory science (Lezaun 2006). Regulatory science is the science that operates within the framework of these legal and technical references for what the product is and what problems it may pose. It generates information within this framework. It institutionalizes this framework further, and brings credibility and veracity to the references it is founded on.

Standards for what to analyze in products and technologies, and how, also embed regulatory strategies and concepts (Demortain 2011), and particular decision options. By prescribing the production of certain information and not others, regulatory science enables certain regulatory strategies and discourages others. To take an example again: founding the evaluation of genetically modified organisms on a concept of “substantial equivalence” and on a methodology of comparative chemical analysis is biased towards positive authorization decisions for these very products, since such chemical analysis rarely finds differences or raises concerns about potential hazards (Levidow et al. 2007; Demortain 2013, 2015). As a regulatory-scientific method, it embeds a preference for a light-touch approach, and seems to ease positive market authorization decisions. Regulatory science is, broadly speaking, a form of gate-keeping (Faulkner 2017), but its specific criteria, methods and representations of the product and of the risks matter, for they open up the possibility of applying certain regulatory options and strategies.
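A minimal sketch of a “substantial equivalence” style comparison, under invented values, can show why the method is structurally unlikely to raise concerns: the modified variety’s measured composition is simply checked against the range of natural variation observed in conventional counterparts.

```python
# Hypothetical sketch of a "substantial equivalence" comparison.
# Analytes, ranges and measurements are invented for illustration.

conventional_range = {"protein_pct": (8.0, 12.5), "lysine_mg_g": (2.4, 3.9)}
gm_measurements = {"protein_pct": 10.1, "lysine_mg_g": 3.1}

def substantially_equivalent(measured, reference_ranges):
    """Equivalent if every analyte falls within the conventional range."""
    return all(lo <= measured[k] <= hi
               for k, (lo, hi) in reference_ranges.items())

print(substantially_equivalent(gm_measurements, conventional_range))  # True
# Because natural variation ranges are wide, the comparison rarely flags
# a difference -- the bias towards authorization described above.
```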

Regulatory science, in turn, is constitutive of the actors of regulatory systems. It is constitutive, first, of a scientific and expert personnel: those who can credibly claim to be able to operate the scientific methods in question, or to have founded them. Behind each regulatory regime stands a dominant disciplinary cadre of experts, which develops the regulatory science framework prevailing in the area (Bodewitz et al. 1987), creating a mutually constitutive link between experts, as people, and objects and ways of testing them (Cambrosio et al. 1992; Eyal 2013). The information and science that is mandated to regulate products also defines what a regulatory and a regulated organization is, does and looks like. In a regime of regulation by information (Kleindorfer and Orts 1998; Sunstein 1999; Karkkainen 2000), an attempt is made “to change behavior indirectly, either by changing the structure of incentives of the different policy actors, or by supplying the same actors with suitable information” (Majone 1997: 265). Regulation by information has led to myriad product labeling laws and regulations that mandate the computation and disclosure of various types of indicators, from accident rates to emissions of toxic chemicals, mainly to be applied by the regulated companies (Coglianese and Lazer 2003). These formats of regulatory information, over time, shape regulated organizations, as companies gradually organize and build up the expertise necessary to execute the protocols and inform negotiated risk criteria. For instance, research on food safety systems has shown that the requirements for food hygiene and safety have de-selected smaller businesses from an initially large and diverse food industry, because of their incapacity simply to organize themselves to produce this particular sort of scientifically validated information about contaminations occurring in food production lines, as prescribed by the dominant HACCP method (Demortain 2008; Wengle 2016). Rigorous drug risk-benefit evaluations by regulatory agencies require equally rigorous and ambitious testing of drugs by companies, to such an extent that the companies that are organized to understand and respond to the information needs of drug regulators (with large regulatory affairs and Research & Development departments) fare better on the market than others (Carpenter 2010).

The same goes for regulatory organizations. Regulatory agencies, in a sense, draw their authority from the fact of shaping, and being shaped in return by, certain forms of knowledge and competence (Selznick 1985). They can credibly make decisions based on the results of particular tests or sets of accident data only if the various actors that produce and convey this knowledge, from the outside of the organization through to the inside of it, and from the staff who work on the information up to those who decide on it, share similar criteria, so that knowledge can flow seamlessly between them. The expertise that the organization recruits, and the knowledge criteria and practices it institutes internally, very much matter, then, for the capacity of the organization to be recognized as expert and objective (Carpenter 2001).

Regulatory Knowledge: Sites, Standards and Politics

Much of what constitutes an effectively functioning regulatory regime thus takes shape at the same time as standards for what needs to be analyzed and known. Questioning the emergence of these knowledge standards, their institutionalization, the networks within which they are negotiated, the cadres of scientists and professionals that make them credible, represents a potentially rich field of inquiry about science, knowledge and regulation.

These questions are prompted by the examples given above, of contestations of the standard methodologies for evaluating the risks and benefits of technologies, and of accusations against expert frameworks for mis-evaluating the problems potentially associated with these technologies. Why, then, or in what circumstances, do science-based regulatory regimes come to be contested, as the examples above seem to show? Why do the credibility and legitimacy of regulatory science get attacked, in certain cases at least? Why do the frameworks of regulatory science, at the same time, seem so difficult to alter?

Asking these questions is a way to acknowledge the fact that the standards of regulatory science are only a fragment of the potentially quite diverse regulatory knowledge that exists in any regulatory area. Regulatory knowledge has been approached as the knowledge of rules and of their enforcement, especially in the field of socio-legal studies (Bardach and Kagan 1982; Hutter 1997; Levi and Valverde 2001). The term appears sporadically in other contexts, e.g., to cover the knowledge practices of scientists involved in regulatory risk assessment (Jasanoff 2012; Camic et al. 2012). Michael Power uses the term to designate the data categorized and reported by lawyers and internal auditors in the framework of financial control (Power 2005). The sociologist of law Alan Hunt defined regulatory knowledge as the information gathered to construct an object of regulatory intervention (Hunt 1997). An object of regulatory intervention is constituted when collective experience about a problem emerges, and information is collected about this problem. This information, in turn, constitutes the problem as a well-defined phenomenon, something to which one can pay attention. This prompts the development of technical and scientific procedures to collect information more systematically, to measure the problem and to investigate its possible causes. Before any scientific method of assessing the health, environmental, safety or financial risk of a technology can be advanced, there needs to be an agreement that this risk is a relevant experience to measure. This experience needs to be named and categorized; it needs to be agreed upon as real, as well as valued as something that needs to be measured and controlled.

Many of the objects of regulation mentioned in this text, and probably any object of regulation, are the object of a potentially very diverse regulatory knowledge, understood in just this manner as the ensemble of experiences, value appreciations and information that construct an object for regulation and legitimize regulatory intervention. The evaluation and control of technologies indeed motivates a great variety of social actors to produce and circulate knowledge to influence the regulatory fate of technologies. These start with regulatory agencies (not a single actor: there are various interests, groups or divisions within each regulatory agency, and there are many agencies and committees at national, European or international level; regulatory agencies integrate, more or less cohesively, various expert committees or scientific panels), but also include a variety of regulatory intermediary bodies (many of which are private, like credit rating agencies in the case of financial products, or contract research organizations and regulatory affairs consultancies more generally when it comes to pharmaceutical or chemical products). And there are many more actors: expert and professional communities, NGOs and consumer groups, journalists (each of these categories having an internal diversity of its own), organized groups of users… Each of these screens, experiences or experiments with, analyzes and assesses technologies by its own means, and potentially puts the knowledge thus gained into circulation. Regulatory knowledge evokes their experience and competence, but also their particular ways of seeing and knowing technologies. They may all contest whether their own experience and ways of knowing technologies are represented.

Regulatory science represents one part of this potential, socially distributed regulatory knowledge. To use Collins’ terminology of types of knowledge (Collins 1993), it may be called the embrained and (legally) encultured part of regulatory knowledge. It is incorporated in the concepts and paradigms which inform what to look for, and how, among the properties of technologies. Regulatory science represents that part of regulatory knowledge which industries, regulators and experts agree to consider relevant, and to replicate through agreed-upon tests that measure this particular property—but not necessarily others. However, the formal, mandated science and information used in public policy, or in governance more generally, is the tip of an iceberg (Lindblom and Cohen 1979; Freeman and Sturdy 2015). Experience, or “embodied” knowledge (Collins 1993; Blackler 1995), is a term that can help cover all of the not necessarily codified and formalized bits of knowledge in circulation about technologies, their benefits and risks, which constitute valuations of technology (Vatin 2013). These valuations, in turn, draw from other sorts of knowledge that cannot all be classified as scientific: appreciations of the potential use of a product in the population, observations of its risks and benefits, but also, very often, economic information about the technology in question.

Regulatory knowledge encompasses the variety of experiences and situated knowledge that various agents, more or less close to regulated businesses, regulatory organizations and the arena in which they interact (Boullier 2016), are in a position to put into circulation. Certain groups are moved to contest the common regulatory knowledge of one or another technology, based on the accumulation of particular experiences and alternative information. Such groups include, for instance, AIDS activists on medicines (Epstein 1996), or green activists opposing conventional safety assessment methods for pesticides (Tesh 2000). These agents, initially more peripheral in the contested social field of the regulation of technology and its risks, do take care to accumulate information and data about technologies, review scientific studies, and sometimes perform tests themselves. Professionals and regulatory agents themselves are concerned by this form of knowledge production. Their competence includes the largely tacit ability by which toxicologists devise tests for new chemicals, or by which administrative officials interpret regulatory data. But it should also extend to the informal ethnographic practice of administrative officials who sense and assess technological developments as they interact with representatives of firms or talk to any market actor.

What do we learn if we take into account the regulatory knowledge that administrative officials, or any professional or consumer group, accumulate as they experience and observe the effects of certain technologies, or the observations and sensations that NGOs or doctors collect among their members, consumers or patients to form a new image of, say, the hazardousness of a product? How is that knowledge incorporated, or not, into the body of information that gets verified and tested in the framework of standard regulatory science, and that regulatory actors decide upon? How, in general, is more synthetic, situated and experiential knowledge (as opposed to analytic, quantitative knowledge) used at all in regulatory processes of monitoring, standard-setting and control of technologies? How, and with what result, do the local and situated experiences of technologies of various collective actors—e.g., consumer associations or environmental NGOs collecting perceptions from consumers about the toxicity of a product—collide with the knowledge produced following formal standards of testing and monitoring? Whither this alternative regulatory knowledge? How is this knowledge “authorized” (Levi and Valverde 2001) or not? Who selects knowledge, in what sites, following what sort of social and epistemic criteria?

No particular theoretical perspective is advocated here, but rather a set of questions and an approach to this broad issue. Regulatory knowledge is that knowledge which constitutes and legitimizes regulatory interventions. But regulatory knowledge is also, at its source, potentially diverse. How is it selected, then, as it gets formalized and standardized in professional and industrial networks? Who controls this selection of knowledge and its labelling as regulatory science? Who “black-boxes” this knowledge in the test protocols and models that turn out estimations of risks, and are these ever reopened and scrutinized, and if so by whom (Fisher et al. 2010)?

One implication of this emphasis on regulatory knowledge is a change in the analytical status of the notion of “regulatory science.” Past social-science research used the term conceptually, to designate a particular kind of science constructed and negotiated at the interface between research and policy (Jasanoff 1990; Irwin et al. 1997). This strand of work was concerned with the paradoxical call for science in policy, which very often resulted in a politicization of this science and the impossibility of sustaining the claim to objectivity and authority. It connected to an older problem about the relation between knowledge holders and politics or decision-making, and the “uses” of research and science in policy (Weiss 1979; Boswell 2009; Schrefler 2010). The main categories of thinking continue to be “science” and one of its supposed counterparts: the “state,” “policy,” “decision” or “the law.” The main problem here is that regulatory science does not necessarily take shape at the “interface” and in interactions between science and policy (Weingart 1999), and in the hands of scientific experts alone—the main focus of research on expertise and regulatory science (Grundmann 2017). It takes place upstream of that interface, before any policy decision about the object can be formed, and it takes place elsewhere, in the multiple fora in which information about technologies and their effects emerges and is socially evaluated (Hunt and Shackley 1999).

The other issue with using regulatory science as a sociological category of knowledge is that the term itself is used as a rhetorical tool of legitimation. Irwin et al. (1997) noted some time ago that characterizing regulatory science against normal or research science led to idealizing the latter, and to idealizing it much more than the sociology of scientific knowledge would accept. Now, in the context of political debates about regulatory science, about its industrial origins, the nature of its standard tests and measures, and the extent to which it is captured by a pro-technology vision, this becomes even more of a problem. The term “science” itself is the object of contention and of tactics of appropriation. It is part of the contentious attempt to promote some knowledge over other knowledge. Through such qualification, we see public and private experts, industries, NGOs and administrations arguing over which knowledge to take into account in decisions. “Regulatory science” is accused of being “corporate” or “industrial science” in disguise. Academic and industry experts are quick to invoke the need to revert to better, more “sound science,” against the information and technical claims used by environmental or health public interest groups, disparaged as “militant science.” All of these expressions are ways of qualifying or disqualifying knowledge about technologies and risks. All of these knowledges necessarily fail to fulfill science’s ideals and norms. The question is rather who wins in this epistemic and political contest over what knowledge is endorsed in regulatory regimes. The term science is clearly a part of the problem, not a means of conceptual resolution. And arguing that this knowledge is, somehow, “science,” leaves the impression that the social analyst is there to adjudicate the dispute.

Expertise and the Authorization of Regulatory Knowledge

This special issue sheds light on this diversity of regulatory knowledge, of its forms and sites of production. It also highlights and seeks to explain the particular ways of knowing technologies that prevail in given regulatory domains, and what the increasingly instrumental attempts of variegated actors and coalitions to shape regulatory knowledge do to the standards that define regulatory science. It looks at the practical and political factors behind the intertwining of certain regulatory interventions with particular bodies of regulatory knowledge, the networks of actors that control them, and how these have come to dominate a regulatory regime.

Going back to the problem of expertise, the issue deals with the following questions: What does scientific, professional, certified and authoritative specialized knowledge do to this selection and standardization of knowledge? For all the diversity of knowledges that potentially percolate in regulation, there is still a lot of asymmetry in who defines and controls explicit knowledge forms. Presumably, experts select which experience with products and risks seems legitimate to recognize, and adjust instruments to assess the frequency of that particular experience. Expertise is a form of gate-keeping, in which possession of legitimate experience, and competence in running conventional tests and interpreting their results, is what qualifies one for inclusion in regulatory activities. Expertise accounts for the disciplinary and professional networks that control the regulatory regime (Kastenhofer 2011), and helps align its various actors and institutions on the same shared experience and appreciations of technologies and their effects.

The papers in this special issue look at the regulation of pharmaceuticals, innovative medicines or transport technologies. In a general sense, they consider the regulation of innovative products and their risks. They shed light on prevalent norms of regulatory knowledge in pharmaceutical, chemical or transport technology regulation – clinical trials, reliability testing, risk assessment, technology assessment. All of them illuminate the paradox of regulatory knowledge: the fact that it is diverse in its sites and forms, and thus potentially quite controversial; but also that it is the subject of processes of selection, authorization and standardization, and that it becomes institutionalized in the form of a dominant regulatory science. What the papers do is show precisely in what sort of contexts the authorization and standardization of regulatory knowledge occurs. In a nutshell, they show that some people are experts of the regulation of technologies not (or not only) because they have sufficient credibility and authority to calculate the level of risks or benefits of these technologies, but because they are in a social position that allows them to control the experience, appreciations and sets of information on the basis of which products are seen as risky, beneficial, worthy or otherwise, and hence constructed as objects of regulatory intervention. The experts of regulatory science, whether they perform or interpret tests, in many ways pre-define regulatory problems and objects of regulatory attention.

Alberto Cambrosio, Pascale Bourret, Peter Keating and Nicole Nelson (Cambrosio et al. 2017) study changes in the methodology of cancer trials, more specifically the increasing use of biomarkers in clinical trials to define the safety and efficacy of medicines. They show that, biomarkers being so critical to defining what an effective drug is and how to test these very anti-cancer drugs, those who know and can collectively standardize biomarkers have, in effect, a massive influence on regulatory outcomes. These transnational networks of authoritative clinicians are in a close relationship with the Food and Drug Administration, which relies on them to be able to define and supervise protocols for clinical trials, instead of prescribing a standard method for these trials and controlling their execution at a distance. In doing so, the agency participates in the selection of biomarkers, in the shaping of clinical trials, and eventually in the selection of the drugs that may pass these tests. The central demonstration of the paper concerns the role that what the authors call networks of expertise play in constituting biomarkers as regulatory objects, and in changing clinical trials as the norm of regulatory decision-making for drugs.

Based on a rare opportunity to observe the work of expert assessors of new drugs in the specialized body of the European Union, Boris Hauray demonstrates that the construction of a decision of this kind in fact draws on much more varied bodies of information: statistical data, but also therapeutic experience and perceptions of the needs of patients (Hauray 2017). In other words, even in a regulatory regime officially rooted in science and medicine, as well as in the most accomplished form of statistical proof of the efficacy of a drug, decisions can only be formed with the support of other forms of qualitative knowledge and sources of information. Quite paradoxically, regulation is all the more evidence- or science-based as it relies on arenas in which these various bodies of information can be collected, synthesized, but also freely criticized and appraised, away from public eyes.

Alex Faulkner and Lonneke Poort (Faulkner and Poort 2017) continue this theme, on a different object: the work of experts to examine the attributes of new technologies, in their case advanced therapy medicinal products, to either stretch, maintain or break existing legal frameworks to deal with novel technologies. The two authors compare two regimes and several countries, to show that, far from being only reflective of explicit regulatory avoidance strategies by firms, these law-making practices are rooted in a broad knowledge base covering the qualities and risks of the actual technology, but also knowledge of the regulatory frameworks themselves, the traditions and assumptions built within them, as well as ethical desirability. This knowledge is assembled, defended and codified in networks of scientific experts and specialized advisory committees, more often than in open policy fora. Only with this broad coverage and expert institutional legitimacy can knowledge of technologies be used to forge a new regulatory regime.

Finally, John Downer, in a comparative study of the regulation of civil aircraft and nuclear reactors, shows that the regulation of innovation in civil aviation cannot be reduced to the authoritative intervention of regulatory bodies on aircraft manufacturers, to authorize or reject a given design (Downer 2017). What Downer calls “innovative restraint” is a sort of self-regulatory attitude of aircraft manufacturers, who develop designs incrementally, duly following the lessons learned through multiple hours of use and accumulated ‘service data.’ In other words, the way in which one knows that a design is safe is through experience. The sector is in large part regulated thanks to the accumulation and sharing of this experience among regulators and regulated companies, at least as much as through decisions inspired by physical tests and flight simulations. Statistical service data is the best predictor of reliability, and sits at the top of the hierarchy of knowledge that the Federal Aviation Administration uses. Any attempt to influence regulation to let innovation through – say, to authorize greener aircraft designs and motorization – will have to comply with that knowledge norm and way of knowing what is safe and reliable.
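To illustrate the kind of inference that service data supports, here is a generic, textbook-style estimate of reliability from accumulated flight hours. This is not the FAA’s actual methodology, and all figures are invented.

```python
# Generic sketch of reliability estimation from accumulated service data,
# treating failures as a Poisson process. Figures are invented.
import math

def failure_rate(failures: int, service_hours: float) -> float:
    """Point estimate of the failure rate, in failures per flight hour."""
    return failures / service_hours

def mtbf(failures: int, service_hours: float) -> float:
    """Mean time between failures, in flight hours."""
    return service_hours / failures if failures else math.inf

# A design with 2 failures over 10 million accumulated flight hours:
hours = 10_000_000
print(f"rate: {failure_rate(2, hours):.1e}/h, MTBF: {mtbf(2, hours):,.0f} h")
# Such estimates only become meaningful as service hours accumulate,
# which is why incremental designs, rich in service data, are easier
# to defend as safe than radically novel ones.
```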