Introduction

Any commercially available, bench-scale, or even proposed “back-of-envelope” application of synthetic biology (SynBio) will prompt discussion and debate—perhaps highly philosophical, perhaps highly practical and legalistic—both about how to think about the application and what, if anything, to do about it, pro or con. The former kind of discussion is the domain of precautionary or “permissionless” (Thierer 2016) rhetoric, of quantitative risk assessment, and of cost-benefit analysis; the latter is the domain of risk management, regulation, information disclosure, industrial policy, and other interventions.

SynBio applications are controversial because of their promise and their peril—in short, because they can greatly reduce risks and also because they threaten to expose humans and the natural environment to new or increased risks. I will discuss these issues throughout this chapter, but a working introductory definition of SynBio is “using tools of molecular biology to engineer new or improved cellular products or processes” (see Cameron et al. 2014). A working definition of quantitative risk assessment (QRA) is “a method that synthesizes information from basic sciences (e.g., toxicology, epidemiology, chemistry, statistics) to explore the probability that one or more adverse outcomes will occur from a product or process, and to gauge the severity of each outcome” (see Kaplan and Garrick 1981). As I will discuss below in the section “Risk Assessment Methodologies for SynBio”, the output of a useful QRA is not a yes/no pronouncement about the existence of a risk, or even a quantitative estimate both of its likelihood and its consequence. It is, instead, a “characterization” of risk (NAS 1983) that offers information about (1) the extent of scientific uncertainty that, if analysts are honest and humble, precludes them from pinning down the probability or severity with precision and (2) the extent and nature of interindividual variability in the risk, so that different populations can appreciate that probabilities and severities also depend on who is facing the hazardous condition(s).

This chapter, the capstone product of a project supported by the Alfred P. Sloan Foundation, breaks new ground in two fundamental and complementary ways—one dealing with analysis of evidence and one with evidence-based action. With respect to analysis, many thought processes about and formal assessments of possible harms to human health, safety, and environment (HSE) begin and end with the simplest of questions: is it “safe”? Or, slightly more broadly, “does it promise economic benefits in excess of the (monetized) harms it presents?” Such questions allow (or at least encourage) only dichotomous answers, but worse, they crowd out more sophisticated, sweeping, and bold risk assessment questions. Similarly, many interventions (risk management) to control possible HSE harms consider only the narrowest range of actions: should we ban the new process/product/activity, or should we declare “nothing to see here; let’s move on?” Here my concern is that the “poverty of choices” can lead to poor decisions, akin to how a “poverty of questions” can lead to poor analyses.

I instead start from the premise that asking a wider range of questions and considering a wider range of actions are both unambiguous virtues. This is not to say that expansive and protracted analyses always outperform simpler ones or that circumscribing the choices to “go/no-go” is always mediocre—only that simplifying the analysis and narrowing the range of options should be done consciously and at least somewhat reluctantly.

These twin considerations apply in spades to the new arena of SynBio (for an excellent primer on the issues raised, see Moe-Behrens et al. 2013, or Rodemeyer 2009). First, to a greater extent than is true for most other new and continuing sources of HSE risks, the dangers posed by SynBio applications are offset (sometimes partially, completely, or “more than completely”) by their direct and often unprecedented power to reduce other risks. Hence, I argue that traditional “is it safe?” risk assessment questions are particularly myopic here, as they ignore the real possibility that the new application is at the same time both objectively dangerous and yet a risk-reducing improvement over the status quo. Traditional “go/no-go” risk management choices are also particularly inappropriate for new SynBio technologies, because of their novelty, the rapidity with which unforeseen risks or unforeseen risk-reducing benefits may be realized soon after their deployment, their ethically controversial nature, and their dependence on a social license to operate and perhaps even public sector funding. For decisions like these, society has the opportunity (arguably the responsibility to itself and to posterity) to consider many shades of gray between draconian regulation and laissez-faire—as well as various creative options that are actually either more stringent than even an outright ban or more encouraging than even a hands-off posture.

To introduce the rich range of assessment questions and management options that I urge should be posed and considered in the analysis and governance of SynBio, I offer a hierarchical ordering of each: the risk assessment questions range from the most rudimentary to the most nuanced and expansive, and the control options range from the most favorable to the SynBio application to the most restrictive.

Table 1 (adapted from Finkel 2018a) presents ten distinct levels of analytic complexity, several of which I will highlight here. Level 1 represents the most qualitative appraisal possible: the “is it safe?” (or “is it costly to avoid?”) question. Level 5 offers the traditional cost-benefit question: are the expected risks reduced by the policy greater than its expected costs? As we will see in detail below, this question can easily be recast as an appraisal of the net benefit profile of a new application or technology: on average, does it reduce risk by more than it exacerbates it? The remainder of the hierarchy basically enriches the simple cost-benefit (or risk-risk) estimate with considerations of the two most fundamental phenomena surrounding all risks—the uncertainty impeding our ability to precisely quantify risk and the interindividual variability that makes any risk estimate uniquely applicable to only one individual or subgroup within the affected population. A cost-benefit (or risk-risk) analysis that fully considered both phenomena would ask questions of the form “for these particular citizens, what is the range of possible outcomes (the new technology reduces, leaves unchanged, or increases risks, by how much?), and what is the probability of each outcome?”

Table 1 Characteristics of a ten-rung ladder of complexity in policy decision-rules

A “Solution-Focused” Partnership between Analysis and Action

Armed with the answers to one or more assessment questions about a SynBio application, society could then consider whether they justify a response at or near the highly “bullish” left end of the spectrum, the highly restrictive right-hand end, or somewhere in between. Figure 1 displays a very broad range of possible responses to a SynBio application; of particular note, the right-hand tail of the range offers somewhat more ambitious prospects for SynBio control than are generally contemplated, while the left-hand region offers various gradations of incentives and support for SynBio that go far beyond merely permissive responses. The other unusual feature of this schema is that it explicitly construes the SynBio application as competing with existing materials or applications—therefore, options exist to constrain the SynBio application indirectly (by subsidizing the competing application or loosening regulations on it) or to promote SynBio indirectly (by regulating or banning the existing application).

Fig. 1

Spectrum of possible governance responses to a synthetic biology application

The main contribution of this chapter, however, comes in the space between risk assessment questions and risk management control options, simply because analysis should not leap to a specific action when the dots between the two are connected poorly and with little forethought. Knowing only the net risk (or net benefit) of a SynBio application, however comprehensively that risk is assessed (see Table 1), one can certainly take some action somewhere along the spectrum in Fig. 1, but this is far from the only paradigm for linking the results of assessment to the form, ambition, and stringency of control. When we look solely to risk assessment to inform and guide action, we implicitly assume that the amount of concern or worry proportionally dictates the amount of resources we should expend to reduce the given hazard. Instead, I have proposed (Finkel 2011; see also Natl Acad Sci 2009; Goldstein 2018) that a host of questions should intervene between the two parts of the “big risks, large controls” mantra:

Risk assessment for its own sake is an inherently valuable activity but, at best, a risk assessment can illuminate what we should fear, and tap into our inexhaustible supply of worry, whereas a good solution-focused analysis can illuminate what we should do, and mobilize our precious supply of resources. (Finkel 2011, p. 781)

By “solution-focused,” I mean a decision process that eschews risk assessment performed in one-risk-at-a-time isolation and disconnected from the appraisal of what solutions are or may be available to control the risks being compared. “Solution-focused risk assessment,” or SFRA, seeks above all to resist the temptation to declare victory when a risk has been quantified and a lower level of risk deemed “acceptable.” Such a mindset suffers from two fundamental flaws: it defines success as an isolated risk reduction, rather than a more comprehensive solution, and it is often satisfied with the aspirational success of declaring an acceptable risk level, even if the risk actually never is reduced to that level. So SFRA instead emphasizes (1) that we must compare risk-reducing (welfare-enhancing) opportunities, not disembodied risks, and (2) that the earlier in the decision process we array the possible solutions, the less likely we will define away promising answers to risk-risk and cost-benefit dilemmas and end up with a course of action that is inferior to others we neglected to consider. For example, the US Environmental Protection Agency (EPA) has a vigorous research program concerned with the toxicity of various plasticizers (beginning with bisphenol A (BPA)) that can leach into drinking water provided in disposable plastic bottles. Eventually, this work may lead to regulatory limitations on the allowable concentrations of BPA in bottled water. If EPA assesses the risks more holistically, it might lead to a suite of concentration limits on the various substitutes for BPA as well.

But imagine posing the question not as “how many parts per billion of each substance is acceptable?” but as “how can the market deliver clean, cold drinking water at an affordable price and with the smallest environmental and human health footprint?” That linkage between analysis and action might prompt public discussion of the energy use and disposal issues associated with the current annual production of 49 billion plastic bottles in the USA (from a baseline of essentially zero several decades ago, a time when US consumers did not want for ready access to drinking water). And that question might lead to discussions of how governmental incentives, taxes, or investments in infrastructure might help reduce the runaway demand for plastic water bottles of any kind and increase the supply of “free” (funded by taxpayers) or low-cost drinking water provided as we remember it in the 1960s–1990s—available in public places and lobbies of private buildings, via fountains and water coolers.

This emphasis on solutions is not only the polar opposite of the way EPA and other US federal HSE regulatory agencies have largely construed their missions since their founding decades ago but is also very different from recent “baby steps” EPA has taken to ground risk assessment in practical utility. In particular, EPA has highlighted, notably in referring to its 1998 Guidelines for Ecological Risk Assessment, that it incorporates “problem formulation” into its planning as a way to make risk assessment more useful. Unfortunately, this semantic change only means that EPA sometimes asks up front the question “how can we limit the scope of our research and risk analysis to issues that can help set a risk reduction goal?”—and this is quite different from “how can we harness risk assessment to discriminate among possible ways to fulfill a human need effectively and with minimal imposition of new risks?” SFRA is not a wholly new concept by any means, however—it can be thought of as a marriage between QRA and a more impressionistic “innovation-based strategy for a sustainable environment” (Ashford 2000) that steers industrial policy toward solutions that minimize risks.Footnote 1

SFRA is also particularly useful for emerging technologies such as SynBio, because it can reveal and supplant the false choice between risk and benefit. As Caruso (2008) pointed out near the inception of SynBio as a viable set of technologies, developers often advocate postponing risk-related inquiries until the benefits can be communicated (she quotes an official in Spain as saying “Let’s first see what [the technology] is good for. If you first ask the question about risk, then you kill the whole field”). The central premise of SFRA, of course, is that the acceptability of a new risk depends crucially on “what the technology is good for.” By exploring benefit and risk simultaneously (and by comparing the findings to benefit and risk analyses for current approaches to solving the same problem), SFRA can help avoid foolish actions (where small new risks are deemed intolerable despite massive risk reductions they can provide) and foolish inactions (where large new risks are permitted on account of small or phantom benefits they offer).

Bearing in mind these two premises—that risk-reducing opportunities should be compared, not simply “optimized” one at a time, and that creative questions about human needs can impel thoughtful discussion about fulfilling those needs in risk-reducing and welfare-enhancing ways—how might society grapple with new SynBio applications in a “solution-focused” paradigm? Here and in a recent article (Finkel et al. 2018a), I outline four different, and increasingly “solution-focused,” ways to evaluate the merits of any SynBio application:

  1. Does the application have positive net benefit? That is, does its risk reduction potential exceed its propensity to create additional risks?Footnote 2

  2. Compared to other ways to produce the same or similar material, does the SynBio application have greater marginal net benefit than the alternative(s)?Footnote 3

  3. Compared to other ways to fulfill the same human need, does the SynBio application have greater marginal net benefit than the alternative(s)?Footnote 4

  4. Does the existing dominant means of fulfilling (or failing to fulfill) a particular human need have a particularly poor risk profile, such that society might look to an unmet application of SynBio to displace it?

I emphasize that the nature of the new SynBio applications, as well as the stage of the product life cycle they occupy at the time of this writing, makes the application of the SFRA concept to this set of risks and benefits particularly timely, for three reinforcing reasons:

  1. The risks and benefits involved are so different from most of what has come before that the substance-by-substance paradigm is simply a caricature of what is needed.

  2. The applications are poised for completion but are largely not “out in the world”—so we have an opportunity to start a revolution in technology with the simultaneous transformation of governance arrangements that are fit-for-purpose. Such an approach will help minimize the need to “grandfather” a first generation of products and can help avoid untoward events that can both threaten human health or the environment and fatally stigmatize this new technology before it can achieve successes.

  3. As one of my colleagues has observed (Coglianese 2012), when a governance system waits for a tragic failure to occur (viz., the BP oil spill), it can be doubly unfortunate, because in addition to the tangible damage done, there is usually a rush to apply ill-conceived policy band-aids that can actually make future failures even more likely or more severe. The “first failure” of SynBio could wipe away most hope for a proactive system of governance, one that we have time to craft now.

This report will describe and evaluate the various linkages among the design of risk assessments, the use of risk and benefit information to make solution-focused comparisons among technologies and materials, and the risk-informed governance of SynBio applications. In turn, I will discuss:

  • The crucial components of a risk assessment method that, when adapted to the special challenges of SynBio risks, can provide reliable, transparent, and “humble” (Andrews 2002) information. Here I will emphasize the extent to which existing risk assessment methods can be sensibly ported over to the SynBio context. While I will also highlight areas where new methods will have to be developed, this report will not per se generate any new risk assessment algorithms.

  • The attributes of various solution-focused risk management questions that might allow for the reasoned expansion of some SynBio applications, the restriction of others, and the imposition of “prudent vigilance” (PCSBI 2010) on still others.

  • The importance of revealing the many hidden value judgments that permeate the process of risk and cost-benefit analysis, so that governance decisions can be made with a fuller appreciation of their ethical implications.

  • The current state of risk communication (and “benefit communication”) for SynBio, as reflected in written pronouncements on these matters by leading pioneers in the field.

  • A table summarizing tentative conclusions about how each broad class of SynBio application measures up when viewed in a solution-focused governance context.

  • The potential to complement the solution-focused approach with a “solution-generating” mindset for SynBio.

Risk Assessment Methodologies for SynBio

Although this report is not intended to break new ground in quantifying the risks (or net risks) of SynBio applications, I hope here to jump-start a discussion of how society could do so. It is troubling that so much of the “risk assessment” dialogue and writing about SynBio contains little or no systematic, careful, or thorough estimation of any risks or benefits: rather, these discussions have introduced and perpetuated two of the most fundamental errors possible in risk assessment: (1) stating or implying that if an outcome (bad or good) is possible, it is likely or certain to transpire (this is insensitivity to probabilityFootnote 5) or (2) stating or implying that one possible magnitude of the harm or benefit is its expected magnitude (insensitivity to uncertainty, or simply biased mis-estimation). Many conversations or pairs of opposing peer-reviewed articles about a SynBio application merely pit the claim that “this innovation will cure disease X” (or “clean up environmental problem Y,” or “produce valuable product Z much more cheaply than any current method can”), against the counterclaim that “but it can spread a mutant protein throughout the human genome.”Footnote 6 This is perhaps an example of a “risk-aware” conversation, but it is certainly not the basis for a sensible risk-benefit governance decision. For the latter—and vastly more useful—task, society needs at least a minimum set of raw materials with which to quantify risks and benefits, instead of a claim of good or harm that provides no information about its probability or magnitude.

This section of the report will sketch out such a core set of raw materials, useful for any of the risk management questions posed earlier and explored in more detail in the section “Implementing a Solution-Focused Management Regime”. I will also elaborate on a richer set of risk assessment inputs that could help organize a more robust and intellectually honest analysis of the goods and harms of encouraging/discouraging any given SynBio application. Because the methods of QRA were first applied to the exposure and dose-response questions posed by synthetic chemicals in the environment and workplace, the discussions herein will use chemical risk assessment as a jumping-off point and template. QRA for SynBio will of course have to evolve to accommodate the challenges of estimating probability and severity for the novel risk (and risk-reducing) scenarios these applications pose, but QRA has previously risen, albeit fitfully, to similar challenges in other highly complex systems. Examples include risk assessment for pathogens (Mokhtari et al. 2006), the evolution of antibiotic resistance (Cox and Popken 2014), the adverse effects of molecules that can catalyze reactions (Hammitt 1990), the paradoxical dose-response relationships for immunotoxins such as beryllium (Willis and Florig 2002), the probability and consequences of contaminating an entire extraterrestrial environment with Terran microorganisms (NAS 2006), and the behavior of prions in the environment and in vivo (Schwermer et al. 2007).

Fundamental Concepts

Although there exist dozens of definitions and typologies of risk in the peer-reviewed and “gray” literatures (as well as in public discourse), no adequate definition of “risk” can fail to incorporate these three most fundamental questions (Kaplan and Garrick 1981): (1) what can happen?Footnote 7; (2) with what probability can it happen?; and (3) how severe are the consequences if/when it happens? Once the “what?” question has been posed, any appraisal of “risk” must therefore integrate—perhaps via simple multiplication, perhaps via a more complicated function of the two—information about both probability and consequence; otherwise, it is not a properly construed expression of risk.

In particular, two common “risk-like pronouncements” about some eventuality are not correct or useful expressions of risk. To state (whether perfunctorily or as the culmination of a seemingly sophisticated technical analysis) that “exactly this consequence could happen” ignores or erases all of the powerful information that probability brings to the table. No matter how precisely one explains the exact hazard (e.g., a precise hazard statement would be “if the rope breaks, you will fall 500 feet to your certain death”), only by adding information about its probability can we reveal whether the risk is trivial, apocalyptic, or anywhere in between. Conversely, to state that “there is exactly one chance in 123.4 (a probability of 8.104×10^-3) that the rope will break” is useless without knowing whether the resulting fall will cover 5 inches, 5 feet, or 5 kilometers. Carelessness about probability (the former type of lapse mentioned above) often stems from the orientation that it is immoral to allow any non-zero probability of an involuntary harm to persist, but surely a tiny residual risk is at least less immoral than a large one.
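The probability-times-severity logic above can be made concrete with a minimal sketch, using the rope example and illustrative numbers from the text. (The `Scenario` class and the simple multiplicative aggregation are my own assumptions for illustration, not a prescribed QRA algorithm.)

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    description: str
    probability: float   # chance the scenario occurs
    consequence: float   # severity if it does occur (here, feet fallen)

def expected_risk(scenarios):
    # Simplest Kaplan-Garrick-style integration: sum of
    # probability x consequence over all identified scenarios.
    return sum(s.probability * s.consequence for s in scenarios)

# Identical probability of rope failure, wildly different severities:
short_fall = Scenario("rope breaks over a 5-inch drop", 8.104e-3, 5 / 12)
long_fall = Scenario("rope breaks over a 500-foot drop", 8.104e-3, 500.0)

print(expected_risk([short_fall]))  # a trivial expected harm
print(expected_risk([long_fall]))   # roughly 4 expected feet of fall
```

Neither the probability nor the consequence alone distinguishes the two ropes; only their combination does.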

Carelessness about severity is more insidious: when an outcome appears to be fully-described but is not, all kinds of value-laden conditions can be tacitly imposed upon the analysis or the decision. For example, consider the claim that more than 1000 Americans died “needlessly” (Gigerenzer 2006) in the year after September 11, 2001, when they chose to drive rather than to fly, because the per-mile probability of highway death is much greater than that of death in an aircraft. But despite the clear rank order of the probabilities, the risk of driving is only clearly greater than the risk of flying if the outcomes have identical severity—and it was far from “irrational” to regard the specter of a protracted death-by-hijacking as qualitatively more dire than a sudden car crash (Finkel 2008; Assmuth and Finkel 2018).

Although [probability combined with severity] is the core of any meaningful expression of risk, properly considering both inputs may still yield an impoverished or an ambiguous risk estimate or risk-based conclusion, if one or more of these most basic definitional issues about risk are not considered:

  • “Pathway risk” versus total risk. Any source of risk can present multiple consequences simultaneously, so it is important to consider all the major pathways or else explicitly highlight the partial nature of the thought process. For instance, a chemical in household water may be capable of causing several different acute health effects, and still other chronic effects, and can enter the body via ingestion, dermal contact, or inhalation (as in showering with hot water). Each pathway, and each effect, may merit its own risk assessment, the panoply of which combine to yield a holistic estimate of overall risk.

  • Conditional probability versus unconditional probability. Many risk appraisals involve two different kinds of probability: the chance that some untoward event will occur and then the likelihood that the results of the event will proceed to cause harm. The former assessment of probability often involves “fault trees” or other means for estimating the odds of a discrete occurrence (such as an accidental release of a particular chemical from a manufacturing plant or transportation system), while the latter may involve using toxicology data or epidemiologic studies to estimate the “potency” of the substance (the probability that a given concentration will cause a particular adverse health effect). In such cases, the risk assessment must consider the joint probability both that the event will occur and that the health effect will occur conditional on such an event.

  • Isolated risk versus aggregated risk. A particular exposure may be the only contributor to a given health or environmental effect (e.g., beryllium is the only known cause of chronic beryllium disease), or it may add a small increment to an existing background risk of that effect (e.g., the amount of ionizing radiation emitted from nuclear power plants versus the natural background of radiation from Earth’s crust and from cosmic rays). This does not mean that incremental involuntary exposures should be ignored merely because they may be small relative to unavoidable background exposures, only that decision-makers and the public should know whether a policy would reduce a small or a large fraction of the aggregate risk.

  • Point estimate of risk versus acknowledging uncertainty in risk. It is simply misleading to present probability or consequence estimates without providing confidence bounds (Finkel and Gray 2018) and preferably a probability density function (and note that uncertainty in risk-risk comparisons (Finkel 1995) is generally about twice as large as single-risk uncertainty).

  • Point estimate of risk versus acknowledging interindividual variability in risk. QRA has suffered mightily from examples where population-average risks were deemed acceptable, when in fact risks varied dramatically depending upon the exposure, susceptibility, or other characteristics of subpopulations (Finkel 2008).

  • “Target risk” versus ancillary risk(s). There is a growing literature attesting to the folly of assuming that an intervention to reduce one risk will have no untoward consequences in another risk area (e.g., increasing highway mileage per gallon but failing to improve upon the safety performance of lighter-weight cars) (Graham and Wiener 1997). This literature, however, is tempered by a second round of scholarship (Rascoff and Revesz 2002; Finkel 2007) helping us distinguish between legitimate and sham trade-offs.

  • Life cycle orientation. Ideally, the risks of a product or technology should be assessed from its production through its use and disposal, with an eye both toward general population risks and the disproportionate risks that workers usually bear (Powers et al. 2012).
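The bullet on uncertainty above notes that a risk-risk comparison generally carries roughly double the uncertainty of a single risk estimate. A short Monte Carlo sketch illustrates why: for two independent, equally uncertain estimates, the variances add when we take their difference. (The lognormal spread and sample size here are arbitrary assumptions chosen only for illustration.)

```python
import random
import statistics

random.seed(1)  # reproducible illustration

N = 20_000
# Two independent risk estimates with identical (lognormal) uncertainty
risk_a = [random.lognormvariate(0.0, 0.5) for _ in range(N)]
risk_b = [random.lognormvariate(0.0, 0.5) for _ in range(N)]

var_single = statistics.variance(risk_a)
var_diff = statistics.variance([a - b for a, b in zip(risk_a, risk_b)])

# Var(A - B) = Var(A) + Var(B) for independent A and B,
# so the comparison carries about twice the variance of either risk alone
print(var_diff / var_single)
```

The printed ratio hovers near 2, which is the sense in which comparing two uncertain risks is "doubly" uncertain.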

This subsection concludes by emphasizing that when the stakes are high, QRA is unambiguously preferable to the three most commonly touted alternatives to it:

  1. A “precautionary principle” that requires society to avert (or eschew) a single disfavored eventuality to the exclusion of others (Friends of the Earth, International Center for Technology Assessment, and ETC Group 2012). Once precaution advocates realize that other advocates—for example, those insisting on the Iraq invasion of 2003 on the grounds that a “1% chance” of hidden chemical/biological weapons there should be regarded as a certainty (Suskind 2007) or those implicitly urging extreme precaution about economic costs rather than the harms caused by market failures—can define precaution to mean the opposite of what they do, the inadequacy of “pure precaution” is obvious (Montague and Finkel 2007).

  2. “Scenario analysis” (Aldrich 2018), which commonly fails to discriminate between dire scenarios that are highly unlikely to occur and those that are far more plausible.

  3. Qualitative risk assessment, in which hazards or scenarios are given color-coded severity rankings—this practice is often seen as intermediate between scenario analysis and full QRA, but various scholars have shown (e.g., Cox 2008) that following its dictates can be worse than choosing randomly without any risk information.

The Special Case of “Risk in the Name of Risk”

Analyzing the probabilities, severities, uncertainties, and other aspects of a SynBio application is even more difficult than ascertaining its downside risk(s) alone, because many of the most interesting applications also promise to deliver significant risk reduction, either as their prime purpose or as an incidental effect. Hence, the analysis needs to consider net risk reduction (or net increase) rather than the downside alone. Of course, a well-developed literature and set of practices exist for cost-benefit analysis (CBA), which can be thought of as the technique of comparing the risk of a product or practice to the benefits of producing it without constraints.Footnote 8 Here, I assume that the economic costs of reducing the risks of a SynBio application are small relative to the more fundamental question: do the risk reduction benefits the application offers exceed the novel risks the application poses? Net risk analysis, like a traditional CBA, requires two separate considerations, which could be termed “R↓” (the decrease in risk that the application promises) and “R↑” (the increased risk imposed by the application). If the SynBio application offers positive net benefit (does more good than harm), then the difference [R↓ − R↑] will be greater than zero.Footnote 9

However, estimating the magnitudes of the two terms in a “risk in the name of risk” trade-off is rather different from, but in some ways easier than, the standard estimation problem in CBA. Standard CBA requires estimation of the economic costs of control, which can be surprisingly difficult (Finkel 2012). Standard CBA also requires that the benefits of control (a.k.a. risk reduction) be “monetized,” or converted from “natural units” (e.g., expected number of lives saved due to the controls or expected increase in biodiversity or other ecological indices) into dollars, in order that benefit can be compared to cost, and this is a highly controversial practice (Ackerman and Heinzerling 2005). In contrast, the estimation problem here does not require monetization, as both the risks imposed and the risks reduced are in “natural units,” such as the expected number of lives lost or the estimated acres of habitat destroyed. When the risks on both sides of the ledger are in the same natural unit, there is no need to convert either to a dollar metric, although issues of commensurability will still persist if the natural units are different for risks reduced versus risks imposed. In considering the governance of an emerging SynBio or other technology, of course, we may have to consider that in order for the risk-superior application to be used, government may choose to subsidize it (hence accruing public costs that must be subtracted from monetized net benefits) or may have to regulate/tax/ban the riskier alternative (which would impose monetary costs in the form of reduced consumer surplus).

It is also quite possible that even if the risks reduced and risks imposed are in the same natural unit, the effects will accrue to different populations—see Graham and Wiener (1997) for a comprehensive treatment of the 2×2 set of situations in which the risks, the affected populations, or both can be identical or different. In such cases, simple subtraction may not yield a coherent net estimate, and even a coherent one may conceal important information about equity.

This is not to say that estimating [R↓ − R↑] is by any means easy, only that it is conceptually straightforward. The first term could often be thought of as the baseline “toll” of some HSE problem, modified by the expected amount by which the SynBio application would effectively reduce that toll (see, e.g., Rooke 2013 for a catalog of SynBio advances that might reduce various human diseases). For example, suppose that the Oxitec hybrid mosquito (see summary of this case study in the section entitled “Broad/Tentative Observations about Comparative Risk Profiles of SynBio Categories”) could, with 80% probability, reduce by 95% the number of mosquitoes capable of transmitting dengue fever in a region of the world where the disease was killing one million people annually (and with 20% probability would be ineffective). The expected amount of risk reduction the application offers would then be 760,000 statistical deaths averted per year (a 0.8 probability of reducing the annual death toll by 950,000) (Footnote 10).
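The expected-value arithmetic in this example can be sketched in a few lines of code; the function name and every number below are simply the illustrative assumptions stated above, not outputs of any actual assessment.

```python
# Sketch of the R-down (expected risk reduction) arithmetic for the
# hypothetical Oxitec/dengue example; all numbers are assumptions from
# the text, not results of a real assessment.

def expected_risk_reduction(p_success: float,
                            efficacy: float,
                            baseline_toll: float) -> float:
    """Expected annual deaths averted = P(effective) x reduction x baseline."""
    return p_success * efficacy * baseline_toll

averted = expected_risk_reduction(p_success=0.8,            # chance the hybrid works
                                  efficacy=0.95,            # 95% fewer vector mosquitoes
                                  baseline_toll=1_000_000)  # annual deaths
print(f"{averted:,.0f} statistical deaths averted per year")
```

The 20%-ineffective branch contributes zero, so the expectation collapses to 0.8 × 950,000 = 760,000.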

The R↑ term is in many ways at the heart of this project, as it represents the untoward side effects of a SynBio application, and it is more difficult to estimate because almost by definition the raw materials of probability and severity of consequence are as yet unrealized (Dana et al. 2012). Conceptually, a useful estimate of R↑ requires information on:

  • The nature of each particular downside scenario (analogous to the “hazard identification” stage in classical human health risk assessment)

  • The probability of each scenario manifesting itself

  • The severity of the consequences if the scenario occurs

  • How the consequences are actually experienced by the affected human population or ecological niche

With this raw material, the downside risk R↑ is the sum of the [probability times experienced consequence] of each scenario, preferably with both probability and consequence expressed with the uncertainty in each. Once the risk is estimated, society could choose to treat very small risks as functionally equal to zero (Wareham and Nardini 2015) and, of course, could choose to reduce the probability and/or the severity of a risk by requiring developers to add additional safeguards to reduce the probability of an accidental release or to render an organism “inherently safe” even if released (Schmidt and de Lorenzo 2012; Wright et al. 2013).
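Those four ingredients translate directly into a probability-weighted sum. A minimal sketch, with scenario names, probabilities, and consequences invented purely for illustration (including a hypothetical de minimis cutoff of the kind Wareham and Nardini discuss):

```python
# R-up as the sum, over downside scenarios, of probability x experienced
# consequence. Every entry below is invented to show the structure only.

scenarios = [
    # (hazard identified, annual probability, experienced consequence)
    ("accidental release; organism persists",   1e-2, 5_000.0),
    ("gene transfer to a wild relative",        1e-3, 50_000.0),
    ("mutation restores fitness after release", 1e-4, 200_000.0),
]

DE_MINIMIS = 1.0  # expected harms below this could be treated as zero

r_up = sum(p * c for _, p, c in scenarios if p * c >= DE_MINIMIS)
print(r_up)
```

A fuller treatment would carry a distribution (not a point value) for each probability and consequence, so that R↑ itself is reported with its uncertainty.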

The foregoing is, of course, a “much easier said than done” summary of how to arrive at a reasonable downside risk estimate for a SynBio application. Perhaps the most useful reference for understanding the tasks involved in estimating a downside SynBio risk is found in Bedau et al. (2009), which gives a “checklist” of how to think about scenarios. In looking for a template that could be improved upon for performing a state-of-the-art risk assessment for an emerging SynBio application (in this case, the risks of engineering hybrid mosquitoes to control dengue fever), Finkel, Trump et al. (2018a) recommended the assessment performed by Hayes et al. (2015), which offers a very complete risk assessment with respect to the probabilities of many downside risks, although it does not quantify the range of possible severities for any of the scenarios.

As QRA for SynBio improves, analysts can make greater use of existing techniques to cope with the particularly vexing problems inherent in estimating these probabilities and severities, including:

  • Techniques for estimating the probabilities of unprecedented or “virgin” risks (Kousky et al. 2010)

  • Techniques for bounding the probability of “surprise” (Shlyakhter 1994)

  • Techniques for handling “deep uncertainty” (Cox 2012)

  • Structured expert elicitation methods that force respondents to construct logically coherent scenarios (Cooke et al. 2007)

In contrast to the real need for additional complexity, it is also possible that SynBio risk analysts may be able to invoke some “first principles” for distinguishing high-concern scenarios from lower-concern ones, allowing for simpler assessments. For example, it may be the case that hybrid organisms designed to be less fit than the wild type cannot pose a significant risk to the ecosystem; if so, any scenarios involving mutations in which the hybrid organism remains less fit than before pose risks that might safely be ignored in a risk-risk analysis.

Risk Assessment in the Solution-Focused Regime

As I will discuss in the next section of this chapter, the bridge between net risk assessment and solution-focused governance is conceptually simple; it involves comparing the net risk profiles of various approaches to solving a human need or fulfilling a human want and using policy tools to support and encourage the solution(s) with the relatively most favorable profile, while discouraging, regulating, or banning solutions with inferior risk profiles. In comparing a new SynBio (“s”) application to the most useful conventional (“c”) solution to the same HSE problem, the question boils down to whether this expression is positive:

$$ \left(R{\downarrow}_{\mathrm{s}}-R{\uparrow}_{\mathrm{s}}\right)-\left(R{\downarrow}_{\mathrm{c}}-R{\uparrow}_{\mathrm{c}}\right) $$

This expression symbolizes the incremental net benefit of the SynBio application over the conventional solution. Rearranging terms, it can equivalently be written as:

$$ \left(R{\downarrow}_{\mathrm{s}}-R{\downarrow}_{\mathrm{c}}\right)-\left(R{\uparrow}_{\mathrm{s}}-R{\uparrow}_{\mathrm{c}}\right), $$

which represents the incremental risk-reducing power of the SynBio alternative net of its incremental risk-increasing potential. In either case, if the expression is greater than zero, the SynBio alternative can be said to have positive incremental net benefit over the “competition.”

Alternatively, if we define the quantity RRi as the “risk remaining” after solution i is implemented to partially eliminate a hazard (i.e., the status quo risk minus R↓i), then we could evaluate the expression:

$$ \left({RR}_{\mathrm{c}}+R{\uparrow}_{\mathrm{c}}\right)-\left({RR}_{\mathrm{s}}+R{\uparrow}_{\mathrm{s}}\right), $$

which compares the total risk (risk remaining plus new risk imposed) under each solution. If this expression is positive, then the SynBio application results in less total risk than the conventional solution it could supplant.
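Because RRi is defined as the status quo risk minus R↓i, the baseline cancels and the total-risk comparison is algebraically identical to the incremental-net-benefit expression above. A quick numerical check, with purely hypothetical numbers:

```python
# Verify, with invented numbers, that (RR_c + Rup_c) - (RR_s + Rup_s)
# equals (Rdown_s - Rdown_c) - (Rup_s - Rup_c): the baseline cancels.

baseline = 1_000_000.0                  # status quo risk (e.g., annual cases)
rdown_s, rup_s = 760_000.0, 40_000.0    # SynBio: risk reduced, risk imposed
rdown_c, rup_c = 500_000.0, 90_000.0    # conventional solution

rr_s = baseline - rdown_s               # risk remaining under SynBio
rr_c = baseline - rdown_c               # risk remaining under conventional

incremental = (rdown_s - rdown_c) - (rup_s - rup_c)
total_risk_diff = (rr_c + rup_c) - (rr_s + rup_s)

assert incremental == total_risk_diff
print(incremental)  # positive => SynBio yields less total risk here
```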

Implementing a Solution-Focused Management Regime

Armed with reliable methods to construct the risk-risk profiles (with attendant uncertainties) of a set of technologies, substances, or processes that includes one or more SynBio applications, how can government and the citizenry move from analysis to action? How can they/we decide what strictures, encouragements, outreach, taxes, subsidies, research, or other concerted actions are desirable or optimal? Although other orderings are possible, what follows is a chronological ordering of six tasks describing how SFRA maps onto this question of SynBio governance. Note that most of these elements are also described in a video in the “Risk Bites” series available on YouTube (Maynard and Finkel 2018).

  (a)

    Pose the fundamental question “which human need or want is unfulfilled?” Unlike “problem formulation” as defined by EPA (see “A ‘Solution-Focused’ Partnership between Analysis and Action”), this mindset defines “the problem” not as a specific hazard presented by one product or process but essentially the opposite—as something one or more technologies might be able to solve. In other words, conventional risk assessment would ask “how much perchloroethylene can/should be emitted when dry-cleaning clothes?”, while SFRA would ask “how can consumers clean their clothes most safely and effectively?” Here I will also distinguish between “needs” (e.g., humanity needs better methods to control disease-carrying mosquitoes without introducing new and untoward risks) and “wants” (consumers may benefit from a less-expensive or higher-quality artificial food-grade vanillin). SynBio applications can fulfill either needs or wants, but risk management governance may wish to consider these differently when balancing marginal risk increases against marginal risk reduction or other benefits.

  (b)

    Array a narrow or an expansive list of possible solutions to fulfill the need or satisfy the want. Although the most fundamental distinction between SFRA and conventional risk assessment/management is that the former evaluates solutions rather than quantifies risks, the breadth and ambition of the solutions considered greatly distinguishes SFRA exercises from each other. It is possible to consider only “window dressing” responses to a human need (e.g., a medical professional advising a patient complaining of tight pants could suggest s/he get used to the discomfort or buy larger pants), or instead to emphasize “upstream” remedies that require much more expansive changes (in this case, advising the patient to change his/her diet or undergo bariatric surgery). In the chemical risk assessment arena, Finkel (2011) develops a case study contrasting narrow sets of possible solutions to the occupational and environmental health risks of chlorinated solvents for stripping paint from airplanes (one solvent versus another), somewhat more expansive sets (adding mechanical abrasives such as crushed walnut shells to the comparison), and very ambitious sets (including the option of leaving planes unpainted or even employing market mechanisms to reduce demand for business travel by plane). A good description of the correlation between the degree of upstream intervention and the “radicality” of the contemplated intervention is found in Løkke (2006), who notes that “the levels should not be mistaken as a grading of alternative solutions; most people would agree that preventive strategies are better than cleaning up, and increasing radicality will often but not necessary lead to better environmental solutions .”

  (c)

    Estimate the net risk consequences (or non-risk benefit minus risk, for “wants”) of each solution. Using the risk assessment techniques and goals described in the section “Risk Assessment Methodologies for SynBio,” SynBio and other means of solving a particular problem can be compared by deriving (with uncertainty) the extent to which each solution reduces existing risk, net of the new risks it poses. If there is no particular problem, but instead a set of ways to satisfy consumer wants, the comparison is similar, except that the “pro” term of every pro-net-of-con estimation would instead represent the benefits (perhaps using consumer surplus as a proxy) of each application or product. First, it is usually easy to reject outright solutions or products that have a negative profile (new risks exceed risk reduction or other benefits), as these are usually inferior to the status quo. Among the remaining choices, and because of uncertainty, there may well be no unique “winner” in any of these comparisons; often, the solution with the highest expected net risk reduction may not have the most favorable risk profile when the reasonable upper bound for the “con” term of the estimate is substituted (see the case study of dengue fever control in Finkel et al. 2018a). But choosing for or against an option that is “better on average but may be worse” (or “worse on average but may be better”) is conceptually straightforward when the decision-maker openly chooses a degree of aversion to one unfortunate outcome or the other, based on his/her own or on public attitudes toward regret (Lempert and Collins 2007).
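One simple way to operationalize a chosen “degree of aversion” is to score each solution by a weighted blend of its expected net risk reduction and its reasonable worst case. The weighting scheme and all numbers below are illustrative assumptions in the spirit of, not drawn from, Lempert and Collins (2007).

```python
# Illustrative only: rank two solutions under varying regret aversion.
# "aversion" = 0 scores on expectation alone; 1 scores on the worst case.

def score(expected_net: float, worst_case_net: float, aversion: float) -> float:
    """Blend of expected and worst-case net risk reduction."""
    return (1 - aversion) * expected_net + aversion * worst_case_net

solutions = {  # invented net-risk-reduction estimates (expected, worst case)
    "SynBio":       {"expected": 310_000.0, "worst": -20_000.0},
    "conventional": {"expected": 170_000.0, "worst": 60_000.0},
}

for aversion in (0.0, 0.5, 0.9):
    best = max(solutions,
               key=lambda s: score(solutions[s]["expected"],
                                   solutions[s]["worst"], aversion))
    print(f"aversion={aversion}: prefer {best}")
```

With these invented numbers, the option that is “better on average but may be worse” prevails at low aversion but loses once the decision-maker weights the worst case heavily.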

  (d)

    (1) “Choose” the solution with the most favorable net risk profile—which is to say, consider regulating, discouraging, or banning the less favorable solution(s) and consider promoting, encouraging, or subsidizing the most favorable one. Depending on how intrusive these interventions are, when a government goes beyond merely providing information about different products/technologies and implements regulations, taxes, subsidies, or the like to make it easier to sell and use some technologies and harder to use others, this may well smack of “picking winners and losers.” This criticism must give decision-makers pause, especially if it is clear that by advancing a particular application, one monopoly producer will reap all the benefits (and, in rarer cases, a single producer of a net riskier application will bear all the costs of the other’s “win”). But there is an element, perhaps a large one, of hypocrisy in denunciations of government’s “picking winners.” It is an article of faith that when the “free market” picks winners and losers, as happens constantly and relentlessly, those decisions stem from adequate information and by definition increase net economic benefit. Governance decisions that advantage some producers over others can similarly be evidence-based and can provide net economic benefits while also reducing externalities. The related claim that regulation should generally avoid specifying the means of compliance (technology-based standards) and instead set performance goals, letting regulated industries find their own least expensive and burdensome ways to meet them (but see Wagner (2000) for a counterargument), also rests on some inconsistencies.
Performance standards, which would tend to be less disruptive to market structure, are hard to enforce (Coglianese and Lazer 2003)—but more tellingly, in some cases, businesses clamor for “flexibility” only to later rebuke government agencies for not providing technological specifications that give them assurance of how to comply (Finkel and Sullivan 2011). But perhaps the weakest argument against allowing the democratic process (through participatory regulatory governance) to identify and support “winning” technologies that reduce risks is the fact that we have long allowed government to do this anyway, in many accepted though opaque ways. In the USA, the federal government has provided the coal industry with more than $70 billion in subsidies since 1950 (Taxpayers for Common Sense 2009); the effects on newer energy sources of this sort of market distortion are difficult to estimate but could well be monumental in size. In the pharmaceutical industry, the policy of allowing unlimited off-label use of drugs once they have been approved for a specific use amounts to a “leg up” over both established and innovative therapies for the same diseases (Comanor and Needleman 2016)—this amounts not only to “picking winners” but also to giving these favored technologies the kind of head start over competitors that could last for generations. The most pervasive arena in which government already picks winners and losers is probably that of international trade. Anecdotally, the USA and EU negotiated a pair of reciprocal tariffs several decades ago, with the EU disfavoring American cars and the USA placing a heavy tariff on European light trucks—this, of course, has had the effect of helping domestic truck manufacturers “win,” to the detriment of the domestic passenger car sector. So while promoting industrial policy for SynBio raises hackles, it should not be the very idea of favoring some industries over others that causes us to turn our backs on such policies.

    OR

  (d)

    (2) Choose a mix of solutions implemented together in quantities sufficient to fulfill the need or want but parceled out in such a way that the sum of all net risks is even lower than the net risk of the relatively most favorable single solution. It is possible that the optimal policy would involve a portfolio of solutions, with each one governed by policies that would accentuate its benefits while keeping its downside risks relatively low (and especially keeping them below any sharp nonlinearities in the technology’s exposure-risk function). Such an approach would require vigilance and planning but might blunt some of the concern about brighter-line policies that would elevate one solution to “winner” status while greatly or completely curtailing others’ roles in the economy.

  (e)

    Consider those governance tools—qualitative regulation (bans), quantitative regulation (exposure limits or controls of a given exposure reduction efficiency), or any of a variety of “soft law” mechanisms—that best produce the desired optimal net risk profile. Closing the gap between seeking net risk reduction and achieving it falls to one or more tools of regulatory governance. Finkel, Deubert et al. (2018b) elaborate, in order of stringency, on various subtypes of “nudges” (information dissemination, guidance documents, and the like), public-private partnerships (Marchant and Finkel 2012), enforcement of general norms, and enforcement of newly written regulations, as each might apply to the problem of repeated head trauma and brain disease in professional football. However, there are several useful kinds of governance tools not mentioned in that article, including using civil liability as a powerful incentive to reduce downside risks (McCubbins et al. 2013; De Jong 2013) or requiring developers of new technologies to post bonded warranties against unforeseen harms (Baker 2009). On the other hand, Finkel, Deubert et al. emphasize one innovative governance idea that is not often included among the portfolio of “soft law” ideas commonly recommended (Mandel and Marchant 2014): an “enforceable partnership” in which a regulated industry develops its own code of practice and/or exposure controls but explicitly agrees to agency citations and penalties for violating that code. Such an arrangement might be especially appropriate for SynBio applications, since the developers generally can revise their views about which controls are most effective much faster than the public rulemaking process ever could. One other way to array the various governance options, as seen in Fig. 1, is to deemphasize the specific tools and instead portray the range of orientations from most supportive of emerging technologies to least supportive.
In any event, the literature makes various recurring points about the nuances of emerging technology governance, particularly (1) that it is most “artful” when it seeks “effective compromise” such that while not all participants will be satisfied, all will agree that their views were heard and that the regulator’s logic was transparent and reasonable (Zhang et al. 2011; Coglianese 2015); (2) that the choice of instrument and the stringency of control should vary depending on the stage at which the technology currently exists (e.g., laboratory work vs. field trials vs. first full-scale releases vs. routine releases) and that government should establish “checkpoints” to appraise the most sensible controls at each stage (Bedau et al. 2009); and (3) that agencies sometimes can make good use of “soft law” mechanisms early in the lifespan of an emerging technology but should be ready to eventually “harden” those tools into traditional regulatory forms lest the regulated industries correctly perceive that the agency is using “soft law” as a crutch (Cortez 2014).

  (f)

    Consider structural change in government to better organize itself to administer and enforce the tools chosen. Most of the sparse literature that considers improving the capacity of government to regulate emerging technologies focuses on “small gaps” where no agency has authority to solve a particular problem or where duplicative authorities foster controversy and delay (Paradise and Fitzpatrick 2012). For example, Taylor (2006) pointed out that the US Food and Drug Administration has jurisdiction over the safety of cosmetics but lacks statutory authority to oversee, prior to their marketing, cosmetics made with nanotechnology components. Similarly, Mandel and Marchant (2014) recommend that EPA seek authority to require a pre-manufacture notice from developers of new microorganisms, not just for those that combine genetic material from two or more organisms from different genera but also for those that combine genetic material from species within the same genus. Most scholars construe these problems as solvable with minor statutory changes (see, e.g., Carter et al. 2014) or with interagency coordination provided by a White House office (see, e.g., PCSBI 2010). However, at least one investigator (Davies 2009) has gone further, recommending the reorganization of several current agencies (EPA, OSHA, NIOSH, the Consumer Product Safety Commission, the National Oceanic and Atmospheric Administration, and the US Geological Survey) to create a Cabinet-level “Department of Environmental and Consumer Protection” to regulate existing and emerging technologies that affect human health, safety, and the environment.

Although this process of conceiving of, comparing, and choosing among solutions can be intricate and can demand creative and bold thinking about “tragic choices” (Calabresi and Bobbitt 1978), its core tenet can be simply described: in contemplating whether to encourage or discourage an emerging technology solution to a human need, society should tolerate more potential downside risk when the solution has greatly improved potential for unprecedented risk reduction. For “wants,” the logic would be the related statement that “society should tolerate more potential downside risk when the solution can fulfill the want in unprecedented new ways or to a new extent.” And the fundamental corollary to each of these principles would be that “society should be especially wary of courting new downside risks when the risk reductions they offer are negligible or when the consumer benefits are marginal.”

For example, a SynBio (or, for that matter, a conventional) product that makes clothes whiter is arguably less worth taking risky chances on than one that could substitute for gasoline in cars; and further, if the SynBio product only makes clothes marginally more white than the next-best conventional alternative, it may be even less worth risking harm for.

How “radical” (Løkke 2006) is this precept? We are already comfortable declaring that some larger risks are more acceptable than related smaller risks when we can explain this as a consequence of voluntary choice versus involuntary imposition (Starr 1969). But here I am arguing that we should consider certain risks more or less acceptable not because of qualities of the harms but because of qualities of the solutions that may make the risks more or less worth bearing. Setting an ambient air quality standard only requires the decision-maker to weigh the likely costs of achieving it against the benefits of doing so; requiring automobiles, on average, to achieve a given higher level of fuel efficiency goes a bit further toward favoring certain technologies over others but implicitly treats any technology with a positive risk-risk profile as acceptable. So it may be unprecedented to take the next logical but large step and compare risk profiles in order to favor technologies with significant new net benefits over marginal ones.

On the contrary, I suggest that placing hurdles in the way of products with small marginal benefits and worrisome new risks is in fact very similar to proposals made beginning several decades ago (Nussbaum 2002) that the FDA should treat truly novel pharmaceuticals more permissively than it treats “me-too” drugs that only offer slight variations on existing substances, because the former have novel benefits that may be more likely to justify their new risks. As Angell (2004) pointed out, the FDA currently treats both novel and derivative drugs equally, approving them if they are safe and more effective than a placebo: “the [‘me-too’ drug] needn’t be better than an older drug already on the market to treat the same condition; in fact, it may be worse. There is no way of knowing, since companies generally do not test their new drugs against older ones for the same conditions at equivalent doses.” Angell and others (Gagne and Choudhry 2011) have repeatedly called for FDA to make “approval of new drugs contingent on their being better in some important way than older drugs already on the market.” (Footnote 11)

A solution-focused approach, applicable to the other end of the marginal benefit spectrum, is also being suggested with respect to the FDA and the drug approval process. A major part of the “21st Century Cures Act,” signed into law in 2016, provides for expedited approval for new medical devices that may benefit patients with “unmet medical needs for life-threatening or irreversibly debilitating conditions” (Avorn and Kesselheim 2015). Similarly, FDA has issued several regulations streamlining the drug approval process to treat certain very serious conditions that have no effective current therapies, stating that “these procedures reflect the recognition that physicians and patients are generally willing to accept greater risks or side effects from products that treat life-threatening and severely-debilitating illnesses, than they would accept from products that treat less serious illnesses. These procedures also reflect the recognition that the benefits of the drug need to be evaluated in light of the severity of the disease being treated” (FDA 2014).

These kinds of benefit-aware risk comparisons, of course, are precisely what a comparative risk-risk (solution-focused) analysis of SynBio versus conventional approaches to solve a problem would do—allow, and encourage, those approaches that are “better in some important way” than the status quo, either because of the paucity of effective solutions at present or because of a truly groundbreaking advance over approaches that are satisfactory but not ideal.

Overt and Hidden Values in Risk Assessment and Management

Both in assessing the net risks of any technology (SynBio or otherwise) and in deciding whether and how to manage any risks identified, we need more than methodological improvements in risk estimation and in decision-making under uncertainty; we need a much more transparent mode of analysis such that the large number of hidden influential value judgments that pervade the analysis can be brought to light. In Finkel 2018b, I identified more than 70 steps within a typical cost-benefit analysis where influential value judgments are made and generally kept implicit or are disclosed but misleadingly labeled as objective or purely scientific choices. These judgments range in scope from narrow and quantitative choices that influence key numerical quantities in one portion of a risk assessment or a CBA (e.g., the use of a particular single discount rate to render future consequences less salient than present ones) to fundamental definitional choices that influence the entire direction of the analysis (e.g., whether the “optimal” decision is tacitly defined as the one that maximizes total net benefit, one that achieves an arbitrarily “sufficient level” of benefit at the bare minimum cost, or as some other legitimate resting place). The main problem with embedding one value-laden choice out of many at multiple places in an analysis is, of course, that affected citizens may not realize that they profoundly disagree with the particular value chosen and would welcome the (possibly quite different) results of an analysis that substitutes one or more values they do agree with.

Some of the dozens of hidden value-laden assumptions I and others have identified would arise only infrequently in the kind of net-risk-versus-net-risk comparisons advocated here for making policy about SynBio applications—either because they affect portions of the analysis (particularly the estimation of the economic costs of regulatory control) that are not crucial to the comparison or because they involve aspects of the policy process (e.g., post hoc evaluation of the results of regulatory or other interventions) that do not affect the comparisons themselves. In comparing the risk profiles of SynBio and conventional applications, some of the more important recurring value judgments include:

  • Should harms to non-human species be included among the “risks that matter” (assuming said harm does not indirectly affect people at all)?

  • Should the non-utilitarian concerns of some citizens, particularly the aversion to “tinkering with the natural order” for good or ill, be given weight apart from the consequences themselves?

  • Should analysis take account of risk reduction benefits or new harms to citizens outside the USA when making choices about domestic policy?

  • Should harms that would affect subsequent generations be discounted at the same rate as intra-generational harms or at a lower rate so they don’t effectively vanish from the equation?

  • Should we treat risks from naturally occurring substances or organisms as equivalent to equal risks from synthetic ones?

  • Should a risk profile with a lower expected value but a longer right-hand tail than another be treated as preferable (on the basis of expectation) or as inferior (on the basis of a worst-case comparison)? (See Finkel, Trump et al. 2018a for the claim that the risk profile of the Oxitec SynBio mosquito, compared to pesticides and other conventional approaches to controlling dengue fever, may have a favorable expectation but a longer right tail.) (Footnote 12)
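The last bullet can be made concrete with two invented harm distributions: profile A is better in expectation but has the longer right-hand tail, so the ranking flips depending on whether means or upper percentiles are compared. This is a sketch under stated assumptions, not data from any cited study.

```python
import random

# Two invented harm distributions: A has the lower mean but a heavier right
# tail (Pareto); B is moderate and tightly spread (normal). Illustration only.
random.seed(0)
N = 100_000

a = [20 * random.paretovariate(3) for _ in range(N)]  # mean ~30, long tail
b = [random.gauss(40, 5) for _ in range(N)]           # mean ~40, short tail

def percentile(xs, q):
    """Crude empirical quantile, adequate for this illustration."""
    return sorted(xs)[int(q * (len(xs) - 1))]

mean_a, mean_b = sum(a) / N, sum(b) / N
print(mean_a < mean_b)                            # A preferable on expectation
print(percentile(a, 0.99) > percentile(b, 0.99))  # yet worse at the 99th percentile
```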

I advocate for substantial efforts to reveal all of the value judgments permeating evidence-based policy analysis—not through a laborious process of highlighting them each time but rather through the publication (as a single document affecting all health, safety, and environmental agencies, or perhaps as agency-specific documents) of a free-standing “value statement” that would flag them all, explain which judgments the agency would generally make by default in the absence of specific reasons to the contrary (and why each default was chosen), and offer one or more alternative value judgments that could be made instead if sufficient reason was provided in a specific assessment. This may be a daunting task, but there is a much more practical and imminent first step: even if analyses of SynBio and other technologies cannot be made fully transparent to “the outside world” as to their embedded value judgments, the analysts themselves must recognize them and ensure that in each case, the same judgment is used on both sides of the comparison. If this is not done, the comparison will be worse than misleading, as it will foster the impression that the “safer” alternative was chosen rationally. Imagine, for example, a comparison of the risk profiles of a compound like artemisinin produced via natural sources (the wormwood plant) versus one produced by genetically engineered yeast, with the former profile tacitly counting the economic harm to industries anywhere in the world if the competing application were supported, while the latter tacitly counted only economic harm to US industries. In this hypothetical, the major unemployment effects would go uncounted for precisely the alternative (drying up the market for wormwood) where they would be substantial.

Cheerleading and Poor Risk (and Benefits) Communication in SynBio

Careful risk assessments, whether performed in the classical or the solution-focused paradigm, can be undone by tone-deaf risk communication. Among the many recurring deficiencies in efforts to communicate risks, lapses that “can create threats larger than those posed by the risks that they describe” (Morgan et al. 2002) are the deliberate or unintentional trivialization of risk, the overuse of jargon, the reliance on misleading or inappropriate comparisons to unrelated risks, and the tendency to provide only population-wide average risks and mask substantial interindividual differences (Finkel 2016). Many experts (Sandman 1993; NAS 1996) stress that intentional attempts to persuade people via risk communication sometimes work but often eventually backfire. I’ve read several of the leading general-interest books on SynBio (along with many peer-reviewed articles), which arguably give a good cross-section of how experts communicate to laypeople about these new applications. In my reading, I found some troubling signs not only in how risks are described but also in how benefits are (Kahn 2011):

  • At the least important end of the spectrum, developers of SynBio technology sometimes take verbal shortcuts in describing their advances. For example, Oxitec, developer of a hybrid male mosquito, often referred to the insects as “sterile,” when the central advance Oxitec made is that the males are fertile but produce offspring that die before they are mature enough to bite humans (Finkel et al. 2018a). This distinction is largely semantic, and, as Oxitec has said, “there is no layman’s term for ‘passes on an autocidal gene that kills offspring’” (Specter 2012). However, the ambiguity (or discrepancy, depending on one’s point of view) gave a critic from Friends of the Earth an opening to say that Oxitec has been “less than forthcoming” in public statements that allow the more reassuring interpretation that the mosquitoes cannot produce offspring (Specter 2012).

  • Of greater concern is a tendency of SynBio developers to condescend to the public by suggesting that it is irrational to fixate on the downside risks. For example, scientific giant Craig Venter has made the sweeping statement that “few of the questions raised by synthetic genomics are truly new” (Venter 2013, at 152), which of course sidesteps the question of whether “old” risks created anew can be unacceptably high. Similarly, Brassington (2011) used an odd phrase to “quantify” SynBio risks: “However, while these risks are not vanishingly small, they can be met not by forbidding SynBio research, but by pursuing it wisely” (emphasis added). Something “large” is also not “vanishingly small,” but the phraseology here strongly implies that we know SynBio risks to be “small”—perhaps non-zero, but arguably so small as to be impalpable and hence unworthy of concern.

  • A similar tendency involves a kind of acknowledgment that public fears are not unfounded but one that sequesters this concern and ultimately steps away from it. Consider this quote by Venter (2013, at 155): “For me, a concern is ‘bioerror’: the fallout that could occur as the result of DNA manipulation by a non-scientifically trained biohacker or ‘biopunk.’” This is essentially a “safe if used as directed” warning, which is not a warning at all but a denial of the inherent danger(s) in favor of dangers brought on by insufficient policing of human actors. Without taking a position on the merits, this does seem reminiscent of the “guns don’t kill people; people do” argument that seeks to channel concern away from “the right users.”

  • Most generally, pioneers in synthetic biology sometimes invoke their own expertise, or that of the cadre of developers more broadly, as a kind of talisman that can turn estimated risks into irrelevancies (Rampton and Stauber 2002). When a New York Times reporter (Rich 2014) brought up various potential risks of de-extinction technology, the lead scientist at “Revive and Restore” simply asserted that “We have answers for every question… We’ve been thinking about this for a long time.” Perhaps more tone-deaf still is this assertion from Venter, who invoked Isaac Asimov’s “three laws of robotics” to reassure readers that nothing can go seriously awry: “One can apply these principles equally to our efforts to alter the basic machinery of life by substituting ‘synthetic life form’ for ‘robot’” (Venter 2013, at 153). Here citizens concerned about untoward risks of SynBio are met with fictional solutions to a problem—in Asimov’s created world, robots could be hardwired to always obey and never to harm, but of course hybrid organisms do not have programmable brains, and wishing for a fail-safe mechanism is quite different from building one.

If, at the same time that SynBio advocates were understating risks or hyping untested ways to eliminate any risks that remain, they were also overstating the benefits of their innovations, citizens would be doubly disadvantaged as they try to make sense of the trade-offs. My sense, however, is that the most tangible potential benefits of SynBio are not being stressed enough, while less impactful categories of benefit are emphasized:

  • In particular, developers and advocates often emphasize the “elegant” features of SynBio advances—and not just in applications such as “glowing fish” that may have no tangible benefits other than their novelty. For example, Lee Silver (2007) quoted MIT professor Tom Knight as stating that “the genetic code is 3.6 billion years old. It’s time for a rewrite”—without linking that intellectually compelling prospect to any specific (or even hypothetical) advantages it might confer. This wide-eyed enthusiasm for the “can,” rather than the “should,” may also serve to heighten concern about the possible downside risks that are not mentioned.

  • Even some medical applications of SynBio are praised for their ability to move the human organism closer to “perfection,” which again mentions an inchoate benefit, and here one that reasonable people may actually consider a disbenefit (Hurlbut 2013).

  • There are, by contrast, examples where supporters of SynBio emphasize the tangible and pragmatic benefits of applications, such as this observation from Rooke (2013). I suggest that more successful risk-benefit communication ought to look more like this example than the previous ones:

    Technological advances in the field of health continually bring us closer to a world where a healthy life is a real option for every individual on the planet, regardless of geography, culture, or socioeconomic status. However, these benefits tend to accrue disproportionately to the developed world; the need is still great for solutions that can diagnose illness, protect against infection, and treat disease in a broad array of low-cost settings with developing-world healthcare systems and limited infrastructure.

Broad/Tentative Observations About Comparative Risk Profiles of SynBio Categories

In other work performed with Sloan Foundation support, my colleagues and I published a detailed case study of the Oxitec SynBio mosquito (Finkel et al. 2018a) and also investigated in broad terms the kinds of incremental benefits and risks that various types of SynBio applications might pose. Table 2 presents some tentative observations, using exemplar applications from each of ten categories in which SynBio developers are working. We suggest that in some kinds of applications (e.g., biological pesticides), the SynBio alternative may carry large incremental risks that do not justify the small incremental benefits it offers over conventional solutions to the problem, while in other categories (e.g., specialty chemicals), the new downside risks would likely be small, but so would the incremental benefits. In contrast, we see the general categories of disease vector controls and medical treatments as ones where the new risks from SynBio may be comparable to or smaller than the risks we currently tolerate from conventional approaches, and where the efficacy of a new approach may make the SynBio application a win/win for fulfilling a human need.

Table 2 General observations about risk-risk aspects of synthetic biology solutions, by category

Solution Generation as the Complement to Alternatives Assessment

The discussion to this point has not exhausted the potential for solution-focused thinking, as the various proposals (including full-blown marginal risk profile analysis of solutions) have all presupposed that the development of a new application will then prompt discussion about the new risks it poses and the new risk reductions it offers, in the context of other ways to meet the same need or fulfill the same want. But what if, instead of a problem that has already attracted multiple solutions, we are faced with a problem desperately in need of even one good solution? The complement to regulators seeing competing solutions available and asking “why?” (or “which?”) would be someone “dreaming things that never were and saying ‘why not?’” (Shaw 1949).

One way to organize creative thought around “solutions we need” is to extrapolate from existing lines of SynBio research to instances where similar technology might be able to do vastly more good. For example, various researchers are trying to engineer microbes that would have salutary effects on human health and quality of life if introduced into the human gut microbiome. It is also the case, though, that collectively the digestive systems of domesticated ruminant animals worldwide (primarily cows, sheep, and bison) add enormous amounts of methane, a potent greenhouse gas (Friedman et al. 2018), to the atmosphere—roughly 20% of all anthropogenic methane (Lassey 2007). Investigators have been attempting to reduce methane generation by changing the animals’ diets and by selective breeding, but have not succeeded in making a dent in the total. Very recently, however, students at the University of Nebraska began experimenting—before they “ran out of time”—with introducing a gene from a red alga (C. pilulifera), one that codes for the enzyme bromoperoxidase, into E. coli for introduction into the digestive systems of cattle (University of Nebraska-Lincoln 2017). Interestingly, cattle can be given bromoperoxidase directly by feeding them large amounts of seaweed, but the bromoform produced in seaweed farming is a potent depletor of stratospheric ozone, which the students aptly described as “fix[ing] one environmental issue by creating another.” Currently there appears to be little experimental or commercial interest in using SynBio to attack the problem of methanogenesis in ruminants and its role in exacerbating global climate change—surely a problem in need of a breakthrough.

Similar “if only…” thinking can also be applied to existing products that satisfy consumer demand, simply by looking for products with the largest environmental or human health “footprints.” Surely high on such a list would be palm kernel oil, whose worldwide production converts several million hectares each year from tropical forest to monoculture, with implications for endangered species like the orangutan and with widespread use of child labor for harvesting (Rosner 2018). But while industrial feedstocks like isoprene have attracted much interest from SynBio developers, there appears to be only one company actively trying to engineer organisms to produce synthetic palm oil (that company, Solazyme, has been criticized for choosing algae as its host organism, in a system that requires large amounts of sugarcane to be harvested to feed the algae; SynBioWatch 2016). Like the Nebraska team, a group of students at the University of Manchester participated in an iGEM competition (Univ. of Manchester 2013) and explored the possibility of producing synthetic palm oil in E. coli instead, but apparently lacked the resources to bring this idea past the conceptual stage.

So given the economic realities that set developers’ sights based on market potential rather than on reducing environmental or other externalities (whether caused by the paucity of solutions or by the footprints of existing products), how can governments focus on “solution generation” to complement solution appraisal, and how can they attract entrepreneurs to fill the vacuums they identify?

Here the useful ideas are conceptually simple though politically fraught and are ones the US and other nations have grappled with already (for a prime example, see the Orphan Drug Act of 1983, which provided tax incentives and extended patent protection to developers of drugs that are intended to treat a disease affecting fewer than 200,000 Americans). Government or private philanthropies could identify areas where a novel solution would be immensely beneficial and then offer a “grand challenge” prize for its development (Adler 2011; also see Table 1 in Rooke 2013) or directly subsidize the early stages of research and development. Manzi (2014) summarizes the salutary results from subsidies, concluding that “the Breakthrough Institute has produced excellent evidence that government subsidies for speculative technologies and research over at least 35 years have played a role in the development of the energy boom’s key technology enablers: 3D seismology, diamond drill bits, horizontal drilling, and others.” He recommends that our “existing civilian infrastructure … can be repurposed, including most prominently the Department of Energy’s national laboratories, the National Institutes of Health, and NASA. Each of these entities is to some extent adrift the way Bell Labs was in the 1980s and should be given bold, audacious goals. They should be focused on solving technical problems that offer enormous social benefit, but are too long-term, too speculative, or have benefits too diffuse to be funded by private companies.” In other words, these giant agencies could devote some of their resources to identifying “orphan problems” that we have learned to live with but where innovation could conceivably reveal that this acquiescence wasn’t necessary.

A related idea has been championed by Outterson (2014), who has suggested in Congressional testimony that the federal government could offer a guaranteed payment stream to the successful developer of a needed antibiotic or other drug (one with large social benefits but marginal profitability for the developer) in exchange for the right to market the product.

Of the two parts to the “solution generation” puzzle—identifying situations where society needs a new solution and providing the “activation energy” so that developers will have the incentives and resources to explore it—the former is clearly already occurring: I began by idly speculating about the benefits of a SynBio approach to methanogenesis in ruminant animals and a SynBio alternative to palm oil monoculture, only to find that university groups were already working on the broad outlines of these very breakthroughs. But the other side of the coin—that to my knowledge neither idea has left the university environment to be brought to bench-scale fruition—suggests that new policies and organizational arrangements are needed to move good ideas forward in the absence of clear short-term profitability.

Conclusions

It should not be controversial that we are better off knowing whether a new technology has net risk reduction benefits that would outperform the status quo, whether or not we have the will to act on that knowledge (especially the will to act in ways that would cause the less efficient solutions to make way for more efficient ones). Of course, the probability and severity of risky scenarios are always uncertain, and risk comparisons are more uncertain than risk estimates (Finkel 1995), so judgment will always be needed to weigh the differential costs of error (boosting a new technology on the basis of its likely superiority, but one that will turn out to have risk-increasing consequences, versus impeding a new technology such that risk-increasing solutions will be allowed to persist).

The controversy comes when we contemplate intervening in the market to promote risk-reducing solutions over risk-increasing ones. Most ideologies other than pure libertarianism welcome the idea of government choosing policies that promote social welfare, whereas many liberals and conservatives bristle at the idea of government promoting individual companies over others (despite how often we tolerate government doing so via earmarks, subsidies, and the like). In between the extremes of “picking winning policies” and “picking winning firms” lies the notion of picking technologies or industries that solve problems with fewer untoward harms. Here I agree with much prior scholarship, particularly that of Rycroft and Kash (1992), that we need to repudiate the idea that “the politicians and bureaucrats who make these critical decisions would have neither the incentives nor the ability to pick winners as well as the private market place now does” (quoting a 1983 speech by Martin Feldstein, then chairman of the White House Council of Economic Advisers). Markets do reasonably well at allocating resources based on consumer preferences (as influenced by those doing the marketing), but much less well at allocating resources to minimize externalities. Comparative net risk analysis of solutions to a human need (or of ways to satisfy a want) provides the evidence that government needs to consider the non-market benefits and harms of technologies, which will allow government to consider strengthening the barriers to entry for innovations that tend to increase net risk while attenuating those barriers for innovations that tend to decrease net risk.

In other words, SFRA can tee up governance decisions that reject “permissionless innovation” when the SynBio or other new application is duplicative, ineffective, or harmful but that equally reject laissez-faire market primacy when the innovation is what we truly need to solve pressing health, safety, environmental, or other problems.