Abstract
A small literature exists exploring quantitative risk assessment for synthetic biology (SynBio) applications, alongside some articles about governance arrangements to regulate SynBio—but there have been few if any explorations of how the analysis of the risks and benefits of SynBio can and should lead to specific actions, be they traditional regulatory controls or any of a wide variety of other solutions. This chapter attempts to fill that gap by offering a way of embedding risk assessment for SynBio in a “solution-focused” paradigm that explicitly compares a panoply of ways to ban, control, channel, encourage, or subsidize a given SynBio application. Because SynBio applications are attractive precisely because they may be able to reduce existing risks, I argue that the conventional risk management question—“Does this new technology pose ‘unacceptable’ risks?”—is myopic. Instead, we should ask “Does it have a relatively more favorable net risk profile (risks reduced net of new risks created) than any of the current ways we are trying to fulfill a particular human need or want?” The core premise of this proposed way of managing SynBio applications is that a society should tolerate more potential downside risk when the solution has greatly improved potential for unprecedented risk reduction; the corollary is that we should be less tolerant of new and potentially disruptive risks when the solution merely satisfies a want rather than a risk-reduction need. Although the chapter does not discuss any specific SynBio applications in detail, Table 2 summarizes ten broad categories of SynBio advances and offers general thoughts about the risk profiles of each kind of advance relative to the best conventional solutions currently available.
Introduction
Any commercially available, bench-scale, or even proposed “back-of-envelope” application of synthetic biology (SynBio) will prompt discussion and debate—perhaps highly philosophical, perhaps highly practical and legalistic—both about how to think about the application and what, if anything, to do about it, pro or con. The former kind of discussion is the domain of precautionary or “permissionless” (Thierer 2016) rhetoric, of quantitative risk assessment, and of cost-benefit analysis; the latter is the domain of risk management, regulation, information disclosure, industrial policy, and other interventions.
SynBio applications are controversial because of their promise and their peril—in short, because they can greatly reduce risks and also because they threaten to expose humans and the natural environment to new or increased risks. I will discuss these issues throughout this chapter, but a working introductory definition of SynBio is “using tools of molecular biology to engineer new or improved cellular products or processes” (see Cameron et al. 2014). A working definition of quantitative risk assessment (QRA) is “a method that synthesizes information from basic sciences (e.g., toxicology, epidemiology, chemistry, statistics) to explore the probability that one or more adverse outcomes will occur from a product or process, and to gauge the severity of each outcome” (see Kaplan and Garrick 1981). As I will discuss below in the section “Risk Assessment Methodologies for SynBio”, the output of a useful QRA is not a yes/no pronouncement about the existence of a risk, or even a bare quantitative estimate of its likelihood and consequence. It is, instead, a “characterization” of risk (NAS 1983) that offers information about (1) the extent of scientific uncertainty that, if analysts are honest and humble, precludes them from pinning down the probability or severity with precision and (2) the extent and nature of interindividual variability in the risk, so that different populations can appreciate that probabilities and severities also depend on who is facing the hazardous condition(s).
This chapter, the capstone product of a project supported by the Alfred P. Sloan Foundation, breaks new ground in two fundamental and complementary ways—one dealing with analysis of evidence and one with evidence-based action. With respect to analysis, many thought processes about and formal assessments of possible harms to human health, safety, and environment (HSE) begin and end with the simplest of questions: is it “safe”? Or, slightly more broadly, “does it promise economic benefits in excess of the (monetized) harms it presents?” Such questions allow (or at least encourage) only dichotomous answers; worse, they crowd out more sophisticated, sweeping, and bold risk assessment questions. Similarly, many interventions (risk management) to control possible HSE harms consider only the narrowest range of actions: should we ban the new process/product/activity, or should we declare “nothing to see here; let’s move on?” Here my concern is that the “poverty of choices” can lead to poor decisions, akin to how a “poverty of questions” can lead to poor analyses.
I instead start from the premise that asking a wider range of questions and considering a wider range of actions are both unambiguous virtues. This is not to say that expansive and protracted analyses always outperform simpler ones or that circumscribing the choices to “go/no-go” is always mediocre—only that simplifying the analysis and narrowing the range of options should be done consciously and at least somewhat reluctantly.
These twin considerations apply in spades to the new arena of SynBio (for an excellent primer on the issues raised, see Moe-Behrens et al. 2013, or Rodemeyer 2009). First, to a greater extent than is true for most other new and continuing sources of HSE risks, the dangers posed by SynBio applications are offset (sometimes partially, completely, or “more than completely”) by their direct and often unprecedented power to reduce other risks. Hence, I argue that traditional “is it safe?” risk assessment questions are particularly myopic here, as they ignore the real possibility that the new application is at the same time both objectively dangerous and yet a risk-reducing improvement over the status quo. Traditional “go/no-go” risk management choices are also particularly inappropriate for new SynBio technologies, because of their novelty, the rapidity with which unforeseen risks or unforeseen risk-reducing benefits may be realized soon after their deployment, their ethically controversial nature, and their dependence on a social license to operate and perhaps even public sector funding. For decisions like these, society has the opportunity (arguably the responsibility to itself and to posterity) to consider many shades of gray between draconian regulation and laissez-faire—as well as various creative options that are actually either more stringent than even an outright ban or more encouraging than even a hands-off posture.
To introduce the rich range of assessment questions and management options that I urge should be posed and considered in the analysis and governance of SynBio, I offer a hierarchical ordering of each; the risk assessment questions ranging from the most rudimentary to the most nuanced and expansive and the control options ranging from the most favorable to the SynBio application to the most restrictive.
Table 1 (adapted from Finkel 2018a) presents ten distinct levels of analytic complexity, several of which I will highlight here. Level 1 represents the most qualitative appraisal possible: the “is it safe?” (or “is it costly to avoid?”) question. Level 5 offers the traditional cost-benefit question: are the expected risks reduced by the policy greater than its expected costs? As we will see in detail below, this question can easily be recast as an appraisal of the net benefit profile of a new application or technology: on average, does it reduce risk by more than it exacerbates it? The remainder of the hierarchy basically enriches the simple cost-benefit (or risk-risk) estimate with considerations of the two most fundamental phenomena surrounding all risks—the uncertainty impeding our ability to precisely quantify risk and the interindividual variability that makes any risk estimate uniquely applicable to only one individual or subgroup within the affected population. A cost-benefit (or risk-risk) analysis that fully considered both phenomena would ask questions of the form “for these particular citizens, what is the range of possible outcomes (the new technology reduces, leaves unchanged, or increases risks, by how much?), and what is the probability of each outcome?”
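The question at the top of the hierarchy can be made concrete with a small simulation. The following is a minimal and deliberately stylized sketch—every distribution and parameter is invented for exposition, not drawn from any real SynBio assessment—of asking, for a given subgroup, what range of net outcomes a new technology might produce and with what probability:

```python
# Hypothetical Monte Carlo sketch of a fully enriched risk-risk question:
# for a given subgroup, what is the distribution of net risk change?
# All distributions and parameters below are illustrative assumptions.
import random

random.seed(1)
N = 100_000

def net_risk_change(susceptibility: float) -> float:
    """Risk reduced minus risk created, in one shared natural unit.
    Both terms are uncertain (hence the random draws); the susceptibility
    multiplier scales the new risk to represent interindividual variability."""
    risk_reduced = random.lognormvariate(0.0, 0.5)                    # uncertain benefit
    risk_created = susceptibility * random.lognormvariate(-1.0, 0.8)  # uncertain new harm
    return risk_reduced - risk_created

for group, susc in [("typical person", 1.0), ("susceptible subgroup", 4.0)]:
    draws = [net_risk_change(susc) for _ in range(N)]
    p_net_benefit = sum(d > 0 for d in draws) / N
    print(f"{group}: P(net risk reduction) ~ {p_net_benefit:.2f}")
```

The point of the sketch is structural, not numerical: the same technology can have a high probability of net benefit for a typical person and a much lower one for a susceptible subgroup, which is exactly the information a Level 1 “is it safe?” question cannot convey.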
A “Solution-Focused” Partnership between Analysis and Action
Armed with the answers to one or more assessment questions about a SynBio application, society could then consider whether they justify a response at or near the highly “bullish” left end of the spectrum, the highly restrictive right-hand end, or somewhere in between. Figure 1 displays a very broad range of possible responses to a SynBio application; of particular note, the right-hand tail of the range offers somewhat more ambitious prospects for SynBio control than are generally contemplated, while the left-hand region offers various gradations of incentives and support for SynBio that go far beyond merely permissive responses. The other unusual feature of this schema is that it explicitly construes the SynBio application as competing with existing materials or applications—therefore, options exist to constrain the SynBio application indirectly (by subsidizing the competing application or loosening regulations on it) or to promote SynBio indirectly (by regulating or banning the existing application).
The main contribution of this chapter, however, comes in the space between risk assessment questions and risk management control options, simply because analysis should not result in a specific action when the dots are connected poorly and with little forethought. Knowing only the net risk (or net benefit) of a SynBio application, however comprehensively that risk is assessed (see Table 1), one can certainly take some action somewhere along the spectrum in Fig. 1, but this is far from the only paradigm for linking the results of assessment to the form, ambition, and stringency of control. When we look solely to risk assessment to inform and guide action, we implicitly assume that the amount of concern or worry proportionally dictates the amount of resources we should expend to reduce the given hazard. Instead, I have proposed (Finkel 2011; see also NAS 2009; Goldstein 2018) that a host of questions should intervene between the two parts of the “big risks, large controls” mantra:
Risk assessment for its own sake is an inherently valuable activity but, at best, a risk assessment can illuminate what we should fear, and tap into our inexhaustible supply of worry—whereas a good solution-focused analysis can illuminate what we should do, and mobilize our precious supply of resources. (Finkel 2011, p. 781)
By “solution-focused,” I mean a decision process that eschews risk assessment performed in one-risk-at-a-time isolation and disconnected from the appraisal of what solutions are or may be available to control the risks being compared. “Solution-focused risk assessment,” or SFRA, seeks above all to resist the temptation to declare victory when a risk has been quantified and a lower level of risk deemed “acceptable.” Such a mindset suffers from two fundamental flaws: it defines success as an isolated risk reduction, rather than a more comprehensive solution, and it is often satisfied with the aspirational success of declaring an acceptable risk level, even if the risk actually never is reduced to that level. So SFRA instead emphasizes (1) that we must compare risk-reducing (welfare-enhancing) opportunities, not disembodied risks, and (2) that the earlier in the decision process we array the possible solutions, the less likely we will define away promising answers to risk-risk and cost-benefit dilemmas and end up with a course of action that is inferior to others we neglected to consider. For example, the US Environmental Protection Agency (EPA) has a vigorous research program concerned with the toxicity of various plasticizers (beginning with bisphenol A (BPA)) that can leach into drinking water provided in disposable plastic bottles. Eventually, this work may lead to regulatory limitations on the allowable concentrations of BPA in bottled water. If EPA assesses the risks more holistically, it might lead to a suite of concentration limits on the various substitutes for BPA as well.
But imagine posing the question not as “how many parts per billion of each substance is acceptable?” but as “how can the market deliver clean, cold drinking water at an affordable price and with the smallest environmental and human health footprint?” That linkage between analysis and action might prompt public discussion of the energy use and disposal issues associated with the current annual production of 49 billion plastic bottles in the USA (from a baseline of essentially zero several decades ago, a time when US consumers did not want for ready access to drinking water). And that question might lead to discussions of how governmental incentives, taxes, or investments in infrastructure might help reduce the runaway demand for plastic water bottles of any kind and increase the supply of “free” (funded by taxpayers) or low-cost drinking water provided as we remember it in the 1960s–1990s—available in public places and lobbies of private buildings, via fountains and water coolers.
This emphasis on solutions is not only the polar opposite of the way EPA and other US federal HSE regulatory agencies have largely construed their missions since their founding decades ago but also very different from recent “baby steps” EPA has taken to ground risk assessment in practical utility. In particular, EPA has highlighted, particularly in referring to its 1998 Guidelines for Ecological Risk Assessment, that it incorporates “problem formulation” into its planning as a way to make risk assessment more useful. Unfortunately, this semantic change only means that EPA sometimes asks up front the question “how can we limit the scope of our research and risk analysis to issues that can help set a risk reduction goal?”—and this is quite different from “how can we harness risk assessment to discriminate among possible ways to fulfill a human need effectively and with minimal imposition of new risks?” SFRA is not a wholly new concept by any means, however—it can be thought of as a marriage between QRA and a more impressionistic “innovation-based strategy for a sustainable environment” (Ashford 2000) that steers industrial policy toward solutions that minimize risks.
SFRA is also particularly useful for emerging technologies such as SynBio, because it can reveal and supplant the false choice between risk and benefit. As Caruso (2008) pointed out near the inception of SynBio as a viable set of technologies, developers often advocate postponing risk-related inquiries until the benefits can be communicated (she quotes an official in Spain as saying “Let’s first see what [the technology] is good for. If you first ask the question about risk, then you kill the whole field”). The central premise of SFRA, of course, is that the acceptability of a new risk depends crucially on “what the technology is good for.” By exploring benefit and risk simultaneously (and by comparing the findings to benefit and risk analyses for current approaches to solving the same problem), SFRA can help avoid foolish actions (where small new risks are deemed intolerable despite massive risk reductions they can provide) and foolish inactions (where large new risks are permitted on account of small or phantom benefits they offer).
Bearing in mind these two premises—that risk-reducing opportunities should be compared, not simply “optimized” one at a time, and that creative questions about human needs can impel thoughtful discussion about fulfilling those needs in risk-reducing and welfare-enhancing ways—how might society grapple with new SynBio applications in a “solution-focused” paradigm? Here and in a recent article (Finkel et al. 2018a), I outline four different, and increasingly “solution-focused,” ways to evaluate the merits of any SynBio application:
1. Does the application have positive net benefit? That is, does its risk reduction potential exceed its propensity to create additional risks?

2. Compared to other ways to produce the same or similar material, does the SynBio application have greater marginal net benefit than the alternative(s)?

3. Compared to other ways to fulfill the same human need, does the SynBio application have greater marginal net benefit than the alternative(s)?

4. Does the existing dominant means of fulfilling (or failing to fulfill) a particular human need have a particularly poor risk profile, such that society might look to an unmet application of SynBio to displace it?
I emphasize that the nature of the new SynBio applications, as well as the stage of the product life cycle they occupy at the time of this writing, makes the application of the SFRA concept to this set of risks and benefits particularly timely, for three reinforcing reasons:
1. The risks and benefits involved are so different from most of what has come before that the substance-by-substance paradigm is simply a caricature of what is needed.

2. The applications are poised for completion but are largely not “out in the world”—so we have an opportunity to start a revolution in technology with the simultaneous transformation of governance arrangements that are fit-for-purpose. Such an approach will help minimize the need to “grandfather” a first generation of products and can help avoid untoward events that threaten human health or the environment and fatally stigmatize this new technology before it can achieve successes.

3. As one of my colleagues has observed (Coglianese 2012), when a governance system waits for a tragic failure to occur (viz., the BP oil spill), it can be doubly unfortunate, because in addition to the tangible damage done, there is usually a rush to apply ill-conceived policy band-aids that can actually make future failures even more likely or more severe. The “first failure” of SynBio could wipe away most hope for a proactive system of governance, one that we have time to craft now.
This report will describe and evaluate the various linkages among the design of risk assessments, the use of risk and benefit information to make solution-focused comparisons among technologies and materials, and the risk-informed governance of SynBio applications. In turn, I will discuss:
- The crucial components of a risk assessment method that, when adapted to the special challenges of SynBio risks, can provide reliable, transparent, and “humble” (Andrews 2002) information. Here I will emphasize the extent to which existing risk assessment methods can be sensibly ported over to the SynBio context. While I will also highlight areas where new methods will have to be developed, this report will not per se generate any new risk assessment algorithms.
- The attributes of various solution-focused risk management questions that might allow for the reasoned expansion of some SynBio applications, the restriction of others, and the imposition of “prudent vigilance” (PCSBI 2010) on still others.
- The importance of revealing the many hidden value judgments that permeate the process of risk and cost-benefit analysis, so that governance decisions can be made with a fuller appreciation of their ethical implications.
- The current state of risk communication (and “benefit communication”) for SynBio, as reflected in written pronouncements on these matters by leading pioneers in the field.
- A table summarizing tentative conclusions about how each broad class of SynBio application measures up in a solution-focused governance context.
- The potential to complement the solution-focused approach with a “solution-generating” mindset for SynBio.
Risk Assessment Methodologies for SynBio
Although this report is not intended to break new ground in quantifying the risks (or net risks) of SynBio applications, I hope here to jump-start a discussion of how society could do so. It is troubling that so much of the “risk assessment” dialogue and writing about SynBio contains little or no systematic, careful, or thorough estimation of any risks or benefits: rather, these discussions have introduced and perpetuated two of the most fundamental errors possible in risk assessment: (1) stating or implying that if an outcome (bad or good) is possible, it is likely or certain to transpire (this is insensitivity to probability) or (2) stating or implying that one possible magnitude of the harm or benefit is its expected magnitude (insensitivity to uncertainty, or simply biased mis-estimation). Many conversations or pairs of opposing peer-reviewed articles about a SynBio application merely pit the claim that “this innovation will cure disease X” (or “clean up environmental problem Y,” or “produce valuable product Z much more cheaply than any current method can”) against the counterclaim that “but it can spread a mutant protein throughout the human genome.” This is perhaps an example of a “risk-aware” conversation, but it is certainly not the basis for a sensible risk-benefit governance decision. For the latter—and vastly more useful—task, society needs at least a minimum set of raw materials with which to quantify risks and benefits, instead of a claim of good or harm that provides no information about its probability or magnitude.
This section of the report will sketch out such a core set of raw materials, useful for any of the risk management questions posed earlier and explored in more detail in the section “Implementing a Solution-Focused Management Regime”. I will also elaborate on a richer set of risk assessment inputs that could help organize a more robust and intellectually honest analysis of the goods and harms of encouraging/discouraging any given SynBio application. Because the methods of QRA were first applied to the exposure and dose-response questions posed by synthetic chemicals in the environment and workplace, the discussions herein will use chemical risk assessment as a jumping-off point and template. QRA for SynBio will of course have to evolve to accommodate the challenges of estimating probability and severity for the novel risk (and risk-reducing) scenarios these applications pose, but QRA has previously risen, albeit fitfully, to similar challenges in other highly complex systems. Examples include risk assessment for pathogens (Mokhtari et al. 2006), the evolution of antibiotic resistance (Cox and Popken 2014), the adverse effects of molecules that can catalyze reactions (Hammitt 1990), the paradoxical dose-response relationships for immunotoxins such as beryllium (Willis and Florig 2002), the probability and consequences of contaminating an entire extraterrestrial environment with Terran microorganisms (NAS 2006), and the behavior of prions in the environment and in vivo (Schwermer et al. 2007).
Fundamental Concepts
Although there exist dozens of definitions and typologies of risk in the peer-reviewed and “gray” literatures (as well as in public discourse), no adequate definition of “risk” can fail to incorporate all three of these most fundamental questions (Kaplan and Garrick 1981): (1) what can happen?; (2) with what probability can it happen?; and (3) how severe are the consequences if/when it happens? Once the “what?” question has been posed, any appraisal of “risk” must therefore integrate—perhaps via simple multiplication, perhaps via a more complicated function of the two—information about both probability and consequence; otherwise, it is not a properly construed expression of risk.
In particular, two common “risk-like pronouncements” about some eventuality are not correct or useful expressions of risk. To state (whether perfunctorily or as the culmination of a seemingly sophisticated technical analysis) that “exactly this consequence could happen” ignores or erases all of the powerful information that probability brings to the table. No matter how precisely one explains the exact hazard (e.g., a precise hazard statement would be “if the rope breaks, you will fall 500 feet to your certain death”), only by adding information about its probability can we reveal whether the risk is trivial, apocalyptic, or anywhere in between. Conversely, to state that “there is exactly one chance in 123.4 (a probability of 8.1×10−3) that the rope will break” is useless without knowing whether the resulting fall will cover 5 inches, 5 feet, or 5 kilometers. Carelessness about probability (the former type of lapse mentioned above) often stems from the orientation that it is immoral to allow any non-zero probability of an involuntary harm to persist, but surely a tiny residual risk is at least less immoral than a large one.
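The point that neither probability nor severity alone constitutes risk can be reduced to a few lines of arithmetic. The sketch below is purely illustrative—the scenarios and numbers are invented—and it uses the simplest possible integration of the two inputs, their product:

```python
# Illustrative sketch: risk requires BOTH probability and severity.
# All numbers are hypothetical, chosen only to echo the rope example.

def expected_risk(probability: float, severity: float) -> float:
    """Simplest integration of the two inputs: probability times consequence."""
    return probability * severity

# Same severity (a 500-foot fall, scored as 1.0 = certain death),
# wildly different risks depending on probability:
negligible = expected_risk(probability=1e-9, severity=1.0)
serious    = expected_risk(probability=1e-2, severity=1.0)

# Same probability of rope failure, wildly different risks depending on
# whether the fall covers 5 inches (near-zero severity) or 500 feet:
trivial      = expected_risk(probability=8.1e-3, severity=0.001)
catastrophic = expected_risk(probability=8.1e-3, severity=1.0)

assert serious > 1_000_000 * negligible
assert catastrophic > 100 * trivial
```

Either input held fixed, the other swings the result by orders of magnitude—which is why a statement supplying only one of them is not an expression of risk at all.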
Carelessness about severity is more insidious: when an outcome appears to be fully-described but is not, all kinds of value-laden conditions can be tacitly imposed upon the analysis or the decision. For example, consider the claim that more than 1000 Americans died “needlessly” (Gigerenzer 2006) in the year after September 11, 2001, when they chose to drive rather than to fly, because the per-mile probability of highway death is much greater than that of death in an aircraft. But despite the clear rank order of the probabilities, the risk of driving is only clearly greater than the risk of flying if the outcomes have identical severity—and it was far from “irrational” to regard the specter of a protracted death-by-hijacking as qualitatively more dire than a sudden car crash (Finkel 2008; Assmuth and Finkel 2018).
Although [probability combined with severity] is the core of any meaningful expression of risk, properly considering both inputs may still yield an impoverished or ambiguous risk estimate or risk-based conclusion, if one or more of these most basic definitional issues about risk are not considered:
- “Pathway risk” versus total risk. Any source of risk can present multiple consequences simultaneously, so it is important to consider all the major pathways or else explicitly highlight the partial nature of the thought process. For instance, a chemical in household water may be capable of causing several different acute health effects, and still other chronic effects, and can enter the body via ingestion, dermal contact, or inhalation (as in showering with hot water). Each pathway, and each effect, may merit its own risk assessment, the panoply of which combine to yield a holistic estimate of overall risk.
- Conditional probability versus unconditional probability. Many risk appraisals involve two different kinds of probability: the chance that some untoward event will occur and then the likelihood that the results of the event will proceed to cause harm. The former assessment of probability often involves “fault trees” or other means for estimating the odds of a discrete occurrence (such as an accidental release of a particular chemical from a manufacturing plant or transportation system), while the latter may involve using toxicology data or epidemiologic studies to estimate the “potency” of the substance (the probability that a given concentration will cause a particular adverse health effect). In such cases, the risk assessment must consider the joint probability both that the event will occur and that the health effect will occur conditional on that event.
- Isolated risk versus aggregated risk. A particular exposure may be the only contributor to a given health or environmental effect (e.g., beryllium is the only known cause of chronic beryllium disease), or it may add a small increment to an existing background risk of that effect (e.g., the amount of ionizing radiation emitted from nuclear power plants versus the natural background of radiation from Earth’s crust and from cosmic rays). This does not mean that incremental involuntary exposures should be ignored merely because they may be small relative to unavoidable background exposures, only that decision-makers and the public should know whether a policy would reduce a small or a large fraction of the aggregate risk.
- Point estimate of risk versus acknowledging uncertainty in risk. It is simply misleading to present probability or consequence estimates without providing confidence bounds (Finkel and Gray 2018) and preferably a probability density function (and note that uncertainty in risk-risk comparisons (Finkel 1995) is generally about twice as large as single-risk uncertainty).
- Point estimate of risk versus acknowledging interindividual variability in risk. QRA has suffered mightily from examples where population-average risks were deemed acceptable, when in fact risks varied dramatically depending upon the exposure, susceptibility, or other characteristics of subpopulations (Finkel 2008).
- “Target risk” versus ancillary risk(s). There is a growing literature attesting to the folly of assuming that an intervention to reduce one risk will have no untoward consequences in another risk area (e.g., increasing highway mileage per gallon but failing to improve upon the safety performance of lighter-weight cars) (Graham and Wiener 1997). This literature, however, is tempered by a second round of scholarship (Rascoff and Revesz 2002; Finkel 2007) helping us distinguish between legitimate and sham trade-offs.
- Life cycle orientation. Ideally, the risks of a product or technology should be assessed from its production through its use and disposal, with an eye both toward general population risks and the disproportionate risks that workers usually bear (Powers et al. 2012).
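Two of these definitional distinctions lend themselves to a compact numerical illustration: the joint (conditional) probability structure of an accidental release followed by harm, and the presentation of risk as an uncertainty interval rather than a point estimate. The sketch below is hypothetical in every particular—the distributions merely stand in for "an uncertain event probability" and "an uncertain potency":

```python
# Hypothetical sketch: conditional probability plus uncertainty bounds.
# P(harm) = P(release) * P(harm | release); both inputs are uncertain,
# so the output is reported as an interval, not a single number.
import random

random.seed(7)

def draw_joint_risk() -> float:
    p_release = random.betavariate(2, 198)             # uncertain event probability (~1%)
    p_harm_given_release = random.betavariate(5, 45)   # uncertain conditional potency (~10%)
    return p_release * p_harm_given_release            # joint (unconditional) probability of harm

draws = sorted(draw_joint_risk() for _ in range(50_000))
lo, med, hi = (draws[int(q * len(draws))] for q in (0.05, 0.50, 0.95))
print(f"risk ~ {med:.2e} (90% uncertainty interval {lo:.2e} to {hi:.2e})")
```

Reporting the 5th-to-95th percentile interval, rather than the median alone, is exactly the practice urged in the "point estimate versus uncertainty" bullet above.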
This subsection concludes by emphasizing that when the stakes are high, QRA is unambiguously preferable to the three most commonly touted alternatives to it:
1. A “precautionary principle” that requires society to avert (or eschew) a single disfavored eventuality to the exclusion of others (Friends of the Earth, International Center for Technology Assessment, and ETC Group 2012). Once precaution advocates realize that other advocates—for example, those insisting on the Iraq invasion of 2003 on the grounds that a “1% chance” of hidden chemical/biological weapons there should be regarded as a certainty (Suskind 2007) or those implicitly urging extreme precaution about economic costs rather than the harms caused by market failures—can define precaution to mean the opposite of what they do, the inadequacy of “pure precaution” is obvious (Montague and Finkel 2007).

2. “Scenario analysis” (Aldrich 2018), which commonly fails to discriminate between dire scenarios that are highly unlikely to occur and those that are far more plausible.

3. Qualitative risk assessment, in which hazards or scenarios are given color-coded severity rankings—this practice is often seen as intermediate between scenario analysis and full QRA, but various scholars have shown (e.g., Cox 2008) that following its dictates can be worse than choosing randomly without any risk information.
The Special Case of “Risk in the Name of Risk”
Analyzing the probabilities, severities, uncertainties, and other aspects of a SynBio application is rather more difficult than simply ascertaining its downside risk(s), because many of the most interesting applications also promise to deliver significant risk reduction either as a prime mover or as incidental to it. Hence, the analysis needs to consider net risk reduction (or net increase) rather than the downside alone. Of course, a well-developed literature and set of practices exist for cost-benefit analysis (CBA), which can be thought of as the technique of comparing the risk of a product or practice to the benefits of producing it without constraints. Here, I assume that the economic costs of reducing the risks of a SynBio application are small relative to the more fundamental question: do the risk reduction benefits the application offers exceed the novel risks the application poses? Net risk analysis, like a traditional CBA, requires two separate considerations, which could be termed “R↓” (the decrease in risk that the application promises) and “R↑” (the increased risk imposed by the application). If the SynBio application offers positive net benefit (does more good than harm), then the difference [R↓ − R↑] will be greater than zero.
However, estimating the magnitudes of the two terms in a “risk in the name of risk” trade-off is rather different from, but in some ways easier than, the standard estimation problem in CBA. Standard CBA requires estimation of the economic costs of control, which can be surprisingly difficult (Finkel 2012). Standard CBA also requires that the benefits of control (i.e., risk reduction) be “monetized” or converted from “natural units” (e.g., expected number of lives saved due to the controls or expected increase in biodiversity or other ecological indices) into dollars, in order that benefit can be compared to cost, and this is a highly controversial practice (Ackerman and Heinzerling 2005). In contrast, the estimation problem here does not require monetization, as both the risks imposed and the risks reduced are in “natural units,” such as the expected number of lives lost or the estimated acres of habitat destroyed. When the risks on both sides of the ledger are in the same natural unit, there is no need to convert either to a dollar metric, although issues of commensurability will still persist if the natural units are different for risks reduced versus risks imposed. In considering the governance of an emerging SynBio or other technology, of course, we may have to consider that in order for the risk-superior application to be used, government may choose to subsidize it (hence accruing public costs that must be subtracted from monetized net benefits) or may have to regulate/tax/ban the riskier alternative (which would impose monetary costs in the form of reduced consumer surplus).
It is also quite possible that even if the risks reduced and risks imposed are in the same natural unit, the effects will accrue to different populations—see Graham and Wiener (1997) for a comprehensive treatment of the 2×2 different situations where either risks or populations (or both) can be identical or different. In such cases, simple subtraction may not yield a coherent net estimate or one that reveals important information about equity.
This is not to say that estimating [R↓ − R↑] is by any means easy, only that it is conceptually straightforward. The first term could often be thought of as the baseline “toll” of some HSE problem, modified by the expected amount by which the SynBio application would effectively reduce that toll (see, e.g., Rooke 2013 for a catalog of SynBio advances that might reduce various human diseases). For example, suppose that the Oxitec hybrid mosquito (see summary of this case study in the section entitled “Broad/Tentative Observations about Comparative Risk Profiles of SynBio Categories”) could, with 80% probability, reduce by 95% the number of mosquitoes capable of transmitting dengue fever in a region of the world where the disease was killing one million people annually (and with 20% probability would be ineffective). Then the expected amount of risk reduction the application would offer would be 760,000 statistical cases of disease averted per year (0.8 probability of reducing the death toll by 950,000).Footnote 10
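The expected-value arithmetic in this example is simple enough to sketch directly; the figures below are the hypothetical ones from the text, not real estimates:

```python
# Expected risk reduction (R-down) for the hypothetical dengue example.
# All numbers are the illustrative ones from the text, not real estimates.
baseline_toll = 1_000_000        # annual dengue deaths in the region
p_effective = 0.8                # probability the intervention works at all
reduction_if_effective = 0.95    # fraction of capable mosquitoes eliminated

# Probability-weighted cases averted per year (the ineffective branch
# contributes zero with probability 0.2)
expected_cases_averted = round(p_effective * reduction_if_effective * baseline_toll)
print(expected_cases_averted)    # 760000
```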
The R↑ term is in many ways at the heart of this project, as it represents the untoward side effects of a SynBio application, and it is more difficult to estimate because almost by definition the raw materials of probability and severity of consequence are as yet unrealized (Dana et al. 2012). Conceptually, a useful estimate of R↑ requires information on:
- The nature of each particular downside scenario (analogous to the “hazard identification” stage in classical human health risk assessment)
- The probability of each scenario manifesting itself
- The severity of the consequences if the scenario occurs
- How the consequences are actually experienced by the affected human population or ecological niche
With this raw material, the downside risk R↑ is the sum of the [probability times experienced consequence] of each scenario, preferably with both probability and consequence expressed with the uncertainty in each. Once the risk is estimated, society could choose to treat very small risks as functionally equal to zero (Wareham and Nardini 2015) and, of course, could choose to reduce the probability and/or the severity of a risk by requiring developers to add additional safeguards to reduce the probability of an accidental release or to render an organism “inherently safe” even if released (Schmidt and de Lorenzo 2012; Wright et al. 2013).
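This summation can be made concrete with an entirely hypothetical sketch, assuming three downside scenarios with made-up probabilities and consequences:

```python
# R-up as the probability-weighted sum over downside scenarios.
# Scenario names, probabilities, and consequences are hypothetical
# placeholders, not estimates for any real SynBio application.
scenarios = [
    # (scenario, probability of occurring, experienced consequence in
    #  statistical cases if it occurs)
    ("engineered organism persists after release", 0.01,  5_000),
    ("horizontal transfer of the modified gene",   0.001, 50_000),
    ("disruption of a local ecological niche",     0.005, 2_000),
]

r_up = sum(p * consequence for _, p, consequence in scenarios)
print(round(r_up, 1))   # 110.0 expected statistical cases across all scenarios
```

A fuller treatment would carry a distribution, not a point value, for each probability and consequence, so that R↑ itself is expressed with uncertainty.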
The foregoing is, of course, a “much easier said than done” summary of how to arrive at a reasonable downside risk estimate for a SynBio application. Perhaps the most useful reference for understanding the tasks involved in estimating a downside SynBio risk is found in Bedau et al. (2009), which gives a “checklist” of how to think about scenarios. In looking for a template that could be improved upon for performing a state-of-the-art risk assessment for an emerging SynBio application (in this case, the risks of engineering hybrid mosquitoes to control dengue fever), Finkel, Trump et al. (2018a) recommended the assessment performed by Hayes et al. (2015), which offers a very complete risk assessment with respect to the probabilities of many downside risks, although it does not quantify the range of possible severities for any of the scenarios.
As QRA for SynBio improves, analysts can make greater use of existing techniques to cope with the particularly vexing problems inherent in estimating these probabilities and severities, including:
- Techniques for estimating the probabilities of unprecedented or “virgin” risks (Kousky et al. 2010)
- Techniques for bounding the probability of “surprise” (Shlyakhter 1994)
- Techniques for handling “deep uncertainty” (Cox 2012)
- Structured expert elicitation methods that force respondents to construct logically coherent scenarios (Cooke et al. 2007)
In contrast to the real need for additional complexity, it is also possible that SynBio risk analysts may be able to invoke some “first principles” for distinguishing high-concern scenarios from other ones, allowing for simpler assessments. For example, it may be the case that hybrid organisms designed to be less fit than the wild type cannot pose a significant risk to the ecosystem; if so, any scenarios involving mutations in which the hybrid organism remains less fit than before pose risks that might safely be ignored in a risk-risk analysis.
Risk Assessment in the Solution-Focused Regime
As I will discuss in the next section of this chapter, the bridge between net risk assessment and solution-focused governance is conceptually simple; it involves comparing the net risk profiles of various approaches to solving a human need or fulfilling a human want and using policy tools to support and encourage the solution(s) with the relatively most favorable profile, while discouraging, regulating, or banning solutions with inferior risk profiles. In comparing a new SynBio (“s”) application to the most useful conventional (“c”) solution to the same HSE problem, the question boils down to whether this equation is positive:

[R↓s − R↑s] − [R↓c − R↑c]
This equation symbolizes the incremental net benefit of the SynBio application over the conventional solution. Rearranging terms, the same equation can be expressed as:

[R↓s − R↓c] − [R↑s − R↑c]
which represents the incremental risk-reducing power of the SynBio alternative net of its incremental risk-increasing potential. In either case, if the equation yields a result greater than zero, the SynBio alternative can be said to have positive incremental net benefit over the “competition.”
Alternatively, if we define the quantity RRi, the “risk remaining” after solution i is implemented to partially eliminate a hazard (i.e., the status quo risk minus R↓i), then we could evaluate the equation:

[RRc + R↑c] − [RRs + R↑s]
in which each bracketed term is the total risk (old plus new) for a solution. If this difference is positive, then the SynBio application results in less total risk than the conventional solution it could supplant.
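The algebraic equivalence of the two framings can be checked with a few lines of arithmetic; all quantities below are hypothetical and expressed in one natural unit:

```python
# The two framings above, with hypothetical inputs in one natural unit
# (statistical cases per year). r_down = risk reduced, r_up = new risk
# imposed, for the SynBio ("s") and conventional ("c") solutions.
status_quo_risk = 1_000_000
r_down_s, r_up_s = 760_000, 5_000
r_down_c, r_up_c = 400_000, 1_000

# Incremental net benefit of SynBio over the conventional solution
incremental_net_benefit = (r_down_s - r_up_s) - (r_down_c - r_up_c)

# Equivalent total-risk framing: RR_i = risk remaining after solution i
rr_s = status_quo_risk - r_down_s
rr_c = status_quo_risk - r_down_c
total_risk_difference = (rr_c + r_up_c) - (rr_s + r_up_s)

assert incremental_net_benefit == total_risk_difference
print(incremental_net_benefit)   # 356000: SynBio wins if this is positive
```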
Implementing a Solution-Focused Management Regime
Armed with reliable methods to construct the risk-risk profiles (with attendant uncertainties) of a set of technologies, substances, or processes that includes one or more SynBio applications, how can government and the citizenry move from analysis to action? How can they/we decide what strictures, encouragements, outreach, taxes, subsidies, research, or other concerted actions are desirable or optimal? Although other orderings are possible, what follows is a chronological ordering of six tasks describing how SFRA maps onto this question of SynBio governance. Note that most of these elements are also described in a video in the “Risk Bites” series available on YouTube (Maynard and Finkel 2018).
(a) Pose the fundamental question “which human need or want is unfulfilled?” Unlike “problem formulation” as defined by EPA (see “A ‘Solution-Focused’ Partnership between Analysis and Action”), this mindset defines “the problem” not as a specific hazard presented by one product or process but essentially the opposite—as something one or more technologies might be able to solve. In other words, conventional risk assessment would ask “how much perchloroethylene can/should be emitted when dry-cleaning clothes?”, while SFRA would ask “how can consumers clean their clothes most safely and effectively?” Here I will also distinguish between “needs” (e.g., humanity needs better methods to control disease-carrying mosquitoes without introducing new and untoward risks) and “wants” (consumers may benefit from a less-expensive or higher-quality artificial food-grade vanillin). SynBio applications can fulfill either needs or wants, but risk management governance may wish to consider these differently when balancing marginal risk increases against marginal risk reduction or other benefits.
(b) Array a narrow or an expansive list of possible solutions to fulfill the need or satisfy the want. Although the most fundamental distinction between SFRA and conventional risk assessment/management is that the former evaluates solutions rather than quantifies risks, the breadth and ambition of the solutions considered greatly distinguish SFRA exercises from each other. It is possible to consider only “window dressing” responses to a human need (e.g., a medical professional advising a patient complaining of tight pants could suggest s/he get used to the discomfort or buy larger pants), or instead to emphasize “upstream” remedies that require much more expansive changes (in this case, advising the patient to change his/her diet or undergo bariatric surgery). In the chemical risk assessment arena, Finkel (2011) develops a case study contrasting narrow sets of possible solutions to the occupational and environmental health risks of chlorinated solvents for stripping paint from airplanes (one solvent versus another), somewhat more expansive sets (adding mechanical abrasives such as crushed walnut shells to the comparison), and very ambitious sets (including the option of leaving planes unpainted or even employing market mechanisms to reduce demand for business travel by plane). A good description of the correlation between the degree of upstream intervention and the “radicality” of the contemplated intervention is found in Løkke (2006), who notes that “the levels should not be mistaken as a grading of alternative solutions; most people would agree that preventive strategies are better than cleaning up, and increasing radicality will often but not necessarily lead to better environmental solutions.”
(c) Estimate the net risk consequences (or non-risk benefit minus risk, for “wants”) of each solution. Using the risk assessment techniques and goals described in the section “Risk Assessment Methodologies for SynBio”, SynBio and other means of solving a particular problem can be compared by deriving (with uncertainty) the extent to which each solution reduces risk, net of the new risks it poses. If there is no particular problem, but instead a set of ways to satisfy consumer wants, the comparison is similar, except that the “pro” term of every pro-net-of-con estimation would instead represent the benefits (perhaps using consumer surplus as a proxy) of each application or product. First, it is usually easy to reject outright those solutions or products that have a negative profile (new risks exceed risk reduction or other benefits), as these are usually inferior to the status quo. Among the remaining choices, and because of uncertainty, there may well be no unique “winner” in any of these comparisons; often, the solution with the highest expected net risk reduction may not have the most favorable risk profile when the reasonable upper bound for the “con” term of the estimate is substituted (see the case study of dengue fever control in Finkel et al. 2018a). But choosing for or against an option that is “better on average but may be worse” (or “worse on average but may be better”) is conceptually straightforward when the decision-maker openly chooses a degree of aversion to one unfortunate outcome or the other, based on his/her own or on public attitudes toward regret (Lempert and Collins 2007).
(d) (1) “Choose” the solution with the most favorable net risk profile—which is to say, consider regulating, discouraging, or banning the less-favorable solution(s) and consider promoting, encouraging, or subsidizing the most favorable one. Depending on how intrusive these interventions are, when a government goes beyond merely providing information about different products/technologies and implements regulations, taxes, subsidies, or the like to make it easier to sell and use some technologies and harder to use others, this may well smack of “picking winners and losers.” This criticism must give decision-makers pause, especially if it is clear that by advancing a particular application, one monopoly producer will reap all the benefits (and, in rarer cases, a single producer of a net riskier application will bear all the costs of the other’s “win”). But there is an element, perhaps a large one, of hypocrisy in denunciations of government’s “picking winners.” It is an article of faith that when the “free market” picks winners and losers, as happens constantly and relentlessly, those decisions stem from adequate information and by definition increase net economic benefit. If that is so, then governance decisions that advantage some producers over others can similarly be evidence-based and can provide net economic benefits as well as reducing externalities. The related claim that regulation should generally avoid specifying the means of compliance (technology-based standards) and instead set performance goals, letting regulated industries find their own least expensive and burdensome ways to meet them (but see Wagner (2000) for a counterargument), also rests on some inconsistencies.
Performance standards, which tend to be less disruptive to market structure, are hard to enforce (Coglianese and Lazer 2003)—but more tellingly, in some cases, businesses clamor for “flexibility” only to later rebuke government agencies for not providing technological specifications that give them assurance of how to comply (Finkel and Sullivan 2011). But perhaps the weakest argument against allowing the democratic process (through participatory regulatory governance) to identify and support “winning” technologies that reduce risks is the fact that we have long allowed government to do this anyway, in many accepted though opaque ways. In the USA, the federal government has provided the coal industry with more than $70 billion in subsidies since 1950 (Taxpayers for Common Sense 2009); the effects of this sort of market distortion on newer energy sources are difficult to estimate but could well be monumental in size. In the pharmaceutical industry, the policy of allowing unlimited off-label use of drugs once they have been approved for a specific use amounts to a “leg up” over both established and innovative therapies for the same diseases (Comanor and Needleman 2016)—this amounts not only to “picking winners” but to giving these favored technologies the kind of head start over competitors that could last for generations. The most pervasive arena in which government already picks winners and losers is probably that of international trade. Anecdotally, the USA and EU negotiated a pair of reciprocal tariffs several decades ago, with the EU disfavoring American cars and the USA placing a heavy tariff on European light trucks—this, of course, has had the effect of helping our domestic truck manufacturers “win,” to the detriment of the domestic passenger car sector. So while promoting industrial policy for SynBio raises hackles, it should not be the very idea of favoring some industries over others that causes us to turn our backs on such policies.
OR
(d) (2) Choose a mix of solutions implemented together in quantities sufficient to fulfill the need or want but parceled out in such a way that the sum of all net risks is even lower than the net risk of the relatively most favorable single solution. It is possible that the optimal policy would involve a portfolio of solutions, with each one governed by policies that would accentuate its benefits while keeping its downside risks relatively low (and especially keeping them below any sharp nonlinearities in the technology’s exposure-risk function). Such an approach would require vigilance and planning but might blunt some of the concern about brighter-line policies that would elevate one solution to “winner” status while greatly or completely curtailing others’ roles in the economy.
(e) Consider those governance tools—qualitative regulation (bans), quantitative regulation (exposure limits or controls of a given exposure reduction efficiency), or any of a variety of “soft law” mechanisms—that best produce the desired optimal net risk profile. The gap between seeking net risk reduction and fulfilling that desire must fall to one or more tools of regulatory governance. Finkel, Deubert et al. (2018b) elaborates, in order of stringency, on various subtypes of “nudges” (information dissemination, guidance documents, and the like), public-private partnerships (Marchant and Finkel 2012), enforcement of general norms, and enforcement of newly written regulations, as each might apply to the problem of repeated head trauma and brain disease in professional football. However, there are several useful kinds of governance tools not mentioned in that article, including using civil liability as a powerful incentive to reduce downside risks (McCubbins et al. 2013; De Jong 2013) or requiring developers of new technologies to post bonded warranties against unforeseen harms (Baker 2009). On the other hand, Finkel, Deubert et al. emphasize one innovative governance idea that is not often included among the portfolio of “soft law” ideas commonly recommended (Mandel and Marchant 2014): an “enforceable partnership” in which a regulated industry develops its own code of practice and/or exposure controls but explicitly agrees to agency citations and penalties for violating that code. Such an arrangement might be especially appropriate for SynBio applications, since the developers generally can revise their views about which controls are most effective much faster than the public rulemaking process ever could. One other way to array the various governance options, as seen in Fig. 1, is to deemphasize the specific tools and instead portray the range of orientations from most supportive of emerging technologies to least supportive.
In any event, the literature makes various recurring points about the nuances of emerging technology governance, particularly (1) that it is most “artful” when it seeks “effective compromise” such that while not all participants will be satisfied, all will agree that their views were heard and that the regulator’s logic was transparent and reasonable (Zhang et al. 2011; Coglianese 2015); (2) that the choice of instrument and the stringency of control should vary depending on the stage at which the technology currently exists (e.g., laboratory work vs. field trials vs. first full-scale releases vs. routine releases) and that government should establish “checkpoints” to appraise the most sensible controls at each stage (Bedau et al. 2009); and (3) that agencies sometimes can make good use of “soft law” mechanisms early in the lifespan of an emerging technology but should be ready to eventually “harden” those tools into traditional regulatory forms lest the regulated industries correctly perceive that the agency is using “soft law” as a crutch (Cortez 2014).
(f) Consider structural change in government to better organize itself to administer and enforce the tools chosen. Most of the sparse literature that considers improving the capacity of government to regulate emerging technologies focuses on “small gaps” where no agency has authority to solve a particular problem or where duplicative authorities foster controversy and delay (Paradise and Fitzpatrick 2012). For example, Taylor (2006) pointed out that the US Food and Drug Administration has jurisdiction over the safety of cosmetics but lacks statutory authority to oversee, prior to their marketing, cosmetics made with nanotechnology components. Similarly, Mandel and Marchant (2014) recommend that EPA seek authority to require a pre-manufacture notice from developers of new microorganisms, not just for those that combine genetic material from two or more organisms from different genera but those that combine genetic material from species within the same genus. Most scholars construe these problems as solvable with minor statutory changes (see, e.g., Carter et al. 2014) or with interagency coordination provided by a White House office (see, e.g., PCSBI 2010). However, at least one investigator (Davies 2009) has gone further to recommend the reorganization of several current agencies (EPA, OSHA, NIOSH, the Consumer Product Safety Commission, the National Oceanic and Atmospheric Administration, and the US Geological Survey) to create a Cabinet-level “Department of Environmental and Consumer Protection” to regulate existing and emerging technologies that affect human health, safety, and the environment.
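The “better on average but may be worse” comparison described in step (c) can be sketched as a simple scoring rule in which a regret-aversion weight blends the expected outcome with the reasonable upper-bound (“con”) case. All names and numbers below are hypothetical:

```python
# Ranking solutions under uncertainty, per step (c): blend the expected
# net risk reduction with its value under the reasonable upper bound for
# the "con" term. All figures are hypothetical.
solutions = {
    # name: (expected net risk reduction,
    #        net reduction if the downside takes its upper-bound value)
    "synbio_solution":       (356_000, -20_000),  # better on average, may be worse
    "conventional_solution": (300_000, 250_000),  # worse on average, robust
}

def score(expected, upper_bound_case, regret_aversion=0.5):
    # regret_aversion = 0 ranks purely on expectation; 1 on the worst case
    return (1 - regret_aversion) * expected + regret_aversion * upper_bound_case

# On pure expectation the SynBio option wins; with moderate regret
# aversion the ranking flips.
print(score(*solutions["synbio_solution"], 0.0) >
      score(*solutions["conventional_solution"], 0.0))   # True
print(score(*solutions["synbio_solution"], 0.5) >
      score(*solutions["conventional_solution"], 0.5))   # False
```

The single `regret_aversion` weight is one crude way to make the decision-maker’s chosen degree of aversion explicit, in the spirit of Lempert and Collins (2007).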
Although this process of conceiving of, comparing, and choosing among solutions can be intricate and can demand creative and bold thinking about “tragic choices” (Calabresi and Bobbitt 1978), its core tenet can be simply described: in contemplating whether to encourage or discourage an emerging technology solution to a human need, society should tolerate more potential downside risk when the solution has greatly improved potential for unprecedented risk reduction. For “wants,” the logic would be the related statement that “society should tolerate more potential downside risk when the solution can fulfill the want in unprecedented new ways or to a new extent.” And the fundamental corollary to each of these principles would be that “society should be especially wary of courting new downside risks when the risk reductions they offer are negligible or when the consumer benefits are marginal.”
For example, a SynBio (or, for that matter, a conventional) product that makes clothes whiter is arguably less worth taking risky chances on than one that could substitute for gasoline in cars; and further, if the SynBio product only makes clothes marginally more white than the next-best conventional alternative, it may be even less worth risking harm for.
How “radical” (Løkke 2006) is this precept? We are already comfortable declaring that some larger risks are more acceptable than related smaller risks, when we can explain this as a consequence of voluntary choice versus involuntary imposition (Starr 1969). But here I am arguing that we should consider certain risks less or more acceptable not because of qualities of the harms but because of qualities of the solutions that may make risks more or less worth bearing. Setting an ambient air quality standard only requires the decision-maker to consider the likely costs of achieving it against the benefits of doing so; requiring automobiles, on average, to achieve a given higher level of fuel efficiency goes a bit further toward favoring certain technologies over others but implicitly considers any technology with a positive risk-risk profile as acceptable. So it may be unprecedented to take the next logical but large step and compare risk profiles in order to favor technologies with significant new net benefits over marginal ones.
To the contrary, I suggest that placing hurdles in the way of products with small marginal benefits and worrisome new risks is in fact very similar to proposals made beginning several decades ago (Nussbaum 2002) that the FDA should treat truly novel pharmaceuticals more permissively than it treats “me-too” drugs that only offer slight variations on existing substances, because the former have novel benefits that may be more likely to justify their new risks. As Angell (2004) pointed out, the FDA currently treats both novel and derivative drugs equally, approving them if they are both safe and more effective than a placebo: “the [‘me-too’ drug] needn’t be better than an older drug already on the market to treat the same condition; in fact, it may be worse. There is no way of knowing, since companies generally do not test their new drugs against older ones for the same conditions at equivalent doses.” Angell and others (Gagne and Choudhry 2011) have repeatedly called for FDA to make “approval of new drugs contingent on their being better in some important way than older drugs already on the market.”Footnote 11
A solution-focused approach, applicable to the other end of the marginal benefit spectrum, is also being suggested with respect to the FDA and the drug approval process. A major part of the “21st Century Cures Act,” signed into law in 2016, provides for expedited approval for new medical devices that may benefit patients with “unmet medical needs for life-threatening or irreversibly debilitating conditions” (Avorn and Kesselheim 2015). Similarly, FDA has issued several regulations streamlining the drug approval process to treat certain very serious conditions that have no effective current therapies, stating that “these procedures reflect the recognition that physicians and patients are generally willing to accept greater risks or side effects from products that treat life-threatening and severely-debilitating illnesses, than they would accept from products that treat less serious illnesses. These procedures also reflect the recognition that the benefits of the drug need to be evaluated in light of the severity of the disease being treated” (FDA 2014).
These kinds of benefit-aware risk comparisons, of course, are precisely what a comparative risk-risk (solution-focused) analysis of SynBio versus conventional approaches to solve a problem would do—allow, and encourage, those approaches that are “better in some important way” than the status quo, either because of the paucity of effective solutions at present or because of a truly groundbreaking advance over approaches that are satisfactory but not ideal.
Overt and Hidden Values in Risk Assessment and Management
Both in assessing the net risks of any technology (SynBio or otherwise) and in deciding whether and how to manage any risks identified, we need more than methodological improvements in risk estimation and in decision-making under uncertainty; we need a much more transparent mode of analysis, such that the large number of hidden influential value judgments that pervade the analysis can be brought to light. In Finkel 2018b, I identified more than 70 steps within a typical cost-benefit analysis where influential value judgments are made and generally kept implicit, or are disclosed but misleadingly labeled as objective or purely scientific choices. These judgments range in scope from narrow quantitative choices that influence key numerical quantities in one portion of a risk assessment or a CBA (e.g., the use of a particular single discount rate to render future consequences less salient than present ones) to fundamental definitional choices that influence the entire direction of the analysis (e.g., whether the “optimal” decision is tacitly defined as the one that maximizes total net benefit, as one that achieves an arbitrarily “sufficient level” of benefit at the bare minimum cost, or as some other legitimate resting place). The main problem with silently embedding one value-laden choice (out of many possible ones) at multiple places in an analysis is, of course, that affected citizens may not realize that they profoundly disagree with the particular value chosen and would welcome the (possibly quite different) results of an analysis that substituted one or more values they do agree with.
Some of the dozens of hidden value-laden assumptions I and others have identified would arise only infrequently in the kind of net-risk-versus-net-risk comparisons advocated here for making policy about SynBio applications—either because they affect portions of the analysis (particularly the estimation of the economic costs of regulatory control) that are not crucial to the comparison or because they involve aspects of the policy process (e.g., post hoc evaluation of the results of regulatory or other interventions) that do not affect the comparisons themselves. In comparing the risk profiles of SynBio and conventional applications, some of the more important recurring value judgments include:
- Should harms to non-human species be included among the “risks that matter” (assuming said harm does not indirectly affect people at all)?
- Should the non-utilitarian concerns of some citizens, particularly the aversion to “tinkering with the natural order” for good or ill, be given weight apart from the consequences themselves?
- Should analysis take account of risk reduction benefits or new harms to citizens outside the USA when making choices about domestic policy?
- Should harms that would affect subsequent generations be discounted at the same rate as intra-generational harms or at a lower rate so they don’t effectively vanish from the equation?
- Should we treat risks from naturally occurring substances or organisms as equivalent to equal risks from synthetic ones?
- Should a risk profile with a lower expected value but a longer right-hand tail than another be treated as preferable (on the basis of expectation) or the opposite (on the basis of a worst-case comparison)? (See Finkel, Trump et al. 2018a for the claim that the risk profile of the Oxitec SynBio mosquito, compared to pesticides and other conventional approaches to controlling dengue fever, may have a favorable expectation but a longer right tail.)Footnote 12
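The last bullet can be made concrete with a small simulation: a profile with the lower mean can still have the longer right-hand tail, so whether it is “preferable” depends on whether one compares expectations or upper percentiles. The two distributions below are arbitrary stand-ins, not estimates for the Oxitec case:

```python
import random

# Two hypothetical risk profiles: "a" has the lower mean but a long right
# tail; "b" has the higher mean but a short tail. Which is "preferable"
# depends on whether means or upper percentiles are compared.
random.seed(0)
profile_a = [random.lognormvariate(0, 1.5) for _ in range(50_000)]  # long tail
profile_b = [random.gauss(5, 1) for _ in range(50_000)]             # short tail

def percentile(xs, q):
    # Empirical quantile by sorting and indexing (fine for a sketch)
    xs = sorted(xs)
    return xs[int(q * (len(xs) - 1))]

mean_a = sum(profile_a) / len(profile_a)
mean_b = sum(profile_b) / len(profile_b)
print(mean_a < mean_b)                                            # lower expectation...
print(percentile(profile_a, 0.99) > percentile(profile_b, 0.99))  # ...but longer tail
```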
I advocate for substantial efforts to reveal all of the value judgments permeating evidence-based policy analysis—not through a laborious process of highlighting them each time but rather through the publication (as a single document affecting all health, safety, and environmental agencies, or perhaps as agency-specific documents) of a free-standing “value statement” that would flag them all, explain which judgments the agency would generally make by default in the absence of specific reasons to the contrary (and why each default was chosen), and offer one or more alternative value judgments that could be made instead if sufficient reason were provided in a specific assessment. This may be a daunting task, but there is a much more practical and imminent first step: even if analyses of SynBio and other technologies cannot be made fully transparent to “the outside world” as to their embedded value judgments, the analysts themselves must recognize them and ensure that, in each case, the same judgment is used on both sides of the comparison. If this is not done, the comparison will be worse than misleading, as it will foster the impression that the “safer” alternative was chosen rationally. Imagine, for example, a comparison of the risk profiles of a compound like artemisinin produced via natural sources (the wormwood plant) versus one produced by genetically engineered yeast, in which the former profile tacitly considered the economic harm to industries anywhere in the world if the competing application were supported, while the latter profile tacitly considered economic harm only to US industries. In this hypothetical, the major unemployment effects would not be counted for the one alternative (drying up the market for wormwood) where they were substantial.
Cheerleading and Poor Risk (and Benefits) Communication in SynBio
Careful risk assessments, whether performed in the classical or the solution-focused paradigm, can be undone by tone-deaf risk communication. Among the many recurring deficiencies in efforts to communicate risks, lapses that “can create threats larger than those posed by the risks that they describe” (Morgan et al. 2002) are the deliberate or unintentional trivialization of risk, the overuse of jargon, the reliance on misleading or inappropriate comparisons to unrelated risks, and the tendency to provide only population-wide average risks and thereby mask substantial interindividual differences (Finkel 2016). Many experts (Sandman 1993; NAS 1996) stress that intentional attempts to persuade people via risk communication sometimes work but often eventually backfire. I’ve read several of the leading general-interest books on SynBio (along with many peer-reviewed articles), which arguably give a good cross-section of how experts communicate to laypeople about these new applications. In my reading, I found some troubling signs not only in how risks are described but also in how benefits are (Kahn 2011):
-
At the least important end of the spectrum, developers of SynBio technology sometimes take verbal shortcuts in describing their advances. For example, Oxitec, developer of a hybrid male mosquito, often referred to the insects as “sterile,” when the central advance Oxitec made is that the males are fertile but produce offspring that die before they are mature enough to bite humans (Finkel et al. 2018a). This distinction is largely semantic, and, as Oxitec has said, “there is no layman’s term for ‘passes on an autocidal gene that kills offspring’” (Specter 2012). However, the ambiguity (or discrepancy, depending on one’s point of view) gave a critic from Friends of the Earth an opening to say that Oxitec has been “less than forthcoming” with public statements that allow the more reassuring interpretation that the mosquitoes cannot produce offspring (Specter 2012).
-
Of greater concern is a tendency of SynBio developers to condescend to the public by suggesting that it is irrational to fixate on the downside risks. For example, scientific giant Craig Venter has made the sweeping statement that “few of the questions raised by synthetic genomics are truly new” (Venter 2013, at 152), which of course sidesteps the question of whether “old” risks created anew can be unacceptably high. Similarly, Brassington (2011) used an odd phrase to “quantify” SynBio risks: “However, while these risks are not vanishingly small, they can be met not by forbidding SynBio research, but by pursuing it wisely” (emphasis added). Something “large” is also not “vanishingly small,” but the phraseology here strongly implies that we know SynBio risks to be “small”—perhaps non-zero, but arguably so small as to be impalpable and hence unworthy of concern.
-
A similar tendency involves a kind of acknowledgment that public fears are not unfounded but one that sequesters this concern and ultimately steps away from it. Consider this quote from Venter (2013, at 155): “For me, a concern is ‘bioerror’: the fallout that could occur as the result of DNA manipulation by a non-scientifically trained biohacker or ‘biopunk.’” This is essentially a “safe if used as directed” warning, which is not a warning at all but a denial of the inherent danger(s) in favor of dangers brought on by insufficient policing of human actors. Without taking a position on the merits, this does seem reminiscent of the “guns don’t kill people; people do” argument that seeks to channel concern away from “the right users.”
-
Most generally, pioneers in synthetic biology sometimes invoke their own expertise, or that of the cadre of developers more broadly, as a kind of talisman that can turn estimated risks into irrelevancies (Rampton and Stauber 2002). When a New York Times reporter (Rich 2014) brought up various potential risks of de-extinction technology, the lead scientist at “Revive and Restore” simply asserted that “We have answers for every question… We’ve been thinking about this for a long time.” Perhaps more tone-deaf still is this assertion from Venter, who invoked Isaac Asimov’s “three laws of robotics” to reassure readers that nothing can go seriously awry: “One can apply these principles equally to our efforts to alter the basic machinery of life by substituting ‘synthetic life form’ for ‘robot’” (Venter 2013, at 153). Here citizens concerned about untoward risks of SynBio are met with fictional solutions to a problem —in Asimov’s created world, robots could be hardwired to always obey and never to harm, but of course hybrid organisms do not have programmable brains, and wishing for a fail-safe mechanism is quite different from building one.
If, at the same time that SynBio advocates were understating risks or hyping untested ways to eliminate any risks that remain, they were also overstating the benefits of their innovations, citizens might be doubly disadvantaged as they try to make sense of the trade-offs. My sense, however, is that the most tangible potential benefits of SynBio are not being stressed enough, while less impactful categories of benefit are emphasized:
-
In particular, developers and advocates often emphasize the “elegant” features of SynBio advances—and not just in applications such as “glowing fish” that may have no tangible benefits other than their novelty. For example, Lee Silver (2007) quoted MIT professor Tom Knight as stating that “the genetic code is 3.6 billion years old. It’s time for a rewrite”—without linking that intellectually compelling prospect to any specific (or even hypothetical) advantages it might confer. This wide-eyed enthusiasm for the “can,” rather than the “should,” may also serve to heighten concern about the possible downside risks that are not mentioned.
-
Even some medical applications of SynBio are praised for their ability to move the human organism closer to “perfection,” which again mentions an inchoate benefit, and here one that reasonable people may actually consider a disbenefit (Hurlbut 2013).
-
There are, by contrast, examples where supporters of SynBio emphasize the tangible and pragmatic benefits of applications, such as this observation from Rooke (2013). I suggest that more successful risk-benefit communication ought to look more like this example than the previous ones:
Technological advances in the field of health continually bring us closer to a world where a healthy life is a real option for every individual on the planet, regardless of geography, culture, or socioeconomic status. However, these benefits tend to accrue disproportionately to the developed world; the need is still great for solutions that can diagnose illness, protect against infection, and treat disease in a broad array of low-cost settings with developing-world healthcare systems and limited infrastructure.
Broad/Tentative Observations About Comparative Risk Profiles of SynBio Categories
In other work performed with Sloan Foundation support, my colleagues and I published a detailed case study of the Oxitec SynBio mosquito (Finkel et al. 2018a) but also investigated in broad terms the kinds of incremental benefits and risks that various types of SynBio applications might pose. Table 2 presents some tentative observations, using exemplar applications from each of ten categories in which SynBio developers are working, suggesting that in some kinds of applications (e.g., biological pesticides), the SynBio alternative may pose large incremental risks that do not justify the small incremental benefits it offers over conventional solutions to the problem. We also suggest that in other categories (e.g., specialty chemicals), the new downside risks would likely be small, but so would the incremental benefits. In contrast, we see the general categories of disease vector controls and medical treatments as ones where the new risks from SynBio may be comparable to or smaller than the risks we currently tolerate from conventional approaches and where the efficacy of a new approach may make the SynBio application a win/win for fulfilling a human need.
Solution Generation as the Complement to Alternatives Assessment
The discussion to this point has not exhausted the potential for solution-focused thinking, as the various proposals (including full-blown marginal risk profile analysis of solutions) have all presupposed that the development of a new application will then prompt discussion about the new risks it poses and the new risk reductions it offers, in the context of other ways to meet the same need or fulfill the same want. But what if, instead of a problem that has already attracted multiple solutions, we are faced with a problem desperately in need of even one good solution? The complement to regulators seeing competing solutions available and asking “why?” (or “which?”) would be someone “dreaming things that never were and saying ‘why not?’” (Shaw 1949).
One way to organize creative thought around “solutions we need” is to extrapolate from existing lines of SynBio research to instances where similar technology might be able to do vastly more good. For example, various researchers are trying to engineer microbes that would have salutary effects on human health and quality of life if introduced into the human gut microbiome. It is also the case, though, that collectively the digestive systems of domesticated ruminant animals worldwide (primarily cows, sheep, and bison) add enormous amounts of methane—a potent greenhouse gas (Friedman et al. 2018)—to the atmosphere, roughly 20% of all anthropogenic methane (Lassey 2007). Investigators have been attempting to reduce methane generation by changing the animals’ diets and by selective breeding, but have not succeeded in making a dent in the total. Very recently, however, students at the University of Nebraska began experimenting with introducing a gene from a red alga (C. pilulifera), one that codes for the enzyme bromoperoxidase, into E. coli for introduction into the digestive systems of cattle, though they “ran out of time” (University of Nebraska-Lincoln 2017). Interestingly, cattle can be given bromoperoxidase directly by feeding them large amounts of seaweed, but the bromoform produced in seaweed farming is a potent depletor of stratospheric ozone, which the students aptly described as “fix[ing] one environmental issue by creating another.” Currently there appears to be little experimental or commercial interest in using SynBio to attack the problem of methanogenesis in ruminants and its role in exacerbating global climate change, surely a problem in need of a breakthrough.
Similar “if only…” thinking can also be applied to existing products that satisfy consumer demand, simply by looking for products with the largest environmental or human health “footprints.” Surely high on such a list would be palm kernel oil, whose worldwide production converts several million hectares each year from tropical forest to monoculture, with implications for endangered species like the orangutan and with widespread use of child labor for harvesting (Rosner 2018). But while industrial feedstocks like isoprene have attracted much interest from SynBio developers, there appears to be only one company actively trying to engineer organisms to produce synthetic palm oil (that company, Solazyme, has been criticized for choosing algae as its host organism, in a system that requires large amounts of sugarcane to be harvested to feed the algae; SynBioWatch 2016). As with the Nebraska team, a group of students at the University of Manchester also participated in an iGEM competition (Univ. of Manchester 2013) and explored the possibility of producing synthetic palm oil in E. coli instead but apparently lacked the resources to bring this idea past the conceptual stage.
So given the economic realities that set developers’ sights based on market potential rather than on reducing environmental or other externalities (whether caused by the paucity of solutions or by the footprints of existing products), how can governments focus on “solution generation” to complement solution appraisal, and how can they attract entrepreneurs to fill the vacuums they identify?
Here the useful ideas are conceptually simple though politically fraught and are ones the US and other nations have grappled with already (for a prime example, see the Orphan Drug Act of 1983, which provided tax incentives and extended patent protection to developers of drugs that are intended to treat a disease affecting fewer than 200,000 Americans). Government or private philanthropies could identify areas where a novel solution would be immensely beneficial and then offer a “grand challenge” prize for its development (Adler 2011; also see Table 1 in Rooke 2013) or directly subsidize the early stages of research and development. Manzi (2014) summarizes the salutary results from subsidies, concluding that “the Breakthrough Institute has produced excellent evidence that government subsidies for speculative technologies and research over at least 35 years have played a role in the development of the energy boom’s key technology enablers: 3D seismology, diamond drill bits, horizontal drilling, and others.” He recommends that our “existing civilian infrastructure … can be repurposed, including most prominently the Department of Energy’s national laboratories, the National Institutes of Health, and NASA. Each of these entities is to some extent adrift the way Bell Labs was in the 1980s and should be given bold, audacious goals. They should be focused on solving technical problems that offer enormous social benefit, but are too long-term, too speculative, or have benefits too diffuse to be funded by private companies.” In other words, these giant agencies could devote some of their resources to identifying “orphan problems” that we have learned to live with but where innovation could conceivably reveal that this acquiescence wasn’t necessary.
A related idea has been championed by Outterson (2014), who has suggested in Congressional testimony that the federal government could offer a guaranteed payment stream to the successful developer of a needed antibiotic or other drug (one with large social benefits but marginal profitability for the developer) in exchange for the right to market the product.
Of the two parts to the “solution generation” puzzle—identifying situations where society needs a new solution and providing the “activation energy” so that developers will have the incentives and resources to explore such a solution—the former is clearly already occurring, as evidenced by the fact that I began by idly speculating about the benefits of a SynBio approach to methanogenesis in ruminant animals or a SynBio alternative to palm oil monoculture, only to find that various university groups were already working on the broad outlines of these very breakthroughs. But the other side of the coin—that to my knowledge neither idea has left the university environment to be brought forward to bench-scale fruition—suggests that new policies and organizational arrangements are needed to move good ideas forward in the absence of clear short-term profitability.
Conclusions
It should not be controversial that we are better off knowing whether a new technology has net risk reduction benefits that would outperform the status quo, whether or not we have the will to act on that knowledge (especially the will to act in ways that would cause the less efficient solutions to make way for more efficient ones). Of course, the probability and severity of risky scenarios are always uncertain, and risk comparisons are more uncertain than risk estimates (Finkel 1995), so judgment will always be needed to weigh the differential costs of error (boosting a new technology on the basis of its likely superiority, but one that will turn out to have risk-increasing consequences, versus impeding a new technology such that risk-increasing solutions will be allowed to persist).
The controversy comes when we contemplate intervening in the market to promote risk-reducing solutions over risk-increasing ones. Most ideologies other than pure libertarianism welcome the idea of government choosing policies that promote social welfare, whereas many liberals and conservatives bristle at the idea of government promoting individual companies over others (despite how often we tolerate government doing so via earmarks, subsidies, and the like). In between the extremes of “picking winning policies” and “picking winning firms” lies the notion of picking technologies or industries that solve problems with fewer untoward harms. Here I agree with much prior scholarship, particularly that of Rycroft and Kash (1992), that we need to repudiate the idea that “the politicians and bureaucrats who make these critical decisions would have neither the incentives nor the ability to pick winners as well as the private market place now does” (quoting a 1983 speech by Martin Feldstein, then chairman of the White House Council of Economic Advisers). Markets do reasonably well at allocating resources based on consumer preferences (as influenced by those doing the marketing), but much less well at allocating resources to minimize externalities. Comparative net risk analysis of solutions to a human need (or of ways to satisfy a want) provides the evidence that government needs to consider the non-market benefits and harms of technologies, which will allow government to consider strengthening the barriers to entry for innovations that tend to increase net risk while attenuating those barriers for innovations that tend to decrease net risk.
In other words, SFRA can tee up governance decisions that reject “permissionless innovation” when the SynBio or other new application is duplicative, ineffective, or harmful but that equally reject laissez-faire market primacy when the innovation is what we truly need to solve pressing health, safety, environmental, or other problems.
Notes
- 1.
I also must acknowledge that after contributing to NAS (2009) and writing Finkel (2011), I realized that a prior report (Nelson and Banker 2007) introduced many of the same concepts as SFRA. I was led astray by the report’s title, which began with “Problem Formulation,” and I didn’t appreciate that Nelson and Banker used the word “problem” in exactly the opposite way that EPA does and exactly the way I advocate—to them, the “problem” is the unfulfilled human need that competing technologies profess to supply (not the “problems” the technologies pose), and hence the goal of analysis is to solve that problem in a risk-decreasing manner.
- 2.
In the section below entitled “Risk Assessment Methodologies for SynBio”, I will elaborate on what this question might mean given that both the risk-reducing and risk-increasing attributes of any technology are surrounded by uncertainty. At this point, one can certainly interpret the question to refer to the expected value (mean) risk reduction net of the expected risk increase.
- 3.
I will also elaborate later on the concept of “greater marginal net benefit.” At this point, consider this term as shorthand for a case where a conventional way to produce a material has positive net benefit (reduces risks more strongly than it imposes them), but where a SynBio application has even greater net risk-reducing potential. But the term also applies to cases where the SynBio application has negative net benefit, but could replace a conventional application whose net benefit profile is even more strongly negative (a “lesser of two evils” case).
- 4.
Here as well I will elaborate below about the distinction between “ways to produce a material” and “ways to fulfill a function.”
- 5.
Of course, this insensitivity to probability can work in the reverse direction: stating or implying that if an outcome is highly unlikely, it cannot transpire.
- 6.
For one of many examples of a vague claim of massive harm, see Bunting (2007): “Creating fantastic bacteria in a contained laboratory is one thing, but what happens when they get out and cross with their wild cousins, mutating into organisms we had never foreseen?” For one of many examples of a vague claim of massive benefit, see Hylton (2012), quoting Craig Venter as saying that “Agriculture as we know it needs to disappear. We can design better and healthier proteins than we get from nature.”
- 7.
Here I deliberately broaden Kaplan and Garrick’s first principle (theirs was “What can go wrong?”) to incorporate the notion that any policy, product, or activity can have harmful effects, salutary effects, or both.
- 8.
This formulation is the obverse of how CBA is usually described—namely, as the benefits of controlling some risk or hazard less the economic costs of controlling it. Here, I deliberately reversed the description to make it more apt for a SynBio decision problem, where we might be comparing the risks posed by a product against the “savings” we obtain by not controlling it.
- 9.
More generally, if society is contemplating some controls on the SynBio application, then the net benefit of allowing the application to proceed, with controls, would be [R↓ − R↑* − C], where C is the economic cost of the controls. Presumably, in this case, the R↑* term would be smaller than R↑ alone, because the effect of the controls would be to decrease the untoward risks of the application. So depending on the relationship between C and R↑*, the net benefit of [approving with controls] could be larger or smaller than the “unfettered net benefit” estimate in the main text above.
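As a minimal sketch of the footnote's algebra (all four quantities below are invented placeholders, not estimates for any real application), the two net-benefit expressions can be compared directly:

```python
# Illustrative-only numbers for the footnote's algebra:
#   R_down:     risk reduced by the SynBio application
#   R_up:       new risk created if approved without controls
#   R_up_ctrl:  residual new risk if approved with controls (R-up-star in the text)
#   C:          economic cost of the controls
R_down, R_up, R_up_ctrl, C = 100.0, 40.0, 10.0, 25.0

net_unfettered = R_down - R_up               # approve with no controls
net_with_controls = R_down - R_up_ctrl - C   # approve with controls

print(net_unfettered, net_with_controls)  # here: 60.0 vs 65.0, so controls help
```

With these placeholder values the controls pay for themselves (they remove 30 units of new risk at a cost of 25), but if C were instead, say, 40.0, the "unfettered" approval would come out ahead, which is exactly the dependence on C versus R↑* that the footnote describes.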
- 10.
There are, I hasten to add, good reasons not to combine mutually exclusive probabilities in this manner (NAS 1994, Chap. 9); one could certainly highlight rather than obscure the uncertainty in this example by summarizing the risk reduction as “an 80% chance of a reduction of 950,000 cases and a 20% chance of zero reduction.”
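The two presentations the footnote contrasts can be sketched as follows (the 80%/950,000 figures are the footnote's own hypothetical):

```python
# The footnote's example: an 80% chance the application reduces 950,000 cases,
# and a 20% chance it reduces none.
p_works, cases_if_works = 0.8, 950_000

# Collapsed presentation: a single expected-value risk reduction...
expected_reduction = p_works * cases_if_works
print(expected_reduction)  # 760000.0

# ...versus the more transparent presentation the footnote recommends:
print(f"{p_works:.0%} chance of reducing {cases_if_works:,} cases; "
      f"{1 - p_works:.0%} chance of zero reduction")
```

The single number 760,000 is arithmetically correct but describes an outcome that can never occur; the two-branch statement preserves the mutually exclusive possibilities.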
- 11.
There are of course important counterarguments to a policy of disfavoring “me-too” products. Miller (2014) points out that derivative drugs may differ from the prior compound only in that they have fewer adverse side effects or in that they are effective in a different patient subpopulation. In either case, society might benefit from access to both products, although this would be consistent with Angell’s criterion of “better in some important way.”
- 12.
When this chapter was in proof, a research group (Evans et al. 2019) made headlines by publishing a set of findings that pointed to a higher downside risk and a lower efficacy for the Oxitec SynBio mosquito than previously believed. The group studied the aftermath of the release of roughly 50 million transgenic mosquitoes in the city of Jacobina, Brazil, and found that from 10% to 60% of the small groups of insects they genotyped (roughly 10–20 mosquitoes per group) showed a mixed genome—with one or more genes from the Oxitec mosquito having been introduced into the wild-type genome. They assumed that this resulted from a small percentage of the progeny being able to survive to adulthood, contrary to the intent of the lethal gene Oxitec inserted as a fail-safe mechanism, and they also claimed that the total insect population rebounded during the experiment to nearly pre-release levels. Although Evans et al. found that the mixed-genome mosquitoes were no more infective with respect to dengue or Zika than the wild type, they speculated that the surviving insects could have acquired other unfortunate characteristics, such as insecticide resistance. If confirmed, this finding would complement our conclusion in Finkel, Trump et al. (2018a) that the net risk profile of the Oxitec application may be favorable on average but have a negative reasonable-worst-case profile, although it might also change the expected value if Oxitec overestimated the efficacy. However, Oxitec has issued a preliminary response to Evans et al. (Oxitec 2019), claiming that the survival of a few percent of progeny was anticipated and widely disclosed before the field trials, that there is no evidence that any introduced genes conferred any untoward characteristics such as insecticide resistance, and that the eventual rebound of the mosquito population was completely expected because the releases were of limited duration.
I emphasize that ongoing controversy over the risk profile of a SynBio application is instructive, and does not affect whether a solution-focused net risk assessment paradigm is the best way to structure governance decisions.
References
Ackerman, F., & Heinzerling, L. (2005). Priceless: On knowing the price of everything and the value of nothing. New York: The New Press.
Adler, J. H. (2011). Eyes on a climate prize: Rewarding energy innovation to achieve climate stabilization. Harvard Environmental Law Review, 35(1), 1–45.
Aldrich, S. C. (2018). Integrating scenario planning and cost-benefit methods. In G. E. Kaebnick & M. K. Gusmano (Eds.), Governance of emerging technologies: Aligning policy analysis with the public’s values (Hastings Center Report) (Vol. 48(S1), pp. S65–S69). Garrison: The Hastings Center.
Andrews, C. J. (2002). Humble analysis: The practice of joint fact-finding. Westport, CT: Praeger.
Angell, M. (2004). The truth about the drug companies. New York Review of Books, July 15. Available at https://www.nybooks.com/articles/2004/07/15/the-truth-about-the-drug-companies/
Ashford, N. (2000). An innovation-based strategy for a sustainable environment. In J. Hemmelskamp, K. Rennings, & F. Leone (Eds.), Innovation-oriented environmental regulation: Theoretical approach and empirical analysis (pp. 67–107). Heidelberg: Springer Verlag.
Assmuth, T., & Finkel, A. M. (2018). Principles and ideals behind the ‘rationality’ of choices in response to risks. Ms. in review, Joint Research Centre, European Commission.
Avorn, J., & Kesselheim, A. S. (2015). The 21st century cures act: Will it take us back in time? New England Journal of Medicine, 372(26), 2473–2475.
Baker, T. (2009). Bonded import safety warranties. In C. Coglianese, A. Finkel, & D. Zaring (Eds.), Import safety: Regulatory governance in the global economy. Philadelphia: University of Pennsylvania Press.
Bates, M. E., Grieger, K. D., Trump, B. D., Keisler, J. M., Plourde, K. J., & Linkov, I. (2015). Emerging technologies for environmental remediation: Integrating data and judgment. Environmental Science & Technology, 50(1), 349–358.
Bedau, M. A., Parke, E. C., Tangen, U., & Hantsche-Tangen, B. (2009). Social and ethical checkpoints for bottom-up synthetic biology, or protocells. Systems and Synthetic Biology, 3, 65–75.
Brassington, I. (2011). Synthetic biology and public health. Theoretical & Applied Ethics, 1(2), 34–39.
Bunting, M. (2007). Scientists have a new way to reshape nature, but none can predict the cost. The Guardian, Oct. 21, available at https://www.theguardian.com/commentisfree/2007/oct/22/comment.comment
Calabresi, G., & Bobbitt, P. (1978). Tragic choices: The conflicts society confronts in the allocation of tragically scarce resources. In Fels lectures on public policy analysis. W.W. Norton & Co.
Cameron, D. E., Bashor, C. J., & Collins, J. J. (2014). A brief history of synthetic biology. Nature Reviews (Microbiology), 12, 381–390.
Carter, S.R., Rodemeyer, M., Garfinkel, M.S., & Friedman, R.M. (2014). Synthetic biology and the U.S. biotechnology regulatory system: Challenges and options. J. Craig Venter Institute report, available at https://www.jcvi.org/sites/default/files/assets/projects/synthetic-biology-and-the-us-regulatory-system/full-report.pdf
Caruso, D. (2008). Synthetic biology: An overview and recommendations for anticipating and addressing emerging risks. Washington, DC: Center for American Progress.
Coglianese, C. (2012). Regulatory breakdown: The crisis of confidence in U.S. regulation (p. 304). Philadelphia: University of Pennsylvania Press.
Coglianese, C. (2015). Listening· learning· leading: A framework for regulatory excellence. Report to the Alberta Energy Regulator, Penn Program on Regulation, available at https://www.law.upenn.edu/live/files/4946-pprfinalconvenersreport.pdf
Coglianese, C., & Lazer, D. (2003). Management-based regulation: Prescribing private management to achieve public goals. Law and Society Review, 37, 691–730.
Comanor, W. S., & Needleman, J. (2016). The law, economics, and medicine of off-label prescribing. Washington Law Review, 91, 119–146.
Cooke, R. M., Wilson, A. M., Tuomisto, J. T., Morales, O., Tainio, M., & Evans, J. S. (2007). A probabilistic characterization of the relationship between fine particulate matter and mortality: Elicitation of European Experts. Environmental Science & Technology, 41(18), 6598–6605.
Cortez, N. (2014). Regulating disruptive innovation. Berkeley Technology Law Journal, 29(1), 175–228.
Cox, L. A. (2008). What’s wrong with risk matrices? Risk Analysis, 28(2), 497–512.
Cox, L. A. (2012). Confronting deep uncertainties in risk analysis. Risk Analysis, 32(10), 1607–1629.
Cox, L. A., & Popken, D. A. (2014). Quantitative assessment of human MRSA risks from swine. Risk Analysis, 34(9), 1639–1650.
Dana, G. V., Kuiken, T., Rejeski, D., & Snow, A. A. (2012). Four steps to avoid a synthetic-biology disaster. Nature, 483, 29.
Davies, J. C. (2009). Oversight of next generation nanotechnology. Woodrow Wilson International Center for Scholars, PEN #18, April 2009, 39 pp. Available at https://www.nanotechproject.org/process/assets/files/7316/pen-18.pdf
De Jong, E. R. (2013). Regulating Uncertain Risks in an Innovative Society: A liability law perspective. In E. Hilgendorf & J.-P. Günther (Eds.), Robotik und Recht Band I (pp. 163–183). Baden-Baden: Nomos Verlag.
de Lorenzo, V., Prather, K. I., Chen, G. Q., O’Day, E., von Kameke, C., et al. (2018). The power of synthetic biology for bioproduction, remediation, and pollution control: The UN’s Sustainable Development Goals will inevitably require the application of molecular biology and biotechnology on a global scale. EMBO Reports, 19(4), e45658.
Donner, S. D., & Kucharik, C. J. (2008). Corn-based ethanol production compromises goal of reducing nitrogen export by the Mississippi River. Proceedings of the National Academy of Sciences, 105(11), 4513–4518.
Evans, B. R., Kotsakiozi, P., Costa-da-Silva, A. L., Ioshino, R. S., Garziera, L., et al. (2019). Transgenic Aedes aegypti mosquitoes transfer genes into a natural population. Nature Scientific Reports, 9, 13047.
Finkel, A. M. (1995). Towards less misleading comparisons of uncertain risks: The example of aflatoxin and alar. Environmental Health Perspectives, 103(4), 376–385.
Finkel, A. M. (2007). Distinguishing legitimate risk-risk tradeoffs from straw men. Presentation at the Annual Meeting of the Society for Risk Analysis, San Antonio, TX, Dec. 11.
Finkel, A. M. (2008). Protecting people in spite of—or thanks to—the ‘veil of ignorance’, Chapter 17. In R. R. Sharp, G. E. Marchant, & J. A. Grodsky (Eds.), Genomics and environmental regulation: Science, ethics, and law (pp. 290–342). Baltimore: Johns Hopkins Univ. Press.
Finkel, A. M. (2011). Solution-focused risk assessment: A proposal for the fusion of environmental analysis and action. Human and Ecological Risk Assessment, 17(4), 754–787. (and 5 invited responses/commentaries, pp. 788–812).
Finkel, A. M. (2012). Harvesting the ripe fruit: Why is it so hard to be well-informed at the moment of decision?, Chapter 3C. In R. Laxminarayan & M. K. Macauley (Eds.), The value of information: Methodological frontiers and new applications in environment and health (pp. 57–66). Dordrecht: Springer Science & Business Media.
Finkel, A. M. (2016). Risksplaining: A counter-productive cottage industry. Invited presentation to Dow Chemical Co. March 14. Slide presentation available at https://tinyurl.com/finkel-dow-risk-slides
Finkel, A. M. (2018a). I thought you’d never ask: Structuring regulatory decisions to stimulate demand for better science and better economics. Ms. in review, available from author.
Finkel, A. M. (2018b). Demystifying evidence-based policy analysis by revealing hidden value-laden constraints. In G. E. Kaebnick & M. K. Gusmano (Eds.), Governance of emerging technologies: Aligning policy analysis with the public’s values (Hastings Center Report) (Vol. 48(S1), pp. S21–S49). Garrison: The Hastings Center. Available at https://onlinelibrary.wiley.com/doi/epdf/10.1002/hast.818.
Finkel, A. M., & Gray, G. M. (2018). Taking the reins: How decision-makers can stop being hijacked by uncertainty. Environment Systems and Decisions, 38(2), 230–238. https://doi.org/10.1007/s10669-018-9681-x.
Finkel, A. M., & Sullivan, J. W. (2011). A cost-benefit interpretation of the ‘substantially similar’ hurdle in the Congressional Review Act: Can OSHA ever utter the E-word (ergonomics) again? Administrative Law Review, 63(4), 707–784.
Finkel, A. M., Trump, B. D., Bowman, D., & Maynard, A. (2018a). A ‘solution-focused’ comparative risk assessment of conventional and synthetic biology approaches to control mosquitoes carrying the dengue fever virus. Environment Systems and Decisions, 38(2), 177–197.
Finkel, A. M., Deubert, C. R., Lobel, O., Cohen, I. G., & Lynch, H. F. (2018b). The NFL as a workplace: The prospect of applying occupational health and safety law to protect NFL workers. Arizona Law Review, 60, 291–368.
Food and Drug Administration, U.S. (2014). Drugs intended to treat life-threatening and severely-debilitating illnesses. Code of Federal Regulations, Title 21, Part 312.80, revised as of April 1, 2014.
French, C. E., de Mora, K., Joshi, N., Elfick, A., Haseloff, J., & Ajioka, J. (2011). Synthetic biology and the art of biosensor design, Appendix A5. In E. R. Choffnes, D. A. Relman, L. Pray, & Rapporteurs (Eds.), The science and applications of synthetic and systems biology: Workshop summary (pp. 178–201). Washington, DC: Institute of Medicine, National Academy Press.
Friedman, L., Pierre-Louis, K., & Sengupta, S. (2018). The meat question, by the numbers. New York Times, Jan 25.
Friends of the Earth, International Center for Technology Assessment, and ETC Group. (2012). The principles for the oversight of synthetic biology, 20 pp.
Gagne, J. J., & Choudhry, N. K. (2011). How many ‘me-too’ drugs is too many? JAMA, 305(7), 711–712.
Gigerenzer, G. (2006). Out of the frying pan into the fire: Behavioral reactions to terrorist attacks. Risk Analysis, 26, 347–351.
Goldstein, B. D. (2018). Solution-focused risk assessment. Current Opinion in Toxicology, 9, 35–39.
Graham, J. D., & Wiener, J. B. (Eds.). (1997). Risk vs. risk: Tradeoffs in protecting health and the environment. Cambridge, MA: Harvard University Press.
Hammitt, J. K. (1990). Subjective-probability-based scenarios for uncertain input parameters: Stratospheric ozone depletion. Risk Analysis, 10(1), 93–102.
Hayes, K. R., Barry, S., Beebe, N., Dambacher, J. M., De Barro, P., Ferson, S., et al. (2015). Risk assessment for controlling mosquito vectors with engineered nucleases: Sterile male construct: Final report. Hobart: CSIRO Biosecurity Flagship. Available at https://publications.csiro.au/rpr/pub?pid=csiro:EP153254.
Hurlbut, W. B. (2013). St. Francis, Christian love, and the biotechnological future. The New Atlantis: A Journal of Technology and Society, Winter/Spring, 93–100.
Hylton, W. S. (2012). Craig Venter’s bugs might save the world. New York Times, May 30.
Jackson, R. J., et al. (2001). Expression of mouse interleukin-4 by a recombinant ectromelia virus suppresses cytolytic lymphocyte responses and overcomes genetic resistance to mousepox. Journal of Virology, 75, 1205–1210.
Kahn, J. (2011). Synthetic hype: A skeptical view of the promise of synthetic biology. Valparaiso University Law Review, 45(4), 29–46.
Kaplan, S., & Garrick, B. J. (1981). On the quantitative definition of risk. Risk Analysis, 1(1), 11–27.
Kousky, C., Pratt, J., & Zeckhauser, R. J. (2010). Virgin versus experienced risks. In E. Michel-Kerjan & P. Slovic (Eds.), The irrational economist: Making decisions in a dangerous world (pp. 99–106). New York: Public Affairs Press.
Lassey, K. R. (2007). Livestock methane emission: From the individual grazing animal through national inventories to the global methane cycle. Agricultural and Forest Meteorology, 142, 120–132.
Lempert, R. J., & Collins, M. T. (2007). Managing the risk of uncertain threshold responses: Comparison of robust, optimum, and precautionary approaches. Risk Analysis, 27(4), 1009–1026.
Løkke, S. (2006). Chemicals regulation: REACH and innovation. Conference proceedings, 16 pp. Available at: http://www.norlca.man.dtu.dk/-/media/Sites/Norlca_Nordic_Life_Cycle_Association/symposium2006/proceedings/loekke.ashx?la=da
Machado, R. D. (2012). Seeking the right targets: Gene therapy advances in pulmonary arterial hypertension. European Respiratory Journal, 39(2), 235–237.
Mandel, G. N., & Marchant, G. E. (2014). The living regulatory challenges of synthetic biology. Iowa Law Review, 100, 155–200.
Manzi, J. (2014). The new American system. National Affairs, Spring 2014, 3–24.
Marchant, G. E., & Finkel, A. M. (2012). Attempts to forge government-industry partnerships that are neither unduly coercive nor unduly meaningless. Invited presentation at “Soft Law Governance workshop,” Center for Law, Science, and Innovation, Arizona State University, Tempe, AZ, March 5, 2012.
Maynard, A. D., & Finkel, A. M. (2018). Solution-focused risk assessment and emerging technologies. Video (6′49″) available at https://www.youtube.com/watch?v=n7XkdbpYWHk
McCubbins, J. S. N., Endres, A. B., Quinn, L., & Barney, J. N. (2013). Frayed seams in the ‘patchwork quilt’ of American federalism: An empirical analysis of invasive plant species regulation. Environmental Law, 43, 35–81.
Miller, H. I. (2014). Critics of ‘me-too’ drugs need to take a chill pill. Wall Street Journal, January 1.
Moe-Behrens, G. H. G., Davis, R., & Haynes, K. A. (2013). Preparing synthetic biology for the world. Frontiers in Microbiology, 4, 1–10.
Mokhtari, A., Moore, C. M., Yang, H., Jaykus, L.-A., Morales, R., Cates, S. C., & Cowen, P. (2006). Consumer-phase Salmonella enterica serovar enteritidis risk assessment for egg-containing food products. Risk Analysis, 26(3), 753–768.
Montague, P., & Finkel, A. (2007). Two friends debate risk assessment and precaution. Rachel’s Democracy and Health News, No. 920. Available at http://www.rachel.org/?q=en/newsletters/rachels_news/920#Two-Friends-Debate-Risk-Assessment-and-Precaution
Morais, A. R. C., et al. (2015). Chemical and biological-based isoprene production: Green metrics. Catalysis Today, 239, 38–43.
Morgan, M. G., Fischhoff, B., Bostrom, A., & Altman, C. J. (2002). Risk communication: A mental models approach. New York: Cambridge University Press.
National Academy of Sciences. (1983). Risk assessment in the Federal Government: Managing the process. Washington, DC: National Academy Press.
National Academy of Sciences. (1994). Science and judgment in risk assessment. Washington, DC: National Academy Press.
National Academy of Sciences. (1996). Understanding risk: Informing decisions in a democratic society. Washington, DC: National Academy Press.
National Academy of Sciences. (2006). Preventing the forward contamination of Mars. Washington, DC: National Academy Press.
National Academy of Sciences. (2009). Science and decisions: Advancing risk assessment. Washington, DC: National Academy Press.
National Academy of Sciences. (2012). Sustainable development of algal biofuels in the United States. Washington, DC: National Academy Press.
Naujokas, M. F., et al. (2013). The broad scope of health effects from chronic arsenic exposure: Update on a worldwide public health problem. Environmental Health Perspectives, 121(3), 295–302.
Nelson, K. C., & Banker, M. J. (2007). Problem formulation and options assessment handbook: A guide to the PFOA process and how to integrate it into environmental risk assessment of genetically modified organisms. 252 pp., available at https://gmoera.umn.edu/sites/gmoera.umn.edu/files/pfoa_handbook_bw.pdf
Nussbaum, N. J. (2002). Making ‘me-too’ drugs benefit the public. American Journal of Medical Quality, 17(6), 215–217.
Outterson, K. (2014). Testimony to the House Energy and Commerce Committee, Sept 19. Available at https://docs.house.gov/meetings/IF/IF14/20140919/102692/HHRG-113-IF14-Wstate-OuttersonK-20140919.pdf
Oxitec Inc. (2019). Oxitec responds to article entitled “Transgenic Aedes Aegypti Mosquitoes Transfer Genes into a Natural Population.” Website dated September 18, 2019, https://www.oxitec.com/news/oxitec-response-scientific-reports-article. Last accessed 11 Oct 2019.
Paradise, J., & Fitzpatrick, E. (2012). Synthetic biology: Does re-writing nature require re-writing regulation? Penn State Law Review, 117, 53–87.
Perkel, J. M. (2013). Streamlined engineering for synthetic biology. Nature Methods, 10(1), 39–42.
Powers, C. M., Dana, G., Gillespie, P., Gwinn, M. R., Hendren, C. O., Long, T. C., Wang, A., & Davis, J. M. (2012). Comprehensive environmental assessment: A meta-assessment approach. Environmental Science & Technology, 46, 9202–9208.
Presidential Commission for the Study of Bioethical Issues. (2010). New directions: The ethics of synthetic biology and emerging technologies. Washington, DC: PCSBI. Available at https://bioethicsarchive.georgetown.edu/pcsbi/sites/default/files/PCSBI-Synthetic-Biology-Report-12.16.10_0.pdf.
Rampton, S., & Stauber, J. (2002). Trust us, we’re experts: How industry manipulates science and gambles with your future. New York: TarcherPerigee.
Rascoff, S. J., & Revesz, R. L. (2002). The biases of risk tradeoff analysis: Towards parity in environmental and health-and-safety regulation. University of Chicago Law Review, 69, 1763–1836.
Rich, N. (2014). The mammoth cometh, New York Times, March 2 (Sunday magazine, p. MM24).
Rodemeyer, M. (2009). New life, old bottles: Regulating first-generation products of synthetic biology, Woodrow Wilson International Center for Scholars, March 2009, 57 pp. Available at http://www.synbioproject.org/publications/synbio2/
Rooke, J. (2013). Synthetic biology as a source of global health innovation. Systems and Synthetic Biology, 7, 67–72.
Rosner, H. (2018). Palm oil is unavoidable: Can it be sustainable? National Geographic, December 2018, available at https://www.nationalgeographic.com/magazine/2018/12/palm-oil-products-borneo-africa-environment-impact/
Rycroft, R. W., & Kash, D. E. (1992). Technology policy requires picking winners. Economic Development Quarterly, 6(3), 227–240.
Sandman, P. M. (1993). Responding to community outrage: Strategies for effective risk communication. American Industrial Hygiene Association. Available at http://petersandman.com/media/RespondingtoCommunityOutrage.pdf
Schmidt, M., & de Lorenzo, V. (2012). Synthetic constructs in/for the environment: Managing the interplay between natural and engineered biology. FEBS Letters, 586, 2199–2206.
Schwermer, H., De Koeijer, A., Brulisauer, F., & Heim, D. (2007). Comparison of the historic recycling risk for BSE in three European countries by calculating the basic reproduction ratio R0. Risk Analysis, 27(5), 1169–1178.
Shaw, G. B. (1949). From “Back to Methuselah” (Act I; The Serpent says these words to Eve).
Shlyakhter, A. I. (1994). Improved framework for uncertainty analysis: Accounting for unsuspected errors. Risk Analysis, 14, 441–447.
Silver, L. (2007). Scientists push the boundaries of human life. Newsweek, June 3. Available at http://www.newsweek.com/scientists-push-boundaries-human-life-101723
Specter, M. (2012). The mosquito solution. The New Yorker, July 9.
Starr, C. (1969). Social benefit versus technological risk. Science, 165(3899), 1232–1238.
Suskind, R. (2007). The one percent doctrine: Deep inside America’s pursuit of its enemies since 9/11. New York: Simon and Schuster.
SynBioWatch. (2016). Solazyme: Synthetic biology company claimed to be capable of replacing palm oil struggles to stay afloat. Available at http://www.synbiowatch.org/2016/02/solazyme-investigation/
Taxpayers for Common Sense. (2009). Coal: A Long history of subsidies. June 11, available at https://www.taxpayer.net/energy-natural-resources/coal-a-long-history-of-subsidies/
Taylor, M. R. (2006). Regulating the products of nanotechnology: Does FDA have the tools it needs? Woodrow Wilson International Center for Scholars, Project on Emerging Nanotechnology #5, October, 66 pp.
Thierer, A. (2016). Permissionless innovation: The continuing case for comprehensive technological freedom (revised and expanded ed.). Arlington: Mercatus Center at George Mason University. Available at https://www.mercatus.org/system/files/Thierer-Permissionless-revised.pdf
University of Manchester. (2013). An impact analysis of a synthetic palm oil: Outlining a new approach to ethical considerations in the production of high-value chemicals. Available at http://2013.igem.org/wiki/images/9/9c/MANCHESTERIGEMimpactanalysisofsyntheticpalmoil.pdf
University of Nebraska-Lincoln. (2017). Helping reduce methane emissions from livestock. Available at http://2017.igem.org/Team:UNebraska-Lincoln/Description
Venter, J. C. (2013). Life at the speed of light: From the double helix to the dawn of digital life. New York: Penguin Books.
Wagner, W. (2000). The triumph of technology-based standards. University of Illinois Law Review, 83–113.
Wareham, C., & Nardini, C. (2015). Policy on synthetic biology: Deliberation, probability, and the precautionary paradox. Bioethics, 29(2), 118–125.
Willis, H. H., & Florig, H. K. (2002). Potential exposures and risks from beryllium-containing products. Risk Analysis, 22(5), 1019–1033.
Wright, O., Stan, G.-B., & Ellis, T. (2013). Building-in biosafety for synthetic biology. Microbiology, 159, 1221–1235.
Zhang, J. Y., Marris, C., & Rose, N. (2011). The transnational governance of synthetic biology: Scientific uncertainty, cross-borderness and the ‘Art’ of governance. BIOS working paper no. 4, London School of Economics and Political Science, 37 pp. Available at http://openaccess.city.ac.uk/id/eprint/16098/
Acknowledgments
I gratefully acknowledge financial and intellectual support for this chapter from the Alfred P. Sloan Foundation and thank Ben Trump for many helpful suggestions on Table 2.
© 2020 Springer Nature Switzerland AG
Finkel, A.M. (2020). Designing a “Solution-Focused” Governance Paradigm for Synthetic Biology: Toward Improved Risk Assessment and Creative Regulatory Design. In: Trump, B., Cummings, C., Kuzma, J., Linkov, I. (eds) Synthetic Biology 2020: Frontiers in Risk Analysis and Governance. Risk, Systems and Decisions. Springer, Cham. https://doi.org/10.1007/978-3-030-27264-7_9
Print ISBN: 978-3-030-27263-0
Online ISBN: 978-3-030-27264-7