Introduction

Risk is a common metric for public health and environmental decision-making. Scientifically credible risk assessments underpin decisions regarding the potential safety of emerging technologies (National Academy of Sciences National Research Council 1983). Furthermore, individuals who may be exposed to the products of these technologies must understand the exposure and decide whether the potential risks are acceptable. Most environmental exposure decisions involve a low probability of substantial risk, for example, wastewater treatment plant design, construction of barriers to prevent migration of pollutants to drinking water wells, or selection of air pollution control equipment for particulate matter (PM). The difference between these decisions and those involving emerging technologies is that the latter carry much greater uncertainty and must often rely on comparisons to conventional technologies.

The more advanced the technology, the more uncertain the exposure and risk will be. In particular, emerging technological exposures involve the potential for low-probability, high-consequence (“rare”) events, which present special challenges to risk communication (Solomon and Vallero 2016). Scientific and engineering rigor are essential for rare events, as they are for any risk assessment scenario. Certainly, managing the risks presented by rare events requires many of the same fundamental communication elements as any credible risk-based decision analysis. Given the diversity of stakeholders and the unconventional aspects of most rare events, however, a greater understanding and application of numerous other factors are needed, especially psychosocial and ethical factors.

One means of determining whether a substance is handled properly is to determine whether the actions lead to exceedances of a standard or limit that is based on scientific evidence that the exposure and risk introduced by these actions are acceptable. These standards and limits are often based on the amount of product that is released, for example, the concentration in air or water that escapes during a process. However, exposure and risk information for emerging contaminants is often lacking or deficient, so other metrics are needed, for example, the expectation that an action is accompanied by measures to ensure that the action’s risk of producing impurities, for example, genotoxic impurities (GTI), will be “as low as reasonably practicable” (ALARP) (Callis et al. 2010; Teasdale et al. 2013). In engineering and medicine, the size of safety factors increases as certainty decreases (Arnaldi and Muratorio 2013; Falkner and Jaspers 2012; Finkel 1990; Kodell 2005; McNamara et al. 2014; Roca et al. 2017; Shepherd 2009). This measure of risk, then, is an expression of operational success or failure: too much risk means the governance process has failed society. As mentioned, acceptable risk is defined by standards and specifications, often developed by governmental or other certifying authorities. However, acceptable exposure and risk cannot be estimated solely from physical and biological information but must also factor in cultural, social, and communications information; for example, what a person is doing greatly affects the extent and intensity of exposure (Covello 2003; Cummings and Kuzma 2017; Kahan et al. 2009; Kuzma and Tanji 2010; Slovic 1987).

Within the environmental and public health communities, definitions of “acceptable risk” vary widely. The conventional metrics are incorporated into health codes and regulations, zoning and building codes and regulations, design principles, canons of professional engineering and medical practice, national standard-setting bodies, and standards promulgated by international agencies (e.g., ISO, the International Organization for Standardization). In the United States, for example, standards can come from a federal agency, such as the hazardous waste landfill guidelines of the US Environmental Protection Agency, or material specifications for equipment, such as those of the National Institute of Standards and Technology (NIST). Unfortunately, these metrics are often absent or inappropriate for new technologies that differ from their conventional analogs. This raises the question of whether existing disposal guidelines are sufficiently protective to reduce exposures in the myriad scenarios likely to arise when a new technology moves from research to application. Often, emerging technologies follow standards articulated by private groups and associations, which may be so focused on the utility and other benefits of the technology that potential exposure and risk receive comparatively less rigorous, and often inadequate, emphasis (Vallero 2010a).

Background

Risk is generally understood to be the likelihood that an unwelcome event will occur. Risk assessment is the scientific investigation into the factors that lead to a risk. An assessment may be retrospective, that is, to see what damage has occurred, or prospective, that is, to predict risk posed from reasonable present and future risk scenarios. Much of synthetic biological risk assessment is the latter. Risk management follows risk assessment. The dispassionate and objective scientific findings must underpin the decisions needed to reduce adverse outcomes. For example, an assessment may indicate that a synthetic organism poses a risk to human health if it were to escape and reach water supplies, whereas risk management would include the design and installation of building containment structures around a laboratory or test facility.

The hazard, that is, the physical, chemical, or biological agent of harm, is matched against the receptor’s contact with that hazard, that is, the exposure. The types of receptors range in scale and complexity, for example, the exposed receptor may be:

  • An individual organism, for example, a human or other species

  • A subpopulation, for example, asthmatic children or endangered plant species in a habitat

  • An entire population, for example, all persons in a city or nation or the world

  • A macro-system, for example, a forest ecosystem

Hazard is an inherent trait. Thus, the hazard may occur before a waste is generated, such as a component of a manufacturing process. For example, if trichloroethylene (TCE) is used as a solvent in a chemical processing plant, it may be hazardous to the workers because it is carcinogenic. It may also be hazardous if it finds its way to a landfill (in drums or in contaminated sawdust after a cleanup).

The second component of the risk assessment is the potential of exposure to the hazard. Most of the exposure science literature to date has addressed chemical hazards, but given the focus of this book, this chapter must also address exposure to biological agents, especially genetically modified organisms (GMOs) and products of synthetic biology. Organisms engineered from synthetic biology may contain genes/genomes that are derived de novo, possibly without naturally occurring homologs. Any biological agent, that is, natural, genetically modified, or synthetic, may have inherent properties that render it hazardous, for example, production of exotoxins or infection of higher organisms. The uncertainties are further increased when chemical and biological hazards are combined, for example, the use of solvents and biological materials in the synthesis phases.

In the previous TCE/biological agent example, people can come into contact with the solvent in occupational settings and with the organism during environmental (e.g., escape) and use (e.g., drinking water) scenarios. Thus, the exposure to TCE varies by activity (high for workers who use it, less for workers who do not work with TCE but are nearby and breathe the vapors, and even less for other workers). The exposure to the biological agent is zero if it is completely contained and potentially expansive if not. Also, worker exposure is commonly based on a 5-day workweek (e.g., 8 or 10 hours per day), whereas environmental exposures, especially for chronic diseases like cancer, are based on lifetime, 24-hour per day exposures. Thus, environmental regulations are often more stringent than occupational regulations when aimed at reducing exposure to a substance.

Risk assessment requires a sound physical, chemical, and biological characterization of the hazard; a consideration of changes to the agent in time and space; and an understanding of how the agent may act synergistically or antagonistically with abiotic and biotic components of the system to which it is introduced. To assess a given scenario, the severity of the effect and the likelihood that it will occur in that scenario are calculated. This combination of severity and likelihood for the hazard particular to that scenario constitutes the risk.

The relationship between the severity and probability of a risk follows a general equation (Doblhoff-Dier et al. 2000):

$$ R=f\left(S,P\right) $$
(1)

where risk (R) is a function (f) of the severity (S) and the probability (P) of harm. The risk equation can be simplified to be a product of severity and probability:

$$ R=S\times P $$
(2)
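To make Eq. 2 concrete, the short sketch below multiplies a severity score by a probability and bins the result into qualitative classes. The 0–4 severity scale, the probability value, and the class cutoffs are illustrative assumptions for this chapter, not values from the cited sources.

```python
# Minimal sketch of Eq. 2 (R = S x P). The severity scale, probability value,
# and class cutoffs below are illustrative assumptions only.

def risk_score(severity: float, probability: float) -> float:
    """Return the risk score R = S x P."""
    return severity * probability

def classify(r: float) -> str:
    """Map a risk score to a qualitative bin (hypothetical cutoffs)."""
    if r < 0.01:
        return "negligible"
    if r < 0.1:
        return "low"
    if r < 1.0:
        return "moderate"
    return "high"

# Example: a severe outcome (S = 3 on an assumed 0-4 scale) that is very
# unlikely (P = 1e-4) yields a small risk score.
r = risk_score(severity=3, probability=1e-4)
print(f"R = {r}, class = {classify(r)}")
```

The same structure also shows why a low-probability, high-consequence (“rare”) event can score low on such a calculation even though its consequences would be unacceptable, which is one reason rare events require the additional considerations discussed above.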

The traditional chemical risk assessment paradigm (see Fig. 1) is generally a stepwise process. It begins with the identification of a hazard, which summarizes an agent’s physicochemical properties and routes and patterns of exposure and reviews toxic effects. The tools for hazard identification take into account the chemical structures that are associated with toxicity, metabolic and pharmacokinetic properties, short-term animal and cell tests, long-term animal (in vivo) testing, and human studies (e.g., epidemiology, such as longitudinal and case-control studies). These comprise the core components of hazard identification; however, additional hazard identification methods have emerged that provide improved reliability of characterization and prediction.

Fig. 1
figure 1

Risk assessment and management paradigm as employed by environmental agencies in the United States. The inner circle includes the steps recommended by the National Research Council. The outer circle indicates the research and assessment activities that are currently used by regulatory agencies to meet these required steps. (Source: National Academy of Sciences National Research Council 1983)

Characterizing the inherent properties of an individual constituent used in a process is the first step in risk assessment. A number of tools have emerged to assist in this characterization. Risk assessors now can apply biomarkers of genetic damage (i.e., toxicogenomics) for more immediate assessments, as well as improved structure-activity relationships (SAR), which have incrementally been quantified in terms of stereochemistry and other chemical descriptions, that is, using quantitative structure-activity relationships (QSARs) and computational chemistry. There are fewer tools available for biological agents, but incorporating quantitative microbial risk assessment into a life cycle analysis (LCA) is promising (Harder et al. 2015). Health-effects research has mainly focused on early indicators of outcomes, making it possible to shorten the time between exposure and observation of an adverse outcome (National Academy of Science National Research Council 2002).

Microbes

Emerging technologies often generate nonchemical hazards. Notably, biological and infectious wastes present hazards from biological agents that differ from those posed by chemical-laden wastes. Of course, biological agents range from beneficial to extremely dangerous. The risks from microbes can be categorized. For genetically modified organisms, the categories are (Doblhoff-Dier et al. 2000):

  1. Risk class 1. No adverse effect or very unlikely to produce an adverse effect. Organisms in this class are considered to be safe.

  2. Risk class 2. Adverse effects are possible but are unlikely to represent a serious hazard with respect to the value to be protected. Local adverse effects are possible, which can either revert spontaneously (e.g., owing to environmental elasticity and resilience) or be controlled by available treatment or preventive measures. Spread beyond the application area is highly unlikely.

  3. Risk class 3. Serious adverse local effects are likely with respect to the value to be protected, but spread beyond the area of application is unlikely. Treatment and/or preventive measures are available.

  4. Risk class 4. Serious adverse effects are to be expected with respect to the value to be protected, both locally and outside the area of application. No treatment or preventive measures are available.

Future products of biotechnology also vary by novelty and complexity, that is, extent and method of genetic modification (e.g., transgenic or metagenome engineering), scale of impact, and comparators. As such, the National Academies of Sciences, Engineering, and Medicine has classified these products accordingly (National Academies of Sciences, Engineering, and Medicine 2017):

  A. Organisms domesticated by transgenic/recombinant DNA, engineered along one or only a few gene pathways, and which have ample comparators

  B. Undomesticated and domesticated organisms produced by transgenesis involving new genome engineering along multiple pathways and which have few or no comparators

  C. Many candidate organisms generated by genome engineering and gene drives via genome refactoring, recoding, and cell-free synthesis and which have few or no comparators

  D. Synthetic communities of microbes and individual synthetic, multicellular plants and animals generated by metagenome and microbiome engineering in a population or ecosystem and which have no or merely ambiguous comparators.

These classes indicate that even the safest microbes carry some risk, with uncertainty increasing with the extent of synthesis. With more uncertainty about an organism, one cannot assume it to be safe, especially for synthetic protocells and larger organisms about which little is known, that is, Novelty Class D. The risks may be direct or indirect. An example of a direct risk would be the likelihood of a person contracting a pathogenic disease, whereas an indirect risk example is a change induced by the release of an organism into an environment where it has no natural predators, allowing it to displace native organisms. Thus, risk scenarios include not only the effects resulting from the intended purpose of the environmental application but also downstream and side effects that are not part of the desired purpose. For example, the European Union (EU) requires that a synthetic biology risk assessment define the “exposure chain,” that is, the events leading to the adverse health or environmental outcome (Scientific Committee on Health and Environmental Risks 2015).

As mentioned, the large uncertainties associated with emerging technologies and synthetic biology call for conservative science, that is, treating the potential hazards and exposure as risk class 4. An impact could be widespread and irreversible. The nature of emerging entities is that we cannot know with even a modicum of confidence the extent and effectiveness of any existing treatment or preventive measure. Risk can be extrapolated from available knowledge of chemical or biological agents with similar characteristics or to yet untested but similar environmental conditions (e.g., a field study’s results in one type of field extrapolated to a different agricultural or environmental remediation setting). In chemical hazard identification, this is accomplished by structure-activity relationships.

Complex Mixtures

Organisms are seldom exposed to a single hazard but are rather continuously exposed to complex mixtures. Until recently, toxicologists have considered a complex mixture to be a combination of two or more chemicals (Carpenter et al. 2002). From an exposure perspective, a mixture is actually a co-exposure: humans and ecosystems are exposed to an array of compounds simultaneously (Kortenkamp et al. 2009). A key question is how do individual constituents’ physical and chemical properties affect those of other chemical and biological constituents used during biological synthesis? The additive, synergistic, and antagonistic effects must be considered. Until relatively recently, toxicologists studied mixtures in a stepwise manner, adding substances one at a time to ascertain the response of an organism with each iteration (Feron and Groten 2002). More recently, toxicologists and exposure scientists have begun to look at multicomponent mixtures from a systems perspective.
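Where constituents are assumed to act additively, one common screening approach is a hazard index that sums each constituent’s exposure relative to its acceptable level. The sketch below illustrates the idea; the constituents and reference values are hypothetical placeholders, and the calculation deliberately ignores the synergistic and antagonistic interactions noted above.

```python
# Screening-level treatment of a mixture assuming dose additivity:
# hazard index HI = sum(exposure_i / reference_dose_i).
# Constituent names and values are hypothetical placeholders.

exposures = {           # estimated daily doses, mg/kg-day
    "solvent_A": 0.002,
    "solvent_B": 0.0005,
    "byproduct_C": 0.0001,
}
reference_doses = {     # acceptable daily doses, mg/kg-day
    "solvent_A": 0.01,
    "solvent_B": 0.004,
    "byproduct_C": 0.0002,
}

hazard_index = sum(exposures[c] / reference_doses[c] for c in exposures)
print(f"Hazard index = {hazard_index:.2f}")  # values near or above 1 flag the mixture for closer study
```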

Synthetic biology further complicates the concept of “mixtures”: as mentioned, the complex series of steps in synthetic biology can result in exposures to mixtures that contain both biological and chemical agents.

Exposure Probability

Following the hazard identification process for a chemical or a natural or synthetic microbe according to its inherent properties, the environmental conditions are examined to characterize different responses to doses in different populations. Both the hazard identification and dose-response information are based on research that is used in the risk analysis. For microbes, the highest score for any one effect determines the overall risk class for environmental application. In addition, the exposure estimate is the sum of all the exposures, that is, the evaluation of the likelihood of the occurrence of each potentially adverse outcome (Scientific Committee on Health and Environmental Risks 2015).

The factors leading to the exposure probability include the release, replication, dispersion, and ultimate contact with the microbe and other contaminants produced during and after the synthesis. The release may be intentional, for example, use of the product during medical, veterinary, agricultural, or consumer activities, or unintentional, for example, during laboratory studies and manufacturing.

Managing exposures to biological wastes (and any waste for that matter) must consider protecting the most vulnerable members of society, especially pregnant women and their yet-to-be-born infants, neonates, and immunocompromised subpopulations. Also, the exposure protections vary by threat. For example, adolescents may be particularly vulnerable to hormonally active agents, including many pesticides.

In the United States, ecological exposure and risk assessment paradigms have differed from those applied to human health risk. The ecological risk assessment framework (see Fig. 2) is based mainly on characterizing exposure and ecological effects. Both exposure and effects are considered during problem formulation (US Environmental Protection Agency 1992).

Fig. 2
figure 2

Framework for integrated human health and ecological risk assessment. (Sources: US Environmental Protection Agency 1992; World Health Organization 2000)

Interestingly, the ecological risk framework is driving current thinking in human risk assessment. The process shown in the inner circle of Fig. 1 does not target the technical analysis of risk so much as it provides coherence and connections between risk assessment and risk management. When scientific assessment and management are carried out simultaneously, decision-making could be influenced by the need for immediacy, convenience, or other political and financial motivations. The advantage of an arms-length, bifurcated approach is that decisions and management of risks are based on a rational and scientifically credible assessment (Loehr et al. 1992; Ruckelshaus 1983).

In both human health and ecological assessments , the final step is “characterization,” that is, integrating the “quantitative and qualitative elements of risk analysis and of the scientific uncertainties in it” (National Academy of Sciences National Research Council 2009). The problem formulation step in the ecological framework has the advantage of providing an analytic-deliberative process early on, since it combines sound science with input from various stakeholders inside and outside of the scientific community.

The ecological risk framework calls for the characterization of ecological effects instead of hazard identification used in human health risk assessments. This is because the term “hazard” has been used in chemical risk assessments to connote either intrinsic effects of a stressor or a margin of safety by comparing a health effect with an estimate of exposure concentration. Thus, the term becomes ambiguous when applied to nonchemical hazards, such as those encountered in biological systems. Specific scientific investigations will often be needed to augment existing assessment methods, especially when adverse outcomes may be substantial and small changes may lead to very different functions and behaviors from unknown and insufficiently known chemicals or microbes. For example, a genetically modified microbe (GMM) may have only been used in highly controlled experiments with little or no information about how it would behave inside another organism. Often, the proponents of a product will conduct substantial research on the benefits and operational aspects of the chemical constituents, but the regulatory agencies and the public may call for more and better information about unintended and yet-to-be-understood consequences and side effects (Doblhoff-Dier et al. 2000).

Even when a GMM is well studied, there often remain large knowledge gaps when trying to estimate environmental impacts. The bacterium Bacillus thuringiensis , for instance, has been applied for several decades as a biological alternative to some chemical pesticides. It has been quite effective when sprayed onto cornfields to eliminate the European corn borer. The current state of knowledge indicates that this bacterium is not specific in the organisms that it targets. What if in the process, B. thuringiensis (Bt) also kills honeybees? Obviously, this would be a side effect that would not be tolerable from either an ecological or agricultural perspective (the same corn crop being protected from the borer needs the pollinators). Furthermore, physical, chemical, and biological factors can influence these effects, for example, type of application of Bt can influence the amount of drift toward nontarget species. Downstream effects can be even more difficult to predict than side effects, since they not only occur within variable space but also in variable time regimes. For example, exposure potential can arise from both the application method and from the buildup of toxic materials and gene flow following the use of a GMM.

Dosimetry and Exposure Calculation

The typical routes of exposure are by inhalation or ingestion or through the skin (Dionisio et al. 2015; Jones-Otazo et al. 2005; Weschler and Nazaroff 2012). Humans and other organisms can also come into contact with synthetic organisms or other substances generated during the life cycle of an emerging technology, for example, nanomaterials or chemicals, through several exposure routes simultaneously. Inhalation is the most likely route for human exposure when a substance reaches the air, which can occur during manufacturing processes, consumer use, and other scenarios involving synthetically derived substances. Airborne exposures do not always involve the respiratory system; for example, a substance may pass from the nose to the brain, or airborne contaminants may penetrate the skin via the dermal route (Genter et al. 2015; Maheshwari et al. 2019; Schiffman et al. 1995). Likewise, waterborne substances may involve exposure routes other than ingestion, for example, inhalation of volatile substances during showering and cooking (Northcross et al. 2015; Zhang et al. 2018).

Emerging technologies may also generate aerosols, which may be living, for example, a modified cell or GMM, or nonliving, for example, an engineered nanoparticle (NP). Numerous synthetic biology processes can produce aerosols (Scientific Committee on Health and Environmental Risks 2015). In addition to atmospheric concentrations, exposure calculations must also account for the scale and extent of the activity, the concentrations of the substance of concern in reactors and other vessels, the production volume (cultures, supernatants, etc.), the industrial use or other type of setting, and the kind of biological processes used during synthesis (e.g., in vivo or in vitro).

Identifying potential hazards is the first step in risk assessment. Sometimes the physicochemical structure of a substance can provide clues to potential hazards. If an unknown compound is similar to better known substances, structure-activity relationships based on statistical and mathematical modeling can be used as a first step in screening for hazard and exposure (Lagunin et al. 2011; Liu and Gramatica 2007; Roy and Mitra 2012; U.S. Environmental Protection Agency 2015; Vilar et al. 2008). For example, partitioning coefficients, such as the octanol-water partition coefficient (Kow) of known compounds, can be used to estimate and model the chemical and biological behavior of lesser known substances (Kimura et al. 1996). However, the greater the divergence from the known to the unknown, the less reliable such chemometric methods, for example, QSARs, become. For synthetic biology, there are little or no reliable data and information available for even crude QSARs. Data are also limited for other emerging technologies, for example, genetic engineering and nanomaterials, although their databases are much larger and more reliable (Tropsha et al. 2017; Winkler et al. 2015). Often, preliminary or screening toxicity data may be available for a substance, but potential uses and exposures are almost completely uncertain. Regulatory programs are beginning to identify and categorize substances according to potential toxicity and potential exposure. Notable examples include REACH in Europe (Kortelainen 2015), exposure-based prioritization in the United States (Egeghy et al. 2011), and rapid exposure and dosimetry in North America (Barber et al. 2017; Dionisio et al. 2015; Egeghy et al. 2016; Wambaugh et al. 2014). Unfortunately, these almost exclusively address chemicals.
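As an illustration of this kind of chemometric screening, the sketch below estimates a fish bioconcentration factor (BCF) from log Kow using a simple log-linear regression. The coefficients are of the form commonly reported in the QSAR literature but should be treated here as illustrative assumptions; as noted above, such extrapolations become less reliable as the substance diverges from the compounds used to fit the model.

```python
# Screening sketch: use the octanol-water partition coefficient (Kow) to make a
# first-pass estimate of bioconcentration potential.
# The regression log10(BCF) = 0.85 * log10(Kow) - 0.70 is of the form commonly
# used for fish BCF correlations; treat the coefficients as illustrative.

def screen_bcf(log_kow: float) -> float:
    """Return a screening-level bioconcentration factor estimate."""
    return 10 ** (0.85 * log_kow - 0.70)

for log_kow in (2.0, 4.0, 6.0):
    print(f"log Kow = {log_kow}: estimated BCF ~ {screen_bcf(log_kow):,.0f}")
```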

Most of the exposure knowledgebase consists of chemical compounds, aerosols, and pathogenic microbes. To demonstrate the steps involved in human exposure, this chapter will focus on chemical exposure routes and pathways generally and aerosols specifically. However, it is important to keep in mind that synbio and other emerging technologies may produce substances and organisms that do not follow every concept discussed here. We will also focus on the inhalation route and air pathway.

Much can happen internally after substances are absorbed. The mass at the interface between the organism and the environment, for example, the breathing zone, is merely the potential dose. The applied dose occurs once the chemical crosses the interface. The dose experienced by different organs and tissues within the body is the focus of toxicokinetics (TK) studies. TK models have been developed to predict the chemical’s internal fate, which begins with absorption, followed by distribution, metabolism, and elimination (ADME). Therefore, the uptake into an organism begins with the absorbed dose. Exposure is completed at the biologically effective or target dose, that is, when the aerosol or its metabolic products reach the organ/tissue that is the site of effect/outcome, for example, the liver for a hepatotoxin and the brain for a neurotoxin. The amount of the parent compound and its metabolites remaining in the organs and tissues is known as the chemical body burden. Any damage that results from this exposure falls in the realm of effects. For example, an exposure biomarker would show that the xenobiotic has hit the target (e.g., release of a liver enzyme), whereas an effects biomarker would show liver damage (perhaps a different liver enzyme, or the same enzyme, but at higher concentrations to indicate hepatotoxicity).
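A minimal toxicokinetic sketch of this ADME chain is shown below, assuming first-order absorption from the exposure interface and first-order elimination from a single, well-mixed body compartment. The rate constants, absorbed fraction, and dose are hypothetical; distribution and metabolism are lumped into the single compartment purely for illustration.

```python
# One-compartment toxicokinetic sketch: first-order absorption followed by
# first-order elimination. All parameter values are hypothetical.

def body_burden(dose_mg: float, f_abs: float, ka: float, ke: float,
                hours: float, dt: float = 0.01):
    """Return a time series of the mass in the body (body burden) over time."""
    site = dose_mg * f_abs   # mass available for uptake at the interface
    body = 0.0               # mass in the central compartment (body burden)
    series = []
    t = 0.0
    while t <= hours:
        absorbed = ka * site * dt      # first-order uptake across the interface
        eliminated = ke * body * dt    # first-order elimination (metabolism + excretion)
        site -= absorbed
        body += absorbed - eliminated
        series.append((t, body))
        t += dt
    return series

profile = body_burden(dose_mg=10.0, f_abs=0.5, ka=1.0, ke=0.1, hours=24)
peak_t, peak_mass = max(profile, key=lambda p: p[1])
print(f"Peak body burden ~ {peak_mass:.2f} mg at ~ {peak_t:.1f} h")
```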

Aerosol size and shape determine the rate and extent of exposure. The differences between the dosimetry of nanoscale and bulk materials are not well understood. Measuring the hazard of a chemical substance is difficult in part because the applied dose will not be the same as the absorbed and biologically effective doses, given the losses to container walls, dissolution, aggregation, and other mechanisms that may be much more important for nanoscale materials but also much more difficult to quantify at the nanoscale (Ivask et al. 2018; Lead et al. 2018; Sekine et al. 2015).

The human respiratory tract can be divided into three regions, that is, the extrathoracic, tracheobronchial, and alveolar (see Fig. 3). The extrathoracic region consists of airways within the head, that is, nasal and oral passages, through the larynx and represents the areas through which inhaled air first passes. From there, the air enters the tracheobronchial region at the trachea. From the level of the trachea, the conducting airways then undergo dichotomous branching for a number of generations. The terminal bronchiole is the most peripheral of the distal conducting airways and leads to alveolar region where gas exchange occurs in a complex of respiratory bronchioles, alveolar ducts, alveolar sacs, and alveoli. Except for the trachea and parts of the mainstem bronchi, the airways surrounded by parenchymal tissue are composed mainly of the alveolated structures and blood and lymphatic vessels. The respiratory tract regions are made up of various cell types, and the distribution of cells that line the airway surfaces has different anatomical qualities in the three regions (EPA 2004).

Fig. 3
figure 3

Anatomy of the human respiratory tract . (Source: EPA 2004)

The first exposure characterization of a particle is its size and shape, because the way that a particle of any size behaves in the lung depends on the aerodynamic characteristics of particles in flow streams. In contrast, the major factor for gases is the solubility of the gaseous molecules in the linings of the different regions of the respiratory system (see Fig. 3). However, given the size of nanoparticles, they may behave at times as an aerosol and other times as a gas.

The deposition of particles in different regions of the respiratory system depends on their size. The nasal openings permit very large dust particles to enter the nasal region, along with much finer airborne particulate matter. Air pollution scientists and engineers consider particulate matter (PM) in the same size range as engineered nanoparticles, that is, PM with aerodynamic diameters of less than 100 nm (the upper size range of nanoparticles), to be ultrafine PM. For example, drug delivery research applying synthetic biology may involve the engineering of nanoparticles, as well as the unintentional release of variously sized PM, ranging from ultrafine aerosols to coarse particles, for example, those with aerodynamic diameters larger than 2.5 microns (i.e., larger than PM2.5).

Coarse particles deposit in the nasal region by impaction on the hairs of the nose or at the bends of the nasal passages (Fig. 4). Smaller particles pass through the nasal region and are deposited in the tracheobronchial and pulmonary regions. Particles are removed from the airflow by impacts with the walls of the bronchi when they are unable to follow the gaseous streamline flow through subsequent bifurcations of the bronchial tree. As the airflow decreases near the terminal bronchi, the smallest particles are removed by Brownian motion, which pushes them to the alveolar membrane (Vallero 2014).

Fig. 4
figure 4

Particle deposition as a function of particle diameter in various regions of the lung, from nanoparticles (10–100 nm) to coarse particles (>10 μm). The nasopharyngeal region consists of the nose and throat; the tracheobronchial (T-bronchial) region consists of the windpipe and large airways; and the pulmonary region consists of the small bronchi and the alveolar sacs. (Source: International Commission on Radiological Protection Task Force on Lung Dynamics and Task Group on Lung Dynamics 1966)

The aerodynamic properties of particles are determined not only by size but also by their shape and density. The behavior of a chain-type particle or fiber may also depend on its orientation to the direction of flow. Thus, another variable introduced by synthetic biology is uniquely shaped PM. Morphology and size are important factors in aerosol exposure, including for nanoparticles, but other factors will be more critical for products generated by other emerging technologies like synthetic biology, in which novel biological functions are likely to lead to toxicity, exposure, and risk (Pauwels et al. 2013). For example, a synthetic organism may have traits that allow it to go undetected by immune cells or have unprecedented toxicokinetics and dynamics after being taken up by an organism (Scientific Committee on Health and Environmental Risks 2015).

The highly complex mechanisms controlling the inhaled particle behavior also control the extent to which an aerosol is eliminated from the body. The walls of the nasal and tracheobronchial regions are coated with a mucous fluid. The tracheobronchial walls have fiber cilia which sweep the mucous fluid upward, transporting particles to the top of the trachea, where they are swallowed. This mechanism is often referred to as the mucociliary escalator. In the pulmonary region of the respiratory system, foreign particles can move across the epithelial lining of the alveolar sac to the lymph or blood systems, or they may be engulfed by scavenger cells called alveolar macrophages . The macrophages can move to the mucociliary escalator for removal. For gases, solubility controls removal from the airstream. Highly soluble gases such as SO2 are absorbed in the upper airways, whereas less soluble gases such as NO2 and ozone (O3) may penetrate to the pulmonary region. Irritant gases are thought to stimulate neuroreceptors in the respiratory walls and cause a variety of responses, including sneezing, coughing, bronchoconstriction, and rapid, shallow breathing. The dissolved gas may be eliminated by biochemical processes or may diffuse to the circulatory system (Vallero 2008).

Since the location of particle deposition in the lungs is a function of aerodynamic diameter and density, changing the characteristics of aerosols can greatly affect their likelihood of eliciting an effect. Larger particles (>5 μm) tend to deposit before reaching the lungs, especially being captured by ciliated cells that line the upper airway. Moderately sized particles (1–5 μm) are more likely to deposit in the central and peripheral airways and in the alveoli but are often scavenged by macrophages. Particles with an aerodynamic diameter less than 1 μm remain suspended in air and will be exhaled if they do not adhere to lung tissue. Thus, smaller aerosols that do deposit will do so deeply in the lung.
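The size dependence described above can be summarized as a rough look-up, as in the sketch below. The breakpoints simply restate the approximate ranges in the preceding paragraph; a real deposition model would also need particle density, shape, breathing pattern, and airway geometry.

```python
# Heuristic look-up of dominant particle fate by aerodynamic diameter,
# using the approximate size breakpoints discussed above.
# Not a validated deposition model.

def dominant_fate(diameter_um: float) -> str:
    if diameter_um > 5.0:
        return "deposits before the lung (upper airway capture)"
    if diameter_um >= 1.0:
        return "central/peripheral airways and alveoli; often scavenged by macrophages"
    if diameter_um >= 0.1:
        return "may stay suspended and be exhaled unless it adheres to lung tissue"
    return "ultrafine/nanoscale: diffusion-dominated, can deposit deep in the lung"

for d in (10.0, 2.5, 0.5, 0.05):   # aerodynamic diameters in micrometers
    print(f"{d:>5} um: {dominant_fate(d)}")
```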

Inhaled NPs may alter the lung tissue, changing the respiratory system either directly (e.g., airway inflammation) or indirectly (e.g., by altering its immune response). Susceptibility to air pollutants differs among individuals, as exemplified by several diseases and conditions (e.g., asthma), but the fluid dynamics are the same, that is, disruption of the movement of air into the lungs to provide oxygen.

The motion of air and gases in the respiratory system follows fundamental fluid dynamics theory (Isaacs et al. 2012; European Union 2015). The motion of these fluids is governed by the conservation of mass (continuity) equation and the conservation of momentum (Navier-Stokes) equation. Under most conditions, the flow of air in the respiratory airways is assumed to be incompressible. For incompressible flow, the continuity equation is expressed as (Grotberg 2011):

$$ \nabla \cdot V=0 $$
(3)

and the momentum (Navier-Stokes) equation is:

$$ \rho \left[\frac{\partial V}{\partial t}+\left(V\cdot \nabla \right)V\right]=\rho f-\nabla p+\mu {\nabla}^2V $$
(4)

where ∇ is the gradient operator; ∇2 is the Laplacian operator; V is velocity; ρ is fluid density; μ is absolute fluid viscosity; p is the hydrodynamic pressure; and f is a volumetric force that is applied externally, for example, gravity.

For cylindrical profiles like bronchi, the gradient operator ∇ can be expressed in cylindrical coordinates:

$$ \nabla ={\hat{e}}_r\frac{\partial }{\partial r}+{\hat{e}}_{\theta}\frac{1}{r}\frac{\partial }{\partial \theta }+{\hat{e}}_z\frac{\partial }{\partial z} $$
(5)

Thus, the continuity equation can also be expressed cylindrically:

$$ \frac{1}{r}\frac{\partial }{\partial r}\left(r{V}_r\right)+\frac{1}{r}\frac{\partial }{\partial \theta }{V}_{\theta }+\frac{\partial }{\partial z}{V}_z=0 $$
(6)

where Vr, Vθ and Vz are the components of the fluid velocity, which are depicted in Fig. 5, that is, radial (r), circumferential (θ), and axial (z) directions, respectively. Thus, the momentum equations in these directions can be expressed as:

$$ \frac{\partial {V}_r}{\partial t}+\left(\boldsymbol{V}\cdot \nabla \right){V}_r-\frac{1}{r}{V}_{\theta}^2=-\frac{1}{\rho}\frac{\partial p}{\partial r}+{f}_r+\frac{\mu }{\rho}\left({\nabla}^2{V}_r-\frac{V_r}{r^2}-\frac{2}{r^2}\frac{\partial {V}_{\theta }}{\partial \theta}\right) $$
(7)
$$ \frac{\partial {V}_{\theta }}{\partial t}+\left(\boldsymbol{V}\cdot \nabla \right){V}_{\theta }+\frac{{\mathrm{V}}_r{\mathrm{V}}_{\theta }}{r}=-\frac{1}{\rho r}\frac{\partial p}{\partial \theta }+{f}_{\theta }+\frac{\mu }{\rho}\left({\nabla}^2{V}_{\theta }-\frac{V_{\theta }}{r^2}+\frac{2}{r^2}\frac{\partial {V}_r}{\partial \theta}\right) $$
(8)
$$ \frac{\partial {V}_z}{\partial t}+\left(\boldsymbol{V}\cdot \nabla \right){V}_z=-\frac{1}{\rho}\frac{\partial p}{\partial z}+{f}_z+\frac{\mu }{\rho }{\nabla}^2{V}_z $$
(9)

where:

$$ \boldsymbol{V}\cdot \nabla ={V}_r\frac{\partial }{\partial r}+\frac{1}{r}{V}_{\theta}\frac{\partial }{\partial \theta }+{V}_z\frac{\partial }{\partial z} $$
(10)
Fig. 5
figure 5

Coordinate system for an ideal cylindrical airway, depicting velocity components at an arbitrary point. (Source: Vallero 2014; adapted from Isaacs et al. 2012)

The first terms (i.e., time derivatives) in these three equations can be ignored under steady-state conditions. The Laplacian operator can be defined in cylindrical airways as:

$$ {\nabla}^2=\frac{1}{r}\frac{\partial }{\partial r}\left(r\frac{\partial }{\partial r}\right)+\frac{1}{r^2}\frac{\partial^2}{\partial {\theta}^2}+\frac{\partial^2}{\partial {z}^2} $$
(11)

Airway velocities are complicated by numerous factors, including lung and other tissue morphologies and the airway generations, that is, the levels of branching through which the air is flowing. Equations can be tailored to these conditions, or idealized velocity profiles can be assumed for the cascade of generations. These include parabolic flow (laminar fully developed), plug flow (laminar undeveloped), and turbulent flow (Isaacs et al. 2012). For example, the upper tracheobronchial airways may be assumed to be turbulent, but in the pulmonary region, plug and parabolic profiles may be assumed.
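A quick way to see why the different profiles are assumed is to estimate the Reynolds number by generation, as in the sketch below. It assumes dichotomous branching (2^n airways at generation n, as described in the next paragraph) and uses rough, assumed airway diameters and a resting inspiratory flow; the numbers are illustrative, not values from the cited sources.

```python
import math

# Illustrative Reynolds numbers by airway generation, assuming dichotomous
# branching (2^n tubes at generation n). Airway diameters and the inspiratory
# flow are rough assumed values for an adult at rest.

RHO_AIR = 1.2        # air density, kg/m^3
MU_AIR = 1.8e-5      # air viscosity, Pa*s
TOTAL_FLOW = 5e-4    # inspiratory flow, m^3/s (~30 L/min, assumed)

airway_diam_m = {0: 0.018, 2: 0.008, 5: 0.0035, 10: 0.0013, 16: 0.0006}  # assumed

for n, d in airway_diam_m.items():
    tubes = 2 ** n                                     # number of airways at this generation
    velocity = (TOTAL_FLOW / tubes) / (math.pi * d**2 / 4)
    reynolds = RHO_AIR * velocity * d / MU_AIR
    print(f"generation {n:>2}: Re ~ {reynolds:,.0f}")
```

Under these assumptions, Re is on the order of a few thousand in the trachea but near 1 by generation 16, which is consistent with assuming turbulent profiles in the upper airways and plug or parabolic profiles deeper in the lung.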

The right lung and left lung are connected via their primary bronchi to the trachea and upper airway of the nose and mouth (see Fig. 3). From there, the bronchi, that is, the airways, subdivide into a branching network of many levels. Each level, called a generation, is designated with an integer. The trachea is generation n = 0, the primary bronchi are generation n = 1, and so forth. Thus, theoretically there are 2^n airway tubes at generation n. In the conducting zone (i.e., generations 0 ≤ n ≤ 16), airflow is restricted to entry and exit in the airway (Grotberg 2011). That is, air is moving, but there is no air-blood gas exchange of O2 and CO2.

Air exchange occurs in generations n > 16, known as the respiratory zone. Generations 17 ≤ n ≤ 19 contain air sacs (alveoli) in the airway walls, which range from 75 to 300 μm in diameter (Grotberg 2011). Alveoli are thin-walled and, owing to the rich capillary blood supply in them, are designed for gas exchange. The respiratory bronchioles are the vessels by which air passes to the alveoli. The walls of the tubes or ducts in generations 20 ≤ n ≤ 22 consist entirely of alveoli. At generation n = 23, terminal alveolar sacs are made up of clusters of alveoli (Isaacs et al. 2012). Thus, Fig. 4 shows that this is the pulmonary region where aerosols can deposit (International Commission on Radiological Protection Task Force on Lung Dynamics and Task Group on Lung Dynamics 1966).

Two principal factors that are relevant to gas exchange are the airway volume (Vaw) and airway surface area (Aaw), which are proportional to the size of the person. Air exchange increases in proportion to Aaw. The Vaw (mL) for children is proportional to height and is approximated as (Kerr 1976):

$$ {V}_{\mathrm{aw}}=1.018\times \mathrm{Height}\;\left(\mathrm{cm}\right)-76.2 $$
(12)

Vaw (mL) can be estimated for adults by adding the ideal body weight (pounds) plus age in years (Bouhuys 1964). For example, a 40-year-old adult whose ideal body weight is 160 pounds has an estimated Vaw of 200 mL (George and Hlastala 2011).
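These two rules of thumb translate directly into a short calculation, shown below: Eq. 12 for children and the weight-plus-age approximation for adults.

```python
# Airway volume (Vaw) estimates from the relations given above.

def vaw_child_ml(height_cm: float) -> float:
    """Eq. 12: Vaw (mL) = 1.018 * height (cm) - 76.2."""
    return 1.018 * height_cm - 76.2

def vaw_adult_ml(ideal_weight_lb: float, age_yr: float) -> float:
    """Adult approximation: Vaw (mL) ~ ideal body weight (lb) + age (yr)."""
    return ideal_weight_lb + age_yr

print(f"Child, 120 cm tall:   Vaw ~ {vaw_child_ml(120):.0f} mL")
print(f"Adult, 160 lb, 40 yr: Vaw ~ {vaw_adult_ml(160, 40):.0f} mL")  # ~200 mL, as in the text
```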

The average human lung has from 300 to 500 million of these air sacs. In an average adult lung, the total alveolar surface area is about 70 m2. This large Aaw allows for efficient gas exchange to supply O2 for normal respiration, as well as the large increases in gas exchange needed when a person is stressed (e.g., during exercise, injury, or illness). The Reynolds number varies according to the branching level through which the air is flowing, that is, the generation (very high in the trachea but low in the alveoli) (Grotberg 2011). Airways have a liquid lining, with two layers in the first generations (up to about n = 15). A watery, serous layer is next to the airway wall, behaving as a Newtonian fluid. This layer has cilia that pulsate toward the mouth. Atop the serous layer is a mucus layer that possesses several non-Newtonian fluid properties, for example, viscoelasticity, shear thinning, and a yield stress.

Alveolar cells produce surfactants that orient at the air-liquid interface and reduce the surface tension significantly. Air pollutants can adversely affect the surfactant chemistry, which can make the lungs overly rigid, thus hindering inflation (Grotberg 2011). A pulmonary surfactant is a surface-active lipoprotein complex (phospholipoprotein) produced by type II pneumocytes, which are also known as alveolar type II cells. These pneumocytes are granular and comprise 60% of the alveolar lining cells. Their morphology allows them to cover smaller surface areas than type I pneumocytes. Type I cells are highly attenuated, very thin (25 nm) cells that line the alveolar surfaces and cover 97% of the alveolar surface. Surfactant molecules have both a hydrophilic head and a lipophilic tail. Surfactants adsorb to the air-water interface of the alveoli with the hydrophilic head in the water and the hydrophobic tail directed toward the air. The principal lipid component is dipalmitoylphosphatidylcholine, which is a surfactant that decreases surface tension. The actual surface tension decrease depends on the surfactant’s concentration on the interface. This concentration’s saturation limit depends on temperature and the presence of other compounds in the interface. The surface area of the lung varies with compliance (i.e., lung and thorax expansion and contraction) during ventilation. Thus, the surfactant’s interface concentration is seldom at the level of saturation. When the lung expands, the surface increases, opening space for new surfactant molecules to join the interface mixture. During expiration, lung surface area decreases, compressing the surfactant and increasing the density of surfactant molecules, thus further decreasing the surface tension. Therefore, surface tension varies with air volume in the lungs, which protects the lungs from collapsing at low air volume and from tissue damage at high air volume (Schurch et al. 1992; George and Hlastala 2011).

Transport by concentration gradient at the molecular scale, that is, Fickian diffusion, is important only for very small particles (≤0.1 μm diameter) because Brownian motion allows them to move in a “random walk” away from the airstream. Interception works mainly for particles with diameters between 0.1 and 1 μm. During interception, the particle does not leave the airstream but comes into contact with the filter medium (e.g., a strand of fiberglass). Inertial impaction collects particles that are sufficiently large to leave the airstream by inertia (diameters ≥1 μm). Electrostatics consist of electrical interactions between the atoms in the filter and those in the particle at the point of contact (van der Waals forces), as well as electrostatic attraction (charge differences between particle and filter medium). Other important factors affecting filtration efficiencies include the thickness and pore diameter of the filter, the uniformity of particle diameters and pore sizes, the solid volume fraction, the rate of particle loading onto the filter (e.g., affecting particle “bounce”), the particle phase (liquid or solid), capillarity and surface tension (if either the particle or the filter media are coated with a liquid), and characteristics of air or other carrier gases, such as velocity, temperature, pressure, and viscosity.

Basically, lung filtration consists of four mechanical processes: (1) diffusion, (2) interception, (3) inertial impaction, and (4) electrostatics (see Fig. 6). Diffusion is important only for very small particles (≤0.1 μm diameter) because Brownian motion allows them to move in a “random walk” away from the airstream. This can be an important process for NPs.

Fig. 6
figure 6

Mechanical processes leading to the deposition of particulate matter. Diffusion can be an important filtration mechanism for nanoparticles. (Source: Vallero 2013, 2014; adapted from: Rubow et al. 2004)

All of these filtration processes apply to the capture and escape of synbio particles and nanoparticles. Notably, interception occurs when a particle stays in the airstream but comes into contact with matter (e.g., lung tissue), mainly for particles in the upper nanoscale size range, that is, diameters near 100 nm and up to 1 μm. Impaction collects particles that are sufficiently large to leave the airstream by inertia (diameters ≥ 1 μm); hence this is commonly referred to as “inertial impaction.” Given their size, nanoscale and ultrafine aerosols are strongly affected by electrostatics, that is, the electrical interactions between the atoms in a surface and those in the particle at the point of contact (van der Waals forces), as well as electrostatic attraction (charge differences between particle and surface). Other important factors affecting lung filtration are surface stickiness, uniformity of particle diameters, the solid volume fraction, the rate of particle loading onto tissue surfaces, the particle phase (whether liquid or solid), capillarity and surface tension, and characteristics of air in the airway, such as humidity, velocity, temperature, pressure, and viscosity.

In addition to aerosol size, the chemical composition also determines the fate of synbio products in the respiratory system. Endogenously, varying amounts of the parent substance (e.g., zero-valent metal), any salts and ions formed, and other chemical species (e.g., organometallic compounds) are absorbed and distributed within the body. For metal NPs, the principal difference in the way that nanomaterials and other forms of metal partition among zero-valence forms, ions, and metallic compounds is determined by their relative volume and mass. The greater amount of surface area in NPs means that, compared to even fine particulate matter, the NP has a greater number of potentially active sites for sorption and solution. The low mass also means that the NP can remain suspended for a very long time. Such nano-suspensions in surface waters mean that the metal tends to remain in the water column, rather than settle onto the surface, so it is more likely to be exposed to free oxygen than to the more reduced and anoxic conditions of the sediment. In the air, these features mean that the NP will be more likely to remain airborne for longer time periods and to undergo atmospheric transformation.
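The surface-area argument can be made quantitative with an idealized calculation: for spherical particles, surface area per unit mass scales as 6/(ρd). The sketch below compares a 100 nm particle with a 2.5 μm particle; the particle density is an assumed, generic value.

```python
# Surface area per gram of an ideal spherical particle: SSA = 6 / (rho * d).
# The density below is an assumed, generic particle density for illustration.

def specific_surface_area_m2_per_g(diameter_m: float, density_kg_m3: float = 2000.0) -> float:
    ssa_m2_per_kg = 6.0 / (density_kg_m3 * diameter_m)
    return ssa_m2_per_kg / 1000.0

for label, d in (("100 nm nanoparticle", 100e-9), ("2.5 um fine particle", 2.5e-6)):
    print(f"{label}: ~{specific_surface_area_m2_per_g(d):.1f} m^2/g")
```

Under these assumptions the nanoparticle offers roughly 25 times more surface area per gram than the fine particle, which is the basis for the larger number of potentially active sorption and dissolution sites noted above.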

These differences in mass and volume from bulk materials can also translate into endogenous differences, meaning that absorption, distribution, metabolism, excretion, and toxicity could also be different for a NP. The fraction of the metal species or its transformation products that accumulates in lipids and other tissue substrates could be higher, and the amount excreted decreased, so that the difference results in bioaccumulation and increased body burden (see Fig. 7).

Fig. 7
figure 7

Toxicokinetics for a hypothetical nanomaterial that has been inhaled, ingested, or contacted dermally. (Based on: Agency for Toxic Substances and Disease Registry 2002; adapted from: Vallero 2014)

The metal NP, its cations, and its metabolites thereafter induce toxicity in various ways. For toxicity (e.g., metal-induced neuropathologies) to occur, a metal must reach a target (e.g., a neuron) at a concentration sufficient to alter mechanistically the normal functioning of the tissue. Metal toxicity can involve membrane receptor-ligand disruptions, but it may also involve intracellular receptors and ion channels. Metals tend to react with nucleophilic macromolecules, for example, proteins, amino acids, and nucleic acids. A nucleophile donates an electron pair to an electrophile, an electron pair acceptor, to form a chemical bond. Mercury, for example, reacts with the sulfur (S) in thiols and thiolates, for example, cysteinyl protein residues and glutathione. However, other metals, for example, lithium, calcium, and barium, preferentially react with harder nucleophiles, for example, the oxygen in purines. Lead (Pb) tends to fall between these two extremes, that is, it exhibits universal reactivity with all nucleophiles (Shanker 2008).

Again, these effects have been observed in metals and metalloids in various forms, with nanomaterials playing a role of either degrading or improving environmental conditions. How metal NPs differ is a subject of current research. In addition, metals in various forms and sizes are influenced by the presence of NPs. For example, Pb mobility and bioavailability can be adjusted by inserting iron (Fe) NPs (e.g., Fe3(PO4)2·8H2O) into Pb-contaminated soil, that is, converting highly aqueous soluble and exchangeable forms to less soluble and less exchangeable forms (Liu and Zhao 2007). Such findings can greatly improve environmental remediation efforts.

Exposure (E) by inhalation, which drives much of the resulting toxicology, can be expressed as (Derelanko 2014; Vallero 2014):

$$ E=\frac{(C)\cdot \left(\mathrm{PC}\right)\cdot \left(\mathrm{IR}\right)\cdot \left(\mathrm{RF}\right)\cdot \left(\mathrm{EL}\right)\cdot \left(\mathrm{AF}\right)\cdot \left(\mathrm{ED}\right)\cdot \left({10}^{-6}\right)}{\left(\mathrm{BW}\right)\cdot \left(\mathrm{TL}\right)} $$
(13)

where:

  • C = concentration of the contaminant on the aerosol/particle (mg kg−1)

  • PC = particle concentration in air (g m−3)

  • IR = inhalation rate (m3 h−1)

  • RF = respirable fraction of total particulates (dimensionless, usually determined by aerodynamic diameters, e.g., 2.5 μm)

  • EL = exposure length (h d−1)

  • ED = duration of exposure (d)

  • AF = absorption factor (dimensionless)

  • BW = body weight (kg)

  • TL = typical lifetime (d)

  • 10−6 = a conversion factor (kg to mg)

The human body and other biological systems have a capacity to take up myriad types of substances and to either use them to support bodily functions or eliminate them. In work or exercise scenarios, for example, the exposure to NPs is greatly increased because of elevated IR and PC values.
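The arithmetic of Eq. 13 is straightforward to lay out, as in the sketch below. The input values are arbitrary placeholders chosen only to show the calculation; units follow the variable list above.

```python
# Direct transcription of Eq. 13 for inhalation exposure.
# All input values are arbitrary placeholders; units follow the variable list above.

def inhalation_exposure(C, PC, IR, RF, EL, AF, ED, BW, TL):
    """E = (C * PC * IR * RF * EL * AF * ED * 1e-6) / (BW * TL)."""
    return (C * PC * IR * RF * EL * AF * ED * 1e-6) / (BW * TL)

E = inhalation_exposure(
    C=50.0,      # contaminant on the particle, mg/kg
    PC=0.05,     # particle concentration in air, g/m^3
    IR=0.8,      # inhalation rate, m^3/h
    RF=0.75,     # respirable fraction (dimensionless)
    EL=8.0,      # exposure length, h/day
    AF=0.3,      # absorption factor (dimensionless)
    ED=250.0,    # exposure duration, days
    BW=70.0,     # body weight, kg
    TL=25550.0,  # typical lifetime, days (~70 years)
)
print(f"E = {E:.3e}")
```

Doubling IR and PC in this sketch quadruples E, which illustrates the point about work and exercise scenarios in the preceding paragraph.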

The quality and amount of data on which to base nanomaterial exposure estimates vary. As analytical capabilities have improved, increasingly lower concentrations of chemicals have been observed in various parts of the body. Some of these chemicals enter the body by inhalation, whereas the dominant pathway for others could be drinking water, food, or skin contact. Equations for each of these pathways are analogous to Eq. 13.

Engineers and scientists document and try to quantify uncertainty by working within the known domain and using tools to extend knowledge to the lesser known domains, that is, extrapolating information and knowledge from the data-rich to data-poor domains. If something has failed under specified conditions and did not fail under different, specified conditions, this may inform decisions within unknown domains. However, if this is all the information available, the gap between the two domains is the region of uncertainty. From both an engineering and biomedical perspective, uncertainties are addressed conservatively, including the use of protective factors of safety. For example, regulatory agencies may have reference doses (RfD) and reference concentrations (RfC) for chemical compounds that have been based largely on in vitro and in vivo studies of pure compounds. However, when these compounds are constituents of synbio products and nanoparticles, additional levels of protection are required, given the additional uncertainties about exposure and toxicity.

Exposure Models

Risk management depends on models to estimate exposures. Such models range from “screening-level” to “high-tiered.” Screening models generally overpredict exposures because they are based on conservative default values and assumptions. They provide a first approximation that screens out exposures not likely to be of concern (Chemical Computing Group 2013; Guy et al. 2008; Hilton et al. 2010; Judson et al. 2010; U.S. Environmental Protection Agency 2017; Zhang et al. 2014). Conversely, higher-tiered models typically include algorithms that provide specific site characteristics and time activity patterns and are based on relatively realistic values and assumptions. Such models require data of higher resolution and quality than the screening models and, in return, provide more refined exposure estimates (U.S. Environmental Protection Agency 2017).

Environmental stressors have traditionally been modeled in a unidirectional, one-dimensional fashion. A conceptual framework, however, can link exposure to environmental outcomes across levels of biological organization (Fig. 8). Thus, environmental exposure assessment considers coupled networks that span multiple levels of biological organization and can describe the interrelationships within the biological system. Mechanisms can be derived by characterizing and perturbing these networks, for example, behavioral and environmental factors (Hubal et al. 2010). This can apply to a food chain or food web model (Fig. 9), a kinetic model (Fig. 10), or numerous other modeling platforms.

Fig. 8
figure 8

Systems cascade of exposure-response processes. In this instance, scale and levels of biological organization are used to integrate exposure information with biological outcomes. The stressor (chemical or biological agent) moves both within and among levels of biological organization, reaching various receptors, thereby influencing and inducing outcomes. The outcome can be explained by physical, chemical, and biological processes (e.g., toxicogenomic mode-of-action information). (Source: Hubal et al. 2010)

Fig. 9
figure 9

Biochemodynamic pathways for a substance (in this case, a single substance). The receptor is mammalian tissue. Various modeling tools are available to characterize the movement, transformation, uptake, and fate of the compound. Similar biochemodynamic paradigms can be constructed for multiple chemicals (e.g., mixtures) and microorganisms. (Source: Vallero 2010b)

Fig. 10
figure 10

Toxicokinetic model used to estimate dose as part of an environmental exposure. This diagram represents the static lung, with each of the compartments (brain, carcass, fat, kidney, liver, lung tissue, rapidly and slowly perfused tissues, spleen, and the static lung) having two forms of elimination, an equilibrium binding process, and numerous metabolites. Notes: K refers to kinetic rate; Q to mass flow; and QB to blood flow. A breathing lung model would consist of alveoli, lower dead space, lung tissue, pulmonary capillaries, and upper dead space compartments. Gastrointestinal (GI) models allow for multiple circulating compounds with multiple metabolites entering and leaving each compartment, that is, the GI model consists of the wall and lumen for the stomach, duodenum, lower small intestine, and colon, with lymph pool and portal blood compartments included. Bile flow is treated as an output from the liver to the duodenum lumen. All uptaken substances are treated as circulating. Nonspecific ligand binding, for example, plasma protein binding, is represented in arterial blood, pulmonary capillaries, portal blood, and venous blood. (Source: Dary 2007; adapted from Blancato et al. 2006)

Exposure Estimation

Exposure results from sequential and parallel processes in the environment, from release to environmental partitioning, movement through pathways to uptake, and fate in the organism (see Fig. 11). The substances often change to other chemical species as a result of the body’s metabolic and detoxification processes. From a precautionary perspective, it may be necessary to assume that synthesis and genetic modifications will affect such processes. New substances, known as degradation products or metabolites, are produced as cells use the parent compounds as food and energy sources. These metabolic processes, such as hydrolysis and oxidation, are the mechanisms by which chemicals are broken down.

Fig. 11
figure 11

Processes leading to organismal uptake and fate of chemical and biological agents after release into the environment. In this instance, the predominant sources are air emissions, and the predominant pathway of exposure is inhalation. However, due to deposition to surface waters and the agent’s affinity for sediment, the ingestion pathways are also important. Dermal pathways, in this case, do not constitute a large fraction of potential exposure. (Source: McKone et al. 2006)

The exposure pathway also includes the ways that humans and other organisms can come into contact with a hazard. The pathway has five parts:

  1. The source of contamination (e.g., fugitive dust or leachate from a landfill)

  2. An environmental medium and transport mechanism (e.g., soil with water moving through it)

  3. A point of exposure (such as a well used for drinking water)

  4. A route of exposure (e.g., inhalation, dietary ingestion, nondietary ingestion, dermal contact, and nasal)

  5. A receptor population (those who are actually exposed or for whom there is a potential for exposure)

If all five parts are present, the exposure pathway is known as a completed exposure pathway. In addition, the exposure may be short term, intermediate, or long term. Short-term contact is known as an acute exposure, that is, occurring as a single event or for only a short period of time (up to 14 days). An intermediate exposure is one that lasts from 14 days to less than 1 year. Long-term or chronic exposures are greater than 1 year in duration.

Determining the exposure for a neighborhood can be complicated. For example, even if we do a good job identifying all of the contaminants of concern and possible sources (no small task), we may have little idea of the extent to which the receptor population has come into contact with these contaminants (steps 2 through 4). Thus, assessing exposure involves not only the physical sciences but also the social sciences, for example, psychology and the behavioral sciences. People’s activities greatly affect the amount and type of exposures. That is why exposure scientists use a number of techniques to establish activity patterns, such as asking potentially exposed individuals to keep diaries, videotaping their activities, and using telemetry to monitor vital signs, for example, heart and ventilation rates.

General ambient measurements, such as those from air pollution monitoring equipment located throughout cities, are often not good indicators of actual population exposures. For example, metals and their compounds comprise the greatest mass of toxic substances released into the environment, largely because of the large volumes and surface areas involved in metal extraction and refining operations. However, this does not necessarily mean that more people will be exposed at higher concentrations or more frequently to these compounds than to others. The fact that a substance is released, or even that it resides in the ambient environment, is not tantamount to its coming into contact with a receptor. Conversely, even a small amount of a substance under the right circumstances can lead to very high levels of exposure (e.g., handling raw materials and residues at a waste site).

The simplest quantitative expression of exposure is:

$$ E=D/t $$
(14)

where E is the human exposure during the time period t (mg kg−1 d−1), D is the mass of pollutant per unit body mass (mg kg−1), and t is time (days).

D is usually derived from the chemical concentration of the pollutant measured near the interface of the person and the environment during a specified time period. This measurement is sometimes referred to as the potential dose (i.e., the chemical has not yet crossed the boundary into the body but is present where it may enter the person, such as on the skin, at the mouth, or at the nose).
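For example, if a hypothetical receptor takes in D = 6 mg of pollutant per kilogram of body mass over t = 30 days, Eq. 14 gives an exposure of E = 6/30 = 0.2 mg kg−1 d−1.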

Expressed quantitatively, exposure is a function of the concentration of the agent and time; it is an expression of the magnitude and duration of the contact. That is, exposure to a contaminant is the concentration of that contaminant in a medium at the point of contact, integrated over the time of contact:

$$ E=\int_{t_1}^{t_2}C(t)\,dt $$
(15)

where E is the exposure during the time period from t1 to t2 and C(t) is the concentration at the interface between the organism and the environment, at time t.
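Where C(t) is available only as a discrete time series, for example, from a personal monitor, Eq. 15 can be approximated numerically. The following sketch applies the trapezoidal rule to hypothetical sampling times and concentrations; none of the values come from an actual monitoring study.

```python
# Approximate E = integral of C(t) dt (Eq. 15) from discrete measurements
# using the trapezoidal rule. Times and concentrations are hypothetical.

times_h = [0, 1, 2, 4, 8]               # sampling times (hours)
conc_mg_m3 = [0.9, 1.2, 1.1, 0.7, 0.4]  # measured concentrations (mg per cubic meter)

exposure = 0.0                          # units: mg h m^-3
for i in range(1, len(times_h)):
    dt = times_h[i] - times_h[i - 1]
    mean_c = 0.5 * (conc_mg_m3[i] + conc_mg_m3[i - 1])
    exposure += mean_c * dt

print(f"Integrated exposure: {exposure:.2f} mg h m^-3")
```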

The concentration at the interface is the potential dose (i.e., the agent has not yet crossed the boundary into the body but is present where it may enter the receptor). Since the amount of a chemical agent that penetrates from the ambient atmosphere into a control volume affects the concentration term of the exposure equation, a complete mass balance of the contaminant must be understood and accounted for; otherwise, exposure estimates will be incorrect. Recall that the mass balance consists of all inputs and outputs, as well as chemical changes to the contaminant:

$$ {\displaystyle \begin{array}{c}\text{Accumulation or loss of contaminant A}=\text{Mass of A transported in}\\ {}-\text{Mass of A transported out}\pm \text{Reactions}\end{array}} $$
(16)

The reactions may be either those that generate substance A (i.e., sources) or those that destroy substance A (i.e., sinks). Thus, the mass transported in is the inflow to the system, which includes pollutant discharges, transfer from other control volumes and other media (e.g., if the control volume is soil, the water and air may contribute mass of chemical A), and formation of chemical A by abiotic chemistry and biological transformation. Conversely, the outflow is the mass transported out of the control volume, which includes uptake by biota, transfer to other compartments (e.g., volatilization to the atmosphere), and abiotic and biological degradation of chemical A. This means the rate of change of mass in a control volume is equal to the rate at which chemical A is transported in, less the rate at which it is transported out, plus the rate of production from sources, minus the rate of elimination by sinks. Stated as a differential equation, the rate of change of contaminant A is:

$$ \frac{d\left[A\right]}{dt}=-v\cdot \frac{d\left[A\right]}{dx}+\frac{d}{dx}\left(\Gamma \cdot \frac{d\left[A\right]}{dx}\right)+r $$
(17)

where:

  • v is the fluid velocity.

  • Γ is a dispersion (diffusion) coefficient specific to the environmental medium.

  • \( \frac{d\left[A\right]}{dx} \) is the concentration gradient of chemical A.

  • r is the net rate of internal sources and sinks of chemical A within the control volume.
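To illustrate how Eq. 17 can be evaluated, the sketch below applies an explicit finite-difference scheme to its one-dimensional form, treating Γ as a constant dispersion coefficient and the reaction term r as first-order decay. The grid spacing, time step, coefficients, and boundary conditions are hypothetical placeholders; a real application would use medium-specific values and check numerical stability.

```python
# Explicit finite-difference sketch of Eq. 17:
# d[A]/dt = -v d[A]/dx + Gamma d2[A]/dx2 + r, with r modeled as first-order decay.
# All coefficients, grid values, and boundary conditions are hypothetical.

nx, dx, dt = 50, 1.0, 0.1        # number of cells, spacing (m), time step (days)
v, gamma, k = 0.5, 0.2, 0.05     # advection (m/d), dispersion (m2/d), decay (1/d)
steps = 200                      # simulate 20 days

A = [0.0] * nx                   # initial concentration profile
A[0] = 1.0                       # constant-concentration source at x = 0

for _ in range(steps):
    new_A = A[:]
    for i in range(1, nx - 1):
        advection = -v * (A[i] - A[i - 1]) / dx                    # upwind difference
        dispersion = gamma * (A[i + 1] - 2.0 * A[i] + A[i - 1]) / dx**2
        reaction = -k * A[i]                                       # sink term (r < 0)
        new_A[i] = A[i] + dt * (advection + dispersion + reaction)
    new_A[0] = 1.0               # hold the source boundary
    new_A[-1] = new_A[-2]        # zero-gradient outflow boundary
    A = new_A

print(f"Concentration 10 m downgradient after {steps * dt:.0f} days: {A[10]:.3f}")
```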

Reactive compounds can be particularly difficult to measure. For example, many volatile organic compounds in air can be measured by collection in stainless steel canisters followed by chromatographic analysis in the laboratory. However, some of these compounds, like the carbonyls (notably aldehydes such as formaldehyde and acetaldehyde), are prone to react inside the canister, meaning that by the time the sample is analyzed, a portion of the carbonyls has degraded and is underreported. Therefore, other methods may need to be applied, such as trapping the compounds on dinitrophenylhydrazine (DNPH)-treated silica gel tubes that are kept frozen until extraction for chromatographic analysis. The purpose of the measurement is to determine what is in the air, water, soil, sediment, or biota at the time of sampling, so any reactions that occur before analysis introduce measurement error.

The general exposure equation (Eq. 13) is rewritten to address each route of exposure, accounting for chemical concentration and the activities that affect the time of contact. The exposure calculated from these equations is actually the chemical intake (I), expressed as mass of chemical per unit body mass per unit time, such as mg kg−1 d−1:

$$ I=\frac{C\cdot \mathrm{CR}\cdot \mathrm{EF}\cdot \mathrm{ED}\cdot \mathrm{AF}}{\mathrm{BW}\cdot \mathrm{AT}} $$
(18)

where:

  • C is the chemical concentration of the contaminant (mass per volume).

  • CR is the contact rate (mass per time).

  • EF is the exposure frequency (number of events, dimensionless).

  • ED is the exposure duration (time).

  • AF is the absorption factor (dimensionless).

  • BW is the body weight (mass).

  • AT is the averaging time (time).
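As a minimal sketch rather than a prescribed implementation, Eq. 18 can be written as a function. The parameter names mirror the equation; the days-per-year reading of EF and the year-based ED are one common convention, and the drinking-water values in the usage example are purely hypothetical.

```python
def chemical_intake(c, cr, ef, ed, af, bw, at):
    """Chemical intake I (Eq. 18) in mg per kg body weight per day.

    c  : concentration in the contact medium (e.g., mg/L)
    cr : contact rate (e.g., L/day)
    ef : exposure frequency (e.g., days/year of contact)
    ed : exposure duration (years)
    af : absorption factor (dimensionless, 0 to 1)
    bw : body weight (kg)
    at : averaging time (days)
    """
    return (c * cr * ef * ed * af) / (bw * at)

# Hypothetical drinking-water scenario: 0.002 mg/L in tap water, 2 L/day,
# 350 days/year for 30 years, fully absorbed, 70 kg adult,
# averaged over a 70-year lifetime (25,550 days).
intake = chemical_intake(c=0.002, cr=2.0, ef=350, ed=30, af=1.0, bw=70, at=25_550)
print(f"Intake: {intake:.2e} mg kg^-1 day^-1")   # about 2.3e-05
```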

These factors are further specified for each route of exposure, such as in the lifetime average daily dose (LADD) equations shown in Table 1. The LADD is, by definition, based on chronic, long-term exposure.

Table 1 Equations for calculating lifetime average daily dose (LADD) for various routes of exposure

Acute and subchronic exposures require different equations, since the exposure duration (ED) is much shorter. For example, instead of LADD , acute exposures to noncarcinogens may use maximum daily dose (MDD) to calculate exposure (see discussion box). However, even these exposures follow the general model given in Eq. 15.

Hypothetical Example of an Exposure Calculation

Over an 18-year period, VICHLOSYN successfully applied synthetic biology to detoxify soil contaminated with vinyl chloride. Contaminated soil was trucked to its facility. However, soil storage and treatment operations contaminated the soil on its property. Complaints and audits led to VICHLOSYN closing the facility 2 years ago, but vinyl chloride vapors continue to reach the neighborhood surrounding the plant at an average concentration of 1 mg m−3. Assume that people are breathing at a ventilation rate of 0.5 m3 h−1 (about the average of adult males and females over 18 years of age) (Moya et al. 2011). The legal settlement allows neighboring residents to evacuate and sell their homes to the company. However, they may also stay. The neighbors have asked for advice on whether to stay or leave, since they have already been exposed for 20 years.

Vinyl chloride is highly volatile, so it will be distributed mainly in the gas phase rather than the aerosol phase. Although some of the vinyl chloride may be sorbed to particles, we will use only the vapor-phase LADD equation, since the particle-bound fraction is likely to be relatively small. Also, we will assume that outdoor concentrations are the exposure concentrations. This is unlikely, however, since people spend very little time outdoors compared to indoors, so this assumption may provide an additional factor of safety. To determine how much vinyl chloride penetrates living quarters and to compare indoor and outdoor exposures, indoor air studies would have to be conducted.

Find the appropriate equation in Table 1 and insert values for each variable. Absorption rates are published by the EPA and the Oak Ridge National Laboratory’s Risk Assessment Information System (http://risk.lsd.ornl.gov/cgi-bin/tox/TOX_select?select=nrad). Vinyl chloride is well absorbed, so for a worst case we can assume that AF = 1. We will also assume that the person staying in the neighborhood is exposed at the average concentration 24 hours a day (EL = 24) and lives the remainder of a typical lifetime exposed at the measured concentration.

Although the ambient concentrations of vinyl chloride may have been higher when the plant was operating, the only measurements we have are those taken recently. Thus, this is an area of uncertainty that must be discussed with the clients. The common default value for a lifetime is 70 years, so we can assume the longest exposure would be 70 years (25,550 days). Table 2 gives some of the commonly used default values in exposure assessments. If the person is now 20 years of age, has already been exposed for that time, and lives the remaining 50 years exposed at 1 mg m−3, then:

$$ {\displaystyle \begin{array}{c}\mathrm{LADD}=\frac{(C)\cdot \left(\mathrm{IR}\right)\cdot \left(\mathrm{EL}\right)\cdot \left(\mathrm{AF}\right)\cdot \left(\mathrm{ED}\right)}{\left(\mathrm{BW}\right)\cdot \left(\mathrm{TL}\right)}\\ {}=\frac{(1)\cdot (0.5)\cdot (24)\cdot (1)\cdot (25{,}550)}{(70)\cdot (25{,}550)}\\ {}\approx 0.17\;\mathrm{mg}\;{\mathrm{kg}}^{-1}\;{\mathrm{day}}^{-1}\end{array}} $$
Table 2 Commonly used human exposure factors

If the 20-year-old leaves today, the exposure duration would be the 20 years that the person lived in the neighborhood. Thus, only the ED term changes, from 25,550 days to 7,300 days (i.e., 20 years).

Thus, the LADD falls to 2/7 of its value:

$$ \mathrm{LADD}=0.05\;\mathrm{mg}\;{\mathrm{kg}}^{-1}{\mathrm{day}}^{-1}. $$
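As a check on the arithmetic, the short script below reproduces both estimates from the same inputs as the hypothetical example; the function mirrors the inhalation LADD form used in the calculation above, and the default factors follow Table 2.

```python
def ladd(c, ir, el, af, ed, bw, tl):
    """Lifetime average daily dose (mg kg^-1 day^-1), inhalation route."""
    return (c * ir * el * af * ed) / (bw * tl)

C, IR, EL, AF = 1.0, 0.5, 24, 1.0     # mg/m3, m3/h, h/day, dimensionless
BW, TL = 70, 25_550                   # kg, days (70-year default lifetime)

stay = ladd(C, IR, EL, AF, ed=25_550, bw=BW, tl=TL)   # exposed for the entire lifetime
leave = ladd(C, IR, EL, AF, ed=7_300, bw=BW, tl=TL)   # exposed for only 20 years

print(f"Stay:  {stay:.3f} mg kg^-1 day^-1")   # ~0.171
print(f"Leave: {leave:.3f} mg kg^-1 day^-1")  # ~0.049, i.e., 2/7 of the first value
```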

Note that this is a straightforward chemical exposure estimate in the gas phase. Often, a chemical will exist as a vapor, as an aerosol, or sorbed to an aerosol. In that case, the inhalation exposure would have to be calculated for both the gas and the PM, that is, from the concentration of PM and the concentration of the chemical in the PM. Furthermore, if this were an exposure involving an emerging technology, it would be much more complex and uncertain, since the route and pathway information may be more difficult to ascertain; for example, GMMs do not behave like chemical compounds. The risk assessment may be even more uncertain, since at least some of the products, including genetically modified organisms, are likely to lack data on toxicity and hazard; so even if the exposure estimate is reliable, the risk assessment will be weakened.

Once the hazard and exposure calculations are complete, risks can be characterized quantitatively. There are two general ways that such risk characterizations are used in environmental problem-solving, that is, direct risk assessments and risk-based cleanup standards.

Conclusion

The benefits of emerging technologies must be weighed against the amount of risk that they introduce. The risks to health and the environment must be reduced or avoided by proper management. Risk management decisions must be underpinned by scientifically credible and reliable assessments of both the hazards and the likelihood and extent of exposure to those hazards. Thus, reliable exposure estimates are required for decisions involving products of synthetic biology and other emerging technologies.

This chapter introduced exposure assessment approaches, identifying where conventional methods may fail, along with possible ways to augment them to address the large uncertainties in assessing and managing the risks posed by these technologies.

Among the challenges of substances generated in synthetic biology processes is that they are not limited to chemical contaminants but include mixtures and biological agents generated during the various life stages of synthesis and use. The agents may include products from the various stages of synthesis, beginning with the chassis bacteria. They may also include genetically modified biological agents, as well as pathogens and other natural organisms that induce harm when released into a human population or ecosystem. Methods for estimating and predicting exposures to these agents are much more uncertain than those employed in traditional chemical risk assessment. Assessment methodologies must be adapted to address the various routes of exposure and adverse outcomes introduced by new technologies that generate unprecedented biological entities, such as (Epstein and Vermeire 2016):

  1. Integration of protocells into living organisms

  2. Xenobiology

  3. DNA synthesis and direct genome editing of zygotes that can lead to multiplexed genetic modifications

  4. Increased modifications introduced in parallel by large-scale DNA synthesis and highly parallel genome editing

These and other synthetic processes will result in increased genetic distance between the synthetic organism and any natural organism or any previously modified organism (Epstein and Vermeire 2016). Thus, existing exposure and risk science provides a pathway to exposure assessment for synthetic biology but is wholly insufficient given these differences. Research is needed to compare and contrast synthetic biology-generated contaminants and agents with conventional chemicals.