1 Introduction

The field of mixtures research has seen a resurgence in the last decade, with increasing interest from researchers and governments. As a report from NIEHS on a 2011 workshop put it, “Traditionally, toxicological studies and human health risk assessments have focused primarily on single chemicals. However, people are exposed to a myriad of chemical and nonchemical stressors every day and throughout their lifetime…It is imperative to develop methods to assess the health effects associated with complex exposures in order to minimize their impact on the development of disease.” Another conclusion was the “need for further collaboration among epidemiologists, toxicologists, and biostatisticians” (Carlin et al. 2013). For such collaborations to be most productive, researchers in all three fields need to be at least aware of each other’s jargon, especially the definition of “interaction.” Further, toxicologists and epidemiologists need to understand the viewpoints and methods that the two fields use to investigate the health effects of mixtures (see also Boedeker and Backhaus 2010; Howard and Webster 2013).

1.1 The Mixtures Problem for Toxicologists

The mixtures problem typically faced by toxicologists and pharmacologists is to predict the effect of a combination of compounds based only on information about each compound individually, including its toxicological mechanism of action. Put more concretely: when and how can the individual dose-response curves, plus mechanistic information, be used to predict the joint response across a range of doses, i.e., the dose-response surface? For simplicity, concepts will be illustrated with a mixture of two compounds; the ideas presented here generalize to more complicated situations. The problem can be visualized with a three-dimensional diagram: plot the dose of one compound on the $X_1$-axis, the dose of the second compound on the $X_2$-axis, and the response on the $Y$-axis (Fig. 10.1, right). In some situations, the response surface can be accurately predicted using component-based mixtures models, as briefly discussed later in this chapter and elsewhere in this volume (Chaps. 9, 11 and 14). Sometimes toxicologists are faced with a well-defined mixture whose composition is fixed or nearly so; for such multicomponent defined mixtures, direct toxicological testing of the complete mixture can be used. Toxicologists also directly test highly complex, environmentally realistic mixtures (whole mixtures) in which part of the mixture mass is known and the rest consists of unidentified components (Narotsky et al. 2012, 2013).

Fig. 10.1
figure 1

An important mixtures problem in toxicology: When and how can dose-response curves and other information about individual components (left) predict the dose-response surface of the mixture (right)?

1.2 The Mixtures Problem for Epidemiologists

Environmental epidemiologists face a related but complementary problem. For component-based mixtures, toxicologists can choose the compounds they examine and the combinations of doses. Since epidemiologists cannot ethically expose people to toxic compounds, they look for natural experiments where such exposures occur. For each person, the investigator needs to know the exposure to each compound $X_j$ (during the biologically relevant time period), the outcome $Y$, as well as potential confounders and effect measure modifiers. At the simplest level, a confounder is a third variable that is associated with the exposure and is an independent predictor of the outcome (for a more thorough definition and discussion of confounding, see Aschengrau and Seage 2013; Rothman et al. 2008). If not controlled in some way, confounding distorts the relationship between the exposure of interest and the outcome; this distortion can be in any direction, either diminishing/masking a true association or creating a false one. It is important to note that omitting a risk factor for the outcome does not cause confounding if that factor is not also associated with the exposure of interest. Effect measure modification is discussed in more detail later in the chapter.

For now, two simplifying assumptions are made: the outcome $Y$ is continuous, and there is no effect measure modification. The exposure of each person can be plotted as a point on the $X_1$–$X_2$ plane, generating the distribution of data in exposure space (e.g., Fig. 10.2). Statisticians typically use regression modeling to estimate the associations between each exposure and the outcome (Chap. 8) as well as statistical interaction. For example, one might use a regression equation of the form

$$ Y={\beta}_0+{\beta}_1{X}_1+{\beta}_2{X}_2+{\beta}_{12}{X}_1{X}_2+{\beta}_3Z+\varepsilon $$
(10.1)

where $\beta_0$ is a constant; $\beta_1$ and $\beta_2$ are the effect estimates for the two exposures $X_1$ and $X_2$ individually, called the main effects (e.g., Harrell 2001). In statistics, $\beta_{12}X_1X_2$ is called a multiplicative interaction term. As discussed later in this chapter, this is not necessarily the same as interaction from an epidemiologic or toxicologic point of view. $Z$ is a confounder that requires adjustment, and $\varepsilon$ is an error term, assumed to be random and unmeasured. Regression uses the data for each person $(X_{ij}, Y_i, Z_i)$ to estimate the parameters $\beta_0$, $\beta_1$, $\beta_2$, $\beta_{12}$, and $\beta_3$. Equation 10.1 is quite simple. In addition to the two assumptions discussed above, it also assumes that the individual dose-response curves are linear and that there is only one confounder; these assumptions can be relaxed, and more complicated interaction terms can also be used. While toxicologists can usually avoid confounding by experimental design (Chaps. 8 and 13), confounding is a major concern in environmental epidemiology; this difference arises from the uncontrolled quality of natural experiments. For example, only a limited number of exposure variables (e.g., different chemicals) are usually measured, potentially causing confounding.
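
To make this concrete, here is a minimal sketch in Python (using the statsmodels package; all data are simulated and the variable names and coefficient values are hypothetical) fitting Eq. 10.1 by ordinary least squares:

```python
# Minimal sketch of fitting Eq. 10.1 by ordinary least squares.
# Simulated data stand in for measured exposures, outcome, and confounder.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
x1 = rng.lognormal(size=n)          # exposure 1 (e.g., a serum concentration)
x2 = rng.lognormal(size=n)          # exposure 2
z = rng.normal(size=n)              # one measured confounder
# Outcome with main effects, a product interaction term, and random error
y = 1.0 + 0.5 * x1 + 0.3 * x2 + 0.2 * x1 * x2 + 0.4 * z + rng.normal(size=n)

df = pd.DataFrame({"y": y, "x1": x1, "x2": x2, "z": z})
# 'x1:x2' adds the multiplicative interaction term beta_12 * X1 * X2
model = smf.ols("y ~ x1 + x2 + x1:x2 + z", data=df).fit()
print(model.params)  # estimates of beta_0, beta_1, beta_2, beta_12, beta_3
```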

Fig. 10.2
figure 2

Example of exposure space for two compounds, BDE47 and PCB153, as they occur in human serum (ng/g lipid weight, unpublished data). Each axis represents an exposure; each point represents a person

While a component-based toxicology approach uses data on individual compounds to estimate the effect of the mixture, the epidemiologist tries to estimate the response surface directly from the data in exposure space: Equation 10.1 is a model of the dose-response surface (it also yields estimates of the individual dose-response curves, e.g., by setting the other compound equal to zero). Unlike toxicologists, epidemiologists often have neither a priori individual chemical dose-response information nor information about toxicological mechanism.

Epidemiologists have at least three questions in mind when studying exposure to mixtures (Braun et al. 2016): (1) Variable selection: Which components of the mixture contribute to the outcome? In our example, are both X 1 and X 2 associated with the effect? (2) Are there “interactions” (however defined) between the two exposures? (3) Can some kind of summary measure of the exposures be constructed (discussed later in this chapter)? Examining mixtures in epidemiology is a difficult problem, facing a number of challenges (e.g., Braun et al. 2016).

Several recent efforts have aimed at developing methods to evaluate mixtures of chemicals in epidemiology, e.g., the EPA multipollutant workshop (Johns et al. 2012) and the NIEHS workshop Statistical Approaches for Assessing Health Effects of Environmental Chemical Mixtures in Epidemiology (NIEHS 2015). Novel methods that have been applied to assess mixtures in epidemiological settings include weighted quantile sum regression (Carrico et al. 2015), Bayesian kernel machine regression (Bobb et al. 2014), and exposure space smoothing (Webster and Vieira 2015). A series of publications comparing various methods is expected to come out of the NIEHS workshop (e.g., Taylor et al. 2016).

2 Toxicology and Epidemiology of Mixtures: The Importance of Exposure Space

To generalize, instead of two exposures, suppose there are J exposures. Thus, instead of the simple diagram in Fig. 10.2, the exposure space has J dimensions; the response variable adds an additional dimension. This space is potentially very large: it is estimated that there are somewhere between 25,000 and 84,000 chemicals in commerce in the USA (IOM 2014). To this one might add natural compounds, metabolites, pharmaceuticals, and nonchemical exposures. (However, we have still simplified, as considerations of time and exposure measurement error are omitted from the discussion.)

It is currently not possible to simply test our way out of the mixtures problem: the numbers are too big. For example, even if we examined just one dose of every three-way combination of 25,000 chemicals, the number of toxicology experiments required would be $2.6\times 10^{12}$. We clearly need ways to reduce the number of combinations. This sobering fact provides a compelling rationale for the two main toxicological approaches to mixtures. The component-based approach, when it is applicable, requires data only on individual compounds. When the relative composition of a mixture is fixed, varying the dose of the whole mixture produces a ray in exposure space, a line from the origin outward (Chap. 13).
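
The arithmetic behind this number is easy to reproduce; the count is simply the number of ways to choose 3 chemicals from 25,000:

```python
# One-line check of the combinatorial explosion: three-way combinations of
# 25,000 chemicals, each tested at a single dose.
from math import comb

print(f"{comb(25_000, 3):.2e}")  # ~2.60e+12 toxicology experiments
```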

The distribution of points in exposure space is thus of key importance for both toxicology and epidemiology (e.g., Carlin et al. 2013). Large pieces of the space will be empty: such combinations of exposure do not occur and do not require investigation. Some exposures will be highly correlated, even forming rays amenable to whole mixture approaches. Understanding exposure space is also critical for environmental epidemiologists, as they cannot control the distribution of exposures except, to some degree, by choice of populations. The good news is that epidemiologists study exposures as they actually occur—thus targeting important parts of exposure space—at least if those exposures are measured. But this can also pose problems. Estimating both main effects and interactions is difficult unless the population under study is sufficiently large and the data are to some degree spread across exposure space: with two exposures, one would ideally have groups of people exposed to both compounds, to neither compound, and to only one or the other. If two exposures are highly correlated, it is difficult to disentangle their separate effects; putting both exposures in the regression model can produce unstable estimates, a problem called collinearity (note that if exposures are very highly correlated, the epidemiological problem is related to the whole mixture approaches of toxicology). Suppose components A and B are correlated because they come from a common source, but only A contributes to the effect. If only B is measured (or included in a model), it will incorrectly appear to be associated with the outcome, i.e., it is confounded by the missing exposure (Fig. 10.3). For all of these reasons, it is important to increase the number of exposures that are measured and examined: expanded targeted analysis, nontargeted analysis, and similar approaches are critical for a better understanding of what mixtures occur. Targeted analysis looks for specific compounds in a sample; nontargeted analysis is a screening approach that can identify previously unknown or uncharacterized exposures (e.g., Getzinger et al. 2015; Chaps. 3 and 4). Analysis of such expanded exposure data will also require larger sample sizes, e.g., to achieve desired levels of statistical power.

Fig. 10.3
figure 3

Suppose that exposures A and B are correlated because they both arise from a common source. A, but not B, causes the outcome Y. If B is the only exposure measured, it will falsely appear to be associated with Y due to confounding

Methods for analyzing the information in exposure space are also important. For example, Fig. 10.4 illustrates the correlations among a set of persistent organic pollutants in human serum from the same cohort as Fig. 10.2, but with more compounds (unpublished data). The dendrogram, which can be read like a family tree, was constructed using hierarchical clustering: compounds joined closer to the bottom are more highly correlated (using Spearman’s correlation coefficients of the serum concentrations), those joined nearer the top less so. For example, BDE47 and BDE99, two polybrominated diphenyl ether (PBDE) congeners that occur in the same commercial flame retardant and have similar routes of exposure, are highly correlated and tightly clustered. The PBDEs and PCBs fall into two different clusters because they are only weakly correlated with each other; for example, BDE47 is not well correlated with PCB153, as shown in Fig. 10.2. This suggests that PBDEs and PCBs are unlikely to confound each other (at least in this cohort); it does not, however, preclude “additive” or “interactive” effects.
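
The following is a sketch of the workflow behind such a dendrogram (Python with scipy; the serum data here are simulated stand-ins, not the cohort data of Fig. 10.4):

```python
# Sketch of the workflow behind Fig. 10.4: hierarchical clustering of serum
# biomarkers, with 1 - Spearman correlation as the distance. The data below
# are simulated (two correlated PBDE congeners, two correlated PCB congeners).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
n = 200
pbde = rng.normal(size=n)                      # shared PBDE source (log scale)
pcb = rng.normal(size=n)                       # shared PCB source (log scale)
serum = pd.DataFrame({
    "BDE47": pbde + 0.3 * rng.normal(size=n),
    "BDE99": pbde + 0.3 * rng.normal(size=n),
    "PCB153": pcb + 0.3 * rng.normal(size=n),
    "PCB180": pcb + 0.3 * rng.normal(size=n),
})

rho = serum.corr(method="spearman")            # compound-by-compound correlation
condensed = squareform((1.0 - rho).values, checks=False)
tree = linkage(condensed, method="average")    # agglomerative clustering
dendrogram(tree, labels=list(rho.columns))
plt.ylabel("1 - Spearman correlation")         # lower joins = higher correlation
plt.show()
```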

Fig. 10.4
figure 4

Dendrogram showing the correlations between the concentrations of a number of persistent organic pollutants in human serum

3 Component-Based Mixture Methods in Toxicology

When mechanisms are sufficiently well understood, models of the effect of mixtures can sometimes be constructed. For example, biologically based mathematical models can be built for receptor activation by mixtures of ligands, one important mechanism of endocrine disruption (e.g., Weiss et al. 1996; Howard and Webster 2009; Webster 2013). However, toxicologists and risk assessors often need to make predictions without this kind of detail. Two main approaches are used. As these ideas are discussed in more detail elsewhere in this book (Chap. 9), they are only briefly reviewed here.

For compounds that act via similar toxicological mechanisms, one approach assumes dose addition, also known as concentration addition. Compounds that are dose additive obey the following equation (see Chap. 9):

$$ \sum \limits_{j=1}^J\frac{d_j}{{\mathrm{ED}}_{j,y}}=1 $$
(10.2)

where $d_j$ is the dose of compound $j$ and $\mathrm{ED}_{j,y}$ is the dose of compound $j$ alone that causes response level $y$, e.g., the $\mathrm{ED}_{10}$ (Berenbaum 1989). Depending on the number of dimensions, Eq. 10.2 describes a line, plane, or hyperplane. As a result, the isoboles (contours) of the response surface form negatively sloped lines, planes, or hyperplanes when projected onto the exposure space (Fig. 10.5).
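
A minimal computational sketch of Eq. 10.2 (the ED values are hypothetical): a dose combination lies on the isobole for response level $y$ exactly when the sum of dose fractions equals 1.

```python
# Sketch of the dose-additivity condition (Eq. 10.2): a dose combination lies
# on the isobole for response level y when the dose fractions sum to 1.
# The ED10 values below are hypothetical.
def dose_addition_index(doses, eds):
    """Sum of d_j / ED_{j,y}; equals 1 on the dose-additive isobole."""
    return sum(d / ed for d, ed in zip(doses, eds))

ed10 = [2.0, 8.0]                             # hypothetical ED10s for two compounds
print(dose_addition_index([1.0, 4.0], ed10))  # 1.0 -> on the ED10 isobole
print(dose_addition_index([0.5, 2.0], ed10))  # 0.5 -> below the ED10 isobole
```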

Fig. 10.5
figure 5

Isoboles: The response surface on the left (a) yields the contours (isoboles) on the right (b). Compounds that are dose additive have isoboles that are negatively sloped straight lines. For compounds that follow the toxic equivalence model (a special case of dose addition with constant relative potency), the isoboles are also parallel

A simple version of dose addition is toxic equivalence: it can occur when the relative potency between compounds is the same at all response levels (Howard and Webster 2009; Chap. 14). The joint response of the mixture under toxic equivalence ($f_{\mathrm{TE}}$) is a function of a linear combination of the component doses scaled by potency. If chemical 1 is selected as the reference compound, then the mixture response can be represented by the dose-response model for chemical 1 applied to the linear combination of the component doses

$$ {f}_{\mathrm{TE}}\left[{d}_1,\dots, {d}_J\right]={f}_1\left[\sum \limits_{j=1}^J{\gamma}_j{d}_j\right] $$
(10.3)

where $f_1[\cdot]$ is the dose-response curve for the reference compound, compound 1, and the $\gamma_j$ are the relative potency factors (RPFs) relative to that reference compound. When the reference compound has the highest potency, the other compounds in the mixture act as if they were dilute versions of it (for a discussion of definitions of RPFs and TEFs, see Chap. 14 as well as USEPA 2008). The concept underlying dose addition is perhaps most easily seen for toxic equivalence: one first scales the doses by their relative potencies (to give their equivalent doses as chemical 1) and then applies the dose-response function of chemical 1 to the total equivalent dose. Under toxic equivalence, the isoboles are always parallel, negatively sloped straight lines (Fig. 10.5b).
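
As an illustration, a small sketch of Eq. 10.3 (Python; the Hill-shaped reference curve and the RPF values are hypothetical choices, not recommended values):

```python
# Sketch of the toxic equivalence model (Eq. 10.3): scale each component dose
# by its relative potency factor, sum, and apply the reference compound's
# dose-response curve. Curve shape and parameter values are hypothetical.
import numpy as np

def f1(d, emax=100.0, ed50=10.0):
    """Hypothetical dose-response curve of the reference compound (Hill, slope 1)."""
    return emax * d / (ed50 + d)

def f_te(doses, rpfs):
    """Mixture response under toxic equivalence."""
    return f1(np.dot(rpfs, doses))  # total dose in reference-compound equivalents

rpfs = [1.0, 0.1, 0.01]             # compound 1 is the reference (RPF = 1)
doses = [2.0, 30.0, 100.0]
print(f_te(doses, rpfs))            # response to the three-component mixture
```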

We have proposed a modification of dose addition called generalized concentration addition (GCA) that can in principle handle mixtures of full and partial agonists, i.e., compounds with different maximal responses (Howard and Webster 2009). This class of models has been successfully applied to mixtures of full and partial agonists of the AhR and PPARγ receptors (Howard et al. 2010; Watt et al. 2016). Isoboles for mixtures obeying GCA are straight lines but can have negative or positive slopes: the latter implies that the partial agonist acts like a competitive antagonist at response levels above its maximum effect level (Howard and Webster 2009).
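
For unit-slope Hill dose-response functions, GCA admits a closed-form mixture response (cf. Howard et al. 2010); the sketch below (all parameter values hypothetical) illustrates the antagonist-like behavior of a partial agonist at high response levels:

```python
# Sketch of generalized concentration addition (GCA) for unit-slope Hill
# curves f_j(d) = alpha_j * d / (K_j + d), a special case with a closed-form
# mixture response (cf. Howard et al. 2010). Parameter values are
# hypothetical; compound 2 is a partial agonist (lower maximal response).
def f_gca(doses, alphas, ks):
    num = sum(a * d / k for a, d, k in zip(alphas, doses, ks))
    den = 1.0 + sum(d / k for d, k in zip(doses, ks))
    return num / den

alphas = [100.0, 40.0]                  # maximal responses: full vs partial agonist
ks = [10.0, 5.0]                        # half-maximal doses
print(f_gca([50.0, 0.0], alphas, ks))   # ~83: full agonist alone
print(f_gca([50.0, 50.0], alphas, ks))  # ~56: adding the partial agonist pulls
                                        # the response down, antagonist-like behavior
```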

When compounds act via different mechanisms, many mixtures toxicologists use independent action. The mixture response expected under independent action ($f_{\mathrm{IA}}$) is:

$$ {f}_{\mathrm{IA}}\left[{d}_1,\dots, {d}_J\right]=1-\prod \limits_{j=1}^J\left(1-{f}_j\left[{d}_j\right]\right) $$
(10.4)

Originally derived from independence in probability theory, independent action assumes that responses range between zero and one.

Effect summation (ES) is defined by

$$ {f}_{\mathrm{ES}}\left[{d}_1,\dots, {d}_J\right]=\sum \limits_{j=1}^J{f}_j\left[{d}_j\right] $$
(10.5)

and describes the excess effect above controls (Chap. 9); note that at very low effect levels, independent action is approximated by effect summation. It is worth emphasizing that effect summation has often been rejected as a general mixtures model by mixtures toxicologists (e.g., Howard and Webster 2009), but it may be useful when dose-response curves are approximately linear or under other conditions (Chap. 14). Effect summation nevertheless appears frequently in the toxicology literature as well as in textbooks.
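
A quick numerical comparison of Eqs. 10.4 and 10.5 (fractional responses; values chosen purely for illustration) shows why effect summation approximates independent action only at low effect levels:

```python
# Numerical comparison of independent action (Eq. 10.4) and effect summation
# (Eq. 10.5). Inputs are fractional responses between 0 and 1.
import numpy as np

def f_ia(effects):
    return 1.0 - np.prod([1.0 - f for f in effects])

def f_es(effects):
    return sum(effects)

low = [0.01, 0.02, 0.005]
high = [0.4, 0.5, 0.3]
print(f_ia(low), f_es(low))    # 0.0347 vs 0.0350: nearly identical at low effects
print(f_ia(high), f_es(high))  # 0.79 vs 1.20: ES overshoots and can even exceed 1
```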

Perhaps not surprisingly, there has been discussion in the toxicology literature about when one should use dose addition vs. independent action, in particular, how similar the toxicological mechanisms must be for dose addition to apply (Webster 2013). The choice of dose addition vs. independent action can have profound consequences; this is nicely illustrated in the “something from nothing” experiment of Silva et al. (2002), where a mixture of xenoestrogens, each at a dose below its empirical no-effect level, produces a response in combination; independent action would predict no combination response. Further, the compounds exhibit toxic equivalence, and thus dose addition describes the mixture response well. As the dose-response curves for these compounds are concave upward at low doses (Fig. 10.6a), the combined response exceeds that predicted by effect summation (Silva et al. 2002; Rajapakse et al. 2002). Some models (integrated addition models) combine features of both dose addition and independent action (e.g., Rider and LeBlanc 2005; Chap. 9).

Fig. 10.6
figure 6

(a) Suppose A and B follow the toxic equivalence model and have the same potency, with a nonlinear dose-response curve that is concave up. The same dose of A alone or B alone gives the same response. For a mixture of these doses, effect summation would predict twice the response of either compound alone; dose addition gives the correct, higher value. But from the point of view of epidemiology, the incremental effect of compound B depends on the amount of compound A: they interact, as defined by epidemiologists. (b) Suppose A and B follow the toxic equivalence model and have the same potency, with a linear dose-response curve. The same dose of A alone or B alone gives the same response. For a mixture of these doses, effect summation and dose addition give the same, correct value. From the point of view of epidemiology, the incremental effect of compound B does not depend on the amount of compound A: they do not interact, as defined by epidemiologists

In sum, mixtures toxicologists often use either dose addition or independent action to estimate the effect of mixtures from individual components. The choice often depends on the toxicological similarity of the components. Furthermore, dose addition (or GCA) and independent action can be considered as toxicologic definitions of non-interaction/additivity. Having specified such a definition, one can then determine if a mixture has an interaction, producing responses greater than additive or less than additive relative to the chosen definition.

4 Additivity and Interaction in Epidemiology

To borrow a frequently used example (Rothman 1986), Table 10.1 shows hypothetical risks of lung cancer categorized by exposure to asbestos and smoking in a cohort study (rates of disease could also be used). Let’s assume there is no confounding or other bias. The highest risk is in the doubly exposed people. As will be seen, epidemiologists think about interaction in quite a different way from toxicologists.

Table 10.1 Hypothetical risks of lung cancer categorized by exposure to asbestos and smoking (2 × 2 table). Risks and risk differences (RD) are expressed as cases per 100,000; relative risks (RR) are ratios

Epidemiologists summarize the associations between exposure and disease using effect measures. For example, Table 10.1 shows the relative risks (RR) for the association between lung cancer and asbestos, holding smoking constant. In this case, the RR—the ratio of the risk in the asbestos exposed to the risk in the asbestos unexposed—equals 5 for smokers and also 5 for non-smokers. As the RRs are the same in both strata, there is no effect measure modification of the asbestos RR by smoking. Similarly, Table 10.1 shows that asbestos exposure does not modify the RR for smoking and lung cancer: it is 10 in both strata. On the other hand, suppose the investigator used another equally valid effect measure: the risk difference (RD), which equals the risk in the exposed minus the risk in the unexposed. The RD for the association between lung cancer and asbestos is 40/100,000 for smokers and 4/100,000 for non-smokers. Thus, the asbestos RD is modified by smoking (and vice versa). This phenomenon is called effect measure modification because it depends on the choice of effect measure, here either RD or RR (there are other possibilities such as odds ratios, typically used in case-control studies).

Epidemiologists consider effect measure modification to be descriptive. Interaction is a different concept. Like toxicologists, epidemiologists use the term interaction to mean nonadditive, either greater than additive (called synergism by epidemiologists) or less than additive. Unlike toxicologists, however, epidemiologists have a single definition of non-interaction (additivity), making no distinction based on mechanism: it is based on additivity of risk differences. For the example in Table 10.1, epidemiologists would say that asbestos and smoking interact, having a greater than additive (synergistic) effect on lung cancer. Indeed, some epidemiologists call this “biologic interaction” (e.g., Ahlbom and Alfredsson 2005).

To see how epidemiologists judge Table 10.1 to display nonadditivity, let’s briefly review one derivation of the epidemiologic definition of interaction (for a detailed explanation, see Howard and Webster 2013; Rothman et al. 2008). As in Table 10.1, epidemiologic examples traditionally use binary exposures and outcomes. One model for interaction in epidemiology relies on what are called counterfactual susceptibility types. For pairs of two binary exposures A and B, there are 16 possible patterns of exposures and outcomes, each of which can be considered a possible response type (Table 10.2). For some types, one needs to know the value of both exposures to know the outcome; for example, for people of type 8, the outcome occurs only if both exposures occur, i.e., A = B = 1. Epidemiologists call these types interdependent. For non-interdependent types, the effect of exposure to one compound does not depend on the other; for example, people of type 6 will have the outcome if A = 1, irrespective of the value of B. Risks associated with different exposure scenarios can be written as $r_{AB}$; e.g., $r_{10}$ is the risk in the population exposed to A but not to B. Writing down the risks associated with only the non-interdependent types—types 1, 4, 6, 11, 13, and 16—(and assuming no confounding or bias) yields the following equation:

$$ \left({r}_{11}-{r}_{00}\right)=\left({r}_{10}-{r}_{00}\right)+\left({r}_{01}-{r}_{00}\right) $$
(10.6)

where $r_{ij}$ denotes the risk for each exposure pattern (e.g., $r_{10}$ is the risk in a population exposed to A but not to B). This equation means that the risk difference between the jointly exposed ($r_{11}$) and the jointly unexposed ($r_{00}$) is equal to the sum of the risk differences due to the individual exposures, $(r_{10}-r_{00})$ and $(r_{01}-r_{00})$. Since this equation includes only non-interdependent types, deviation from it implies the presence of interdependent types. Thus risk difference additivity is used by epidemiologists as the criterion for interaction/interdependence. Additivity is necessary but not sufficient for the absence of interdependence: interaction may occur even if Eq. 10.6 holds, e.g., if the risks associated with interdependent types cancel out (dividing Eq. 10.6 by $r_{00}$ provides an equivalent criterion in terms of relative risks). Applying this equation to Table 10.1 shows that asbestos and smoking interact: indeed, they have a greater than additive effect on lung cancer. Setting $r_{11} = 50$, $r_{10} = 10$, $r_{01} = 5$, and $r_{00} = 1$ yields

$$ \left(50-1\right)>\left(10-1\right)+\left(5-1\right) $$
(10.7)
Table 10.2 Counterfactual susceptibility type model for two exposures provides a basis for thinking about interaction for epidemiologists. The outcome (binary) depends on the combination of exposures (A, B)

For simplicity, the denominator of 100,000 was omitted for all of the numbers in Eq. 10.7.
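
The additivity check of Eqs. 10.6 and 10.7 is easily scripted (risks per 100,000 from Table 10.1; the relative-risk version in the comments corresponds to dividing Eq. 10.6 by $r_{00}$ and is often summarized as the relative excess risk due to interaction, RERI):

```python
# Risk-difference additivity check (Eq. 10.6) for the hypothetical data of
# Table 10.1. Risks are cases per 100,000; r_ab = risk for exposure pattern (A, B).
r00, r01, r10, r11 = 1, 5, 10, 50

joint_excess = r11 - r00                      # 49
sum_of_excesses = (r10 - r00) + (r01 - r00)   # 9 + 4 = 13
print(joint_excess > sum_of_excesses)         # True: greater than additive (Eq. 10.7)

# Equivalent contrast on the relative-risk scale (Eq. 10.6 divided by r00),
# summarized as the relative excess risk due to interaction (RERI):
reri = r11 / r00 - r10 / r00 - r01 / r00 + 1  # 50 - 10 - 5 + 1 = 36
print(reri > 0)                               # True: synergism on the additive scale
```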

This all might seem very reasonable until one realizes the following fact: the epidemiologic definition of biological interaction is consistent with effect summation, the definition rejected by many mixtures toxicologists! Equation 10.6 is a special case of effect summation, Eq. 10.5, where outcomes and exposures are binary (Howard and Webster 2013). To see this, recall that effect summation examines the excess effect above controls: for two exposures, Eq. 10.5 is equivalent to Eq. 10.6 with the background risk ($r_{00}$) subtracted.

Similar ideas about interaction are sometimes applied to continuous outcomes in epidemiology. For continuous outcomes $Y$ (untransformed) and linear dose-response curves (or binary exposures), regression models such as Eq. 10.1 can test for statistical interaction (and control for confounding). The beta coefficients for the main effects are the differences in outcome per unit of exposure. Results consistent with $\beta_{12}$ meaningfully different from zero imply an interaction under the epidemiologic definition. This conclusion depends on the use of an additive scale: the presence of a non-zero interaction term in more general regression models does not necessarily imply interaction from the epidemiologic point of view, i.e., statistical interaction is not the same as epidemiologic interaction. For example, in a logistic model, as is commonly used for binary outcomes, one cannot simply examine an interaction term; there are, however, more complicated methods to assess epidemiologic interaction in such cases (Andersson et al. 2005).

5 Contrasting Interaction in Toxicology and Epidemiology

What would toxicologists and pharmacologists say about the data in Table 10.1? To highlight differences between epidemiology and toxicology, consider replacing asbestos and smoking with compounds A and B, about which little is known (the epidemiologic conclusion would remain the same). The answer would depend on whether one hypothesized that A and B worked by similar or different mechanisms. If toxicologists thought the compounds acted by “different” mechanisms, they might use independent action and conclude that the joint effect is greater than additive (since the risks are small, independent action is approximately equal to effect summation). Suppose instead they believed the chemicals acted by “similar” mechanisms? Unfortunately, Table 10.1 does not contain enough information to determine whether A and B (or smoking and asbestos) are dose additive; one would also need information or assumptions about the dose-response curves for each compound alone.

The contrasting ideas about interaction and additivity between epidemiology and toxicology are perhaps most stark when comparing effect summation with dose addition for a mixture of compounds that follow toxic equivalence (a special case of dose addition) with a dose-response curve that is concave upward (e.g., Fig. 10.6a). The mixture is nonadditive from the epidemiologic point of view, where one first applies the dose-response function to each compound separately and then adds the results: the sum of the individual effects is less than the effect of the mixture. But toxicologists would define this mixture as additive (relative to dose addition): for toxic equivalence, one first adds the component doses scaled by relative potency factors and then applies the dose-response function of the reference compound. The contrast derives from the underlying logic of the epidemiologic definition. As illustrated in Fig. 10.6a, the increase in effect due to a dose of compound B depends on whether compound A is also present in the mixture. Hence, nonlinear dose-response curves lead to interaction as defined by epidemiologists. The toxicologic and epidemiologic definitions coincide when the dose-response curve is linear (Fig. 10.6b).
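
A toy computation makes the contrast explicit (the concave-up curve $f(d) = d^2$ is a hypothetical choice standing in for Fig. 10.6a):

```python
# Toy version of the contrast in Fig. 10.6a: A and B follow toxic equivalence
# with equal potency; the shared curve f(d) = d**2 is a hypothetical
# concave-up choice.
def f(d):
    return d ** 2

d = 1.0
effect_summation = f(d) + f(d)   # apply the curve to each dose, then add: 2.0
dose_addition = f(d + d)         # add the doses, then apply the curve: 4.0
print(effect_summation, dose_addition)
# With a linear curve f(d) = c*d, the two calculations coincide (Fig. 10.6b).
```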

Now suppose that A and B are merely different doses of the same substance, an idea called sham substitution (Berenbaum 1989). Although the definition of dose addition used here does not depend on this idea, it is sometimes used by toxicologists as a rationale for thinking of dose addition as noninteractive. According to this line of thought, a compound does not interact with itself. From the epidemiologic point of view, sham substitution implies that different doses of the same compound do interact when dose-response curves are nonlinear (Rothman 1974; Howard and Webster 2013).

Toxicologists and epidemiologists thus use the same terminology—additive, greater than additive, less than additive—but mean something quite different. Understanding this difference is important for interpreting mixtures studies that come out of the two fields. None of this discussion means that toxicology is correct and epidemiology is wrong or vice versa. Definitions cannot be “wrong” (at least, if used logically); the real test is whether they are useful. Research is needed comparing mixtures studies in the two fields. Epidemiology has the possible advantage of relying on one definition rather than two as in toxicology, where the choice depends on sometimes fuzzy distinctions about similarity of mechanism (Howard and Webster 2013). It is possible that the toxicologic definition sheds more light on biology, whereas the epidemiologic definition (despite being called biologic by some epidemiologists) might be more useful for thinking about intervention to protect public health (e.g., Rothman et al. 1980). As an example of the latter, consider two exposures that have a greater than additive effect from the epidemiologic point of view; this implies that reduction of either exposure may lead to a dramatic reduction of risk. Returning to our example of Table 10.1, preventing exposure to either smoking or asbestos would have a large impact, greatly reducing the risk of lung cancer in those who would otherwise have been exposed to both.

6 Combining Ideas from Toxicology and Epidemiology

Progress on mixtures would benefit from greater communication and collaboration between toxicologists, epidemiologists, statisticians, and exposure scientists. The mixtures problem can be thought of as having two sub-questions:

  1. What are the patterns of co-exposure in real populations?

  2. What are the health effects of the mixtures to which populations are exposed?

As discussed above, exposure science has much to contribute to the first question. The second can be investigated by the complementary approaches of toxicology and epidemiology.

Epidemiologists have used some of the toxicological ideas discussed in this chapter. Perhaps the best example is the use of TEFs when studying the health effects of exposure to dioxin-like compounds in people (e.g., Korrick et al. 2011). For example, when exposure is measured using blood concentrations, one multiplies each concentration by the appropriate TEF and sums the products. The result is then used as a summary measure of exposure in a regression equation, which avoids the collinearity problems discussed above. With sufficient toxicological information to construct RPFs, this strategy could be applied to other classes of compounds; it could be a very fruitful line of collaboration between toxicologists and epidemiologists.
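
As a sketch, the TEF-weighted summary is a one-line computation (the congener names and TEF values below are hypothetical placeholders, not the WHO consensus values):

```python
# Sketch of a TEF-weighted summary exposure: each measured concentration is
# multiplied by its TEF and the products are summed into a single toxic
# equivalent (TEQ). Congener names and TEF values are hypothetical
# placeholders, not the WHO consensus values.
tefs = {"congener_1": 1.0, "congener_2": 0.1, "congener_3": 0.01}
conc = {"congener_1": 0.5, "congener_2": 12.0, "congener_3": 80.0}  # e.g., pg/g lipid

teq = sum(tefs[c] * conc[c] for c in tefs)
print(teq)  # single summary exposure variable for the regression model
```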

Let’s now take a more general view of the mixtures problem for epidemiologists, one called exposure space smoothing (Webster and Vieira 2015). Important limitations of Eq. 10.1 include the assumption of linearity of dose-response functions and a particular mathematical form of the interaction term. These restrictions can be avoided by using a smoothing function $f[\cdot]$ of the exposures

$$ g\left[Y\right]=f\left[{X}_1,{X}_2\right]+{\gamma}^{\prime }Z+\varepsilon $$
(10.8)

For simplicity, only two exposures $(X_1, X_2)$ are shown, but higher dimensional smooths are possible with sufficient data. Equation 10.8 also includes a link function $g[Y]$ of the outcome, allowing the use of continuous, binary, and other types of outcome data. Equation 10.8 can be treated as a generalized additive model (GAM) (Hastie and Tibshirani 1990). GAMs can also adjust for confounders, important for any epidemiologic analysis (Eq. 10.8 adjusts for a vector of confounders $Z$). Rather than imposing a specific functional form (e.g., linearity), smoothing functions use the data to inform the shape. There are a number of ways to do smoothing, but one method estimates the value of the function at a particular point by using a weighted average of the outcomes at points that are nearby in exposure space (for details on how this works in two-dimensional geographic space, see Webster et al. 2006). The results of a two-dimensional smooth can be displayed in a number of ways, e.g., as color-coded maps or by using contours (e.g., Fig. 10.5b); for more dimensions, slices can be displayed. Such results can also be used as exploratory data analysis to inform additional modeling. The contours of the response surface, called isoboles, have a toxicologic interpretation. As discussed above, isoboles that are approximately negatively sloped parallel straight lines suggest that a summary measure can be constructed using RPFs. If the RPFs are not known from toxicologic data, they might be estimated using other approaches, including methods such as weighted quantile sum regression (Carrico et al. 2015). Isoboles that curve toward the origin suggest a greater than dose-additive response; isoboles that curve away from the origin suggest a less than dose-additive response.
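
As one possible implementation sketch (Python with the pygam package, on simulated data; this is not the specific smoothing method of Webster and Vieira 2015), Eq. 10.8 can be fit with a bivariate tensor smooth over the two exposures plus a linear term for the confounder:

```python
# Sketch of exposure space smoothing as a GAM (Eq. 10.8), using the pygam
# package as one possible tool. Data are simulated; the tensor term te(0, 1)
# plays the role of the bivariate smooth f[X1, X2], and l(2) is a linear
# term for the confounder Z.
import numpy as np
from pygam import LinearGAM, te, l

rng = np.random.default_rng(2)
n = 1000
x = rng.lognormal(size=(n, 2))                 # two exposures
z = rng.normal(size=n)                         # one confounder
# Simulated outcome with a nonlinear joint exposure effect
y = np.sqrt(x[:, 0] * x[:, 1]) + 0.4 * z + rng.normal(scale=0.3, size=n)

features = np.column_stack([x, z])
gam = LinearGAM(te(0, 1) + l(2)).fit(features, y)  # identity link: continuous Y
gam.summary()
# Contours of the fitted smooth over the (X1, X2) plane estimate the isoboles
# and can be read against the toxicologic models discussed above.
```

For a binary outcome, a logit link (e.g., pygam’s LogisticGAM) would play the role of $g[Y]$.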

Another interesting potential approach that combines aspects of exposure science, toxicology, and epidemiology is EAMEDA: exposomic analysis of mixtures via effect-directed analysis (Fig. 10.7). Effect-directed analysis (EDA) uses a high-throughput response assay, e.g., reporter assays, to biologically compute the combined effect of a mixture (e.g., serum or dust) on that biological endpoint. Working backward, chemical fractionation and targeted and nontargeted analysis are used to identify the components of the mixture responsible for the result (e.g., Simon et al. 2013; Fang et al. 2015, Chap. 3). For example, Fang et al. (2015) measured total PPARγ activity of dust extracts using a reporter assay. The dust samples were then chemically fractionated with normal phase high-performance liquid chromatography. Each fraction was retested with the reporter assay. In fractions with significant activity, compounds were identified using targeted and nontargeted analysis. Fatty acids were determined to be a major contributor to the dust PPARγ activity. Simon et al. (2013) used a transthyretin-binding assay to examine an aspect of thyroid hormone disruption in polar bears. Nonylphenols and certain hydroxylated PCBs contributed to this activity in plasma. The biologically based measure of activity can be considered the central focus of the EAMEDA concept: (1) Investigators work backward using EDA to determine which compounds contribute to the activity. (2) They could also work forward, using the result of the assay—an integrated measure of the activity of the mixture—as the measure of exposure in an epidemiologic study. Clearly, the appropriateness of the assay and the samples would need to be carefully considered.

Fig. 10.7
figure 7

Overview of EAMEDA: Exposomic Analysis of Mixtures via Effect Directed Analysis. A biological assay (e.g., luciferase reporter assay) is used to measure the integrated activity of the sample. The investigator uses these results as the exposure measure in an epidemiology study. One also uses effect-directed analysis (or some related technique) to determine which compounds in the sample account for the activity

I look forward to greater synergy—or at least additivity—between toxicologists, epidemiologists, statisticians, and exposure scientists in investigations of the mixtures problem.