1 Theory of Availability

Soil ingestion (see Chapter 6 by Bierkens et al., this book; Paustenbach 2000) is a key exposure pathway in Human Health Risk Assessment (HHRA) for contaminants in soil (see Chapter 5 by Swartjes and Cornelis, this book). To date, research on the human bioavailability of soil contaminants via the ingestion exposure pathway has concentrated on inorganic contaminants, and this chapter is therefore focused on these contaminants. The priority inorganic contaminants have been arsenic (Beak et al. 2006; Bruce et al. 2007; Davis et al. 1996a; Ellickson et al. 2001; Juhasz et al. 2008; Koch et al. 2005; Laird et al. 2007; Palumbo-Roe and Klinck 2007; Roberts et al. 2007b; Wragg et al. 2007) and lead (Abrahams et al. 2006; Denys et al. 2007; Drexler and Brattin 2007; Hamel et al. 1999; Oomen et al. 2002; Ren et al. 2006; Ruby et al. 1993, 1999; Van de Wiele et al. 2005, 2007), but work has also been carried out on other metals such as cadmium (Tang et al. 2006), chromium (Fendorf et al. 2004; Nico et al. 2006; Pouschat and Zagury 2006; Stewart et al. 2003a, b), nickel (Barth et al. 2007; Hamel et al. 1998; Hansen et al. 2007) and mercury (Cabanero et al. 2007; Davis et al. 1997). As such, the inorganic contaminants referred to in this chapter include metals and metalloids such as arsenic, lead, cadmium, chromium and nickel, but not contaminants such as cyanide. For the organic contaminants, recent insights and projects have concentrated on polyaromatic hydrocarbons (PAHs) (Gron 2005; Tilston et al. 2008) and dioxins and furans (Wittsiepe et al. 2001). However, for the degradable contaminants, estimating the bioavailable fraction appears to be more complicated than for inorganic contaminants, as the metabolites and degradation products of these contaminants have to be taken into consideration. For example, Van de Wiele et al. (2005) showed that the colon microbiota play an important role in the biodegradation of PAHs, thus influencing the outcome of the bioaccessibility measurement undertaken.

In Risk Assessments using exposure models, it is often assumed that inorganic contaminants ingested via soil are absorbed to the same extent as the available form that was used during the toxicological assessment. This assumes that the extent of absorption of an ingested metal is independent of the matrix in which it is embedded and of its chemical form (speciation). However, the binding of inorganic contaminants to solid phases in soils may render them unavailable for absorption; similar mechanisms have been extensively shown to modulate the uptake of metals by plants, in which metal speciation plays an important part (see Chapter 8 by McLaughlin et al., this book).

It can therefore be assumed that only a fraction of the inorganic contaminant will be absorbed by the receptor (human or animal); this fraction is called the bioavailable fraction. As oral exposure is being considered, the term oral bioavailability is used. When using a Risk Assessment model, it is therefore very important to understand whether the underlying algorithm incorporates bioavailability in the oral exposure pathway. This chapter focuses on oral bioavailability (OB) to humans.

1.1 Oral Bioavailability

Bioavailability can be defined as the fraction of an ingested contaminant that is absorbed and reaches the systemic circulation, where it may then cause adverse effects on human health. The internal dose is often quantified as the area under the curve (AUC) of the relationship between time and the plasma concentration of the contaminant (Fig. 7.1). Oral bioavailability can then be assessed by comparing the internal dose obtained after oral administration with that obtained after intravenous administration: the absolute oral bioavailability is defined as the ratio of the AUC after oral administration (AUCPO) to the AUC after intravenous administration (AUCIV). Other end-points and methods for determining the oral bioavailability of contaminants are also commonly used; these are briefly discussed in Section 7.1.2.
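As an illustration of this AUC-based definition, the following minimal sketch (the concentration-time values are invented purely for illustration, not measured data) estimates AUCPO and AUCIV by the trapezoidal rule and takes their ratio:

```python
def auc(times, conc):
    """Area under the plasma concentration-time curve (trapezoidal rule)."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))

# Hypothetical plasma concentrations (ug/L) at sampling times (h)
t = [0, 1, 2, 4, 8, 24]
c_oral = [0, 12, 18, 15, 8, 1]   # after oral administration
c_iv = [60, 45, 34, 20, 7, 1]    # after intravenous administration

f = auc(t, c_oral) / auc(t, c_iv)   # absolute oral bioavailability
print(f"Absolute oral bioavailability = {f:.2f}")
```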

Fig. 7.1 Pharmacokinetic definition of oral bioavailability

Oral bioavailability is of toxicological interest because the possible adverse effects of the contaminant on the exposed human subject are related to the internal dose. In order to depict the effect of soil properties on contaminant availability, this concept can be divided into several steps: accessibility, absorption and metabolism.

The most commonly used definition of oral bioavailability is based upon a two-step model with three compartments, as shown in Fig. 7.2 and Eq. (7.1):

$$F_{{\textrm{Bacc}}} \times F_{{\textrm{Abs}}} = F_{{\textrm{Bava}}} $$
(7.1)
Fig. 7.2 Steps involved in oral bioavailability

A slightly different definition (used by the Dutch National Institute for Public Health and the Environment, RIVM) includes a third step, the action of the liver (metabolism and possibly bile excretion) on the contaminant, as defined by Eq. (7.2):

$$F_{{\textrm{Bacc}}} \times F_{{\textrm{Abs}}} \times Fn_{{\textrm{Met}}} = F_{{\textrm{Bava}}} $$
(7.2)

where:

  • F Bava is the bioavailable fraction;

  • F Bacc is the fraction of ingested contaminant that is bioaccessible;

  • F Abs is the fraction of accessible contaminant that is absorbed;

  • Fn Met is the fraction of absorbed contaminant that is not metabolised in the liver (= 1 − fraction metabolised).
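A minimal numerical sketch of Eqs. (7.1) and (7.2) (the fraction values below are invented purely for illustration):

```python
def bioavailable_fraction(f_bacc, f_abs, fn_met=1.0):
    """F_Bava = F_Bacc x F_Abs (Eq. 7.1); with the liver step included,
    multiply by Fn_Met, the fraction not metabolised (Eq. 7.2)."""
    return f_bacc * f_abs * fn_met

# Hypothetical values: 60% bioaccessible, 40% of that absorbed,
# 90% escaping liver metabolism
print(bioavailable_fraction(0.60, 0.40))        # Eq. (7.1): 0.24
print(bioavailable_fraction(0.60, 0.40, 0.90))  # Eq. (7.2): 0.216
```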

1.1.1 Accessibility

Contaminants that are released from soil particles within the gastrointestinal tract are considered to be bioaccessible. For a contaminant in soil to be bioaccessible to a human via ingestion, it must be released from the soil into solution in the gastrointestinal tract, in a form that can be absorbed into the blood stream. Conditions in the stomach, such as pH and residence time, may vary depending on whether the individual is in a fed or fasted state. When simulating human gastro-intestinal conditions, it is generally considered that a fasted state is likely to mobilise the highest amount of metals (Maddaloni et al. 1998; Van de Wiele et al. 2007). The fasted state is, therefore, considered the most conservative for inorganic contaminants. Conversely, for organic contaminants the fed state is considered to be the conservative choice, for the reasons discussed in Section 7.1.1.2.

The main factors that control the release of contaminants are discussed in the following sections. The amount of contaminant liberated from the soil is considered to be the maximum amount that is potentially available for absorption; this is called the bioaccessible fraction. For metals, the bioaccessible fraction is generated mainly in the upper part of the digestive tract, particularly in the stomach compartment. The extent to which ingested contaminants are rendered accessible depends on the physico-chemical conditions in the digestive tract, as well as on the transit time. Both are modulated by the presence of food and soil in the digestive tract. The differences between the fed and fasted states are:

  • The presence or absence of food;

  • The amounts of digestive agents, such as enzymes and bile, in the intestinal fluid (these can be very different in the fed and fasted states); and

  • The pH conditions under which the model is employed (when food is present the stomach pH is much higher).

Under fed conditions, the model should include an estimate of the nutritional status of the receptor under investigation (the RIVM fed-state model includes baby food), an increase in the stomach pH and an alteration in the amounts of enzymes and other secretions included in the test system (some, such as the amount of bile added, are increased, while others are decreased) (Versantvoort et al. 2004).

1.1.2 Absorption

Absorption occurs predominantly in the small intestine, a highly developed organ with a large surface area specifically suited to this purpose. The walls of the small intestine comprise finger-like projections, about one millimetre long, called villi, which are in turn covered by numerous micro-villi, as shown in Fig. 7.3. The villi contain blood and lymphatic capillaries, which transport nutrients and, for the purposes of this discussion, contaminants to the rest of the human body. Enterocytes (the absorptive cells predominantly responsible for absorption (Oomen 2000)), together with a heterogeneous mass of other cell types such as goblet cells (which secrete mucin) and endocrine cells, populate the surface of each villus.

Fig. 7.3 Intestinal villi: finger-like projections about 1 mm long

Because of the different properties of nutrients/contaminants, absorption can occur by either the paracellular or the transcellular route, as shown in Fig. 7.4. The epithelial cells are bound together at their luminal-facing ends by specialised junctions known as 'zonulae occludentes' (tight junctions), which carry out a number of functions, such as preserving the integrity of the cell sheet and forming a boundary between the apical surfaces and the basolateral membranes. In the transcellular route, movement occurs across both the apical and basolateral membranes of the epithelial cells, whereas in the paracellular route nutrients/contaminants move between the cells, circumventing these membranes. The small spaces between the cells mean that paracellular absorption can only occur for small hydrophilic molecules. Passive forces drive this process, with the extent and magnitude of transport determined by the permeability of the junctions between the cells. Larger molecules are transported by the transcellular route. The accepted model for transcellular transport assumes that substances absorbed by this route enter the cell across the apical membrane and leave via the basolateral membrane, and that the opposite is true for secretions. Transcellular transport may occur by:

  • Transport by a specific carrier, either active or facilitated. Active transport is the mechanism by which proteins (after transformation into amino acids or small peptides), carbohydrates and ions are absorbed. This transport route uses carrier molecules to ferry nutrients, or accessible contaminants, across the intestinal membrane. In addition to carrier molecules, this form of transport requires energy in the form of adenosine triphosphate (ATP). Some carbohydrates, such as fructose, require special carrier molecules for transport but not the addition of energy via ATP; this mechanism of transport is known as facilitated transport.

  • Passive diffusion, where neither energy nor special carrier molecules are required. Examples include organic contaminants that interact with the bile salts from the liver and form mixed micelles, which are able to diffuse freely through the mucosal cell wall.

  • Transcytosis or pinocytosis, where ‘a small volume of the intestinal fluid is invaginated by the mucosal cell wall to form an endocytotic vesicle’ (Oomen 2000).

Fig. 7.4 Mechanisms of absorption

Figure 7.5 illustrates the process of absorption of contaminants, specifically iron, through the mucosal cell wall (Amshead et al. 1985; Bridges and Zalups 2005; Kim et al. 2007; Park et al. 2002; Sharp 2003; Tallkvist et al. 2001). For human/animal receptors in a state of low dietary mineral intake (i.e. mineral deficient), absorption of bioaccessible cations occurs primarily via the active transport mechanism, using transmembrane proteins as the carrier molecules. Carrier molecules include the Divalent Metal Transporter 1 (DMT1) and the Zinc Transporter 4 Precursor (ZIP4; not presented in Fig. 7.5). On ingestion of excessive doses of mineral salts, the gradient between the intestinal lumen and the plasma or cell becomes so large that passive diffusion (proportional to the concentration difference between the two compartments) can occur. In this instance, active transport from the lumen to the systemic circulation is down-regulated, because the absorption of mineral salts would otherwise be too high, and the excretion of the mineral salts (e.g. through sweating) increases in order to protect the organism from poisoning.

Fig. 7.5 Intestinal absorption of iron and its regulation

Divalent cations, such as calcium, cadmium, copper, magnesium, manganese, iron and zinc, exhibit common behaviour under the acidic pH conditions of the uppermost small intestine (the proximal duodenum):

  • ionisation and solubilisation in the intestinal fluids;

  • binding to transmembrane proteins;

  • an active (energy-consuming) process during apical crossing;

  • intracellular release as ions and chelation.

For example, iron is absorbed in the proximal duodenum. For the absorption process to be efficient, an acidic environment is required (in which the solubilisation and ionisation processes occur); these conditions may, however, be hindered by the ingestion of antacids and other agents that interfere with gastric acid secretion. Ferrireductase reduces ferric to ferrous iron in the duodenal lumen and, after binding to the transmembrane protein DMT-1, the iron is co-transported with a proton into the absorptive intestinal cells. DMT-1 is not iron specific; it transports many divalent metal ions. Once inside the intestinal cell, the absorbed iron may follow one of two major pathways, depending on the dietary status of the host and the iron loading already present in the cell. In iron-abundant states, the iron within the cell is trapped by incorporation into ferritin; this iron is not transported to the blood and is lost when the cell dies. Under conditions where iron is scarce, the absorbed iron is exported from the cell via the transporter ferroportin, found in the basolateral membrane, and transported through the body after binding to the iron carrier transferrin (i.e. intracellular release and chelation).

During the protein-binding steps of the active transport process, cations may interact as antagonists (i.e. oppose the action of other cations or bind to the carrier without producing a response). Thus the binding of specific bioaccessible cations to the binding/carrier proteins may be affected by the presence of other cations in the target organ and by competition with other bioaccessible contaminants. For example, this regulation means that cadmium absorption can be affected not only by competition with other ions for the ligand binding site, but also by the iron status of the receptor.

For some metals bound to amino acids in soil, absorption can occur via amino acid carriers. In the case of copper, the amino acid carrier is histidine, whereas for selenium, methionine complexes are formed. After their translocation to plasma, metals can be bound to specific or non-specific carriers. Specific carriers include transferrin (a blood plasma protein for iron delivery), transcobalamin (a group of intestinal-cell proteins that bind cyanocobalamin (vitamin B12) and transport it to other tissues) and nickeloplasmin (a nickel-containing protein); the non-specific class of carriers includes albumin and amino acids.

When considering the organic contaminants of major concern, i.e. dioxins, polychlorinated biphenyls (PCBs) and polyaromatic hydrocarbons (PAHs), absorption through the intestinal epithelium is commonly described by a passive diffusion model (Cavret and Feidt 2005). However, because there is a large variation in behaviour amongst the organic contaminants of interest, their intestinal absorption will not be covered in great detail in this chapter. In the passive diffusion model, the diffusive flow is proportional to the concentration gradient between the luminal and plasmatic compartments, according to their lipid contents. Most organic contaminants of interest are not readily soluble in the aqueous digestive fluid, because of their hydrophobicity, and are conveyed by micellar solutions of conjugated bile salts. Bile acids are facial amphipathic molecules, i.e. they contain both hydrophobic (lipid-soluble) and polar (hydrophilic) faces. Bile salts therefore act as lipid carriers and are able to solubilise many organic contaminants by forming micelles, aggregates of lipids such as fatty acids, cholesterol and monoglycerides, which remain suspended in water and can reach the epithelium. Once the organic contaminants reach the double-layered membrane, their hydrophobicity favours their uptake across the epithelium (Fig. 7.3).
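As a minimal sketch of this passive diffusion model (the notation is ours, not taken from Cavret and Feidt 2005), the diffusive flow J across the epithelium can be written as proportional to the concentration difference between the two compartments:

$$J = P_{{\textrm{app}}} \times \left( {C_{{\textrm{lumen}}} - C_{{\textrm{plasma}}}} \right)$$

where C lumen and C plasma are the contaminant concentrations in the luminal and plasmatic compartments, and P app is an apparent permeability coefficient reflecting, amongst other factors, the lipid contents of the two compartments.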

1.1.3 Metabolisation in the Liver

After absorption, contaminants migrate through the portal vein into the liver, where they may undergo biotransformation. The most common transformation mechanism for lipophilic organic contaminants (PAHs, PCBs, dioxins) is hydroxylation, whereas cationic metals are bound by proteins (Webb 1986). Arsenic, present mostly in anionic form, may undergo methylation (Tseng 2009). Once absorbed and passed through the portal vein, contaminants may be:

  • sequestered within the liver (metals bound to proteins, or polychlorinated dibenzofurans (PCDFs) bound to cytochrome P450);

  • excreted in bile, either unaltered or in a metabolised/bound form; or

  • released into the systemic circulation.

These processes are complex, because of the enzymes involved in the biotransformation process and the inducible production of binding proteins. The extent of contaminant metabolisation and bile excretion is dose dependent. At low doses, enzymatic activity is limited; however, continual exposure increases enzymatic activity through gene regulation, increasing the metabolic capacity and, in turn, the rate of metabolism. The metabolism of a particular substance can be affected by both interspecies and inter-individual differences; thus contaminants that may be benign to humans may be highly toxic to other species and vice versa (Fowles et al. 2005).

The three-step approach described above, with accessibility, absorption and metabolisation, is suitable for a pharmaceutical approach, i.e. one relating to a drug reaching its target. However, it may fail to properly represent the toxic potential of some contaminants, such as PAHs, which express their toxic (carcinogenic) effects through their hydroxy-metabolites; in this case metabolism activates toxicity. This is an important consideration when choosing the method used to assess oral bioavailability. Because of analytical limitations, PAH metabolites such as hydroxy-PAHs conjugated to sulphates, glucuronic acid or glutathione cannot be detected in plasma. Metabolites are thus excluded from the AUC determination, although they are the very origin of PAH toxicity. Consequently, due to the biotransformation processes, the bioavailable fraction is underestimated when only the native PAHs in blood are analysed (scenario 1). In addition, even when the analysis of PAH metabolites in plasma is technically feasible, the fraction that has been excreted via the bile is not taken into account in the bioavailability calculation (scenario 2). In both scenarios, biotransformation processes reduce the calculated bioavailable fraction used in the resulting Risk Assessment, but not for the same reason and not to the same extent: in the first case, 100% of the metabolites are 'forgotten' because of the analytical procedure chosen; in the second, a proportion of the metabolites is lost through bile excretion. This example shows that the definition of what is considered to be bioavailable and the method of assessment are not truly independent, and that it is very difficult to obtain a consensual absolute bioavailability.

Methylation is the primary process by which inorganic arsenic is metabolised in the body (Fowles et al. 2005). It is generally considered to be a detoxification process, whereby the majority of the ingested arsenic is metabolised to methylated arsenic species (mainly dimethylarsinic acid), which are then excreted in the urine. Analysis of urine has identified arsenate, arsenite, monomethylarsonic acid and dimethylarsinic acid (Cullen 2008), amongst other species; the methylated species have a lower affinity for tissue sulfhydryl groups than the inorganic arsenic species (Fowles et al. 2005).

1.2 Relative Bioavailability Factor

Since the bioavailability of an inorganic contaminant measured in a water-soluble form differs from that of the same contaminant bound to soil, it is necessary to use a correction factor which takes the effect of the sample matrix (e.g. a soil or an aqueous solution) into account. This correction factor is the relative oral bioavailability factor. It is generally less than one, because physico-chemical interactions with the solid phase of the soil make contaminants in soil less accessible than those in a water-soluble form.

The relative oral bioavailability factor of the contaminant in a specific matrix, e.g. a specific soil sample, is expressed as a percentage relative to the contaminant in a reference matrix, e.g. a reference sample, according to Eq. (7.3):

$$\left( {{\textrm{AUC}}_{{\textrm{PO}}}}\right)_{{\textrm{TM}}} / \left( {{\textrm{AUC}}_{{\textrm{PO}}} } \right)_{{\textrm{RM}}} \times 100$$
(7.3)

where:

  • (AUCPO)RM is the area under the curve obtained from the oral administration of the contaminant in the reference matrix (RM), e.g. water;

  • (AUCPO)TM is the area under the curve obtained from the oral administration of the contaminant in the test matrix (TM), e.g. soil material.

This ratio generally lies between 0 and 1 (or 0 and 100%), because the reference matrix is chosen according to its physico-chemical properties in order to maximise absorption, i.e. water with ionisable metal forms, or oil for organic contaminants.
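Continuing the illustrative sketch from Section 7.1.1 (the AUC values below are invented), Eq. (7.3) reduces to a ratio of two AUC values:

```python
def relative_bioavailability_pct(auc_po_tm, auc_po_rm):
    """Eq. (7.3): AUC after oral dosing in the test matrix (e.g. soil)
    relative to the reference matrix (e.g. water), as a percentage."""
    return 100.0 * auc_po_tm / auc_po_rm

# Hypothetical AUC values (ug h/L)
print(relative_bioavailability_pct(54.0, 180.0))  # -> 30.0 (%)
```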

A number of in-vivo animal trials (i.e. tests performed in a living organism) have been used to assess the bioavailability of soil contaminants, using a variety of species such as the monkey, pig, rat and rabbit (Drexler and Brattin 2007; Ellickson et al. 2001; Freeman et al. 1992, 1993, 1994, 1995, 1996; Roberts et al. 2007b; Schroder et al. 2004a). Unfortunately, rats and rabbits exhibit large differences in digestive physiology compared to humans; rats, for example, have a pre-stomach compartment with a very specific physiology. Although these models have a long history of use in toxicological studies, the aim of bioaccessibility and bioavailability studies with respect to contaminated land is to mimic human physiology and to model the interactions occurring between the soil contaminants and the digestive fluids. The interactions between digestive fluids and the soil are a particularly important parameter to consider: the pH of rat digestive fluids differs from that of humans, and the practices of coprophagy (rats) and caecotrophy (rabbits), i.e. feeding on excrement, may re-introduce the contaminants of interest to the gut. Although the soluble forms of the contaminants present may not be affected by this second passage through the GI tract, any interaction with organic matter and microbial communities may differ for the contaminant bound to the soil. An animal model considered physiologically similar to humans is the primate; however, few experiments have been conducted with this model (Roberts et al. 2007a), as it is expensive and ethically sensitive. The juvenile swine is considered to be a useful anatomical proxy for the human neonatal digestive tract (Miller and Ullrey 1987; Moughan et al. 1992) and has been successfully used to assess the bioavailability of lead (Casteel et al. 1996, 2006). Further considerations that determine the choice of animal model include developmental speed, intestinal tract ageing and the ratio between bone and body mass; on these grounds, although rats and rabbits are adequate models, the juvenile swine model is a preferred candidate (Moughan et al. 1994; Rowan et al. 1994). Animal studies are expensive, are criticised on animal welfare grounds and cannot be conducted with a sufficiently large number and variety of contaminated soils. In-vitro tests (i.e. tests performed in a laboratory dish or test tube, an artificial environment; from the Latin for 'in glass') allow researchers to overcome the limitations of animal tests (Oomen et al. 2003; Ruby 2004; Ruby et al. 1999). However, animal testing still remains a necessary means to validate in-vitro methods (Schroder et al. 2004b).

Despite the problems associated with animal testing, there are a number of different end-points that can be used to assess relative oral bioavailability. Because of the complex animal manipulations required to obtain the plasmatic kinetics within juvenile swine, concentrations in alternative targets can be chosen. For lead, the liver, kidney or bone concentrations at the end of the exposure period are considered to be proportional to the bioavailable fraction (Jondreville and Revy 2003). Unlike the area under the plasma lead concentration-time curve, these concentrations in target organs cannot be used to assess the actual amount that has been absorbed and has reached the systemic circulation. However, they do allow the effects of the soil matrix on the bioavailability to be compared. In this case, the relative bioavailability can be expressed according to Eq. (7.4) and Fig. 7.6:

$${\textrm{RBA}} = {\textrm{slope}}_{{\textrm{TM}}} / {\textrm{slope}}_{{\textrm{RM}}}$$
(7.4)
Fig. 7.6 Relative bioavailability value (RBV) assessment using dose-response slopes

Specific conditions are necessary to validate this calculation (Littell et al. 1997), as illustrated in the sketch after this list:

  • the responses to both the reference and the test matrix should be linear over the dose range under investigation;

  • the two lines should have a common intercept;

  • the common intercept should be equal to the mean of the reference blank.
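A minimal sketch of this slope-ratio calculation (the dose-response figures are invented; a shared intercept is imposed on both lines by ordinary least squares):

```python
import numpy as np

def rba_slope_ratio(dose_rm, resp_rm, dose_tm, resp_tm):
    """Fit y = a + b_rm*dose (reference) and y = a + b_tm*dose (test)
    with a common intercept a, and return RBA = b_tm / b_rm."""
    dose_rm, dose_tm = np.asarray(dose_rm, float), np.asarray(dose_tm, float)
    y = np.concatenate([resp_rm, resp_tm])
    n_rm, n_tm = len(dose_rm), len(dose_tm)
    # Design matrix columns: [common intercept, reference slope, test slope]
    X = np.column_stack([
        np.ones(n_rm + n_tm),
        np.concatenate([dose_rm, np.zeros(n_tm)]),
        np.concatenate([np.zeros(n_rm), dose_tm]),
    ])
    a, b_rm, b_tm = np.linalg.lstsq(X, y, rcond=None)[0]
    return b_tm / b_rm

# Hypothetical liver-lead responses (mg/kg) vs. administered dose (mg/kg bw)
doses = [0, 1, 2, 4]
resp_reference = [0.1, 2.1, 4.0, 8.2]   # soluble lead salt (reference matrix)
resp_soil = [0.1, 1.1, 2.2, 4.1]        # lead in the test soil
print(f"RBA = {rba_slope_ratio(doses, resp_reference, doses, resp_soil):.2f}")
```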

Depending on the contaminants of interest, specific target organs are relevant: the liver, kidney, bones and also urine give satisfactory responses for cations (lead, cadmium), while urine and the liver are suitable for anions (arsenic, antimony). In addition to using the AUC to study the oral bioavailability of contaminants, three other primary methods are routinely employed (Kelley et al. 2002). Where the contaminant of interest is rapidly excreted, urine is a common end-point in bioavailability studies, as this fraction provides an estimate of the absorbed dose. Comparison of tissue concentrations after administration of different forms of a contaminant is useful for contaminants that preferentially accumulate in specific tissues; this type of end-point measurement provides an estimate of relative bioavailability. Additionally, the fraction of the dose excreted in the faeces may be measured; however, this reflects the amount of contaminant that is not absorbed, and the absorbed dose is calculated by subtracting the excreted amount from the initial amount dosed. As such, this method is liable to underestimate absorption if the contaminant is absorbed and subsequently excreted via bile back into the GI tract. For the organic contaminants and their metabolites, the database of available results is smaller than for the metals and metalloids. Furthermore, the choice of target organs/matrices for the metabolites depends on the metabolism experienced by the individual contaminants: the fate of highly metabolised contaminants may be assessed by analysing their metabolites in urine, whereas unaltered contaminants may accumulate in adipose tissue.

1.3 Validation of Bioaccessibility Tests

In order to use a bioaccessibility test, it is necessary to demonstrate a mathematical, fit-for-purpose relationship between the bioaccessible concentration in the test soil, relative to the bioaccessibility of a soluble salt of the contaminant (as measured by an in-vitro test), and the bioavailable concentration in the soil, relative to the bioavailability of a soluble salt of the contaminant (as measured by an in-vivo study); see Section 7.1.2. In order to validate the in-vitro bioaccessibility test against the bioavailability data, it is important that comparable units of measurement are used. The bioavailability of the element in the soil is almost always measured relative to a water-soluble salt of the element; it is therefore important that the comparison is made with the bioaccessibility of the soil relative to the same soluble salt. This may seem redundant since, if the salt is completely soluble, the relative bioaccessibility will be the same as the absolute bioaccessibility. However, water-soluble salts are not necessarily fully soluble in the simulated stomach and intestinal fluids, particularly if the solubility of the element is reduced at the higher pH values found in the intestinal fluid phase. Under these conditions, if relative bioaccessibility is not used, the bioaccessibility will always under-predict the true relative bioavailability. For the test to be of pragmatic use, there are two major assumptions:

  • the soils used in the validation test are representative of the soils that will be tested by the bioaccessibility test;

  • the soluble salt used to calculate the relative bioavailabilities and the relative bioaccessibilities has the same biological action as the salt used in the toxicity test.

Previous studies (Basta et al. 2007; Drexler and Brattin 2007; Juhasz et al. 2007; Rodriguez et al. 1999, 2003; Schroder et al. 2003, 2004a) have shown that, within the measurement uncertainties, there is a linear relationship between bioaccessibility and bioavailability (or its relative counterpart). Monte-Carlo simulations show that for bioaccessibility to be predictive of bioavailability, the straight-line relationship must meet the following criteria (a sketch for checking them follows the list):

  • the soils used in the validation exercise must cover more than 70% of the 0–100% bioavailability/bioaccessibility range;

  • the between-laboratory reproducibility of the bioaccessibility and bioavailability measurements must have a relative standard deviation of less than 20%;

  • the R² value of the straight line should be greater than 0.7.
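A minimal sketch of how the first and third criteria might be checked for a validation data set (the paired values below are invented; the between-laboratory reproducibility criterion requires replicate data not shown here):

```python
import numpy as np

def check_validation(bioaccessibility_pct, bioavailability_pct):
    """Check range coverage (>70% of the 0-100% range) and R^2 (>0.7)
    for a bioaccessibility vs. bioavailability validation data set."""
    bacc = np.asarray(bioaccessibility_pct, float)
    bava = np.asarray(bioavailability_pct, float)
    coverage = bava.max() - bava.min()        # span covered, in % units
    r2 = np.corrcoef(bacc, bava)[0, 1] ** 2   # of the straight-line fit
    return {"coverage_pct": coverage, "coverage_ok": coverage > 70.0,
            "r2": round(float(r2), 2), "r2_ok": r2 > 0.7}

# Hypothetical validation soils (relative bioaccessibility/bioavailability, %)
print(check_validation([5, 15, 30, 45, 60, 75, 90],
                       [8, 18, 28, 50, 55, 80, 88]))
```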

2 Influence of Soil Properties on Oral Bioaccessibility

The bioavailability of any contaminant bound to the soil depends upon the soil type, properties of the soil, the contaminant and the manner by which the contaminant has entered the soil (Selinus 2005).

2.1 pH

Soils generally have pH values (measured in water) from 4 to 8.5, due to buffering by aluminium at the lower end and by calcium carbonate (CaCO3) at the upper end of the range. In general, most divalent cationic forms of contaminants are less strongly adsorbed, and therefore more bioaccessible, in acidic soils than in neutral or alkaline soils, as demonstrated for lead by Yang et al. (2003) and for cadmium by Tang et al. (2006). In a study of five Chinese sites (Tang et al. 2007), the bioaccessibility of both spiked and endogenous arsenic increased with increasing pH; a similar effect was observed by Yang et al. (2005) in a study of 36 US soils. pH also affects other parameters, such as the solubility of organic carbon and the sorptive capacity of iron oxides, aluminium oxides and clays, which are discussed in the next sections.

2.2 Soil Organic Matter

The soil organic matter content can vary widely among soils, from <1% to >70%. Organic matter is divided into non-humic and humic substances. The former consist of unaltered biochemicals, produced by living organisms, which have not been degraded since they entered the soil; the latter are formed by secondary synthesis reactions involving micro-organisms.

The mechanisms that control the bioaccessibility of contaminants in the presence of soil organic matter have been described as (Selinus 2005):

  • the adsorption of cations on negatively charged sites (ion exchange);

  • the mobilisation of some metal ions, and their protection from adsorption, through the formation of low-molecular-weight chelates; and

  • the retention of many contaminants in the higher molecular weight solid forms of humus.

Contaminants showing particularly high affinities for soil organic matter include cobalt, copper, mercury, nickel and lead (Adriano 2001). A number of studies have systematically investigated soil properties to determine those that exert the greatest control on the bioaccessibility of arsenic (Cave et al. 2007; Tang et al. 2006; Yang et al. 2005). Although these studies cannot claim to have covered all soil types and soil properties, none of them showed that organic carbon significantly influenced the bioaccessibility of arsenic.

Besides the direct complexation of contaminants by organic carbon, there are two other important secondary factors which affect contaminant bioaccessibility:

  • redox conditions;

  • organic matter competition for sorption of contaminants on oxides and clay.

Several studies (e.g. Baker et al. 2003; Chen et al. 2003; Lindsay 1991; Rose et al. 1990) show that organic matter has both abiotic and biotic roles in the reduction of Fe-oxides, causing dissolution of the host oxide and release of adsorbed contaminants. In addition, organic matter can compete with contaminants for adsorption sites, causing displacement from the oxide matrix into more available forms (Dixit and Hering 2003). An example of this (Wragg 2005) is found in soils containing arsenic derived from naturally underlying Jurassic ironstone: a small but statistically significant increase in the bioaccessibility of arsenic was found in garden soils compared to rural soils, which was attributed to gardening practices, including the addition of organic matter to improve soil fertility. Stewart et al. (2003b) showed that the bioaccessibility of chromium(VI) was significantly influenced by reduction processes catalysed by soil organic carbon. Other studies show how organic matter mediates the adsorption of contaminants to clay minerals (Cornu et al. 1999; Lin and Puls 2000; Luo et al. 2006; Manning and Goldberg 1996). In addition, although further discussion is beyond the scope of this chapter, soil organic matter is known to sequester organic contaminants, thereby playing a key role in the potential reduction of the bioaccessibility of any organic compounds present (Ruby et al. 2002).

2.3 Mineral Constituents

The inorganic constituents of soils usually make up the majority of the mass of the soil and it is the interaction of contaminants with the surfaces of these materials that is a major control on bioaccessibility. Davis et al. (1996b) studied the mineralogical constraints on the bioaccessibility of arsenic in mining sites. They concluded that the arsenic bioaccessibility compared to the total arsenic content in the soils was constrained by:

  • encapsulation in insoluble matrices, e.g. enargite in quartz;

  • formation of insoluble alteration or precipitation rinds, e.g. authigenic iron hydroxide and silicate rinds precipitating on arsenic phosphate grains; and

  • formation of iron oxide, arsenic oxide and arsenic phosphate cements that reduce the arsenic-bearing surface area available for dissolution.

In a previous study on lead in Montana soils, Davis et al. (1993) found similar results in which the solubility was constrained by alteration and encapsulation which limited the available lead-bearing surface area. Ruby et al. (1996) diagrammatically summarised how the chemical and mineralogical forms of arsenic and lead relate to their bioaccessibility. Figure 7.7 shows the possible physico-chemical processes governing the bioaccessibility of lead at a contaminated site.

Fig. 7.7 Schematic diagram of how different lead species, particle size and morphologies affect lead bioavailability (after Ruby et al. 1996)

Whilst these early studies provided good insight into the factors governing arsenic and lead bioaccessibility, they were very much aimed at soils from mining areas, where the contaminants were introduced into the soils as products of ore processing.

In the UK, a number of studies have examined the bioaccessibility of elevated concentrations of arsenic in soils developed over Jurassic ironstones (Cave et al. 2003, 2007; Palumbo-Roe et al. 2005; Wragg 2005; Wragg et al. 2007). In these soils there was a highly significant correlation between the total arsenic content and the total iron content, but no significant correlation between the bioaccessible arsenic and the total iron content. Palumbo-Roe et al. (2005) concluded that in these soils the bioaccessible arsenic is mainly contained within calcium iron carbonate (sideritic) assemblages and only partially in iron aluminosilicates, probably berthierine, and iron oxyhydroxide phases, probably goethite. It is suggested that the bulk of the non-bioaccessible arsenic is bound up with less reactive, more highly crystalline iron oxide phases (see Fig. 7.8).

Fig. 7.8 Diagrammatic representation of the ageing processes of Fe oxides in soils

These studies highlight the role of two very important mineral classes which are ubiquitous in soils and have been shown to be key controls on bioaccessibility. These are:

  • clays;

  • oxides of iron, manganese and aluminium.

Clay minerals are produced through hydrolysis weathering reactions, i.e. the reaction between hydrogen ions and an aluminosilicate mineral (such as feldspar) to form soluble cations, silicic acid and a clay mineral. They are characterised by two-dimensional sheets of corner-sharing SiO4 and AlO4 tetrahedra. These tetrahedral sheets have the chemical composition (Al,Si)3O4, and each tetrahedron shares three of its vertex oxygen atoms with other tetrahedra, forming a hexagonal array in two dimensions. The fourth vertex is not shared with another tetrahedron, and all of the tetrahedra 'point' in the same direction, i.e. all of the unshared vertices are on the same side of the sheet.

Oxides of iron, manganese and aluminium are often referred to as hydrous oxides and, like clays, are principally derived from weathering reactions of rock minerals. Although different in chemical structure, these two classes of minerals are very fine grained (<2 μm) and hence have a very large reactive surface area; they also share similar modes of action in binding contaminants and thereby controlling their bioaccessibility. These modes are:

  • cation and anion exchange;

  • specific adsorption.

In ion exchange, contaminant ions are bound electrostatically to clay or oxide surface sites of opposite charge. As already discussed in the section on organic carbon, organic matter can also act as an ion exchanger. The ability of the soil to attract and retain cations is measured by the cation exchange capacity (CEC). In general, oxides contribute little to the CEC when the soil pH is <7; under these conditions, the main contribution comes from organic matter and clays. Anion exchange occurs where negatively charged ions are attracted to positively charged sites; the highest anion exchange for oxides occurs at low pH. Cation exchange is reversible, diffusion controlled and stoichiometric, and has an order of selectivity based on the size, concentration and charge of the ion. Electrostatically bound contaminants are displaced relatively easily from the soil matrix under the low pH conditions of the human stomach.

Specific adsorption involves the exchange of cations and anions with surface ligands on solids to form partly covalent bonds with lattice ions. As with ion exchange, the process is highly dependent on pH, charge and ionic radius. In contrast to ion exchange, however, contaminants bound by this mechanism are far less labile. Brummer (1986) showed that the sorptive capacities of iron oxides and aluminium oxides were up to 26 times higher for specifically adsorbed species than for their ionic complexes at pH 7.6.

Many studies have confirmed the importance of clays and oxides in controlling the bioaccessibility of contaminants (e.g. Ahnstrom and Parker 2001; Boonfueng et al. 2005; Bowell 1994; Cave et al. 2007; Chen et al. 2002; Esser et al. 1991; Foster et al. 1998; Garcia-Sanchez et al. 2002; Lin et al. 1998; Manceau et al. 2000; Matera et al. 2003; Palumbo-Roe et al. 2005; Smith et al. 1998; Somez and Pierzynski 2005; Stewart et al. 2003a; Sultan 2007; Violante and Pigna 2002; Violante et al. 2006; Yang et al. 2005; Zagury 2007). Iron oxides are the most commonly reported hosts for sorbed arsenic (Camm et al. 2004; Cances et al. 2005; Cave et al. 2007; Palumbo-Roe et al. 2005; Wragg et al. 2007). Depending on the form of the iron oxide present in the soil, it can be a source of either bioaccessible or non-bioaccessible arsenic. Figure 7.8 shows how progressive ageing of iron oxides from amorphous through to more crystalline forms increases their thermodynamic stability; hence, specifically adsorbed contaminants, notably arsenic, are less easily mobilised. For other metals the picture is less clear, and the geochemical host of the contaminant under study is very dependent on the history of the soil.

2.4 Solid Phase Speciation and Bioaccessibility

Adsorption of contaminants to different solid phases has been shown to be a key factor in determining their bioaccessibility. It is, therefore, very important that analytical methodologies are available to measure the physico-chemical forms of contaminants in the soil. Such methods may then provide information on the potential environmental redistribution of contaminants under different soil conditions, and may ultimately be used as additional lines of evidence to support in-vitro bioaccessibility testing in the assessment of human health risks from soil ingestion.

Spectroscopic methods such as X-ray absorption fine structure (XAFS) and X-ray absorption near-edge structure (XANES) spectroscopy, which directly measure the oxidation state and the chemical bonds holding the contaminant in the soil, have been used very successfully (e.g. Cances et al. 2005; Cutler et al. 2001; Manceau et al. 2000; Peak et al. 2006; Welter et al. 1999). However, these methods require the use of a synchrotron source and can be expensive and time consuming. A relatively simple and well-adopted method to assess metal pools of differing relative lability in soils is sequential extraction with reagents of increasing dissolution strength, where each reagent targets a specific solid phase associated with the contaminant. Many such extraction schemes have been described in the literature and have been reviewed by Filgueiras et al. (2002). In many instances, the steps with low dissolution strength are equated to the bioaccessible fraction or are used alongside dedicated bioaccessibility tests to help interpret the geochemical source of the bioaccessible fraction (Datta et al. 2006; Denys et al. 2007; Jimoh et al. 2005; Liu and Zhao 2007a, b; Marschner et al. 2006; Palumbo-Roe et al. 2005; Reeder et al. 2006; Schaider et al. 2007; Siebielec et al. 2006; Tang et al. 2004, 2006, 2007).

Cave et al. (2004), amongst other workers, have highlighted major shortcomings of traditional sequential extraction methods and have developed a new procedure called Chemometric Identification of Substrates and Element Distributions (CISED). This procedure uses increasing strengths of simple mineral acids as the extractant, followed by chemometric processing of the resulting multi-element data obtained from the extract analysis. The method has been shown to work well for a number of contaminants in the NIST 2710 reference soil, compared to more traditional sequential extraction schemes, and has subsequently been applied very successfully in a number of studies to identify the source of bioaccessible contaminants in soils (Cave et al. 2003; Palumbo-Roe and Klinck 2007; Palumbo-Roe et al. 2005; Wragg 2005; Wragg et al. 2007).

In addition to the solid phase distribution of contaminants, their chemical form (speciation) can affect both bioaccessibility and toxicity. In human Exposure Assessment, if a bioaccessibility factor is applied to the total soil concentration, the speciation in the soil needs to be understood. For redox-sensitive contaminants such as arsenic, antimony or chromium, which have different toxicities according to their redox state, this verification is particularly important. Denys et al. (2009) used differential pulse anodic stripping voltammetry (DPASV) to investigate whether the speciation of antimony changed during the human gastro-intestinal digestion process in four soils sampled from a former lead-mining site. The results showed that in each soil sample the antimony was present in the pentavalent form and that no change in speciation occurred during the digestion process; a resulting Risk Assessment would therefore identify no additional human health risks due to changes in speciation.

2.5 Soil Ageing

As well as short-term fluctuations, soil can undergo longer-term changes caused by changes in land use or by other environmental factors such as acid rain, flooding and global warming. It is therefore important to note that an assessment of the bioaccessibility of a contaminant in a soil applies only to a snapshot in time, and that bioaccessibility can change with time. A number of studies have examined how the bioaccessibility of freshly contaminated soils changes with ageing over relatively short timescales (periods of months). For arsenic, the general view is that the bioaccessibility of freshly contaminated soil decreases with time. This is thought to be due to oxidation of the more soluble arsenic(III) forms to less soluble arsenic(V), followed by sorption onto Fe-oxides and Fe-oxyhydroxides (Datta et al. 2007; Fendorf et al. 2004; Juhasz et al. 2008; Lin and Puls 2000; Lombi et al. 1999; Tang et al. 2007; Yang et al. 2002, 2003, 2005). The absolute magnitude of the effect, however, varied significantly between soil types. Other studies on cadmium and lead (Lee 2006; Tang et al. 2006; Yang et al. 2003) suggest that the decrease in bioaccessibility for these metals is not as marked as for arsenic, although this was very specific to soil type and pH. Stewart et al. (2003b) showed that chromium bioaccessibility decreased with duration of exposure, with ageing effects being more pronounced for chromium(III). The decrease in chromium bioaccessibility was rapid for the first 50 days and then slowed dramatically between 50 and 200 days. In general, the effects of chromium solid phase concentration on bioaccessibility were small, with chromium(III) showing the most pronounced effect: higher solid phase concentrations resulted in a decrease in bioaccessibility. Chemical extraction methods and X-ray Absorption Spectroscopy analyses suggested that the bioaccessibility of chromium(VI) was significantly influenced by reduction processes catalysed by soil organic carbon.

2.6 Statistical Modelling of Bioaccessibility

Yang et al. (2002) carried out a detailed study of arsenic-contaminated soils in which the arsenic originated from processes other than mining. Their studies showed that soils with lower pH and higher Fe-oxide content exhibited lower arsenic bioaccessibility, and they were able to model the bioaccessible arsenic content using these factors. However, the model was not able to accurately predict the bioaccessibility of arsenic in a different set of contaminated soils, previously used in an independent Cebus monkey dosing trial, consistently over-predicting the bioavailability and resulting in an unacceptably large uncertainty. Juhasz et al. (2007) found that they could model the in-vitro bioaccessibility of soils contaminated by arsenic from herbicides, pesticides and mining waste, using the total arsenic content and the total or dithionite-citrate extractable ('free') iron; however, the bioaccessible content of a naturally mineralised site could not be modelled in this way. In a quite different approach, a self-modelling mixture algorithm was used (Cave et al. 2007) to convert raw Near Infra-Red (NIR) spectra of soils developed over Jurassic ironstones into five underlying spectral components and associated coefficients. The five spectral components were shown to be significantly correlated with the total arsenic, bioaccessible arsenic and total Fe contents of the soils, and were tentatively assigned to crystalline Fe oxides, Fe oxyhydroxides and clay components in the soils. A linear regression model, using the spectral component coefficients associated with the clay fraction and the Fe oxyhydroxides, together with the total arsenic content of the soils, as independent variables, was shown to predict the bioaccessible arsenic content of the soils, as measured by an in-vitro laboratory test, with a 95% confidence limit of ±1.8 mg kg−1 and a median R² value of 0.80.
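As an illustration of this kind of statistical modelling (the data are invented and the regression is refitted here; neither the values nor the coefficients are those of Yang et al. 2002 or Cave et al. 2007), a multiple linear regression of bioaccessible arsenic on soil properties might be set up as follows:

```python
import numpy as np

# Hypothetical soils: total As (mg/kg), clay-component coefficient,
# Fe-oxyhydroxide component coefficient
X = np.array([[20, 0.3, 0.5], [45, 0.6, 0.2], [80, 0.2, 0.7],
              [120, 0.5, 0.4], [150, 0.7, 0.6], [200, 0.4, 0.8]], float)
y = np.array([2.1, 4.8, 9.5, 12.0, 17.2, 22.5])  # bioaccessible As (mg/kg)

# Fit y = b0 + b1*totalAs + b2*clay + b3*FeOOH by ordinary least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"coefficients = {np.round(coef, 3)}, R^2 = {r2:.2f}")
```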

2.7 Soil Sampling and Preparation for Bioaccessibility/Bioavailability Measurements

For the final results of bioaccessibility/bioavailability testing to be meaningful, the samples under investigation need to be representative of the sampling location and of the grain size applicable to the resulting Human Health Risk Assessment. As such, a number of sampling and preparation considerations need to be met as part of the underlying analytical protocol.

Prior to soil sampling, consideration should be given to:

  • the history of the location – including the effect of the local geology and/or historical anthropogenic contamination, and the potential effect of contaminant 'hotspots';

  • the equipment to be used to collect the samples – clean, high quality sampling tools and containers in order to avoid sample contamination;

  • the number of samples to be collected per averaging zone; Nathanail (2009) recommends a minimum of 10 samples in order to 'gain an adequate appreciation of the variation in bioaccessibility'. However, when considering in-vivo bioavailability measurements, the cost of testing may be the driving force in determining the number of samples to be collected per location;

  • the depth at which samples are collected – surface soil samples from 0 to 15 cm depth, as these are representative of the material to which humans are likely to be exposed (Cave et al. 2003);

  • the variation of soil composition, and the resulting bioaccessibility, across the site; separate bioaccessibility sampling areas will therefore be required for each soil type represented;

  • the type of sample to be collected – grab or composite? Composite sampling is the standard practice for geochemical surveying work (Kelleher 1999) and is analogous to making three replicate measurements of analytical data and averaging them (Wragg 2005). However, 'hotspots' may be missed, or the contaminant concentration may be diluted if the samples being composited do not have the same or similar contaminant concentrations;

  • the preparation of the individual samples – samples should be dried at <35°C, gently disaggregated (never crushed) in order to break up large clasts, and homogenised. A representative portion of the bulk material should be sub-sampled, sieved to <250 μm, as this fraction is considered to be the upper limit of particle sizes likely to adhere to children's hands (often the at-risk receptor) (Duggan et al. 1985), and tested for its total and bioaccessible/bioavailable contaminant content.

3 Considerations for the Potential Use of Site Specific Bioaccessibility Measurements

Bioaccessibility measurements are not necessarily applicable to all contaminants, all soil types and all Risk Assessment scenarios. The following key questions should be considered before embarking on bioaccessibility testing:

  • Is the contaminant concentration slightly above the guidance value for the soil under consideration?

    ∘ Research in the UK (Cave et al. 2003; Nathanail and Smith 2007; Palumbo-Roe and Klinck 2007; Palumbo-Roe et al. 2005; Wragg et al. 2007) has shown that specific soil types have a well defined distribution of % bioaccessibility values. Using this knowledge, it is possible to estimate the maximum total soil concentration, for a given soil type, at which bioaccessibility data would assist the assessment of risk (a worked example is sketched after this list). For example, in the Jurassic Ironstone soils in Lincolnshire (Eastern England), the modal bioaccessible arsenic fraction is approximately 10%. Working on a possible arsenic guideline value of 20 mg kg−1, this would suggest that a total arsenic soil concentration of up to 200 mg kg−1 would be suitable for using bioaccessibility testing in a further detailed quantitative Risk Assessment. In contrast, depending on the local regulatory context (e.g. a Superfund site), a soil with a concentration of 1,000 mg kg−1 arsenic is unlikely to be suitable for bioaccessibility testing. If the concentration is very high, this overrides all subsequent points and bioaccessibility is no longer an option.

  • Is remediation likely to be very expensive, unsustainable or not technically feasible?

    ∘ For point source contamination, it is likely that remediation will only involve a relatively small volume of contaminated material. However, for diffuse contamination, particularly from natural geogenic sources (e.g. naturally mineralised soils in Devon and Cornwall in the south-west of England), it is practically impossible to remediate in any sensible fashion. The lines-of-evidence approach, which includes bioaccessibility testing and determination of the contaminant solid phase distribution, with associated data interpretation, may be a pragmatic way forward (Palumbo-Roe and Klinck 2007).

  • Is there an adverse environmental risk associated with remediation?

    ∘ Remediation can involve a large amount of heavy machinery and transport (plant), which can have a significant effect on the quality of life of the surrounding population and can cause adverse health effects through dust inhalation;

  • Is the number of contaminants driving the risk three or fewer?

    ∘ If the site is contaminated with a wide variety of contaminants, consideration must be given to which contaminant is driving the risk calculation. As a rule of thumb, if three or fewer contaminants are causing a potential risk, then bioaccessibility testing will probably decrease the estimates of risk.

  • Is there in-vivo validation data associated with the in-vitro method under consideration?

    ∘ The criteria associated with validation of a bioaccessibility test are discussed in detail in Section 7.1.3.

  • Does the local regulator accept in-vitro bioaccessibility in Human Health Risk Assessment?

    ∘ The regulators responsible for a given site should be contacted prior to a site investigation in order to determine whether bioaccessibility testing is accepted. This ensures that financial resources are not wasted on unnecessary testing where bioaccessibility data are not acceptable, and that natural resources such as soil are not unnecessarily remediated where they are.

  • Is the land use likely to change in the future?

    ∘ The application of bioaccessibility data needs to be considered in terms of the land use at the time of the assessment and any proposed future land use. Because of soil ageing and weathering (see Section 7.2.5), naturally occurring contaminants in soil are considered to have a lower bioaccessibility/bioavailability than those found in made ground. In addition, the bioaccessibility of contaminants may be altered (increased or decreased) by land practices, such as liming of the soil to raise the pH or the addition of organic matter (Wragg 2005).
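As a worked sketch of the screening arithmetic in the first question above (using the illustrative Lincolnshire figures quoted there; the function name is ours):

```python
def max_total_for_bioaccessibility_testing(guideline_mg_kg, modal_fraction):
    """Total soil concentration up to which bioaccessibility testing may
    usefully refine the risk estimate: guideline / modal bioaccessible
    fraction."""
    return guideline_mg_kg / modal_fraction

# Possible arsenic guideline of 20 mg/kg; modal bioaccessible fraction ~10%
print(max_total_for_bioaccessibility_testing(20.0, 0.10))  # -> 200.0 mg/kg
```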

4 Examples of Bioaccessibility Studies

Bioaccessibility research is a rapidly developing scientific area; as such, there are many examples of its application in the scientific literature. It is beyond the scope of this chapter to give a comprehensive review; however, the following section gives examples of bioaccessibility-related studies for a variety of contaminants arising from geogenic sources and anthropogenic influences. Other examples of bioaccessibility testing applied to contaminated soils over the period 2003 to 2009 are listed in Table 7.1.

Table 7.1 Examples of where bioaccessibility testing has been applied to contaminated soils (2003–2009)

4.1 Geogenic Sources

Palumbo-Roe et al. (2005) and Wragg et al. (2007) have studied the bioaccessibility of arsenic in soils with elevated arsenic concentrations (up to circa 200 mg kg−1), using a physiologically based extraction test combined with geochemical testing (either the CISED methodology described in Section 7.2.4 or multivariate statistical modelling of geochemical soil survey data). The findings showed that the bioaccessible arsenic fraction was generally low and that the bioaccessible arsenic was mainly contained within calcium iron carbonate (sideritic) assemblages and only partially in iron aluminosilicates, probably berthierine, and iron oxyhydroxide phases, probably goethite. The bulk of the non-bioaccessible arsenic was bound up with less reactive iron oxide phases.

Naturally occurring arsenic in soils (13–384 mg kg−1) at a new housing site in southwest England (Nathanail and Smith 2007) was demonstrated not to pose an unacceptable risk to human health, using site-specific estimates of bioavailability and region-specific estimates of soil-to-plant uptake factors. Independent lines of evidence, consisting of data from sequential extraction of representative test soils and soil-to-plant uptake factors for the site, were used to justify the arsenic exposure factors for oral bioavailability. The results of the study and the subsequent Risk Assessment avoided the need for remediation and unnecessary public concern.

Denys et al. (2007) studied the bioaccessibility of lead in soils with a high lead carbonate (cerussite) content (up to 870 g kg−1). Using the RIVM in-vitro protocol, the authors found that lead bioaccessibility in high carbonate soils can be low (down to 20% of the total soil Pb content) and is not correlated with cerussite soil content, even when the concentration of this mineral is relatively high. This research indicates that mineralogical analysis alone is not a reliable predictor of the bioaccessible fraction.

4.2 Anthropogenic Influences

Basta and co-workers have investigated the bioaccessibility of arsenic and lead in smelter-impacted soils over a number of years, using the in-vitro gastrointestinal (IVG) method (Basta et al. 2007; Rodriguez et al. 1999). Method development of the IVG included validation and dosing trials with immature swine for fifteen soils with total arsenic concentrations ranging from 401 to 17,460 mg As kg−1. The research indicated that the IVG methodology was linearly correlated with the animal model (r = 0.83, p < 0.01) for arsenic relative bioavailabilities ranging from 2.7 to 42.8% (Rodriguez et al. 1999). Further research by this group investigated the effect of the dosing vehicle used to determine the bioaccessibility, using the same contaminated materials as employed in the 1999 research (Basta et al. 2007). As in the previous research, the IVG produced strong relationships with the in-vivo arsenic bioavailability, with or without the presence of a dosing vehicle (r = 0.92, p < 0.01 with a dosing vehicle; r = 0.96, p < 0.01 without). Further to the work on arsenic, the applicability of the IVG methodology has been investigated for soils contaminated with cadmium (Schroder et al. 2003). Relative bioavailable cadmium for ten soils (containing total cadmium ranging between 23.8 and 465 mg kg−1) was obtained from dosing trials using juvenile swine and ranged from 10.4 to 116%. Linear regression with cadmium bioaccessibility from the IVG methodology indicated strong correlations with both the stomach (r = 0.86) and intestinal (r = 0.80) phases of the IVG.
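The in-vitro/in-vivo comparisons reported above are simple linear regressions of relative bioavailability (from the swine dosing trials) against bioaccessibility (from the IVG extraction). A minimal sketch of that calculation is given below; the data arrays are illustrative placeholders, not the values of Rodriguez et al. (1999).

```python
import numpy as np
from scipy import stats

# Paired measurements for a set of test soils (illustrative values, %):
ivg_bioaccessibility = np.array([5.0, 9.0, 14.0, 21.0, 28.0, 40.0])  # in-vitro
in_vivo_rba = np.array([2.7, 8.0, 12.0, 19.0, 30.0, 42.8])           # swine trials

# Linear regression of in-vivo relative bioavailability on in-vitro bioaccessibility
fit = stats.linregress(ivg_bioaccessibility, in_vivo_rba)
print(f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
print(f"RBA ~ {fit.slope:.2f} x bioaccessibility {fit.intercept:+.2f}")
```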

In a study of an abandoned copper-arsenic mine (Devon Great Consols, DGC) in Devon, in the south-west of England, Palumbo-Roe and Klinck (2007) investigated the mineralogical factors that control the bioaccessibility of arsenic in soils influenced by the past mining operations in the area. Bioaccessibility (determined using a physiologically based extraction test, as described by Cave et al. (2003)) was related to the solid phase distribution of arsenic in the test soils (determined by the CISED method (Cave et al. 2004)). The results of this study show that the mine soils from DGC have a higher arsenic bioaccessibility (median 15%) than soils not affected by mining activities and other background soils collected from the sampling area (the Tamar catchment), which had a median bioaccessibility of 9%. Determination of the solid phase distribution indicated that the arsenic in the test soils was mainly hosted by an iron oxyhydroxide component, whose partial dissolution was responsible for the bioaccessible arsenic fraction. The degree of crystallinity of this component was thought to be an important control on the arsenic bioaccessibility.

Caboche et al. (2008) investigated bioaccessibility at sites influenced by different sources of historical contamination: a former lead-zinc mining site and a site in which soils were contaminated by atmospheric deposition of lead-containing particles. Total concentrations ranged from 48 to 247 mg kg−1 (dry weight) for arsenic and from 1,462 to 16,267 mg kg−1 (dry weight) for lead, with the most contaminated soils coming from the mining site. Bioaccessibility data for the two contaminant sources showed that arsenic and lead bioaccessibility was lower in the mining soils (ranging from 2.7 to 8% for arsenic and from 13.2 to 35.5% for lead) than in the soils contaminated by atmospheric deposition (10 to 52%). The results of this study underline the importance of bioaccessibility data when comparing exposure to contaminants present at contaminated sites.
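In all of these studies, bioaccessibility is reported as the percentage of the total soil concentration mobilised by the extraction test. A minimal sketch of that conversion, under assumed (hypothetical) extraction conditions, is:

```python
def bioaccessible_fraction(extract_conc_mg_per_l: float,
                           extract_volume_l: float,
                           soil_mass_kg: float,
                           total_conc_mg_per_kg: float) -> float:
    """Percentage of the total soil contaminant mobilised by the extraction."""
    mobilised_mg_per_kg = extract_conc_mg_per_l * extract_volume_l / soil_mass_kg
    return 100.0 * mobilised_mg_per_kg / total_conc_mg_per_kg

# e.g. 0.5 mg/L As in 0.1 L of gastric extract from 1 g of soil
# with a total As concentration of 200 mg/kg:
print(bioaccessible_fraction(0.5, 0.1, 0.001, 200.0))  # 25.0 (%)
```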

5 The BARGE Network

The Bioaccessibility Research Group of Europe (BARGE) is a network of European institutes and research groups, formed in 1999/2000, to ‘study human bioaccessibility of priority contaminants in soils’ (http://www.bgs.ac.uk/barge). A priority objective of the network is the provision of oral bioaccessibility/bioavailability data for Human Health Risk Assessments and policy making that is both robust and defensible.

A key driver for the inception of the network, identified by representatives of the 16 Member States of the concerted action CLARINET (Contaminated Land Rehabilitation Network for Environmental Technologies), was an urgent Europe-wide need for more realistic oral bioavailability factors that could be used in site-specific Risk Assessment and policy making. Initial collaborative efforts, financially supported by the Dutch Ministry of Housing, Spatial Planning and the Environment (VROM) and the participating institutes, brought together a multi-disciplinary team of research scientists (with a diverse knowledge base spanning pharmacokinetics, physiology, geochemistry and analytical measurement techniques), policy makers and Risk Assessment practitioners, all key to the decision making process.

Since its inception, the BARGE group has been active in the field of oral bioavailability and bioaccessibility. Its activities include undertaking a number of round robin studies (see the section below); liaising with the International Standards Organisation on the preparation of a technical specification entitled ‘Soil quality – Assessment of human exposure from ingestion of soil and soil material – Guidance on the application and selection of physiologically based extraction methods for the estimation of the human bioaccessibility/bioavailability of metals in soil’; disseminating findings through a website (http://www.bgs.ac.uk/barge); hosting workshops and dedicated speaker sessions on bioaccessibility-related topics at international conferences such as ConSoil (http://consoil.ufz.de/); and preparing a peer reviewed special publication on bioaccessibility topics (Gron and Wragg 2007).

As the research on oral bioavailability, and the development of oral bioavailability factors, is not an issue specific to Europe, and affects contaminated site practitioners worldwide, the BARGE group is an ever-expanding international research network, combining the cross-continental collaborative efforts of Europe and North America. In a similar vein to the BARGE group, researchers, Risk Assessment practitioners and regulators in Canada with an interest in oral bioavailability and the application of the resulting tools have joined forces to form BioAccessibility Research Canada (BARC). The long term aim of this group is ‘to provide a scientific basis for evaluating and predicting inorganic and organic contaminant bioaccessibility in soils found at contaminated sites in Canada’ (BARC 2006). To address issues relating specifically to the measurement of arsenic oral bioavailability and bioaccessibility by in-vivo and in-vitro methodologies respectively, researchers from both the BARGE and BARC groups have joined forces with other research institutes and government bodies to share knowledge via an electronic forum.

5.1 Inter-Laboratory Studies

To date, four international inter-laboratory studies have been carried out to investigate different aspects of bioaccessibility in the human gastro-intestinal tract. Three have been undertaken by the BARGE group, two of which have been published in peer reviewed publications (Oomen et al. 2002; Van de Wiele et al. 2007). Oomen et al. (2002) described a multi-laboratory study which compared the bioaccessibility data returned by five in-vitro methods for three solid materials contaminated with arsenic, cadmium and lead. The methods included physiologically and non-physiologically based test systems of both dynamic and static nature. The salient points of the study were that, in many cases, the bioaccessibility was <50%, an important factor for Human Health Risk Assessment, although a wide range of bioaccessibility values was observed across the in-vitro methods studied. The primary driver for the range in bioaccessible values was attributed to the difference in gastric pH between the test systems. High bioaccessibilities were typically observed for the simplest method (stomach compartment only) with a low gastric pH, and the lowest values for the system with the highest gastric pH (4.0). Low bioaccessible values were, however, also observed with two-phase systems incorporating a low gastric pH and an intestinal compartment at neutral pH, indicating that pH plays an important role in both phases of the in-vitro test.

The multi-laboratory comparison study by Van de Wiele et al. (2007) assessed the bioaccessibility of soil-bound lead under both fed and fasted conditions in a number of in-vitro test systems. The study utilized both the soils and the resulting lead bioavailability data from a previous human in-vivo trial (Maddaloni et al. 1998) as a reference point for comparison. As with the previous study by the BARGE group, both static and dynamic in-vitro methodologies were included, but unlike the previous study, all test systems were physiologically based. The study showed that, regardless of the nutritional status of the model (fed or fasted state), the lead bioaccessibility was significantly different (p < 0.05) between the methods, with bioaccessible lead ranging between ca. 2 and 35%. Comparison with the available in-vivo data indicated that the simulation of the human gastrointestinal system under fed conditions overestimated lead bioavailability, whilst under fasted conditions a number of the models investigated underestimated oral bioavailability (Van de Wiele et al. 2007). This study and the previous study by Oomen et al. (2002) note that differences in the measured bioaccessibilities are often due to key methodological parameters such as gastric and intestinal pH. This later study also notes that the method of separation (centrifugation, filtration, ultrafiltration) used in the test system is a critical factor in separating the bioaccessible from the non-bioaccessible fraction and in the interpretation of the results for use in Human Health Risk Assessment.

The third inter-laboratory study initiated by the BARGE group had the goal of progressing the development of a harmonised bioaccessibility methodology in order to carry out in-vitro testing for Human Health Risk Assessment of contaminated soils. The collective efforts of the network modified the previously published RIVM physiologically based in-vitro method, and the trial determined the analytical performance characteristics of the bioaccessibility measurement (repeatability and reproducibility). Modifications were made so that the methodology was robust and provided adequate conservatism, at least for first tier Risk Assessment, for future use across the local geological conditions of the individual member countries (Wragg et al. 2009). The study utilized slag materials, river sediments, soil materials and house dusts that had previously been investigated for their arsenic, cadmium and lead bioaccessibility and oral bioavailability by the groups of Professor Nick Basta (Basta et al. 2007; Rodriguez et al. 1999; Schroder et al. 2003, 2004a) and/or Professor Stan Casteel. This inter-laboratory study showed that the harmonised methodology had the potential to meet the benchmark criteria set by BARGE. In addition, with the aid of further testing on soils representative of the local geology of the participating countries, and not just highly elevated mining and slag waste, the method could be standardised for international use.
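Repeatability (within-laboratory) and reproducibility (between-laboratory) figures of the kind estimated in this trial are conventionally derived from a one-way analysis of variance over a balanced round-robin design, in the spirit of ISO 5725. The sketch below illustrates the calculation; the data matrix is invented for illustration and is not the trial data of Wragg et al. (2009).

```python
import numpy as np

# rows = laboratories, columns = replicate bioaccessibility results (%)
data = np.array([[21.0, 23.0, 22.0],
                 [25.0, 26.0, 24.0],
                 [19.0, 20.0, 21.0]])

n_labs, n_reps = data.shape
lab_means = data.mean(axis=1)

s_r2 = data.var(axis=1, ddof=1).mean()                  # repeatability (within-lab) variance
s_L2 = max(lab_means.var(ddof=1) - s_r2 / n_reps, 0.0)  # between-laboratory variance
s_R = np.sqrt(s_r2 + s_L2)                              # reproducibility standard deviation

print(f"repeatability SD  s_r = {np.sqrt(s_r2):.2f}%")
print(f"reproducibility SD s_R = {s_R:.2f}%")
```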

Health Canada has funded the BARC group to assess the variability in the reported data across Canadian laboratories in the form of a simple inter-laboratory comparison. This initial BARC study includes domestic commercial and research facilities and international research laboratories from the UK, the Netherlands and the US. Future round robin studies are planned by the group in order to examine the variability amongst individual methods and to compare results with toxicological reference values (BARC 2008).

5.2 Utilization of Bioaccessibility Data Across Europe

The concept of oral bioavailability/bioaccessibility is used in a highly variable manner across Europe. In France, until recently, bioaccessibility was not taken into account in Human Health Risk Assessment; more recently, however, the Institut National de l’Environnement Industriel et des Risques (INERIS) has been tasked with developing research programmes that include the application of bioaccessibility testing in Human Health Risk Assessment and with assessing the benefits of including such testing regimes. As part of its risk-based approach to the management of contaminated land, the UK has guidance in place that recognises the potential benefits of, and allows for the inclusion of, oral bioavailability/bioaccessibility data in Human Health Risk Assessments when addressing exposure representing significant possibilities of significant harm (SPOSH) (Department for the Environment Food and Rural Affairs and the Environment Agency 2002; ODPM 2004). Similarly, in Canada, although absorption factors for ingestion are usually set to 100% in screening level Risk Assessments, oral bioavailability is often determined as bioaccessibility, and for complex Risk Assessments site-specific bioaccessibility values may be determined as a surrogate for bioavailability (Health Canada 2004). However, there is currently a paucity of guidance concerning the application of bioaccessibility data in UK and Canadian Risk Assessments, which has led to a degree of uncertainty within the practitioner community regarding the use of this potentially beneficial tool. In the US, the ‘Risk Assessment Guidance for Superfund, Volume 1’ (USEPA 1989) considers the determination of bioavailability on a site-specific basis, and more recently guidance has become available on a number of issues surrounding site-specific bioavailability data (USEPA 2007a). Such issues include:

  • implementation of a validated methodology, and documentation of data collection and analysis;

  • EPA criteria for the evaluation of a bioavailability methodology, to test its suitability for use.

Additionally, following the recent validation of an in-vitro methodology for lead against in-vivo swine data, there is growing support for the use of reliable bioaccessibility data, in the first instance for Pb, in Human Health Risk Assessments (USEPA 2007b).

Despite apparent confusion surrounding the science and application of bioaccessibility testing within the UK contaminated site community, work on the process understanding of the mechanisms controlling bioaccessibility has continued for the priority UK contaminants (arsenic, chromium, nickel, lead and polyaromatic hydrocarbons (PAHs)), in addition to the application of bioaccessibility data in site-specific UK detailed quantitative Risk Assessments. Research on bioaccessibility testing has focussed on assessing the solid phase distribution of contaminants, such as arsenic, in soils to explain the bioaccessible sources of these contaminants (Cave et al. 2003, 2007; Nathanail and Smith 2007; Nathanail et al. 2004, 2005a, b; Palumbo-Roe and Klinck 2007; Palumbo-Roe et al. 2005). Knowledge of the physico-chemical sources of the bioaccessible contaminant can provide a better understanding of the mechanisms by which a potentially harmful contaminant becomes bioaccessible, and may aid the process understanding of how bioaccessibility may change with time and with changes in land use. Nathanail et al. (2005a) included in-vitro bioaccessibility data as one line of evidence to (a) assist in the derivation of site-specific assessment criteria and (b) demonstrate that the pedogenic arsenic present at a dwelling in Wellingborough in the UK did not pose an unacceptable human health risk to the owners.

Currently, bioaccessibility testing has no official status in the Netherlands. In 2006 the RIVM provided the Dutch government with a report recommending the use of bioaccessibility testing for lead on a site-specific basis, and in the latest revision of the Dutch Soil Quality Standards (Intervention Values) (Ministry of VROM 2008) a relative bioavailability factor for lead of 0.73 has been applied.

The Danish Environmental Protection Agency (EPA) has allowed for the use of oral bioavailability data for lead and cadmium (in the form of in-vitro bioaccessibility data) in the evaluation of sites for compliance with Soil Quality Standards. As part of this allowance, the Danish EPA has stipulated that oral bioavailability data must be obtained using the RIVM fasted state methodology; for lead the data must be generated using the stomach phase, and for cadmium the intestinal phase should be included.

In Flanders, bioaccessibility testing is considered for the assessment of risk to human health from soil contaminated with PAHs and other contaminants (Van de Wiele, T. 2008; Wragg, J., personal communication).
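By way of illustration, a relative bioavailability factor such as the Dutch value of 0.73 for lead enters the exposure model as a simple multiplier on the soil-ingestion dose. A minimal sketch follows, with placeholder exposure parameters that are not regulatory defaults.

```python
def daily_intake(soil_conc_mg_per_kg: float,
                 soil_ingestion_mg_per_day: float,
                 body_weight_kg: float,
                 relative_bioavailability: float = 1.0) -> float:
    """Average daily intake via soil ingestion, in mg per kg body weight per day."""
    soil_kg_per_day = soil_ingestion_mg_per_day * 1e-6  # mg of soil -> kg of soil
    return (soil_conc_mg_per_kg * soil_kg_per_day
            * relative_bioavailability / body_weight_kg)

# 500 mg/kg Pb in soil, 100 mg/day soil ingestion, 15 kg child:
print(daily_intake(500.0, 100.0, 15.0))        # default: assumes full bioavailability
print(daily_intake(500.0, 100.0, 15.0, 0.73))  # with the Dutch Pb factor applied
```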