In its application to chemical exposure problems, the risk assessment process is used to compile and organize the scientific information that is necessary to support environmental and public health risk management decisions. The approach is used to help identify potential problems, establish priorities, and provide a basis for regulatory actions. Indeed, it is apparent that the advancement of risk analysis in regulatory decision-making—among several other arenas—has helped promote rational policy deliberations over the past several decades. Yet, as real-world practice indicates, risk analyses have often been as much the source of controversy in regulatory considerations as the facilitator of consensus (ACS and RFF 1998). Even so, risk assessment can appropriately be regarded as a valuable tool for public health and environmental decision-making—albeit there tends to be disagreement among experts and policy makers about the extent to which its findings should influence decisions about risk. To help produce reasonable/pragmatic and balanced policies in its application, it is essential to explicitly recognize the character, strengths, and limitations of the analytical methods that are involved in the use of risk analysis techniques in the decision-making process.

Overall, risk assessment methods commonly encountered in the literature of environmental and public health management, and/or relevant to the management of chemical exposure problems, characteristically require a clear understanding of several fundamental issues/tenets and related attributes. This chapter discusses key fundamental principles and concepts that are expected to facilitate the application and interpretation of risk assessment information—and thus make it more useful in public health risk management decisions.

1 Fundamental Principles of Chemical Hazard, Exposure, and Risk Assessments

A hazard is an object or situation with the potential for creating undesirable adverse consequences; exposure is the situation of vulnerability to hazards; and risk is considered to be the probability or likelihood of an adverse effect due to some hazardous situation. Indeed, the distinction between hazard and risk is quite an important consideration in the overall appraisal of risk possibilities and/or scenarios; broadly speaking, it is the likelihood of harm resulting from exposure to a hazard that distinguishes risk from hazard. Accordingly, a substance is considered a hazard if it is capable of causing an adverse effect under any particular set of circumstances—whereas risk generally reflects the probability that an adverse effect will occur under actual or realistic circumstances, also taking into account the potency of the specific substance and the level of exposure to that substance. For example, a toxic chemical that is hazardous to human health does not constitute a risk unless human receptors/populations are exposed to such a substance—as conceptually illustrated by the Venn diagram representation shown in Fig. 4.1. Thus, from the point of view of human exposure to chemicals, risk can be defined as the probability that public health could be affected to various degrees (including an individual or group suffering injury, disease, or even death) under a specific set of circumstances.

Fig. 4.1

When do hazards actually represent risks?

The integrated and holistic assessment of hazards, exposures, and risks is indeed a very important contributor to any decision that is aimed at adequately managing any given hazardous situation. To this end, potential risks are estimated by considering the following key elements:

  • Probability or likelihood of occurrence of harm;

  • Intrinsic harmful features or properties of specified hazards;

  • Population-at-risk (PAR);

  • Exposure scenarios; and

  • Extent of expected harm and potential effects.

On the whole, a complete assessment of the potential hazards posed by a substance or an object typically involves, among several other things, a critical evaluation of available scientific and technical information on the substance or object of concern, as well as the possible modes of exposure. In particular, it bears emphasizing that potential receptors will have to be exposed to the hazards of concern before any risk can be said to exist. Overall, the availability of an adequate and complete information set is an important prerequisite for producing sound hazard, exposure, and risk assessments.

1.1 The Nature of Chemical Hazard, Exposure, and Risk

Hazard is broadly defined as the potential for a substance or situation to cause harm, or to create adverse impacts on populations and/or property. It represents the undetermined loss potential, and may comprise a condition, a situation, or a scenario with the potential for creating undesirable consequences. The degree of chemical hazard will usually be determined from the type of exposure scenario and the potential effects or responses resulting from any exposures. Whereas there may be no universally accepted single definition of risk, it may generally be considered as the probability or likelihood of an adverse effect, or an assessed threat to persons due to some hazardous situation; it is a measure of the probability and severity of adverse consequences from an exposure of potential receptors to hazards—and may simply be represented by a measure of the frequency of an event.

Procedures for analyzing hazards and risks may typically be comprised of several steps (Fig. 4.2), consisting of the following general elements:

  • Hazard Identification and Accounting

    • Identify hazards (including nature/identity of hazard, location, etc.)

    • Identify initiating events (i.e., causes)

    • Identify resolutions for hazard

    • Define exposure setting

  • Vulnerability Analysis

    • Identify vulnerable zones or locales

    • Identify concentration/impact profiles (or levels/degrees of hazards) for affected zones

    • Determine populations potentially at risk (such as human populations, and critical facilities)

    • Define exposure scenarios

  • Consequences/Impacts Assessment

    • Determine risk categories for all identifiable hazards

    • Determine probability of adverse outcome (from exposures to hazards)

    • Estimate consequences (including severity, uncertainties, etc.).

Fig. 4.2

Basic steps in the analyses of hazards and risks

Some or all of these elements may have to be analyzed in a comprehensive manner, depending on the nature and level of detail of the hazard and/or risk analysis that is being performed. In any event, the analyses typically fall into two broad categories—namely: endangerment assessment (which may be considered as contaminant-based, such as human health and environmental risk assessment associated with chemical exposures); and safety assessment (which is system failure-based, such as probabilistic risk assessment of hazardous facilities or installations). At the end of the day, the final step comprises developing risk management and/or risk prevention strategies for the problem situation.

1.1.1 Hazard Versus Risk: Portraying the Nomenclatural Differences

Invariably, hazard characterization will often form an important foundational basis for most environmental and public health risk management programs; the general purpose of such hazard characterization is to make a qualitative judgment of the effect(s) caused by an agent or stressor under consideration, and its relevance to a target population of interest. Clearly, in translating a hazard characterization into a corresponding risk value or indicator, the processes involved need to consider, among other things, the severity of critical effects and the specific affected population groups; for instance, in determining ‘safe exposure limits’ associated with human exposure to nitrate, it is important to recognize that infants are very sensitive to nitrate exposures (related to methemoglobinemia)—whereas this critical effect would not be relevant to the development of an occupational exposure limit. Consequently, it is important to carefully consider the scenarios of interest (with respect to population, duration, exposure routes, etc.) in such characterization efforts, in order to arrive at realistic and pragmatic risk conclusions—and subsequently an effectual risk management plan of action.

It is noteworthy that, irrespective of the type of analytical protocols adopted for any given evaluation scenario, a clear distinction between the terms ‘hazard’ and ‘risk’ can become a major issue to contend with in various important risk communication and/or risk management efforts. This may be especially true in any attempts to relay risk appraisal outcomes to a potentially impacted community that may, rightly or wrongly, perceive likely threat levels as being ‘unacceptable’. Thus, it becomes even more important to adopt clear nomenclature that explicitly recognizes (and properly conveys the message) that ‘hazard’ is generally defined as the potential to harm a target population, whereas ‘risk’ would typically encompass the probability of exposure along with the extent of damage. After all, hazard is associated only with the intrinsic ability of an agent, stressor, or situation to cause adverse effects to a target population or receptor—and this ability may never even materialize if the targets are adequately protected and/or are immune from exposure; in contrast, risk typically would take the probability and the scale of damage into account—premised on the assumption that a harmful event could indeed occur. Hence, the ‘decisive factor’ under such circumstances is the appropriate weighting of the possible scale of damage with the probability of exposure and the related harm—culminating in risk being generally deemed as the probability of occurrence of a harmful event (Scheer et al. 2014). In a way, defining risk therefore becomes a process of combining what might be viewed as ‘possibilistic’ measures with probability concepts (and perhaps with other qualitative indicators as well) in order to arrive at credible risk measures.

1.2 Basis for Measuring Risks

Risk represents the assessed loss potential, often estimated by the mathematical expectation of the consequences of an adverse event occurring. It is generically defined by the product of two components—the probability of occurrence (p) and the consequence or severity of occurrence (S)—viz.:

$$ \mathrm{Risk}= p\times S $$
(4.1)
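By way of a simple worked illustration, using purely hypothetical numbers: an event with an annual occurrence probability of p = 10^-4 and a consequence severity of S = 200 (expressed, say, as a monetized loss per occurrence) would carry an expected annual risk of

$$ \mathrm{Risk}= p\times S={10}^{-4}\times 200=0.02\;\mathrm{per}\;\mathrm{year} $$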

When interpreted as the probability of a harmful event to humans or to the environment that is caused by a chemical, physical, or biological agent, risk can also be described by the following conceptual relationship:

$$ \mathrm{Risk}=\left[ f\left(\mathrm{I}\right)\times f\left(\mathrm{P}\right)\right]- f\left(\mathrm{D}\right) $$
(4.2)

where f(I) represents an ‘intrinsic risk’ factor that is a function of the characteristic nature of the agent or the dangerous properties of the hazard; f(P) is a ‘presence’ factor that is a function of the quantity of the substance or hazard released into the human environment, and of all the accumulation and removal methods related to the chemical and physical parameters of the product, as well as to the case-specific parameters typical of the particular environmental setting; and f(D) represents a ‘defense’ factor that is a function of what society can do in terms of both protection and prevention to minimize the harmful effects of the hazard. Meanwhile, it could perhaps be argued that the most important factor in this equation is f(D); this may include both the ordinary defense mechanisms for hazard abatement, as well as some legislative measures. In effect, the level of risk is very much dependent on the degree of hazard as well as on the amount of safeguards or preventative measures against adverse effects; consequently, risk can also be conveniently defined by the following simplistic conceptual relationships:

$$ \mathrm{Risk}=\frac{\left[\mathrm{Hazard}\right]}{\left[\mathrm{Preventative}\;\mathrm{Measures}\right]} $$
(4.3)

or

$$ \mathrm{Risk}= f\left\{\mathrm{Hazard},\mathrm{Exposure},\mathrm{Safeguards}\right\} $$
(4.4)

where ‘Preventative Measures’ or ‘Safeguards’ is considered to be a function of exposure—or rather inversely proportional to the degree of exposure; the ‘Preventative Measures’ or ‘Safeguards’ component represents the actions that are generally taken to minimize potential exposure of target populations to the specific hazards.
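To make the foregoing conceptual relationships concrete, the following minimal sketch expresses Eqs. (4.1) and (4.3) in code; the functional forms simply mirror the qualitative structure above, and all numerical values are invented for illustration.

```python
# Illustrative sketch of the conceptual risk relationships above.
# Functional forms and numbers are hypothetical, not prescribed models.

def risk_expectation(p: float, severity: float) -> float:
    """Eq. (4.1): risk as probability of occurrence times severity."""
    return p * severity

def risk_with_safeguards(hazard: float, preventative_measures: float) -> float:
    """Eq. (4.3): risk scales with hazard, inversely with safeguards."""
    return hazard / preventative_measures

# Hypothetical event: 1-in-10,000 annual probability, severity score of 200.
print(risk_expectation(1e-4, 200))      # -> 0.02 expected loss per year

# Doubling the safeguards halves the conceptual risk measure.
print(risk_with_safeguards(10.0, 2.0))  # -> 5.0
print(risk_with_safeguards(10.0, 4.0))  # -> 2.5
```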

It is notable that, invariably, the estimation of risks involves an integration of information on the intensity, frequency, and duration of exposure for all identified exposure routes associated with the exposed or impacted group(s); for instance, an identifiable risk may represent the probability for a chemical to cause adverse impacts to potential receptors as a result of exposures over specified time periods. In any case, the risk measures commonly give an indication of the probability and severity of adverse effects (Fig. 4.3)—and this is generally established with varying degrees of confidence according to the importance of the decision involved.

Fig. 4.3

General conceptual categories of risk measures

In general, measures used in risk analysis take various forms, depending on the type of problem, the degree of resolution appropriate for the situation at hand, and the analysts’ preferences. Thus, the risk parameter may be expressed in quantitative terms—in which case it could take on values from zero (associated with certainty of no adverse effects) to unity (associated with certainty that adverse effects will occur). In several other cases, risk is only described qualitatively—such as by use of descriptors like ‘high’, ‘moderate’, ‘low’, etc.; or indeed, the risk may be described in semi-quantitative/semi-qualitative terms. In any case, the risk qualification or quantification process will normally rely on the use of several measures, parameters and/or tools as reference yardsticks (Box 4.1; a brief computational illustration of selected measures follows the box)—with ‘individual lifetime risk’ (represented by the probability that the individual will be subjected to an adverse effect from exposure to identified hazards) being perhaps the most commonly used measure of risk. At any rate, it is also worth mentioning here that the type or nature of the ‘consuming/target audience’ must be given careful consideration in choosing the type of risk measure or index to adopt for a given program or situation.

Box 4.1 Typical/Common Measures, Parameters, and/or Tools That Form the Basis for Risk Qualification or Quantification

  • Probability distributions (based on probabilistic analyses)

  • Expected values (based on statistical analyses)

  • Economic losses or damages

  • Public health damage

  • Risk profile diagrams (e.g., iso-risk contours plotted on area map, to produce an iso-risk contour map)

  • Incidence rate (defined by the ratio of [number of new cases over a period of time]:[population at risk])

  • Prevalence rate (defined by the ratio of [number of existing cases at a point in time]:[total population])

  • Relative risk (i.e., risk ratio) (defined by a ratio such as [incidence rate in exposed group]:[incidence rate in non-exposed group])

  • Attributable risk (i.e., risk difference) (defined by an arithmetic difference, such as [incidence among an exposed group] − [incidence among the non-exposed group])

  • Margin of safety (defined by the ratio of [the highest dose level that does not produce an adverse effect]:[the anticipated human exposure])

  • Individual lifetime risk (equal to the product of exposure level and severity, e.g., [dose × potency])

  • Population or societal risk (defined by the product of the individual lifetime risk and the population exposed)

  • Frequency-consequence diagrams (also known as F-N curves for fatalities, to define societal risk)

  • Quality of life adjustment (or quality adjusted life expectancy, QALE)

  • Loss of life expectancy (given by the product of individual lifetime risk and the average remaining lifetime)
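As a brief illustration of how a few of the measures listed in Box 4.1 might be computed in practice (all counts and parameter values are invented for the example; the formulas follow the ratio/product definitions given in the box):

```python
# Computing selected Box 4.1 risk measures from hypothetical study data.
# All counts and parameter values below are invented for illustration.

new_cases, person_years = 30, 10_000            # cohort follow-up data
incidence_rate = new_cases / person_years       # new cases : population at risk

existing_cases, total_population = 120, 50_000
prevalence_rate = existing_cases / total_population

incidence_exposed, incidence_unexposed = 0.012, 0.003
relative_risk = incidence_exposed / incidence_unexposed      # risk ratio
attributable_risk = incidence_exposed - incidence_unexposed  # risk difference

highest_no_effect_dose, anticipated_exposure = 5.0, 0.05     # mg/kg-day
margin_of_safety = highest_no_effect_dose / anticipated_exposure

dose, potency = 1e-3, 2e-2          # mg/kg-day and risk per mg/kg-day
individual_lifetime_risk = dose * potency                    # probability
population_risk = individual_lifetime_risk * 1_000_000       # expected cases

print(incidence_rate, prevalence_rate, relative_risk, attributable_risk)
print(margin_of_safety, individual_lifetime_risk, population_risk)
```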

1.3 What Is Risk Assessment?

Several somewhat differing definitions of risk assessment have been published in the literature by various authors to describe a variety of risk assessment methods and/or protocols (see, e.g., Asante-Duah 1998; Cohrssen and Covello 1989; Conway 1982; Cothern 1993; Covello et al. 1986; Covello and Mumpower 1985; Crandall and Lave 1981; Davies 1996; Glickman and Gough 1990; Gratt 1996; Hallenbeck and Cunningham 1988; Kates 1978; Kolluru et al. 1996; LaGoy 1994; Lave 1982; Neely 1994; Norrman 2001; NRC 1982, 1983, 1994a, b; Richardson 1990, 1992; Rowe 1977; Scheer et al. 2014; Turnberg 1996; USEPA 1984; Whyte and Burton 1980). In a generic sense, risk assessment may be considered to be a systematic process for arriving at estimates of all the significant risk factors or parameters associated with an entire range of ‘failure modes’ and/or exposure scenarios in connection with some hazard situation(s). It entails the evaluation of all pertinent scientific information to enable a description of the likelihood, nature, and extent of harm to human health as a result of exposure to chemicals (and really other potential stressors) present in the human environments.

Risk assessment is indeed a scientific process that can be used to identify and characterize chemical exposure-related human health problems. In its application to the management of chemical exposure problems, the process encompasses an evaluation of all the significant risk factors associated with all feasible and identifiable exposure scenarios that are the result of specific chemicals being introduced into the human environments. It may, for instance, involve the characterization of potential adverse consequences or impacts to a target (human) population or groups that are potentially at risk due to exposure to chemicals found in consumer products and/or in the environment.

Overall, the public health risk assessment process seeks to estimate the likelihood of occurrence of adverse effects resulting from exposures of human receptors to chemical, physical, and/or biological agents present in the human living and work environments. The process entails a mechanism that utilizes the best available scientific knowledge to establish case-specific responses that will ensure justifiable and defensible decisions—as necessary for the management of hazardous situations in a cost-efficient manner. The process is also concerned with the assessment of the importance of all identified risk factors to the various stakeholders whose interests are embedded in a candidate problem situation (Petak and Atkisson 1982).

1.4 The Nature of Risk Assessments

Traditionally, risk assessment methods have been viewed as belonging to one of several general major categories—typically under the broad umbrellas of: hazard assessment, exposure assessment, consequence assessment, and risk estimation (Covello and Merkhofer 1993; Norrman 2001). The hazard assessment may consist of monitoring (e.g., source monitoring and laboratory analyses), performance testing (e.g., hazard analysis and accident simulations), statistical analyses (e.g., statistical sampling and hypotheses testing), and modeling methods (e.g., biological models and logic tree analyses). The exposure assessment may be comprised of monitoring (e.g., personal exposures monitoring, media contamination monitoring, biologic monitoring), testing (e.g., laboratory tests and field experimentation), dose estimation (e.g., as based on exposure time, material disposition in tissue, and bioaccumulation potentials), chemical fate and behavior modeling (e.g., food-chain and multimedia modeling), exposure route modeling (e.g., inhalation, ingestion, and dermal contact), and populations-at-risk modeling (e.g., general population vs. sensitive groups). The consequence assessment may include health surveillance, hazard screening, animal tests, human tests, epidemiologic studies, animal-to-human extrapolation modeling, dose-response modeling, pharmacokinetic modeling, ecosystem monitoring, and ecological effects modeling. The risk estimation will usually take such forms as relative risk modeling, risk indexing (e.g., individual risk vs. societal risk), nominal vs. worst-case outcome evaluation, sensitivity analyses, and uncertainty analyses. Detailed listings of key elements of the principal risk assessment methods are provided elsewhere in the literature (e.g., Covello and Merkhofer 1993; Norrman 2001). Meanwhile, it is notable that most of the techniques available for performing risk assessments are structured around decision analysis procedures—since such an approach tends to better facilitate comprehensible solutions for even complicated problems. Invariably, the risk assessment process can be used to provide a ‘baseline’ estimate of existing risks that can be attributed to a given agent or hazard, as well as to determine the potential reduction in exposure and risk under various mitigation scenarios.

Risk assessment is indeed a powerful tool for developing insights into the relative importance of the various types of exposure scenarios associated with potentially hazardous situations. But as Moeller (1997) points out, it has to be recognized that a given risk assessment provides only a snapshot in time of the estimated risk of a given toxic agent at a particular phase of our understanding of the issues and problems. To be truly instructive and constructive, therefore, risk assessment should preferably be conducted on an iterative basis—being continually updated as new knowledge and information become available.

As a final point here, it is noteworthy that, in general, some risk assessments may be classified as retrospective—i.e., focusing on injury after the fact (e.g., the nature and level of risks at a given contaminated site); others may be considered predictive—such as in evaluating possible future harm to human health or the environment (e.g., risks anticipated if a newly developed food additive is approved for use in consumer food products). In any event, in relation to the investigation of chemical exposure problems, it is apparent that the focus of most public health risk assessments tends to be on a determination of potential or anticipated risks to the populations potentially at risk.

1.5 Recognition of Uncertainty as an Integral Component of Risk Assessments

A major difficulty in decision-making resides in the uncertainties of system characteristics for the situation at hand. Uncertainty is the lack of confidence in the estimate of a variable’s magnitude or probability of occurrence. Invariably, scientific judgment becomes an important factor in problem-solving under uncertainty, and decision analysis provides a means of representing the uncertainties in a manner that allows informed discussion. The presence of uncertainty means, in general, that the best outcome obtainable from an evaluation and/or analysis cannot necessarily be guaranteed. Nonetheless, as has been pointed out by Bean (1988), decisions ought to be made even in an uncertain setting—otherwise several aspects of environmental (and related public health) management actions could become completely paralyzed. Indeed, there are inevitable uncertainties associated with just about all risk estimates, but these uncertainties do not invalidate the use of the risk estimates in the decision-making process. However, it is important to identify and define the confidence levels associated with the particular evaluation—also recognizing that, depending on the specific level of detail of a risk assessment, the type of uncertainty that dominates at each stage of the analysis can be quite different.

Uncertainty analysis can indeed be performed qualitatively or quantitatively—with sensitivity analysis often being a useful adjunct to the uncertainty analysis. Sensitivity analysis entails the determination of how rapidly the output of a given analysis changes with respect to variations in the input data; thus, in addition to presenting the best estimate, the evaluation will also provide a range of likely estimates in the form of a sensitivity analysis. In fact, it is generally recommended that a sensitivity analysis be made an integral part of a detailed risk evaluation process. Through such analyses, uncertainties can be assessed properly, and their effects on given decisions accounted for in a systematic way. In this manner, the risk associated with given decision alternatives may be properly delineated, and appropriate corrective measures can then be taken accordingly.

In view of the fact that risk assessment may constitute a very crucial part of the overarching environmental and public health management decision-making process, it is essential that all the apparent sources of uncertainty be well documented. Indeed, the need to be explicit about uncertainty issues in risk analysis has long been recognized—and this remains a recurrent theme for policy analysts and risk management practitioners. In general, the uncertainty can be characterized via sensitivity analysis and/or probability analysis techniques—with the technique of choice usually being dependent on the available input data statistics. Broadly speaking, sensitivity analyses require data on the range of values for each exposure factor in the scenario—and probabilistic analyses require data on the range and probability function (or distribution) of each exposure factor within the scenario. Further discussion of this topic appears later on in Chap. 12 of this title.
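As a minimal sketch of the one-at-a-time sensitivity screening described above, assuming, purely for illustration, a simple multiplicative exposure-risk model and invented parameter ranges:

```python
# One-at-a-time sensitivity screening for a simple multiplicative risk model.
# The model form and all parameter values/ranges are illustrative assumptions.

base = {"concentration": 0.5,   # mg/L in drinking water (hypothetical)
        "intake_rate": 2.0,     # L/day
        "unit_risk": 1e-3}      # risk per unit intake (hypothetical)

ranges = {"concentration": (0.1, 1.5),
          "intake_rate": (1.0, 3.0),
          "unit_risk": (1e-4, 5e-3)}

def risk(params: dict) -> float:
    return params["concentration"] * params["intake_rate"] * params["unit_risk"]

baseline = risk(base)
for name, (lo, hi) in ranges.items():
    low, high = dict(base), dict(base)
    low[name], high[name] = lo, hi
    # The output swing relative to baseline flags the dominant inputs.
    print(f"{name:14s} risk range: {risk(low):.2e} .. {risk(high):.2e} "
          f"(baseline {baseline:.2e})")
```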

1.6 Risk Assessment Versus Risk Management: The Dichotomy Between Risk Assessment and Risk Management

Risk assessment has been defined as the ‘characterization of the potential adverse health effects of human exposures to environmental hazards’ (NRC 1983). In a typical risk assessment, the extent to which a group of people has been or may be exposed to a certain chemical is determined; the extent of exposure is then considered in relation to the kind and degree of hazard posed by the chemical—thereby allowing an estimate to be made of the present or potential risk to the target population. Depending on the problem situation, different degrees of detail may be required for the process; in any event, the continuum of acute to chronic hazards and exposures would typically be fully investigated in a comprehensive assessment, so that the complete spectrum of risks can be defined for subsequent risk management decisions.

The risk management process—which utilizes prior-generated risk assessment information—involves making a decision on how to protect public health. Examples of risk management actions include: deciding how much of a given chemical of concern/interest an operating industry or company may discharge into a river; deciding which substances may be stored at a hazardous waste disposal facility; deciding the extent to which a hazardous waste site must be cleaned up; setting permit levels for chemical discharge, storage, or transport; establishing levels for air pollutant emissions; and determining the allowable levels of contamination in drinking water or food products. In a way, this generically portrays how risk management is distinct from risk assessment—while nevertheless maintaining a fundamental relationship with it.

At the end of the day, risk assessment is generally conducted to facilitate risk management decisions. Whereas risk assessment focuses on evaluating the likelihood of adverse effects, risk management involves the selection of a course of action in response to an identified risk—with the latter often based on many other factors (e.g., social, legal, political, or economic) over and above the risk assessment results. Essentially, risk assessment provides information on the likely health risk, and risk management is the action taken based on that information (in combination with other ‘external’ but potentially influential factors).

2 Fundamental Concepts in Risk Assessment Practice

The general types of risk assessment often encountered in practice may range from evaluations of the potential effects of toxic chemical releases known to be occurring, through evaluations of the potential effects of releases due to events whose probability of occurrence is uncertain (Moeller 1997). Regardless, in order to adequately evaluate the risks associated with a given hazard situation, several concepts are usually employed in the processes involved. Some of the fundamental concepts and definitions that will generally facilitate a better understanding of the risk assessment process and application principles, and that may also affect risk management decisions, are introduced in this section.

2.1 Qualitative Versus Quantitative Risk Assessment

In public health risk assessments, quantitative tools are often used to better define exposures, effects, and risks in the broad context of risk analysis. Such tools will usually employ the plausible ranges associated with default exposure scenarios, toxicological parameters, and indeed other assumptions and policy positions. Although the utility of numerical risk estimates in risk analysis has to be appreciated, these estimates should be considered in the context of the variables and assumptions involved in their derivation—and indeed in the broader context of likely biomedical opinion, host factors, and actual exposure conditions. Consequently, directly or indirectly, qualitative descriptors also become part of a quantitative risk assessment process. For instance, in evaluating the assumptions and variables relating to both toxicity and exposure conditions for a chemical exposure problem, the risk outcome may be provided in qualitative terms—albeit with the risk levels expressed in quantitative terms.

In general, the attributable risk for any given problem situation can be expressed in qualitative, semi-quantitative, or quantitative terms. For instance, in conveying qualitative conclusions regarding chemical hazards, narrative statements incorporating ‘weight-of-evidence’ or ‘strength-of-evidence’ conclusions may be used—i.e., in lieu of alpha-numeric designations alone. In other situations, purely numeric parameters are used—and yet in other circumstances, a combination of both numeric parameters and qualitative descriptors is used in the risk presentations/discussions.

2.1.1 Risk Categorization

Oftentimes in risk studies, it becomes necessary to put the degrees of hazard or risk into different categories for risk management purposes. A typical risk categorization scheme for potential chemical exposure problems may involve a grouping of the ‘candidate’ problems on the basis of the potential risks attributable to various plausible conditions—such as high-, intermediate-, and low-risk problems, as conceptually depicted by Fig. 4.4. Under such a classification scheme, a case-specific problem may be designated as ‘high-risk’ when exposure represents a real or imminent threat to human health; in general, the high-risk problems will prompt the most concern—requiring immediate and urgent attention or corrective measures to reduce the threat. Indeed, to ensure the development of adequate and effectual public health risk management or corrective action strategies, potential chemical exposure problems may need to be prudently categorized in a similar or other appropriate manner during the risk analysis. In the end, such a classification would likely facilitate the development and implementation of a more efficient public health risk management or corrective action program.

Fig. 4.4

A conceptual representation of typical risk categories for chemical exposure problems

2.2 Conservatisms in Risk Assessments

Many of the parameters and assumptions used in hazard, exposure, and risk evaluation studies tend to have high degrees of uncertainty associated with them—thereby potentially clouding the degree of confidence assigned to any estimated measures of safety. Consequently, ‘erring on the side of safety’ tends to be the universal ‘mantra’ of most safety designers and analysts. To facilitate a prospective safe design and analysis, it is common practice to model risks such that the risk levels determined for management decisions are preferably over-estimated. Such ‘conservative’ estimates (also often cited as ‘worst-case’ or ‘plausible upper bound’ estimates) used in risk assessment are based on the supposition that pessimism in risk assessment (with resultant high estimates of risks) is more protective of public health and/or the environment.

Indeed, in performing risk assessments, scenarios have often been developed to reflect the worst possible exposure pattern; this notion of a ‘worst-case scenario’ in the risk assessment generally refers to the event or series of events resulting in the greatest exposure or potential exposure. Also, quantitative cancer risk assessments are typically expressed as plausible upper bounds rather than as estimates of central tendency; but when several plausible upper bounds are added together, the question arises as to whether the overall result is still plausible (Bogen 1994; Burmaster and Harris 1993; Cogliano 1997). At any rate, although it is believed that the overall risk depends on the independence, additivity, and synergistic/antagonistic interactions among the carcinogens, and on the number of risk estimates (as well as on the shapes of the underlying risk distributions), sums of upper bounds still provide useful information about the overall risk. On the other hand, gross exaggeration of actual risks could lead to poor decisions being made with respect to the oftentimes very limited resources available for general risk mitigation purposes. Thus, after establishing a worst-case scenario, it is often desirable to also develop and analyze more realistic or ‘nominal’ scenarios, so that the level of risk posed by a hazardous situation can be better bounded—via the selection of a ‘best’ or ‘most likely’ set of assumptions for the risk assessment. But in deciding on what realistic assumptions are to be used in a risk assessment, it is imperative that the analyst choose parameters that will, at the very worst, result in erring on the side of safety. It is also notable that a number of investigators (see, e.g., Anderson and Yuhas 1996; Burmaster and von Stackelberg 1991; Cullen 1994; Maxim, in Paustenbach 1988) have offered a variety of techniques that could help make risk assessments more realistic—i.e., rather than depending on wholesale compounded conservative assumptions.

By and large, there generally is a need to systematically undertake sensitivity analyses, among other things; this may indeed include the use of multiple assumption sets that reflect a wider spectrum of exposure scenarios. This is important because controls based on the so-called upper-bound estimate or worst-case scenario may address risks that are almost nonexistent, and may thus be impractical. In fact, risk assessments using extremely conservative biases do not necessarily provide risk managers with the quality information needed to formulate efficient and cost-effective management strategies. Also, using plausible upper-bound risk estimates or worst-case scenarios may lead to spending scarce and limited resources to regulate or control insignificant risks—while at the same time more serious risks are probably being ignored. Thus, ‘blind’ conservatism in individual assessments may not be optimal, or even truly conservative in a broad sense, if some problematic sources of risk are not addressed simply because other less serious ones are receiving undue attention. For such reasons, the overall recommendation is to strive for accuracy rather than conservatism.
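The compounding effect discussed above can be demonstrated with a small simulation: if each of several independent lognormal exposure factors is fixed at its own 95th percentile, the resulting product typically lands far beyond the 95th percentile of the true product distribution. The sketch below assumes four such factors with invented distribution parameters.

```python
# Demonstrating compounded conservatism: a product of 95th-percentile inputs
# lands far beyond the 95th percentile of the actual product distribution.
# All distributions and parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_factors, n_samples = 4, 100_000

# Four independent lognormal exposure factors (median 1, sigma = 0.5).
samples = rng.lognormal(mean=0.0, sigma=0.5, size=(n_samples, n_factors))
product = samples.prod(axis=1)

p95_each = float(np.exp(0.5 * 1.645))   # 95th percentile of a single factor
compounded = p95_each ** n_factors      # every factor set to its own p95
percentile = (product < compounded).mean() * 100

print(f"Compounded point estimate:       {compounded:.2f}")
print(f"True 95th percentile of product: {np.quantile(product, 0.95):.2f}")
print(f"Compounded estimate falls at the {percentile:.1f}th percentile")
```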

2.3 Individual Versus Group Risks

In the application of risk assessment to environmental and public health risk management programs, it often becomes important to distinguish between ‘individual’ and ‘societal’ risks—in order that the most appropriate metric/measure can be used in the analysis of case-specific problems. Individual risks are considered to be the frequency at which a given individual could potentially sustain a given level of adverse consequence from the realization or occurrence of specified hazards. Societal risk, on the other hand, relates to the frequency and the number of individuals sustaining some given level of adverse consequence in a given population due to the occurrence of specified hazards; the population risk provides an estimate of the extent of harm to the population or population segment under review.

Broadly speaking, four types of risks may be differentiated for most situations—namely:

  • Risks to individuals

  • Risks to the general population

  • Risks to highly exposed subgroups of a population

  • Risks to highly sensitive subgroups of a population

The latter three categories may then be considered as belonging to the ‘societal’ or ‘group’ risk category—representing population risks associated with more than one individual. Individual risk estimates represent the risk borne by individual persons within a population—and are more appropriate in cases where individuals face relatively high risks. However, when individual risks are not inequitably high, it becomes important during resource allocation to deliberate on possible society-wide risks that might be relatively higher. Indeed, risk assessments almost always deal with more than a single individual. However, individual risks are also frequently calculated for some or all of the persons in the population being studied, and these are then put into the context of where they fall in the distribution of risks for the entire population.

Finally, it is noteworthy that, at an individual level, the choice of whether or not to accept a risk is primarily a personal decision. However, on a societal level (wherein values tend to be in conflict, and decisions often produce prospective ‘winners’ and ‘losers’), the decision to accept or reject a risk tolerance level is much more difficult (Cohrssen and Covello 1989). In fact, no numerical level of risk will likely receive universal acceptance; on the other hand, the idea of eliminating all risks is a virtually impossible task—especially for modern societies in which people have become so accustomed to numerous ‘hazard-generating’ luxuries of life. Correspondingly, for many activities and technologies of today, some level of risk would normally have to be tolerated in order for one to benefit from the activity or technology. Consequently, levels of risk that may be considered tolerable or relatively ‘safe enough’ should generally be identified/defined—at least on the societal level—to facilitate rational risk management and related decision-making tasks. Under such circumstances, it must be acknowledged that individuals at the high end of a risk distribution/spectrum are often of special interest to risk managers—especially when considering various actions to mitigate the risk; these individuals often are either more susceptible to the identified adverse health effect than others in the population, or are more highly exposed individuals, or both.

2.4 Consideration of Risk Perception Issues

The general perception of risks tends to vary amongst individuals and/or groups, and may even change with time. Risk perception may therefore be considered as having both spatial and temporal dimensions. In general, the public often views risk differently vis-à-vis the typical risk estimates developed by technical experts. Indeed, this notion ties in very well with the concept that public perception of risk is a function of hazard and the so-called ‘outrage’ factors; the ‘outrage’ component describes a range of (more or less abstract) factors, other than the actual likelihood of a hazard, that contribute to an enhanced or variant perception of the estimated risk (Sandman 1993; Slovic 1993, 1997). Conceivably, these ‘outrage’ factors explain why multiple hazards of similar magnitude can at times be perceived as having vastly different levels of concomitant risk. In any event, whereas public outrage is not tangible, it is still real—and must therefore be addressed to ensure program success.

In general, risks that are involuntary (e.g., environmental risks) or ‘novel’ seem to arouse more concern from target/affected populations than those that are voluntary (e.g., associated with the use of certain cosmetics and other consumer products) or ‘routine’; thus, the latter tend to be more acceptable to the affected individuals (van Leeuwen and Hermens 1995). Similarly, ‘natural’ toxins and contaminants in foods may be considered reasonably acceptable (even though they may cause illness), whereas food additives (used in foodstuffs to assist in preservation) may not be as acceptable to some people (Richardson 1986). Also, perceptions about risk tend to be influenced by: the sources of information; styles of presentation; personal background and educational levels; cultural contexts; and the dimensions of a particular risk problem. For instance, there seems to be reasonable documentation and recognition regarding cultural explanations for some risk management controversies that have occurred in fairly recent times (see, e.g., Earle and Cvetkovich 1997)—i.e., in regard to the ways people differ in their thinking about risk (or risk acceptability for that matter). In fact, several value judgments become an important component of the consequential decision-making process—with the value judgments involving very complex social processes.

A fairly well established hierarchy of risk ‘tolerability’ has indeed emerged in recent times, involving several issues/factors—including those enumerated in Box 4.2 (Cassidy 1996; Cohrssen and Covello 1989; Lowrance 1976). In the final analysis, issues relating to risk perception become a very important consideration in environmental and public health risk management decisions—especially because, in some situations, the perception of a group of people may alter the priorities assigned to the reduction of competing risks. In fact, the differences between risk perception and risk estimation could have crucial consequences for the assessment, management, and communication of risks. This is because the particular risks estimated in a given risk assessment may not necessarily be consistent with the perceptions or concerns of those individuals most directly affected.

Heuristic Reasoning Structure vs. ‘Formalized’ Risk Assessment

Cognitive heuristics tend to dictate or form the basis of the risk perception often observed in the general (lay) population—i.e., rather than the systematic or structured reasoning that tends to form the basis of a ‘formal’ risk assessment carried out by most scientific experts. Even so, these apparently different paths to risk management decisions are not necessarily incompatible or inconsistent. Indeed, it has been suggested (e.g., MacGillivray 2014) that significant aspects of risk assessment can initially be represented as heuristics (i.e., despite their generally rough and rather contingent nature)—with the insights from this subsequently used to work toward a useful analytical framework for characterizing the process in a more formal manner. In actual fact, the heuristic elements of carrying out a risk assessment could (and probably should) be viewed or understood as a way of structuring, authenticating and/or formalizing the overall risk assessment process as a true scientific practice (MacGillivray 2014). After all, among several other things, ‘weight-of-evidence’ (WoE) heuristics/approaches have become increasingly prominent in a variety of environmental decision-making scenarios—these generally follow the logic that there are often multiple lines of evidence that bear on a particular causal inference, and which therefore need to be weighted and aggregated prior to making a final decision; such a process may in principle be guided by some formal algorithm or set of rules—albeit in practice it typically takes the form of factors-based judgments (MacGillivray 2014). Ultimately, the integrated approach of using heuristic concepts together with ‘formalized’ structures could benefit the overall risk assessment process by adding an additional layer/degree of consistency, transparency, and even some level of predictability in both the processes involved and the final outcomes.

Box 4.2 Key Factors Affecting the ‘Tolerability’ of Risk by Individuals and Society

  • Voluntariness (i.e., Voluntary vs. Involuntary exposures)

  • Response time (i.e., Delayed vs. Immediate effects)

  • Source (i.e., Natural vs. Human-made risks)

  • Controllability (i.e., Controllable vs. Uncontrollable)

  • Perception of personal control

  • Familiarity with the type of hazard (i.e., Old/Known vs. New/Unknown hazards or risks)

  • Perceptions about potential benefits (i.e., Exposure is an essential vs. Exposure is a luxury)

  • Nature of hazard and/or consequences (i.e., Ordinary vs. Catastrophic)

  • Perception of the extent and type of risk

  • Perceptions about comparative risks for other activities

  • Reversibility of effects (i.e., Reversible vs. Irreversible)

  • Perceptions about available choices (i.e., No alternatives available vs. Availability of alternatives)

  • Perceptions about equitability/fairness of risk distribution

  • Continuity of exposure (i.e., Occasional vs. Continuous)

  • Visual indicators of risk factors or levels (i.e., Tangible vs. Intangible risks)

2.5 Deterministic Versus Probabilistic Risk Assessments

Deterministic risk assessment methods generally involve the exclusive use of key data sets that lead to specific ‘singular’ and/or ‘monotonic’ outcomes—often considered the ‘traditionalist’ approach. Probabilistic (or stochastic) methods typically entail the application of statistical tools that incorporate elements of random behavior in key data sets—often viewed as the more ‘contemporaneous’ approach.

In the application of risk assessment to environmental and public health risk management programs, it has become common practice to utilize either or both of the deterministic and probabilistic methods of approach—in efforts to facilitate the most effectual decision-making processes that would adequately support public health risk management needs. In practice, the deterministic approach to risk assessments can be said to be the classical or traditional tool preceding the development of stochastic or probabilistic methodologies. On the other hand, because deterministic models generally do not explicitly consider uncertainty in key variables and/or model parameters, such models provide a rather limited picture to support effectual risk management programs. Even so, deterministic models can be relied upon to a great extent for certain preliminary studies—i.e., usually prior to a more detailed stochastic optimization or simulation study. Indeed, stochastic methods typically come into use when the deterministic approach is found to be somehow deficient. Regardless, stochastic processes may be conveniently evaluated in such a manner, and conclusions associated with them drawn and treated, as if the process were somehow deterministic.

Meanwhile, it is noteworthy that, despite their usefulness, stochastic data do not improve the original poor records per se—but merely improve the quality of designs made with whatever records are available (Fiering and Jackson 1971); also, the processes involved will generally provide an idea of the confidence that can be placed on the adopted design value (McMahon and Mein 1986). Thus, notwithstanding any shortcomings, it is still an indisputable fact that the stochastic methods of approach tend to offer a more complete use of the information content of the usually limited data series; the result is an increase in the variations and spectrum of the possible solutions and methods for the design of complex safety and risk management systems. All the same, it must be acknowledged that some of the theory-based methods found in the literature cannot at times be used by themselves in practice, especially in the case of limited and/or ‘unreliable’ data series; under such circumstances, analysts may do well to choose a deterministic method of approach.

Finally, it is worth mentioning here that during the past several decades, there have been several important developments in the analytical and statistical methods used in various risk assessment programs, as well as in the design of a variety of safety-related systems—albeit some of the basic classical or ‘traditionalist’ elements/methods for such efforts are still often utilized by contemporary practitioners. In general, the application of the new or ‘non-traditional’ scientific methods or tools is particularly justified when it provides answers to questions that cannot quite be resolved by traditional methods in an effectual manner. Notwithstanding, it must be cautioned that stochastic methods are by no means a panacea for executing risk assessment programs per se. In fact, many shortcomings (such as a lack of knowledge concerning the underlying stochastic processes) might tend to cause decisions to be less optimal than if the phenomena had been treated as deterministic. Each decision that has a stochastic input, however, must be recognized as such and the proper methodology employed; the use of stochastic methods in risk assessments is, after all, an attempt to widen and extend our knowledge of key parameters and improve our decision-making ability. In a number of situations, this is accomplished by generating longer hypothetical sequences of events based on the statistical and probability characteristics of the past or existing records; the generated sequences of data are then used to identify the components that contribute to error and uncertainty in the specific program under review.
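To make the contrast concrete, the following compact sketch evaluates the same simple multiplicative exposure-risk model both deterministically (single fixed inputs) and probabilistically (Monte Carlo sampling); the model form and all input values and distributions are assumptions chosen purely for illustration.

```python
# Deterministic point estimate vs. probabilistic (Monte Carlo) estimate for
# the same simple exposure-risk model. The model form and all inputs are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

# Deterministic approach: one fixed value per input, one 'singular' output.
conc, intake, unit_risk = 0.5, 2.0, 1e-3
print(f"Deterministic risk: {conc * intake * unit_risk:.2e}")

# Probabilistic approach: each input treated as a random variable.
n = 50_000
conc_s = rng.lognormal(np.log(0.5), 0.4, n)          # mg/L (hypothetical)
intake_s = rng.normal(2.0, 0.5, n).clip(min=0.1)     # L/day
unit_risk_s = rng.lognormal(np.log(1e-3), 0.6, n)    # risk per unit intake
risk_s = conc_s * intake_s * unit_risk_s

# The output is a distribution, not a single number.
print(f"Median risk:     {np.median(risk_s):.2e}")
print(f"95th percentile: {np.quantile(risk_s, 0.95):.2e}")
```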

3 Risk Acceptability and Risk Tolerance Principles: de Minimis Versus de Manifestis Risk Criteria

An important concept in risk management is that there are levels of risk so great that they must not be allowed to occur at any cost, and other risk levels so low that they are not worth bothering with even at insignificant costs—known, respectively, as de manifestis and de minimis levels (Kocher and Hoffman 1991; Suter 1993; Travis et al. 1987; Whipple 1987). Risk levels between these bounds are typically balanced against costs, the technical feasibility of mitigation actions, and other socioeconomic, political, and legal considerations—in order to determine their acceptability or tolerability. In any event, with the maintenance of public health and safety being a crucial goal for public health risk management decisions, it should be recognized upfront in any risk analysis that reasons such as budgetary constraints alone may not be used as justification for establishing an acceptable risk level that is on the higher side of a risk spectrum.

On the whole, the concept of de manifestis risk is usually not seen as being controversial—because, after all, some hazard effects are clearly unacceptable. However, the de minimis risk concept tends to be controversial—in view of the implicit idea that some exposures to, and effects of, pollutants or hazards are acceptable (Suter 1993). With that noted, it is still desirable to use these types of criteria to eliminate obviously trivial risks from further risk management actions—considering the fact that society cannot completely eliminate or prevent all human and environmental health effects associated with chemical exposure problems. Indeed, virtually all social systems have target risk levels—whether explicitly indicated or not—that represent tolerable limits to danger that the society is (or must be) prepared to accept in consequence of potential benefits that could accrue from a given activity. This tolerable limit is often designated as the de minimis or ‘acceptable’ risk level. Thus, in the general process of establishing ‘acceptable’ risk levels, it is possible to use de minimis levels below which one need not be concerned (Rowe 1983); it is notable that current regulatory requirements are particularly important considerations in establishing such acceptable risk levels.

At the end of the day, it is apparent that the concept of ‘acceptable risk level’ relates to a very important issue in risk assessment—albeit the desirable or tolerable level of risk is not always attainable. In any event, it is noteworthy that risk acceptability (i.e., the level of risk that society can allow for a specified hazard situation) usually will have a spatial and temporal variability to it.

3.1 The de Minimis or ‘Acceptable’ Risk

Risk is de minimis if the incremental risk produced by an activity is sufficiently small, such that there is no incentive to modify the activity (Cohrssen and Covello 1989; Covello et al. 1986; Fischhoff et al. 1981; Whipple 1987). These represent risk levels judged to be too insignificant to be of any social concern or to justify use of risk management resources to control them, compared with other beneficial uses for the often limited resources available in practice. In simple terms, the de minimis principle assumes that extremely low risks are trivial and need not be controlled. A de minimis risk level would therefore represent a cutoff, below which a regulatory agency could simply ignore related alleged problems or hazards.

The concept of de minimis or acceptable risk is essentially a threshold concept, in that it postulates a threshold of concern below which there would be indifference to changes in the level of risk. Meanwhile, it is notable that considerable controversy exists with regard to the concept of ‘acceptable’ risk in the risk/decision analysis literature; this is because, in practice, acceptable risk is the risk associated with the most acceptable decision—rather than being acceptable in an absolute sense. It has indeed been pointed out by some experts that acceptable risk is often decided in the political arena, and that ‘acceptable’ risk really means ‘politically acceptable’ risk (Massmann and Freeze 1987). On the whole, the selection of a de minimis risk level is contingent upon the nature of the risks, the stakeholders involved, and a host of other contextual variables (such as the other risks being compared against). This means that de minimis levels will be fuzzy (in that they can never be precisely specified), and relative (in that they will depend on the special circumstances). Also, establishing a de minimis risk level is often extremely difficult because people perceive risks differently. Moreover, the cumulative burden of risks could make a currently insignificant risk become significant in the future. Consequently, stricter de minimis standards will usually become necessary in dealing with newly introduced risks that affect the same population groups.

There are several general approaches to deriving de minimis risk levels—but the method of choice should be wholly justifiable based on the expected socioeconomic, environmental, and public health impacts. A common approach to placing risks in perspective is to list many risks (which are considered similar in nature), along with some quantitative measures of the degree of risk. Typically, risks below the level of a one-in-a-million (i.e., 10^-6) chance of premature death will often be considered insignificant or de minimis by regulatory agencies in most nations, since this compares favorably with risk levels from several ‘normal’ human activities—e.g., 10^-3 for smoking a pack of cigarettes per day, or rock climbing, etc.; 10^-4 for heavy drinking, home accidents, driving motor vehicles, farming, etc.; 10^-5 for truck driving, home fires, skiing, living downstream of a dam, use of contraceptive pills, etc.; 10^-6 for diagnostic X-rays, fishing, etc.; and 10^-7 for drinking about 10 L of diet soda containing saccharin, etc. (Paustenbach 1988; Rowe 1977, 1983; Whipple 1987). In considering a de minimis risk level, however, the possibility of multiple de minimis exposures with a consequentially large aggregate risk should not be overlooked. In fact, Whipple (in Paustenbach 1988) suggests the use of a de minimis probability idea that will help develop a generally workable de minimis policy.
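As a small illustrative sketch of how such cutoffs might be applied in a screening exercise: the 10^-6 de minimis level follows the discussion above, while the 10^-4 de manifestis bound and the individual risk estimates are invented assumptions for the example.

```python
# Screening hypothetical individual lifetime risk estimates against
# de minimis and de manifestis cutoffs. The 1e-6 de minimis level follows
# the text; the 1e-4 de manifestis bound is an assumed illustrative value.

DE_MINIMIS = 1e-6     # below this: trivial; no risk management action
DE_MANIFESTIS = 1e-4  # above this: unacceptable; action required (assumed)

estimates = {"scenario A": 3e-7, "scenario B": 5e-5, "scenario C": 2e-4}

for name, risk in estimates.items():
    if risk < DE_MINIMIS:
        verdict = "de minimis -- no further action warranted"
    elif risk > DE_MANIFESTIS:
        verdict = "de manifestis -- must be addressed"
    else:
        verdict = "balance against costs, feasibility, other considerations"
    print(f"{name}: {risk:.1e} -> {verdict}")
```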

In summary, de minimis is a lower bound on the range of acceptable risk for a given activity. When properly utilized, a de minimis risk concept can help prioritize risk management decisions in a socially responsible and beneficial way. It may also be used to define the threshold for regulatory involvement. Indeed, it is only after deciding on an acceptable risk level that an environmental or public health risk management program can be addressed in a most cost-effective manner. Ultimately, in order to make a determination of the best environmental or public health risk management strategy to adopt for a given problem situation, a pragmatic and realistic acceptable risk level ought to have been specified a priori.

3.2 The ‘Safety Paradigm’ in Public Health Risk Assessment: ‘The Dose Makes the Poison’—So, What Dose Is Safe Enough?

The current level of knowledge indicates that many metals may be considered essential to normal cellular activity and evolutionary development. In excess, however, these same elements may cause toxic responses, as illustrated below for a select list of essential and medically important metals (Berlow et al. 1982; Hughes 1996).

  • Aluminum [Al]—finds medical uses in antacids and in dialysis fluids; however, excesses have been associated with dialysis dementia.

  • Cobalt [Co]—found in vitamin B12 as an essential metal, but can cause polycythemia and cardiomyopathy in excess. Like iron (Fe²⁺) in hemoglobin, Co²⁺ serves to hold the large vitamin molecule together and to make it function properly.

  • Copper [Cu]—facilitates the synthesis of hemoglobin, but may cause microcytic anemia when present in excessive amounts. Indeed, Cu is required for a variety of roles in the human body, several of which are connected to the use of iron. Although the total amount of Cu in the body is rather small, its deficiency may result in weak blood vessels and bones, as well as possible nerve damage.

  • Gold [Au]—finds medical uses in pharmaceuticals (rheumatoid arthritis), but excesses could result in nephropathies.

  • Iron [Fe]—important to the formation of RBCs (viz., erythropoiesis), but may cause liver or cardiovascular damage in excess. In the human body, the iron-containing protein hemoglobin carries oxygen from the lungs to the rest of the body. Indeed, small amounts of Fe are found in molecules that use oxygen in every tissue cell. It is noteworthy that, although the actual need for iron is very low (approximately 1–1.5 mg/day for a normal person), about ten times as much must be taken in through food, mostly because only a small fraction of the iron passing through the human body is absorbed.

  • Lithium [Li]—finds medical uses in pharmaceuticals (depression), but excesses may result in nephropathies and cardiopathies.

  • Manganese [Mn]—is an enzyme potentiator, but may cause CNS (central nervous system) disorders and manganese pneumonitis in excess. Indeed, Mn has many essential functions in every cell. However, Mn is also highly neurotoxic, and its effects are largely irreversible; consequently, the recommended exposure limits have been lowered drastically in a number of countries in recent years. With its increased industrial use and emissions into the general environment, the harmful effects of Mn cannot be overlooked, and close monitoring seems prudent.

  • Molybdenum [Mo]—is an enzyme cofactor, but may cause anemia and diarrhea in excess. Indeed, Mo is part of several important enzymes.

  • Selenium [Se]—is an enzyme cofactor, but can cause neuropathies, dermatopathies, decreased fertility, and teratogenesis in excess.

  • Zinc [Zn]—is essential (as Zn²⁺) for the normal growth of genital organs, wound healing, and general growth of all tissues. It is also associated with the hormone insulin, which is used to treat diabetes. Even so, excess of this essential nutrient is not recommended. It is noteworthy that oysters are believed to be an unusually rich source of Zn.

In fact, it is notable that even some of the more ‘suspicious’ chemicals (e.g., arsenic and chromium) are believed to be essential nutrients in rather small amounts, albeit extremely toxic at slightly elevated levels. Thus, even the essential elements can be toxic at concentrations that are too high, and yet a deficiency of these same metals can also be harmful to the health of most living organisms, including humans. For such reasons, it is quite important to make a very clear distinction between the therapeutic and toxic properties of chemicals, recognizing that these properties are sometimes distinguishable only by dose.

In closing, it is remarkable that the sixteenth-century Swiss physician-alchemist and philosopher Paracelsus observed that ‘all things are poison and nothing is without poison, only the dose permits something not to be poisonous’; that is, the dose of a substance determines its toxicity. Indeed, this notion makes it even more difficult to ascertain the levels that constitute hazardous human exposure to chemicals. Nevertheless, careful application of risk assessment and risk management principles and tools should generally help remove some of the fuzziness in defining the cut-off between what may be considered a ‘safe level’ and what apparently is a ‘dangerous level’ for most chemicals.

4 Risk Assessment Implementation Strategy

A number of techniques are available for conducting risk assessments. Invariably, the preferred approaches consist of the several basic procedural elements/components outlined further in Chap. 7 of this book. In any event, the key issues requiring significant attention will typically center on finding answers to the following questions:

  • What chemicals pose the greatest risk?

  • What are the concentrations of the chemicals of concern in the exposure media of interest?

  • Which exposure routes are the most important?

  • Which population groups, if any, face significant risk as a result of the possible exposures?

  • What are the potential adverse effects of concern, given the exposure scenario(s) of interest?

  • What is the range of risks to the affected populations?

  • What are the public health implications for any identifiable corrective action and/or risk management alternatives?

As a general guiding principle, risk assessments should be carried out in an iterative fashion, and in a manner that can be appropriately adjusted to incorporate new scientific information and regulatory changes, with the ultimate goal of minimizing the public health and socioeconomic consequences associated with a potentially hazardous situation. Typically, an iterative approach would start with relatively inexpensive screening techniques; then, for hazards suspected of exceeding the de minimis risk, further evaluation is conducted by moving on to more complex and resource-intensive levels of data-gathering, model construction, and model application (NRC 1994a, b).

In effect, risk assessments will normally be conducted in an iterative manner that grows in depth with increasing problem complexity. Consider, as an example, a site-specific risk assessment used to evaluate potential health impacts associated with chemical releases from industrial facilities or hazardous waste sites. A tiered approach is generally recommended in the conduct of such site-specific risk assessments, usually involving two broad levels of detail: a ‘screening’ (or ‘Tier 1’) evaluation and a ‘comprehensive’ (or ‘Tier 2’) evaluation. In the screening evaluation, relatively simple models, conservative assumptions, and default generic parameters are typically used to determine an upper-bound risk estimate associated with a chemical release from the case facility. No detailed evaluation is warranted if this initial estimate is below a pre-established reference or target level (i.e., the de minimis risk). On the other hand, if the screening risk estimate is above the ‘acceptable’ or de minimis risk level, then the more comprehensive evaluation, which utilizes more sophisticated and realistic data evaluation techniques than the ‘Tier 1’ screening, should be carried out. This next step confirms the existence (or otherwise) of significant risks, which then forms the basis for developing any risk management action plans. The rationale for such a tiered approach is to make efficient use of time and resources by applying the more advanced and time-consuming techniques only to those chemicals of potential concern and exposure scenarios that truly warrant them; in other words, the comprehensive risk assessment is performed only when it is actually needed. Irrespective of the level of detail, however, a well-defined protocol should always be used to assess the potential risks. Ultimately, a decision on the level of detail (e.g., qualitative, quantitative, or combinations thereof) at which an analysis is carried out will usually be based on the complexity of the situation, as well as the uncertainties associated with the anticipated or predicted risk.
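For concreteness, a minimal sketch of this screening-then-escalation logic is given below (in Python). The simple linear intake-times-slope-factor calculation stands in for whatever conservative Tier 1 model a given program prescribes; the function names and all numerical values are hypothetical illustrations, not a prescribed regulatory model.

    # Minimal sketch of the tiered ('Tier 1'/'Tier 2') decision logic.
    # The linear risk model, function names, and numbers are hypothetical.

    DE_MINIMIS_RISK = 1e-6  # pre-established target/reference risk level

    def tier1_screening_risk(intake_mg_per_kg_day, slope_factor):
        """Conservative upper-bound screen: risk = intake x slope factor,
        capped at 1.0 (a probability cannot exceed one)."""
        return min(intake_mg_per_kg_day * slope_factor, 1.0)

    def assess(intake, slope_factor):
        risk = tier1_screening_risk(intake, slope_factor)
        if risk <= DE_MINIMIS_RISK:
            # Screening estimate below target: no detailed study warranted.
            return f"Tier 1 risk {risk:.1e}: below de minimis; stop here."
        # Otherwise escalate to the comprehensive, site-specific Tier 2 study.
        return f"Tier 1 risk {risk:.1e}: proceed to Tier 2 evaluation."

    # Hypothetical intakes (mg/kg-day) and slope factor ((mg/kg-day)^-1)
    print(assess(intake=2e-6, slope_factor=0.05))  # passes the screen
    print(assess(intake=5e-4, slope_factor=0.05))  # triggers Tier 2

The point of the sketch is the control flow rather than the arithmetic: the inexpensive, deliberately conservative estimate is computed first, and the costly Tier 2 work is undertaken only when that estimate exceeds the pre-established de minimis level.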

As a final note, it is worth mentioning that human exposures to radiological contaminants may be evaluated in a manner similar to the chemical exposure problems discussed above, albeit certain unique issues may have to be taken into consideration for the radiological exposures. For the most part, the archetypical radiological exposures occur through medical and dental X-rays; naturally occurring radioactive materials in soils and groundwater; ambient air; various food sources; and several other consumer product sources.