1 Introduction

Occupational epidemiology has the same main goal as the broad field of epidemiology: to identify the causes of disease in a population in order to intervene to remove them. Occupational epidemiology is an exposure-oriented discipline; it is thus the systematic study of illnesses and injuries related to the workplace environment (Checkoway et al. 2004).

The first concern about occupational causes of disease may have been that of Hippocrates, who wrote about the lifestyle habits and environment of populations and patients. Nevertheless, it was the Italian physician Bernardino Ramazzini who recommended that doctors add questions about occupation to those recommended by Hippocrates, and it was Ramazzini who made the first systematic description of occupational diseases and their causes in his book De Morbis Artificum (Ramazzini 1964). His descriptions included different characteristics of skin ulceration in freshwater and sea fishermen, silicosis among stonemasons, ocular disorders among glassblowers, and neurological toxicity among tradesmen exposed to mercury. It is noteworthy that he not only described the diseases but was also deeply concerned about the ethics of harmful work practices and the need for preventive measures, such as good ventilation and protective clothing.

Classic historical reports, such as those about scurvy in sailors in 1753, scrotal cancer in chimney sweeps in 1775, respiratory cancers in underground metal miners in 1879, and bladder cancer in dye workers in 1895, are clear examples of the importance of reports of case series by clinicians and by the workers themselves (Carter 2000). New occupational hazards came to light incidentally even in the mid-1900s, when the methodological landmark of the historical cohort study was designed (Doll 1952, 1955; Case et al. 1954) and occupational epidemiology developed as a discipline. Indeed, Case and coauthors suspected that rubber workers would have an elevated risk for bladder cancer while conducting a study on the high incidence of bladder tumors among dye manufacturers (Doll 1975). While reviewing hospital records of bladder cancer patients in Birmingham, England, chosen as a control area because it did not have a dye industry, they noticed that many workers had been employed in a rubber factory. Subsequent investigation confirmed the association with rubber production and showed that it resulted from exposure to an antioxidant containing the carcinogen 2-naphthylamine (Case and Hosker 1954; Coggon 2000).

Occupational epidemiology has contributed to the development of both study designs (such as the historical cohort study) and analytical methods that are now part of the broader field of epidemiology and of other exposure-oriented disciplines. For instance, quantitative and qualitative methods for assessing exposure, such as job-exposure matrices and job-specific questionnaire modules for assessment by experts, were developed by occupational epidemiologists and industrial hygienists. They have now been adapted and used in other disciplines, such as nutritional and environmental epidemiology, and are central to ensuring the validity and informativeness of epidemiological research in general.

Prevention is the ultimate goal of all epidemiological research. Occupational exposure was among the first identified causes of diseases such as cancer and pulmonary illness, and epidemiological study of such exposures often led to the identification of specific causal agents. Occupational hazards are known causes of disease that are amenable to regulatory control and thus especially suitable for prevention. This is in contrast to aspects of lifestyle, such as smoking and dietary habits, for which control requires modification of cultural and personal behavior patterns. Free choice may contribute to some diseases attributable to environmental causes; for instance, the large majority of cases of lung cancer are attributable to tobacco smoking and can be prevented by avoiding the habit. The rationale for preventing occupational hazards is different: As personal choice plays little or no role in occupational exposure, the protection of workers warrants special attention. Furthermore, while industrial effluents and products might cause illness in the general population, exposed workers are likely to be the first and most severely affected. Prevention at the level of the working environment will by the same token result in prevention in the general population.

This chapter addresses issues in study design and epidemiological methods as applied in the specific field of occupational epidemiology, including dose-response analysis, the healthy worker effect, and exposure assessment. Finally, how occupational epidemiology can help to evaluate the need for, and the effectiveness of, primary prevention interventions and policies is described using the example of occupational cancer.

2 Study Designs

Classic epidemiological studies, such as cross-sectional (see chapter Descriptive Studies of this handbook), case-control (see chapter Case-Control Studies of this handbook), and cohort studies (see chapter Cohort Studies of this handbook), are commonly carried out in occupational settings. The principles of study design and data analysis are derived from general epidemiological methods; however, some specific aspects are worth addressing.

2.1 Cross-Sectional Studies

Cross-sectional studies are generally used to investigate symptoms, sub-clinical outcomes, and physiological functions that are (or are suspected to be) related to an exposure of interest. Examples are wheezing and other respiratory symptoms in relation to exposure to different types of metal-working fluids (Greaves et al. 1997), urinary mutagenicity and DNA adducts in peripheral blood mononuclear cells and urothelial cells in relation to urinary 1-hydroxypyrene levels (Peters et al. 2008), and forced expiratory volume in one second (FEV1) in relation to diesel exhaust exposure (US Environmental Protection Agency 2002). Cross-sectional studies are also useful in the study of potentially reversible and non-fatal diseases, such as musculoskeletal disorders. For all such conditions, longitudinal studies are either inappropriate or not practically feasible, and the cross-sectional design is the best available. It must be borne in mind, however, that such studies measure the prevalence of the outcome of interest, or assess a dose-response relationship, in groups that are often an opportunistic rather than a representative, random sample of the ideal target population (e.g., all metal or all rubber workers), being made up of the individuals still active at the time of the survey and willing to participate. Associations between exposure and disease are therefore difficult to interpret, as they could depend either on an increased incidence or on a longer duration of disease among a subgroup of cases. For this reason, and because measuring exposure and disease at the same time prevents the time sequence between them from being clearly established (a problem of reverse causality), a cross-sectional approach can address the causal nature of an association only weakly.

Cross-sectional studies allow the researchers to examine on an ad hoc basis the target population, or a sample of it, and thereby to study health outcomes that cannot be investigated in longitudinal studies of mortality or incidence. At the same time, they are vulnerable to the effects of non-response, particularly when they are carried out with the main aim of estimating the prevalence of diseases or their symptoms. Diseased workers may participate in the study differently from those who are not diseased, and their willingness to participate may depend on their exposure status. Occupational studies of fertility and sperm quality are an example of studies in which non-response is a critical problem. Since the observation of the toxic effects of 2,3-dibromo-3-chloropropane on testicular germ cells (Potashnik et al. 1978), the fertility of exposed male workers has been investigated in several studies. In one study, groups of traditional and organic farmers were selected randomly from the database of the Danish Ministry of Agriculture in 1995–1996 and invited to participate in a study on semen quality, including total sperm count, sperm concentration, other indices, and serum concentrations of sex hormones (Larsen et al. 1999). A questionnaire eliciting information on previous exposure to pesticides was posted to 1,124 farmers, of whom 86% answered; only 256, however, provided semen samples. This low participation proportion in the semen sampling was not unexpected, as the examination required by the study was somewhat demanding.

A further limitation of cross-sectional studies, which is specific to occupational epidemiology, is that only active workers are usually investigated, because the study base is defined as workers employed in a specific industry or exposed to a specific occupational factor. It follows that workers who have terminated their employment cannot be included in the study.

Let us consider the example of the cross-sectional studies on the health effects of exposure to diesel fumes (US Environmental Protection Agency 2002). Acute respiratory effects were investigated in several studies by measuring FEV1 and other indicators of pulmonary function twice, at the beginning and at the end of a work shift, in workers employed in mines and garages. Chronic respiratory effects were studied through a single survey and a medical examination in workers with different levels of cumulative occupational exposure to diesel exhausts. Individuals who are susceptible to diesel exhaust tend to move away from jobs with high levels of exposure. A cross-sectional study of the acute effects of exposure is therefore presumably carried out among a selected group of workers, resulting in a possible underestimate of the effects. Regarding chronic effects, which become manifest long after the exposure has occurred, the association between exposure and disease will be underestimated if termination of employment is determined by the disease or its early symptoms.

Notwithstanding these limitations, cross-sectional studies have been successfully carried out on the association between certain exposures and long-term effects, taking advantage of the fact that this study design offers the opportunity to investigate exposures thoroughly. A group of French talc millers received a respiratory health examination in 1989, including a standardized questionnaire on respiratory symptoms, lung function testing, and a chest radiograph (Wild et al. 1995). Systematic measurements of dust concentrations had been made in 1986 and 1989, and, based on the results of the measurement campaigns and an accurate retrospective reconstruction of past working conditions, a quantitative job-exposure matrix had been developed. The cumulative exposure of all workers included in the respiratory health survey was estimated by applying the matrix to their work histories. A dose-response relationship between cumulative dose and decline in lung function, prevalence of dyspnoea, and chest X-ray opacities could be shown.

The initial survey of a cross-sectional study may become the first phase of a longitudinal study. In the investigation of French talc millers, the same group of workers included in the 1989 respiratory health survey received a second chest X-ray in 1993. An increase was observed in the number of workers with profusion of opacities > 1/0, but the increase was not associated with cumulative exposure to talc (Wild et al. 1995).

2.2 Cohort Studies

To investigate long-term effects, the cohort study is a valid, but sometimes expensive and time-consuming, design. Nevertheless, the availability of employment records and trade union registries often permits straightforward identification of past occupational cohorts. It is therefore not surprising that historical cohort studies have long been the method of choice in occupational epidemiology, and they have contributed significantly to the identification of occupational hazards.

Researchers usually identify a factory in which the exposure of interest occurs – to specific chemicals and substances or specific working conditions and job tasks – and select the members of the cohort from registries available at the factory. Alternatively, a study population can be identified from similar departments in different factories. Thus, when a single facility does not provide a sufficient number of workers or the time of follow-up is not long enough, a collaborative study can be conducted in similar factories in several centers.

The cohort study of workers employed in the man-made vitreous fiber (MMVF) industry in Europe, coordinated by the International Agency for Research on Cancer (IARC) (Boffetta et al. 1997, 1999; Sali et al. 1999), is an example of such collaboration. The cohort was assembled in 1977 and consisted of approximately 22,000 workers who had ever been employed in 13 factories producing at least one of three types of MMVF, namely, glass wool, continuous filaments of glass fiber, and rock or slag wool, at any time between the year of starting production of MMVF and 1977. The follow-up ended between 1990 and 1995 in different factories, depending on subsequent updating.

Exposure was assessed on the basis of individual work histories, obtained from employment registries in 1977. It was known that important technological changes had taken place in the production of MMVF over the study period, so that the period of MMVF production was divided into three “technological phases”: early, intermediate, and late. As the ambient levels of exposure to MMVF were estimated to have decreased with evolving production processes, the year in which each of the three phases began in each factory was assessed. Information on possible concomitant exposure to other agents, such as asbestos and bitumen, was also obtained for each factory. The researchers thus knew the duration of employment for each worker, by factory, technological phase, and job task.

National mortality rates were used to calculate standardized mortality ratios (SMRs) (cf. chapters Rates, Risks, Measures of Association and Impact or Descriptive Studies of this handbook) for neoplastic and non-neoplastic causes of death, and cancer-specific standardized incidence ratios (SIRs) were estimated for the subcohorts in countries where cancer incidence rates are available from cancer registries. The effect of duration of employment was estimated in internal comparisons within the cohort, the reference group including workers employed for less than 5 years. Data were also analyzed according to type of MMVF, job task, technological phase, and time since first employment. Data on workers who had been employed for less than 1 year were analyzed separately, as short-term workers might be high-risk individuals with particular lifestyles or occupational exposure to agents other than MMVF (see also Sect. 42.4.2).
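
As a minimal illustration of the external comparison, the sketch below computes an SMR by applying stratum-specific reference rates to the cohort’s person-years; all counts, rates, and strata are hypothetical and far coarser than in a real analysis.

```python
# Minimal SMR sketch: expected deaths are obtained by applying reference
# (e.g., national) rates to the cohort's person-years, stratified here by
# age group and calendar period. All numbers are hypothetical.
person_years = {("40-49", "1980-84"): 1200.0, ("50-59", "1980-84"): 800.0}
reference_rates = {("40-49", "1980-84"): 0.002, ("50-59", "1980-84"): 0.006}
observed_deaths = 12

expected = sum(py * reference_rates[stratum]
               for stratum, py in person_years.items())  # 2.4 + 4.8 = 7.2
smr = observed_deaths / expected                         # 12 / 7.2 ~ 1.67
print(f"expected = {expected:.1f}, SMR = {smr:.2f}")
```

An SIR is computed in exactly the same way, with incident cases in place of deaths and registry incidence rates in place of mortality rates.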

In general, MMVF production workers did not have an excess risk of mortality or cancer incidence, although a small excess risk for lung cancer was found among rock- and slag-wool workers, and increased mortality from heart diseases and non-malignant renal diseases was suggested. It is important to note that no information on lifestyle factors was available, which is a limitation of almost all historical cohort studies.

In countries where good, computerized population registries with a long history of registration exist, large occupational cohort studies can be carried out by linkage of information on occupational status from censuses with individual data on, for example, mortality, cancer incidence, and hospital discharges. The strength of record-linkage studies is the very large sample size. An occupational record-linkage study of cancer incidence was conducted in the Nordic countries among persons aged 25–64 years who were listed in the 1970 censuses (Andersen et al. 1999). Overall, about ten million persons were included in the study, and more than 500,000 incident cases of cancer occurred during the follow-up period, which ended between 1987 and 1990, depending on the country. Occupational exposure was evaluated for 54 occupational groups. Many cancer-specific associations were estimated, and they cannot be discussed here; however, the general finding was that risk of cancer is associated with occupation.

This record-linkage study shows clearly that cohort studies can provide risk estimates for many outcomes and some of the findings might be unexpected. For instance, in a historical cohort study of 8,226 workers employed in an aircraft manufacturing factory in northern Italy between 1954 and 1981, an unexpected excess of melanomas was found (6 observed, 1.02 expected cases) (Costa et al. 1989). When an unexpected association is found, the characteristics of the cases, including age, sex, period of employment, factory, and job task, should be explored carefully in order to identify any clusters of jobs or operations that suggest a common exposure. In the example of melanoma, the characteristics of the six cases were described in detail, but no cluster could be identified.

There are two major limitations to the use of data from existing records rather than from ad hoc questionnaires and environmental or biological measurements: lack of detailed information on exposure and lack of information on possible relevant confounders.

With regard to exposure, maximum cooperation between researchers and management, trade unions, occupational physicians, and industrial hygienists is crucial to obtain information on the nature of both industrial processes and working environments. Basic information on the exposure of each worker should include the starting and ending dates of employment at the factory. Unfortunately, important information, such as the job task of each worker and changes in industrial processes over time, is often missing. Even when the job task is recorded, one would also like to evaluate the variability of exposure levels among workers carrying out the same job. The general lack of information may reduce the quality of the data on exposure, whatever approach is used to assess exposure, and ultimately bias the results of the study because of misclassification. Although it is theoretically possible to measure factory-specific levels of exposure at the time a study is conceived, strong assumptions must hold for a reliable imputation of past exposures.

In some studies, plant-specific ambient measurements had been recorded over time and were available for assessing exposure. A historical cohort of more than 74,000 workers employed between 1972 and 1987 in 672 factories in jobs that entailed exposure to benzene was assembled in China (Hayes et al. 1997). The cohort was followed up for death from all causes and for incidence of hematological tumors, with an analogous cohort of approximately 36,000 unexposed workers for comparison. For the purposes of assessing exposure, information on the factory and department of employment and on the starting and ending dates of each job was obtained, for each worker, from employment records available at the factories (Dosemeci et al. 1994a). Moreover, information on production activities and changes in processes over time was obtained at each factory and for each job type. Importantly, the results of all past air monitoring (more than 8,400 measurements) for benzene and other solvents were also obtained. Therefore, whenever possible, an exposure level was assigned to each worker on the basis of monitoring results, either for specific combinations of job task, department, and calendar period or for adjacent calendar periods or similar job tasks.

Detailed information on exposure and confounders can obviously be obtained in concurrent cohort studies, which can be efficiently carried out when the induction period between exposure and disease is short. If the cohort is followed up prospectively, temporal variations in exposure can be ascertained either at the individual level, from questionnaires, personal dosimetry data, or biomarkers of exposure, or at the aggregate level, from environmental measurements and monitoring of changes in industrial processes.

2.3 Case-Control Studies

Studies in Industrial Populations Nested case-control studies (see chapter Modern Epidemiological Study Designs of this handbook) can overcome some of the limitations inherent in the cohort design. As a nested case-control study covers fewer persons than a cohort study, the nested approach is efficient when the exposure assessment is not straightforward, for instance, when it is based on experts’ judgment. The nested approach is also more efficient when worker-specific levels of exposure are estimated from biological samples or by direct interview with the workers or their next-of-kin. Measurement of biomarkers can result in accurate assessments of current exposure, but assumptions must be made about past exposure. Conversely, interviews allow detailed reconstruction of working histories and provide information on possible confounders; information on actual exposure levels may nevertheless be rather imprecise, and the subjects may be difficult to trace, especially when the follow-up period is long.

The historical cohort study of workers employed in MMVF production coordinated by the IARC, described above, includes a clear example of a nested case-control design (Boffetta et al. 1997). The analyses of the cohort revealed a small excess risk for lung cancer among rock- and slag-wool production workers, but no information was available on possible confounders; furthermore, occupational histories were available up to 1977 only and were limited to the information in the employment registries. The researchers therefore conducted a case-control study of 196 cases of lung cancer, and 1,715 matched controls nested in the cohort (Kjaerheim et al. 2002). The index subjects or their next-of-kin were traced and interviewed to obtain information on lifetime smoking habits, residential history, and lifetime occupational history, both within and outside the MMVF industry. As anticipated by the study design, the proportion of completed interviews with the selected subjects was low: 68% for cases and 35% for controls. Two industrial hygienists evaluated the individual occupational histories for exposure to each of several occupational agents known or suspected to be associated with lung cancer. Moreover, an expert panel was formed to evaluate individual cumulative exposure to MMVF on the basis of the new information obtained at the interview. The smoking-adjusted estimates and the analyses by quartiles of cumulative level of exposure in the nested study did not support an association between exposure to rock- or slag-wool and lung cancer risk.

Even when information on exposure is already available, a nested case-control design often increases the efficiency of the computerization, cleaning, and handling of data. Grayson (1996), for example, conducted a case-control study on brain cancer nested in a cohort of approximately 880,000 US Air Force members, employed at some time between 1970 and 1989, to evaluate the effect of occupational exposure to electromagnetic fields. At the end of the follow-up period, 230 incident cases of brain cancer were found, and four controls for each case were randomly selected among cohort members. Information on past exposure to electromagnetic fields was obtained from several sources, including employment records, records of events exceeding existing limits, and some personal dosimetry data. The final analysis was thus based on 1,150 persons instead of more than 800,000 in the original cohort.

Studies in the General Population Population- or hospital-based case-control studies have frequently been used to investigate the health effects of occupational exposures. In the early 1980s, a multicenter case-control study was carried out to investigate the associations between laryngeal and hypopharyngeal cancer and smoking, alcohol, dietary habits, and occupational factors (Tuyns et al. 1988). The study, coordinated by IARC, was population-based and included six centers in northern Italy, France, Spain, and Switzerland. Information on occupational history and lifestyle factors was obtained by face-to-face interviews with cases and controls. Specifically, each person was asked to report all jobs held for at least 1 year since 1945, specifying their starting and ending years, a short description of specific tasks, the name of the company, the company’s activity, and the specific products of the department in which the interviewed person had worked. The occupational histories of 1,010 interviewed cases and 2,176 interviewed controls were coded, without knowledge of case or control status, according to standard international classifications of occupations and industries. Smoking- and alcohol-adjusted odds ratios for occupational factors were then obtained by two approaches. First, an exploratory analysis was carried out on 156 occupations and 70 industrial activities in which at least nine individuals had ever been employed (Boffetta et al. 2003). Second, a working group created a job-exposure matrix (JEM) to categorize each combination of job and activity in terms of levels of probability, intensity, and frequency of exposure to 16 occupational agents for which there was some a priori evidence of an association with laryngeal cancer risk (Berrino et al. 2003). The agents investigated included asbestos, solvents, formaldehyde, and polycyclic aromatic hydrocarbons. The JEM was used and evaluated in ad hoc studies (Merletti et al. 1991; Ahrens et al. 1993; Luce et al. 1993; Orlowski et al. 1993; Stengel et al. 1993; Stucker et al. 1993). An account of its validation, based on a comparison between the results of the JEM and the experts’ evaluation of the jobs as described in the questionnaires, is given in Table 42.1. Both the sensitivity and the specificity of the JEM varied by agent. The first analytical approach, based on job titles and industrial activities, provided risk estimates for several occupations, an advantage facilitated by the heterogeneity of the study subjects’ working histories due to the multicenter design. Conversely, the second approach directly tested etiological hypotheses. In both instances, the case-control design made it possible to control for the confounding effects of smoking, alcohol drinking, and diet.

Table 42.1 Validation of the job exposure matrix (JEM) of the IARC case-control study on laryngeal and hypopharyngeal cancer: percentage of jobs not entailing an exposure to specific agents according to an expert’s assessment compared with the results from the JEM

Mortality Odds Ratio Studies Mortality odds ratio studies have a case-control design and are a valid alternative to proportionate mortality studies, which have been widely used in occupational epidemiology (Miettinen and Wang 1981; Boyd et al. 1970). In proportionate mortality studies, the proportion of deaths from the diseases under study among exposed workers is compared with the corresponding proportion in a reference population (proportionate mortality ratio, PMR). PMR analyses are limited by the fact that the proportions must add up to unity; elevated PMRs for some diseases are therefore, by definition, counterbalanced by decreased PMRs for other diseases. Moreover, PMRs are biased if ascertainment of deaths is incomplete to a different extent among exposed and unexposed subjects. These drawbacks are overcome in mortality odds ratio studies, where the case-control approach is applied. In such studies, the cases comprise deaths from the specific cause of interest, both exposed and unexposed, while the controls are other deaths, selected on the basis of a presumed lack of association with the exposure. The principle of selecting the control causes of death for inclusion in the study is therefore the same as for selecting a control series in any case-control study: Controls are selected independently of exposure and with the aim of representing the proportion of exposure in the study base (Rothman et al. 2008). In practice, however, avoiding bias may prove difficult, as many causes of death, even when unrelated to the exposure under study, are indirectly associated with occupational exposures via socioeconomic status.
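
As a worked illustration with hypothetical counts, the mortality odds ratio is computed exactly like any case-control odds ratio, with deaths from the control causes standing in for controls:

```python
# Hypothetical death counts; "exposed" refers to the occupational exposure
# under study, reconstructed, e.g., from death certificates or registries.
exposed_cases, unexposed_cases = 30, 70          # deaths from the cause of interest
exposed_controls, unexposed_controls = 40, 160   # deaths from control causes

# Mortality odds ratio: the cross-product ratio of the 2x2 table of deaths.
mor = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
print(f"MOR = {mor:.2f}")  # (30 * 160) / (70 * 40) = 1.71
```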

Concluding Remarks Case-control studies can be used efficiently to investigate long-term outcomes represented by rare diseases, or by diseases caused by widespread occupational exposures, which cannot be localized to a specific industry. This study design also permits the researcher to focus on minorities and on subgroups of the population that have often been poorly investigated. For example, at the first international conference on occupational cancer in women, in 1993, it was recognized that most of the information on occupational hazards had been obtained from studies on men: A survey showed that less than 10% of published epidemiological studies included and reported detailed results on women (Zahm et al. 1994). Although this picture has changed, efforts to study the effects of occupational exposures on women are still needed (Zahm and Blair 2003). A case-control approach is often used in occupational epidemiology for exploratory studies. As in the study on laryngeal and hypopharyngeal cancer described above, an occupational history may be classified by several groups of job titles and industrial activities. As multiple comparisons are made, a Bayesian approach with semi-Bayes or empirical-Bayes adjustments might help to decrease the impact of false-positive results (Greenland and Poole 1994; Corbin et al. 2008). For a formal explanation and practical examples of Bayesian approaches in occupational epidemiology, see Greenland and Poole (1994) and Steenland et al. (2000a).
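
A minimal semi-Bayes sketch, with hypothetical estimates and a prior variance chosen purely for illustration: each exploratory log odds ratio is shrunk toward a common prior mean, with more shrinkage for less precise estimates.

```python
import math

# Semi-Bayes shrinkage of exploratory odds ratios (hypothetical numbers).
# Each log odds ratio beta with standard error s is shrunk toward the prior
# mean mu; tau2 is the assumed prior variance of the true log odds ratios.
mu, tau2 = 0.0, 0.25  # prior mean OR = 1; 95% of true ORs within ~(0.4, 2.7)

estimates = [(math.log(2.1), 0.45), (math.log(0.8), 0.30)]  # (beta, s)
for beta, s in estimates:
    w = tau2 / (tau2 + s**2)           # weight given to the observed estimate
    beta_sb = w * beta + (1 - w) * mu  # shrunken log odds ratio
    print(f"OR {math.exp(beta):.2f} -> semi-Bayes OR {math.exp(beta_sb):.2f}")
```

Imprecise estimates (large s) are pulled strongly toward the null, which is exactly the intended damping of false-positive exploratory findings.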

3 Exposure Assessment

Exposure assessment (see chapter Exposure Assessment of this handbook), a critical step in any epidemiological study, is central in occupational epidemiology. The most recent developments in the design of both cohort and case-control studies of work-related diseases rely on identification of exposure to specific agents, such as chemicals, rather than on the use of surrogates of exposures, such as being employed in a given industrial activity or holding a certain job. Furthermore, an attempt is often made to compute some measure of dose, such as cumulative exposure or average exposure, which in turn requires estimation of the level (intensity) of exposure and its variation over time.

3.1 Exposure Assessment in Industry-Based Studies

Two general strategies, statistical and deterministic modeling (Kauppinen et al. 1994), are available to assess exposure on the basis of the primary information collected in a study. Such information usually does not include satisfactory measures of exposure to the agent(s) of interest and is often limited to a description of the work setting, the operations performed by workers, and the materials they handled. When suitable data on the concentration of the agent of interest in the workplace are available, however, exposure assessment can be based on a stochastic (statistical) approach, in which a model is fitted to the results of past industrial hygiene measurements by plant, job title, and work area; missing data are then estimated from the model (Kauppinen et al. 1994). When statistical modeling is applied to industry-based studies, such as cohort, cross-sectional, and nested case-control studies, workers are classified into “homogeneous” groups on the basis of combinations of plant, work area, job title, and period, and the available industrial hygiene measurement series are broken down into the same groups (de Vocht et al. 2008; Lavoué et al. 2007). The main limitations of the statistical approach are that (1) trends in exposure over time are often unknown, either because measurements were not made for previous processes and working conditions or because of difficulties in the interpretation of historical measurements, and (2) the interindividual variation of exposure within a homogeneous group can be wider than that between groups, which may make the correct identification of homogeneous groups difficult (Kromhout et al. 1993). Furthermore, it must be borne in mind that, when only limited data exist, they will also be distributed unevenly by job. There is thus a danger of overestimating exposures by oversampling worst-case scenarios, especially when measurements were initiated upon complaints of workers, for example, in response to spills. Conversely, measurements done by the companies or made to assess compliance with exposure limits may lead to underrepresentation of the highest exposures.
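
A minimal sketch of the statistical approach, under strong simplifying assumptions: hypothetical measurements for one homogeneous group are log-transformed (occupational exposure levels are typically approximately log-normal) and a log-linear time trend is fitted, so that levels can be interpolated or extrapolated to periods without measurements.

```python
import numpy as np

# Hypothetical measurement series for one "homogeneous" group
# (plant x job title), as (calendar years, concentrations in mg/m3).
measurements = {
    ("plant_A", "mixer"): ([1975, 1980, 1990], [12.0, 8.0, 3.0]),
}

for group, (years, conc) in measurements.items():
    # Fit log(concentration) = a + b * year by least squares.
    X = np.column_stack([np.ones(len(years)), years])
    beta, *_ = np.linalg.lstsq(X, np.log(conc), rcond=None)
    # Interpolate the unmeasured year 1985 from the fitted trend.
    pred_1985 = np.exp(beta[0] + beta[1] * 1985)
    print(group, f"estimated 1985 level: {pred_1985:.1f} mg/m3")  # ~4.8
```

A real application would model many groups jointly (e.g., with mixed-effects models) and propagate the uncertainty of the predictions.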

In historical cohort studies, the availability and quality of industrial hygiene data are often different for the various settings included in the work histories of the study subjects; good data may exist for some periods and not for others. In these circumstances, the maximum achievable goal is a semiquantitative approach in which jobs are compared according to materials handled and tasks performed. The jobs are then ordered in terms of assessed exposure, which is placed onto a semiquantitative scale (e.g., high, intermediate, low).

Because comprehensive data on exposure are rarely available, less accurate methods must often be used. If the factors that determine the level of exposure can be identified, they can be used to construct a deterministic model (Kauppinen et al. 1994). In deterministic modeling in industry-based studies, the most important factors that affect exposure intensity, such as type of plant and machinery, presence of local exhausts, and workers’ proximity to sources, are identified. Their relative importance is then assessed, either on the basis of historical industrial hygiene data – even if such data are insufficient for complete modeling – or, in their absence, through a theoretical evaluation of how different tasks, operations, and procedures could have affected exposure. In rare circumstances, it has been possible to improve the assessment by experimentally reproducing some past working conditions and estimating the concentrations of the agents of interest by applying modern measurement techniques to the reconstructed workplaces and procedures. It has been shown that, by combining such approaches, complex industry-specific exposure matrices can be built on the basis of detailed knowledge of plant-, job-, and time-specific factors (Kauppinen and Partanen 1988). Semiquantitative exposure levels can then be established for each study subject by applying the matrix to the information on the jobs they held and the tasks they performed.

The main limitations of the deterministic approach are that (1) the relative importance of the various determinants may prove difficult to assess, and agreement among experts may be poor, and (2) the quality of information on the determinants may be highly variable across study subjects: for some, the tasks involved in their job might have been recorded, while for others barely more than the job title or the department of assignment is known.

In some recent multicenter or pooled industry-based studies, considerable advances have been made in exposure assessment by combining the statistical and deterministic approaches. Quantitative industry-specific exposure matrices were thus built and applied to the workers’ individual job histories (Burstyn et al. 2000, 2003; ’t Mannetje et al. 2002; Harber et al. 2003). Alternatively, Bayesian decision analysis may be applied: Experts create a “prior” probability distribution of exposure, based on their knowledge of processes, jobs, etc., which is combined with available measurements to obtain a “posterior” distribution (Ramachandran 2001). Current standards of practice imply that, when good industrial hygiene data are available, at least for some historical periods, and the relative influences of changes in plant, process, and activity can be evaluated, exposure will be assessed quantitatively and extrapolated to periods or plants for which no original quantitative information was available. With this method, quantitative data on a given job, in a given industrial activity, and during a given period provide a baseline estimate of both the average exposure and its variation. Known differences in the presence and characteristics of determinants provide multiplicative weighting factors to be applied to the baseline estimate. Few validation studies of industry-specific exposure matrices are, however, available (Dosemeci et al. 1994b, 1996).
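
A minimal sketch of the multiplicative weighting just described, with an entirely hypothetical baseline and weights: a level measured for a reference job and period is extrapolated to unmeasured conditions by multiplying it by determinant-specific factors.

```python
# Baseline level (mg/m3) measured for a reference job/period (hypothetical).
baseline = 2.0

# Assumed multiplicative effects of exposure determinants (hypothetical).
weights = {
    "no_local_exhaust": 2.5,     # local exhaust ventilation absent
    "early_process": 1.8,        # pre-automation technological phase
    "distant_from_source": 0.5,  # worker far from the emission source
}

# Determinants judged to apply to the job/period being extrapolated.
job_determinants = ["no_local_exhaust", "early_process"]

estimate = baseline
for d in job_determinants:
    estimate *= weights[d]
print(f"extrapolated level: {estimate:.1f} mg/m3")  # 2.0 * 2.5 * 1.8 = 9.0
```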

3.2 Exposure Assessment in Population-Based Studies

In population- and hospital-based case-control studies, statistical modeling has been used to set up job-exposure matrices (JEM) (Hoar et al. 1980; Macaluso et al. 1983). A JEM can be defined as a cross-classification of a list of job titles with a list of agents to which the workers performing the jobs might be exposed (Kauppinen and Partanen 1988). Deterministic modeling has been used in the interpretation and assessment of job histories by industrial hygiene experts when occupational questionnaires including job-specific modules (JSM) were developed to obtain the detailed information necessary for the experts’ judgment (Siemiatycki et al. 1981; Macaluso et al. 1983; Ahrens et al. 1993). Researchers at the US National Cancer Institute showed that a deterministic approach can be used not only in expert- and JSM-based assessment but also to create and use more detailed and improved JEMs that might allow semiquantitative or even quantitative exposure assessments (Dosemeci et al. 1990a). The same group suggested that the JEM-based assessment strategy should be abandoned in favor of the JSM-based expert assessment (Stewart et al. 1996). Use of JEMs has been reported to result in loss of both sensitivity and specificity in exposure assessment, in comparison with the use of a JSM-based individual assessment (Rybicki et al. 1997). Simulation studies suggested that use of JEMs may lead to loss of precision in odds ratio estimates, whereas expert-based assessment resulted in relatively low levels of misclassification (Bouyer et al. 1995).
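
Whatever its validity relative to expert assessment, a JEM is operationally a simple lookup structure. The sketch below, with entirely hypothetical cells, shows how each (job title, agent) cell holding a probability and an intensity score can be applied to a coded work history to yield a probability-weighted cumulative score per agent.

```python
# Hypothetical JEM: (job title, agent) -> (probability, intensity score).
jem = {
    ("painter", "solvents"): (0.9, 3),
    ("painter", "asbestos"): (0.1, 1),
    ("clerk", "solvents"): (0.0, 0),
}

# Coded work history: (job title, years held).
work_history = [("painter", 10), ("clerk", 5)]

def cumulative_score(history, agent):
    """Probability-weighted intensity-years for one agent."""
    total = 0.0
    for job, years in history:
        prob, intensity = jem.get((job, agent), (0.0, 0))
        total += prob * intensity * years
    return total

print(cumulative_score(work_history, "solvents"))  # 0.9 * 3 * 10 = 27.0
```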

Although it is somewhat difficult to assess the validity of expert-based exposure assessment in the field, some studies suggest that the agreement within and between experts might be satisfactory when experienced teams of raters are available (Siemiatycki et al. 1997; Fritschi et al. 2003). Two studies addressed the issue of expert-based exposure assessment validation by means of an objective index of past exposure to asbestos.

The first study (Pairon et al. 1994) comprised 131 cases of mesothelioma. The probability, level, and frequency of exposure were assessed by using qualitative ordinal classifications of the job in which each person had maximum exposure. Combinations of assessed probability, level, and frequency were summarized in four classes: (1) unexposed or possibly exposed, (2) probable or definite exposure at low level, (3) probable or definite exposure at levels higher than low, with sporadic frequency, and (4) probable or definite exposure at levels higher than low, with more than sporadic frequency. No attempt to build up a cumulative dose index was made. A limited correlation between the exposure assessment and objective indices of exposure to asbestos was observed, particularly with counts of asbestos bodies per gram of dried tissue. This study suffered from some shortcomings. Intensity and frequency were not used to compute a combined dose estimate, which prevented the calculation of a cumulative dose index. Frequency was used to discriminate between the third and the fourth summary class, but variations in frequency might actually be less important than those in intensity to determine the average exposure level. Only the highest exposure job was used in the assessment, so that other possible exposures have not been taken into account. The sensitivity of objective indices of asbestos exposure in mesothelioma cases may be low.

In the second study (Takahashi et al. 1994), 42 cancer cases for whom necropsy material was available were assessed for exposure both from a JSM-based questionnaire and by analysis of lung tissue fibers. A good correlation was found between the JSM-based exposure assessment and asbestos fiber counts, although some cases were assessed as having had exposure but had no asbestos in the lung. The main shortcomings of this study are its rather limited size and a potential necropsy selection bias; the heterogeneous nature of the cases as to their cancer site makes it difficult to extrapolate its results to a mesothelioma series.

Expert-based assessment with deterministic modeling in the hands of experienced raters has resulted in quantitative assessments in some population- and hospital-based case-control studies (Iwatsubo et al. 1998; Brüske-Hohlfeld et al. 2000; Rödelsperger et al. 2001; Pohlabeln et al. 2002).

Recently, nationwide databases of past measurements have been created by public institutions involved in health and safety at work, such as Colchic at INRS (France), MEGA at the Berufsgenossenschaften (Germany), and the National Exposure Database at the Health and Safety Executive (UK). The availability of large amounts of industrial hygiene data, going back to the 1970s and 1980s and sometimes earlier, is stimulating the development of quantitative JEMs (Kauppinen et al. 2009; Preller et al. 2010; Peters et al. 2012).

3.3 Consequences of Errors in Exposure Assessment

The consequences of errors in exposure assessment are discussed extensively in the chapters on exposure assessment (chapter Exposure Assessment) and measurement error (chapter Measurement Error) in this handbook. When exposure is measured as a continuous variable at the individual level, random, non-differential errors in assessment, such as those deriving from errors in measurement, generally lead to attenuation of the exposure-disease association and diminish the goodness of fit of regression models (Armstrong 1998). When measurements are constrained by a lower limit, such as a detection limit, however, inflation of the exposure-response association can occur under certain circumstances (Richardson and Ciampi 2003). When exposure is measured as a continuous variable, but at the group level, a rather different situation occurs: The exposure level is the average for a sample of individuals in the group, and all individuals are assigned this average exposure. This leads to what is referred to as “Berkson error.” In a “classical” error, an individual is assigned a measured exposure, affected by random variability. In Berkson error, the group average exposure is affected by a considerably smaller random error, but the actual exposure of each individual in the group will be different from the group average. Berkson error usually entails small, if any, bias of the exposure-response association (Armstrong 1998), even if in certain circumstances a substantial over- or underestimation of the quantitative relationship may occur (Steenland et al. 2000b).
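
The contrast between the two error types can be made concrete with a small simulation, assuming a linear exposure-response with slope 2 and hypothetical error magnitudes: classical error attenuates the fitted slope, whereas Berkson-type grouping leaves it nearly unbiased.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
true_x = rng.normal(10, 2, n)            # true individual exposures
y = 2 * true_x + rng.normal(0, 1, n)     # linear response, true slope = 2

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

# Classical error: measured value = true value + independent noise.
x_classical = true_x + rng.normal(0, 2, n)

# Berkson-type error: individuals are assigned a coarse group value, and
# the true exposure scatters around the assigned one.
x_berkson = np.round(true_x)

print(f"classical: {slope(x_classical, y):.2f}")  # attenuated, ~1.0 here
print(f"berkson:   {slope(x_berkson, y):.2f}")    # close to the true 2.0
```

With equal variances of true exposure and classical noise (both 4 here), the expected attenuation factor is 4/(4 + 4) = 0.5, hence the fitted slope of about 1.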

In many, probably most, study designs, exposure is scaled as a discrete variable on a dichotomous or a polytomous scale. When the exposure variable is dichotomized, non-differential misclassification will always bias the effect measure toward the null; however, when the exposure variable is polytomous, non-differential misclassification will bias the effect measure toward the null only if misclassification occurs between adjacent exposure categories. When it involves non-adjacent categories, bias away from the null may also occur (Armstrong 1998; Dosemeci et al. 1990b).

Quantitatively, misclassification is a function of (a) the sensitivity of the assessment method, that is, the proportion of all truly exposed subjects correctly classified as exposed, and (b) its specificity, that is, the proportion of all truly unexposed subjects correctly classified as unexposed. The relative importance of sensitivity and specificity in overall misclassification bias depends on exposure prevalence: When exposures are rare, as most occupational exposures in population- and hospital-based case-control studies are, even small losses in specificity may strongly bias the relative risk estimate toward the null. This effect is clearly depicted in Fig. 42.1, where a true relative risk of 4.0, a sensitivity in exposure assessment of 0.9, and a range of commonly found exposure prevalences (from 0.001 up to 0.2) are assumed. The estimated relative risk is plotted against different specificities in exposure assessment.

Fig. 42.1 Bias toward the null of the estimated relative risk due to loss of specificity in exposure assessment. A true relative risk of 4 and an exposure assessment sensitivity of 0.9 are assumed. The estimated relative risk is plotted against different levels of specificity of exposure assessment, for exposures with prevalences (Prev) ranging from 0.001 to 0.2 (Modified from Ahrens 1999)

This consideration does not of course imply that sensitivity is not important: When exposures are rare, low sensitivity in exposure assessment causes loss in power and requires substantial increases in sample size to compensate for it.
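
The pattern in Fig. 42.1 can be reproduced with a few lines of arithmetic. The function below, a sketch using the figure’s assumptions (true relative risk 4.0, sensitivity 0.9), computes the relative risk expected under non-differential misclassification for a given specificity and exposure prevalence.

```python
def observed_rr(true_rr, prev, se, sp, r0=0.01):
    """Relative risk expected after non-differential exposure
    misclassification; r0 is the baseline risk (it cancels out)."""
    r1 = r0 * true_rr
    # Average risk among subjects *classified* as exposed...
    risk_e = (se * prev * r1 + (1 - sp) * (1 - prev) * r0) / \
             (se * prev + (1 - sp) * (1 - prev))
    # ...and among subjects classified as unexposed.
    risk_u = ((1 - se) * prev * r1 + sp * (1 - prev) * r0) / \
             ((1 - se) * prev + sp * (1 - prev))
    return risk_e / risk_u

for prev in (0.001, 0.01, 0.1, 0.2):
    print(prev, round(observed_rr(4.0, prev, se=0.9, sp=0.95), 2))
# With specificity 0.95, a true RR of 4 is observed as ~1.05 at
# prevalence 0.001 but as ~3.2 at prevalence 0.2.
```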

4 Special Issues in Occupational Epidemiology

4.1 Confounding

A general discussion of confounding is given in chapter Confounding and Interaction of this handbook. Information on several known possible confounders and on other occupational and non-occupational exposures of interest is almost always lacking for historical occupational cohorts. Methods to deal with confounding in historical cohorts include the use of internal comparison groups, whose general characteristics are assumed to be similar to those of the exposed subjects, and the use of available statistics on the distribution of confounders in the population from which the cohort originated. The first approach is commonplace in occupational epidemiology, although it is seldom possible to verify whether the comparison group has the assumed characteristics. Internal comparisons have the advantage of controlling part of the bias introduced by the “healthy worker effect,” which is discussed below. In a historical cohort study conducted in the Nordic countries to investigate the risks for cancer among airplane pilots (Pukkala et al. 2002), national cancer registry data were used to calculate standardized incidence ratios (SIRs). Since airplane pilots belong to a higher social class than the general population, however, the SIRs were possibly biased. In this study, cosmic radiation was the main exposure of interest; therefore, the cumulative dose of radiation experienced by each member of the cohort was calculated. This made it possible to check the main findings from the external comparison by analyzing the effect of the exposure, using pilots with the lowest cumulative dose as the reference group. Such a comparison is unlikely to be confounded by social class.

The second approach, the use of available population statistics, was applied in the Norwegian part of the occupational record-linkage study in the Nordic countries, described in Sect. 42.2.2, with occupation-, sex-, and birth-cohort-specific information on smoking habits in the population obtained from external surveys (Haldorsen et al. 2004). The Norwegian study evaluated 42 occupational groups for risk of lung cancer in comparison with twelve other occupational groups assumed to be without exposure to occupational lung carcinogens. The magnitude of the association between lung cancer risk and both the proportion of current and former smokers and the amount of cigarettes smoked was estimated among the twelve reference groups, using data from the external surveys. Smoking-adjusted SIRs for the 42 occupational groups were then calculated and compared with the non-adjusted estimates. Limitations of this approach are that (1) the quality of the information on the confounder depends on the quality of the surveys and (2) the magnitude of the association between confounder and disease is not directly estimated at the individual level. The magnitude of the association can be obtained either from the best available studies or from ad hoc studies conducted in the same population, or estimated, as in the Norwegian study, using aggregate data.

When a study is conducted to determine whether an occupational exposure is associated with a disease, with no specific interest in the dose-response relationship, and when the estimate of the association is large, adjusted and unadjusted estimates are often similar. This has been shown in studies of occupational lung cancer risk, for which smoking is a strong potential confounder. In particular, using an indirect approach, a type of sensitivity analysis, Axelson calculated that the confounding effect of smoking can hardly explain relative risks greater than 1.5 or below 0.7 when national rates are used for comparison (Axelson 1978; Axelson and Steenland 1988). He made plausible assumptions about the proportions of moderate smokers (40%) and heavy smokers (10%) in the population used for calculating the number of expected cases and about the effects of moderate smoking (relative risk, 10) and of heavy smoking (relative risk, 20) on lung cancer risk. Adjusted relative risks were then calculated for different scenarios of association between smoking and employment in specific occupations (Table 42.2). Axelson’s suggestion that, under common circumstances, strong risk factors have weak confounding effects was investigated further and supported (Gail et al. 1988; Siemiatycki et al. 1988; Flanders and Khoury 1990).

Table 42.2 Risk ratios for lung cancer in relation to the fraction of smokers in various hypothetical populations (Source: Axelson 1978; Axelson and Steenland 1988)
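
Axelson’s indirect approach reduces to simple arithmetic. The sketch below uses the assumptions quoted above for the reference population (50% never smokers with relative risk 1, 40% moderate smokers with relative risk 10, 10% heavy smokers with relative risk 20) together with a hypothetical, heavier-smoking cohort to compute the bias in the SMR attributable to smoking alone.

```python
def smoking_rate_index(p_never, p_moderate, p_heavy):
    """Population lung cancer rate in units of the never-smoker rate,
    under the assumed relative risks of 10 (moderate) and 20 (heavy)."""
    return p_never * 1.0 + p_moderate * 10.0 + p_heavy * 20.0

reference = smoking_rate_index(0.50, 0.40, 0.10)  # = 6.5
cohort = smoking_rate_index(0.40, 0.45, 0.15)     # hypothetical mix, = 7.9

confounding_rr = cohort / reference  # SMR expected from smoking alone
print(round(confounding_rr, 2))      # ~1.22
```

Even this markedly heavier-smoking cohort would show an SMR of only about 1.2 in the absence of any occupational effect, consistent with Axelson’s bound of roughly 1.5.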

In developing a protocol for a case-control study on the risk of female breast cancer associated with occupational exposure to magnetic fields, a simulation study was carried out to evaluate the potential confounding effects of several risk factors (Goodman et al. 2002). Twelve potential confounders, including a family history of breast cancer, country of birth, age at menopause, and obesity, were selected on the basis of recent reviews on breast cancer epidemiology and evaluated both in univariable analyses and with combinations of two to five risk factors. Estimates of the strength of the associations between the risk factors and breast cancer risk and the prevalences of the risk factors in the general population were obtained from the literature. The aim was to identify confounders that, under different scenarios of their prevalence among cases, could increase a true odds ratio of 1 up to a distorted value of 1.5. In the univariable analysis, no risk factor was a strong confounder, unless an unrealistic increase in its prevalence among occupationally exposed women was assumed. Interestingly, the scenario in which the prevalences of several risk factors were increased also did not have a strong confounding effect. For instance, a twofold increase among exposed women in the prevalence of first-degree relatives with breast cancer, a history of cancer in one breast, benign proliferative breast disease, obesity, and consumption of at least two drinks per day inflated the odds ratio from unity to 1.38.

The similarity between adjusted and unadjusted estimates has also been shown empirically, in both cohort and case-control studies. SMRs of cancers of the lung, bladder, and intestine, unadjusted for smoking, strongly correlated with smoking-adjusted estimates in analyses of occupational factors in a cohort of US veterans (Blair et al. 1985). Analogously, smoking was found to be a weak confounder in a review of several occupational case-control studies on lung cancer (Simonato et al. 1988). When selecting the final model for analyzing a case-control study on occupational factors and lung cancer risk in two areas of Italy in 1990–1992 (Richiardi et al. 2004), we evaluated several models for addressing smoking as a confounder. Table 42.3 shows the results of an evaluation for two occupational categories, one positively associated and the other negatively associated with smoking. The evaluation showed that a simple model in which smokers are classified as current, former, and never can accommodate for most of the potential confounding effect.

Table 42.3 Odds ratio and 95% confidence intervals of lung cancer for two selected job titles, obtained using seven different methods to model smoking in an Italian case-control study on lung cancer (Source of data: Richiardi et al. 2004)a

4.2 Healthy Worker Effect

Workers are not a random sample of the general population, since employment status is positively associated with health status. First, relatively healthier people are more likely to seek a job and to be hired. Second, as sick people tend to leave their jobs, healthier workers remain employed longer. These two health-related selection forces cause a well-known selection bias in occupational epidemiology, the “healthy worker effect” (Fox and Collier 1976; McMichael 1976). The first phenomenon is known as the “healthy hire effect,” whereas the second, associated with duration of employment, is known as the “healthy worker survivor effect” (Arrighi and Hertz-Picciotto 1994). The magnitude of these phenomena depends on the type of work, general social conditions (e.g., the unemployment rate), the disease under study (e.g., studies of cancer are generally less biased than studies of diseases with a shorter induction period), and the study design (Choi 1992). The healthy worker effect can also be seen as a confounding problem, as employment status is associated both with the health status of workers and with disease risk (Checkoway et al. 2004). Because of the healthy worker effect, cohorts of workers may have lower mortality rates than the general population; negative results in occupational epidemiological studies may therefore hide harmful exposures. Moreover, an increase in risk of a disease may artificially plateau at the highest level of cumulative exposure, at which workers have the longest duration of employment (Stayner et al. 2003).

A logical approach for controlling, or at least decreasing, the bias introduced by the healthy worker effect is to use an appropriate internal or external comparison group, namely, a group of unexposed workers who presumably underwent similar health-related selection at the time of employment. Use of such a comparison group does not, however, guarantee unbiased estimates, as the healthy worker survivor effect may still persist. Indeed, as reviewed by Checkoway and colleagues (2004), four time-related factors should be considered: age at first employment, duration of employment, length of follow-up (members of a cohort can be followed up even after they have left the job), and age at risk. Arrighi and Hertz-Picciotto (1996) evaluated four suggested methods for controlling the healthy worker effect: (1) restricting analyses to long-term survivors; (2) excluding recent exposures, introducing a lag of 10–20 years; (3) introducing current employment status as a confounder in the models; and (4) modeling employment status simultaneously as a confounder (as in the third approach) and as an intermediate time-dependent variable (if the risk factor for the disease under study is also a determinant of job termination and, therefore, of change in employment status). The latter technique uses the so-called g-methods suggested by Robins and colleagues (1986, 1992). This approach has the strongest theoretical support and was considered the most appropriate after empirical evaluation, although it is difficult to implement. Lagging exposure is a valid, straightforward alternative that can be used when the induction period between exposure and disease is not short.
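
Lagging is also simple to implement; the sketch below, with hypothetical (year, annual intensity) pairs, simply ignores all exposure accrued during the lag interval before the index date.

```python
def lagged_cumulative_exposure(annual_exposures, index_year, lag=10):
    """Cumulative exposure ignoring the most recent `lag` years, so that
    disease-driven job changes shortly before the index date do not
    distort the exposure measure."""
    return sum(intensity for year, intensity in annual_exposures
               if year <= index_year - lag)

history = [(1970, 2.0), (1975, 2.0), (1988, 1.0), (1994, 1.0)]
print(lagged_cumulative_exposure(history, index_year=1995, lag=10))  # 4.0
```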

Case-control and cross-sectional studies are not free from the healthy worker effect. In case-control studies, it can result in differential sampling of controls from the exposed and the unexposed population. For instance, if controls are selected from hospitalized patients and individuals with a particular occupational exposure tend to be healthier, then the proportion of exposed controls is artificially decreased. The odds ratio would, therefore, be overestimated, a bias opposite in direction to the usual underestimation of SMRs introduced by the healthy worker effect in cohort studies.

In a cross-sectional study, workers with higher exposure may show a paradoxically lower prevalence of diseases or symptoms known to be associated with the exposure, because diseased workers tend to leave the jobs entailing it.

4.3 Dose-Response Analysis

As discussed in chapter Exposure Assessment of this handbook, the dose is the level of the risk factor at the target organ, while exposure refers to the level of the risk factor in the external environment. Although the dose is the biologically relevant measure, the amount of exposure, as a surrogate for the dose, is usually the only information available in occupational studies, so that a dose-response analysis is in fact an exposure-response analysis. In some studies, the actual dose can be estimated from measurements of exposure combined with knowledge of the agent’s uptake and clearance (US Environmental Protection Agency 2002).

Exposure can be measured using different metrics, namely duration, intensity, and cumulative level (cf. chapter Exposure Assessment of this handbook). The choice of metric should be based on the – often unknown – mechanism of disease development and on the nature of the exposure itself. Importantly, the choice of metric influences the magnitude of the estimates and the shape of the dose-response (Blair and Stewart 1992). Cumulative exposure, that is, the product of intensity and duration, is a correct metric for diseases whose risk is directly proportional to dose. Duration of employment is a valid surrogate for cumulative exposure when the intensity of exposure has been relatively constant over time, across working areas of the plant, and across tenures (Checkoway 1986). Peak exposure is more important than duration in the study of diseases for which a threshold exists, such as back pain or acute toxicity.
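
The three metrics can be read directly off a job history. The following minimal sketch (hypothetical units and values) computes duration, cumulative exposure, and peak intensity for one worker.

```python
# Minimal sketch: three exposure metrics computed from one worker's
# (hypothetical) job history of (years_in_job, mean_intensity) pairs.

jobs = [(5, 1.0), (10, 3.5), (3, 0.5)]  # duration in years, intensity in ppm

duration = sum(d for d, _ in jobs)        # 18 years
cumulative = sum(d * i for d, i in jobs)  # 41.5 ppm-years
peak = max(i for _, i in jobs)            # 3.5 ppm

print(duration, cumulative, peak)
```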

A dose-response analysis is commonly carried out in occupational epidemiology for at least three main reasons. First, occupational exposures are time- and place-specific, implying that the assessment of an association between an occupational exposure and a disease necessarily takes the level of exposure into account. Second, a dose-response relation is one of Bradford Hill’s well-known criteria for assessing causality (Hill 1965). When the risk for a disease increases continuously with increasing exposure, whatever the shape of the trend, the likelihood of a causal association is higher. However, a dose-response relation does not prove causality, and, conversely, the lack of such a relation does not imply the lack of a causal association, as clearly demonstrated by threshold phenomena. Third, dose-response analysis is one of the steps in risk assessment, which aims at quantifying the health effects of environmental and occupational exposures that can be modified by new policies and technologies. Risk assessment comprises (1) hazard identification, on the basis of an evaluation of the available evidence on the health effects of the agent; (2) exposure assessment, identifying the nature of the exposure in the population, the characteristics of the exposed individuals, and the behavior of the agent in humans; (3) identification of the exposure-risk model, which implies a dose-response assessment; and (4) risk characterization, determining the health effects in the population at specific exposure levels (Nurminen et al. 1999; Checkoway et al. 2004).

Often, data on exposure in occupational epidemiology are summarized as qualitative or semiquantitative indices. For instance, JEMs usually produce indices of intensity and probability of exposure on an ordinal scale. Such information offers little basis for a dose-response analysis. In other instances, if quantitative information is available, cumulative exposure can be estimated for each study subject; it must be borne in mind, however, that quantitative estimates are affected by measurement errors, falling into the two broad categories of classical and Berkson error already discussed in Sect. 42.2.3. Among many possible examples, we cite here the dose-response analysis carried out by Steenland and colleagues (1998) on occupational exposure to diesel exhaust in the trucking industry and the risk for lung cancer, which was used for quantitative risk assessment by the Health Effects Institute (1999). Steenland and colleagues conducted a case-control study, obtaining information on lifetime work histories from interviews with the study subjects’ next of kin and from retirement registries. For the purpose of exposure assessment, workers were assigned to the job category in which they had been employed the longest. In parallel, an industrial hygiene survey was conducted to measure levels of exposure to elemental carbon (a marker of exposure to diesel exhaust, which is a complex mixture of gases and particulates) in the main job categories within the trucking industry (Zaebst et al. 1991). By combining the lifetime work histories with the results of the survey and making several assumptions, in particular with regard to past exposure, the cumulative exposure of each worker was estimated. Although quantitative data were obtained, the level of misclassification of exposure was presumably still high, albeit non-differential. In particular, each subject’s true exposure in each job category varied randomly around the exposure level assigned to that job on the basis of the industrial hygiene survey, a Berkson-type error structure.
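
The practical consequence of the two error structures can be illustrated with a small simulation. The sketch below (illustrative assumptions only, not the Steenland data) fits a simple linear model and shows that a Berkson-type assignment of job-category means leaves the slope essentially unbiased, whereas classical error attenuates it; this clean result holds for linear models and need not carry over to nonlinear ones.

```python
# Simulation contrasting classical and Berkson measurement error under a
# simple linear model (illustrative assumptions, hypothetical values).
import numpy as np

rng = np.random.default_rng(42)
n, beta = 100_000, 2.0

# Berkson: workers are assigned their job-category mean; true exposure
# varies randomly around it (as in the trucking-industry study design).
assigned = rng.choice([1.0, 2.0, 4.0, 8.0], size=n)  # job-category means
true_berkson = assigned + rng.normal(0, 1.0, n)
y_b = beta * true_berkson + rng.normal(0, 1.0, n)
slope_b = np.polyfit(assigned, y_b, 1)[0]

# Classical: the measurement varies randomly around the true exposure.
true_cl = rng.normal(4.0, 2.0, n)
measured = true_cl + rng.normal(0, 1.0, n)
y_c = beta * true_cl + rng.normal(0, 1.0, n)
slope_c = np.polyfit(measured, y_c, 1)[0]

print(f"Berkson:   {slope_b:.2f}")  # close to 2.0 (slope unbiased)
print(f"classical: {slope_c:.2f}")  # attenuated toward 0 (~1.6 here,
                                    # the ratio var(true)/var(measured))
```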

There are several approaches to dose-response analysis, including simple parametric models, categorical analysis, biologically based models, polynomial regression, spline regression, and non-parametric models, such as generalized additive models (cf. chapters Analysis of Continuous Covariates and Dose-Effect Analysis and Regression Methods for Epidemiological Analysis of this handbook). We do not describe and compare these techniques here, but we highlight some aspects that are specific to occupational epidemiology. Interested readers may refer to the above chapters or to one of the several available thematic textbooks (Härdle 1990; Hastie and Tibshirani 1990).

Categorical analysis, in which the exposure variable is subdivided into a certain number of categories on the basis of cut-points chosen a priori, is usually the starting point of a dose-response assessment, as it allows researchers to observe the shape of the dose-response relationship. The shape is obviously strongly influenced by the choice and number of cut-points, which can be decided upon according to biological considerations, if available, or other criteria, including established standards or the percentile method. Evidence that an association is limited to the highest exposure levels should not lead researchers to disregard causality without careful consideration of the possibility of a threshold for the effect of interest.
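
A minimal sketch of such a categorical analysis follows, using simulated data and hypothetical variable names: cumulative exposure is cut at its quartiles among controls, and a logistic model with indicator terms yields category-specific odds ratios whose pattern suggests the shape of the dose-response.

```python
# Sketch of a categorical dose-response analysis on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4000
expo = rng.lognormal(mean=1.0, sigma=1.0, size=n)  # cumulative exposure
logit = -2.0 + 0.05 * expo                         # true log-odds of disease
case = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"case": case, "expo": expo})

# Cut-points: exposure quartiles among controls (the percentile method)
cuts = df.loc[df.case == 0, "expo"].quantile([0.25, 0.5, 0.75])
df["expo_cat"] = pd.cut(df.expo, [0, *cuts, np.inf],
                        labels=["Q1", "Q2", "Q3", "Q4"])

fit = smf.logit("case ~ C(expo_cat)", data=df).fit(disp=False)
print(np.exp(fit.params))  # ORs for Q2-Q4 vs Q1 (ignore the intercept);
                           # their pattern sketches the dose-response shape
```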

When the exposure variable is continuous, the simplest approach consists of fitting a regression model with a term for the exposure (e.g., cumulative exposure). This implies assuming a priori a shape for the dose-response curve, which seldom reflects biological knowledge, if any is available. When the exposure variable is not transformed, the assumed shape is usually log-linear or logistic. In occupational epidemiology, a leveling off of the increasing trend in risk for chronic diseases is often observed at the highest levels of exposure (Stayner et al. 2003). The explanations for this leveling off can be either biological (e.g., saturation phenomena, depletion of susceptible individuals) or methodological (e.g., misclassification of exposure, healthy worker effect), and a log transformation of the cumulative exposure variable is an option to consider.
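
The sketch below (simulated data, hypothetical names) contrasts the two simplest continuous-exposure specifications: a model that is log-linear in cumulative exposure, and the same model with log-transformed exposure, which allows the fitted risk to level off at high exposures.

```python
# Sketch of the two simplest continuous-exposure models on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
expo = rng.lognormal(1.0, 1.0, 4000)
case = rng.binomial(1, 1 / (1 + np.exp(2.0 - 0.05 * expo)))
df = pd.DataFrame({"case": case, "expo": expo})

# Log-linear in exposure: assumes the log-odds rise linearly without limit
linear = smf.logit("case ~ expo", data=df).fit(disp=False)
# Log-transformed exposure: lets the fitted risk level off at high doses
logged = smf.logit("case ~ np.log(expo + 1)", data=df).fit(disp=False)

print(np.exp(linear.params["expo"]))  # OR per unit of cumulative exposure
print(linear.aic, logged.aic)         # crude comparison of the two shapes
```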

Among the more complex alternatives, spline regression and its variants (b-splines and loess among the most popular) can be implemented quite easily with common software packages and are therefore used increasingly in occupational and environmental epidemiology (Greenland et al. 2000; Steenland et al. 2001; Thurston et al. 2002; Steenland and Deddens 2004). Spline regression, which is based on piecewise polynomials, has the advantage of providing a smoothed dose-response curve, although it does not always produce easily interpretable estimates (Harrell et al. 1988; Greenland 1995). Figure 42.2 shows an example of a dose-response analysis of data from men included in a case-control study on occupational factors and lung cancer risk that we carried out in two areas of northern Italy in 1990–1992 (Richiardi et al. 2004). The odds ratio of lung cancer (plotted on the log scale) increased with duration of employment up to 10–15 years and decreased slightly thereafter. Estimates for durations of employment above 30 years are not interpretable because of the small number of observations and the consequently wide confidence intervals. This shape of the dose-response is not entirely unexpected when duration of exposure is used in the analysis, as subjects with longer duration of employment may be those with lower intensity of exposure and better health status.

Fig. 42.2 Association between duration of employment in occupations known to entail exposure to lung carcinogens and risk of lung cancer, modeled using a generalized additive model with cubic b-splines (four degrees of freedom), adjusted for study area, age, and cigarette smoking (current/former/never smokers) (Source of data: Richiardi et al. 2004)
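
A rough analogue of the analysis in Fig. 42.2 can be sketched as follows: a logistic model with a cubic B-spline basis (four degrees of freedom) for duration of employment, fitted to simulated data. This is an illustrative approximation, not the authors’ original generalized additive model, and all names and values are hypothetical.

```python
# Sketch of a spline dose-response analysis on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 5000
dur = rng.uniform(0, 40, n)  # years of employment
# Simulated "rise then plateau" log-odds, as often seen with duration
logit = -2.0 + 0.12 * dur - 0.002 * dur**2
case = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"case": case, "dur": dur})

# Cubic B-spline with 4 df (patsy's bs transform inside the formula)
fit = smf.logit(
    "case ~ bs(dur, df=4, degree=3, lower_bound=0, upper_bound=40)",
    data=df,
).fit(disp=False)

# Predicted odds ratios vs an unexposed referent (dur = 0)
grid = pd.DataFrame({"dur": np.linspace(0, 40, 9)})
pred = fit.predict(grid)
odds = pred / (1 - pred)
print(np.round(odds / odds.iloc[0], 2))  # smoothed OR curve: rises, then
                                         # flattens at long durations
```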

5 Primary Prevention

How can occupational epidemiology help evaluate the need for, and the effectiveness of, primary prevention interventions and policies?

Sound epidemiological studies are typically needed to produce evidence of the toxicity of occupational agents with long-term effects, such as occupational cancer, which we use here as an example (Merletti and Mirabelli 2004). Complex mixtures encountered as occupational exposures were among the first causes of cancer to be identified, and their study finally led to the identification of specific causal agents. The study of occupational cancers has thus offered precious insights and paradigms for occupational epidemiology at large.

Agents currently established as causes of occupational cancer, and occupations with sufficient evidence of increased cancer risk according to the International Agency for Research on Cancer, are listed in chapter Cancer Epidemiology of this handbook (IARC 2011).

In appraising the body of evidence on occupational hazards and its relevance for the control of occupational exposures, consideration must be given to the problem of who should bear the burden of proof and what the proof should consist of: evidence of benefit from intervention or evidence of harm from exposure. Occupational exposures are imposed upon individuals who have little, if any, personal choice, freedom, and responsibility in accepting or avoiding them. Furthermore, workers often lack the basic knowledge necessary to judge such risks. As a consequence, the burden of proof is on the employer, who must demonstrate that the production process is safe. Evidence that exposure may be harmful is sufficient to require intervention to eliminate it.

Primary prevention, in the field of exposure to carcinogens as well as to other chemical and physical hazards at work, is based on the application of basic industrial hygiene strategies at the industry level: (1) substitution with agents intended to be less dangerous, (2) fully enclosed processing, and (3) strict control of exposure by reduction of the amounts used, local exhaust ventilation, personal protection, cleaning practices, etc. The aim is to reduce both the number of potentially exposed workers and their exposure levels. Exposure control is best implemented by embedding it in the design of plants and processes, aiming at the protection of workers as well as of neighboring communities.

At the community and country level, primary prevention entails adopting regulations intended to favor preventive measures or to enforce them. The first country to forbid the manufacture of certain chemicals because of their carcinogenicity was the United Kingdom, whose Carcinogenic Substances Regulations of 1967 prohibited beta-naphthylamine, benzidine, 4-aminobiphenyl, and 4-nitrobiphenyl (UK 1967). The EC regulation of carcinogens at work has developed from the 90/394/EEC Directive onward, but still today the only carcinogenic agents whose production and use are forbidden, apart from asbestos, are the same four as in the UK Carcinogenic Substances Regulations. In the USA, no formal ban has been placed on any carcinogenic agent, production, or process on the grounds of workers’ protection. Permissible exposure limits (PELs) have been established by the Occupational Safety and Health Administration (OSHA) largely on the basis of the 1987 list of threshold limit values (TLVs) of the American Conference of Governmental Industrial Hygienists (ACGIH), with the result that (1) the list of TLVs has since been updated and expanded by ACGIH, while the list of PELs is unchanged, and (2) certain agents are commonly recognized carcinogens, but their PELs were established without taking their cancer-causing properties into account (Smith and Mendeloff 1999).

Despite these limitations, OSHA and the EPA in the USA, and the EC in its regulation on the classification, labeling, and packaging of dangerous substances, publish lists of substances officially recognized as carcinogens. The availability of lists of carcinogenic, and more generally of toxic, chemicals is a useful tool for hazard identification, even if limited to intentionally used agents.

Informing workers about their exposures and the risks these entail is a fundamental issue. It is the first step in empowering them to verify that appropriate measures have been taken. The EC regulation requires that specific information be given to exposed workers, including special instructions on how to deal with accidents and emergencies.

Provided local regulations have been adopted, as all EC Member States should have done, law enforcement through technical public services specialized in inspecting workplaces is another key issue. Furthermore, workers should have standing in court not only when they are affected by work-related conditions but simply because they are exposed, and their cases should be fairly settled, which does not seem to occur currently even in large EC Member States (Editorial 2003).

It may be surprising that systematic reviews of the effectiveness of interventions and implementation activities aimed at exposure control are generally lacking. In the area of occupational cancer, an exception may be the 1998 review by Kogevinas and coworkers on the rubber industry (Kogevinas et al. 1998), in which some changes in overall technology and chemistry were considered along with evidence of the persistence of previously observed cancer risks. This review usefully points out the many different difficulties confronted when trying to gather evidence of effectiveness in occupational cancer prevention:

1. The long induction period of most human cancers prevents drawing conclusions from early observations after changes are introduced, since workers first employed after the intervention are not yet at risk, or not yet fully at risk, of developing the disease.

2. Longer-term observations, however, are difficult to carry out; they are also difficult to interpret because of changing patterns of incidence and mortality in the disease of interest and possible complex interactions with other exposures.

3. Often, the exposure characteristics are not well understood and recorded, so it may become impossible to assess the quantitative relationship between exposure level and disease occurrence, which is precisely what is needed when exposure levels are reduced but the agent is not completely eliminated.

   (a) Sometimes, the nature of the relevant exposure is not understood, so that a carcinogenic agent may be withdrawn while its substitutes are as dangerous, or almost as dangerous.

   (b) Both industry-based and community-based epidemiological studies have major limitations in exposure assessment, owing to the lack of suitable exposure data; this is the source of major uncertainties and controversies in the interpretation of epidemiological evidence.

This picture explains why it is difficult to obtain evidence of cancer risk reduction following the adoption of control measures and why reports of this kind of evidence are rare.

Within the limits of the above-mentioned uncertainties, some once-widespread occupational cancer risks (Cruickshank and Squire 1950) seem to have disappeared from industrial and agricultural settings in Europe and the USA. Furthermore, some carcinogenic exposures have also disappeared or been reduced to lower levels in developed countries. The consequent reduction in the fraction of cancers attributable to occupation can be estimated, provided adequate data are available (Armstrong and Darnton 2008), and should be the object of future scientific investigation.

Some contradictory experiences have also occurred: agents have been substituted with others now seemingly entailing the same risks; carcinogenic contaminants have been eliminated from agents used in certain industries only to be introduced into agents used in other processes; and only partial elimination of risk has been achieved when the relevant exposures were to complex mixtures rather than to single chemicals (Evanoff et al. 1993). Therefore, workers’ exposure to carcinogens in industrialized countries is still not controlled as completely as it should be, given current knowledge of the carcinogenic properties of chemical and physical agents. The most critical point, however, is the continuation of productions and processes entailing exposure to carcinogens in developing countries, which often lack experience in the management of industrial hazards and the power to enforce sound control strategies (Jeyaratnam 1994).

6 Conclusions

Attempts have been made to estimate the global burden of disease and injury due to occupational factors (Leigh et al. 1997, 1999; Ezzati et al. 2002; Rushton et al. 2010). Although such global statistics are difficult to interpret, given the very large number of assumptions underlying them, two major conclusions can be drawn: (1) the problem is still an important one throughout the world, including in developed regions; and (2) the burden is shifting to the developing world, which accounts for 70% of the world’s workforce and where the globalization of industry is resulting in increased exposure to occupational agents. The situation is exacerbated by unsafe technology, the transfer of hazardous industries and wastes from developed to developing countries, the use of agents banned or restricted elsewhere, the poor health and nutritional status of the workforce, and ineffective legislation on occupational safety and health. Although ultimate prevention of exposure to occupational hazards will come from political and economic changes in the world, just as political and economic interests are the determinants of the present situation, much can still be achieved, even in the current international situation (Pearce et al. 1994).

The applications of occupational epidemiology in public health decision-making are broadening, providing inputs to risk assessment, to the evaluation of occupational guidelines, and to the extrapolation of findings from occupational settings to communities with the aim of setting policies at the population level. These multiple applications imply an increasing responsibility to ensure ethical scientific conduct and clear, thorough communication of the assumptions, limitations, and uncertainties of research results and of risk assessment (Kriebel and Tickner 2001).

Recent discoveries in molecular biology and genetics have made it possible for researchers to examine how genetic characteristics affect responses to occupational and environmental exposures. The use of genetic biomarkers in epidemiology offers potential insight into the underlying mechanisms of disease and can therefore ultimately contribute to public health. Despite the potential benefits of genetic information, its collection in epidemiological studies, particularly in occupational settings, presents ethical, legal, and social challenges. Clarifying gene-environment interactions will have implications for difficult regulatory questions, such as how to protect the most susceptible members of the population and its subgroups; in the case of workers, however, genetic information could also be used to discriminate against them (Christiani et al. 2001). The challenge of identifying and applying genetic information in the study of human diseases in instances where it will make a difference to prevention and public health (Millikan 2002; Merikangas and Risch 2003; Schulte 2004) applies equally to occupational epidemiology.