1 Introduction

Policy makers and regulators have increasingly expressed an interest in obtaining more safety data and guidance on the use of consumer products. A number of concerns have been raised about the potential health risks associated with the consumption of consumer products or exposure to some of their components. The products under scrutiny span a wide range, including commercial products, household products, personal care products, children’s products, and food products.

This increased interest has led to a greater emphasis on the use of observational methods to understand the safety profile of products after they are marketed. With the development of new technologies, increasingly available biomonitoring data have provided evidence of widespread human exposure to large numbers of chemical, microbiological, and physical agents. Epidemiological methods and studies can contribute to assessments of the health risks posed by consumer products.

The objectives of this contribution are to introduce key notions of epidemiological research and to show how these notions can be applied to consumer products.

2 Epidemiological Concepts

2.1 Definition and Purpose of Epidemiology

Epidemiology is the study of the distribution of diseases and their determinants in human populations (Silman 1995; Friedman 2004). The key principle is to compare health-related events such as deaths, accidents, diseases, or injuries, between groups of individuals that are exposed or not exposed to specific factors. Epidemiology is not necessarily solely concerned with adverse health outcomes; it also identifies positive health effects and assesses methods for improving and maintaining health. Thus, the results of epidemiological studies can be used to promote healthy behavior (e.g., physical activity or a healthy diet) or discourage unhealthy behaviors (e.g., smoking, alcohol consumption, or a sedentary lifestyle).

Epidemiology plays a particularly important role in safety evaluations for medicines. A classic example of pioneering research that applied epidemiologic methods to safety evaluation is the discovery of the relationship between thalidomide and limb defects in babies born in the Federal Republic of Germany in the late 1950s and early 1960s. In 1961, Lenz (1961) and McBride (1961) suggested a possible association between congenital defects and the use of thalidomide during pregnancy. The drug was removed from the market in Germany, and several other countries, between 1961 and 1962. However, by that time, around 10,000 children affected by thalidomide had been born worldwide. The thalidomide tragedy dramatically changed the way the intended and adverse effects of drugs are assessed. Prior to thalidomide, there were no statutory requirements for implementing epidemiologic studies.

2.2 The Notion of Risk in Epidemiology

Risk refers to the probability of an adverse outcome over a specific period of time. Risk is a quantifiable but dimensionless concept. We may speak of the risk of death or the risk of a heart attack in general terms, but risk varies with the time period under consideration. Therefore, it is essential to specify the period used to assess risk.

A prerequisite for the quantification of risk is to quantify exposure to a so-called risk factor. This is not necessarily an easy task. Exposure may depend on the characteristics of the factor of interest, which might include chemical, radioactive, nutritional, environmental, occupational, or behavioral properties. When the factor is a substance, exposure also depends on whether natural barriers or specific equipment can be used to prevent exposure or mitigate its degree. Exposure also depends on how the substance is absorbed, metabolized, and excreted by the body.

Exposure can be defined by its intensity, its frequency (and duration), and its route. There are multiple ways to categorize exposure (Table 1). The simplest approach is dichotomous, where exposure is defined according to two modalities (yes/no or at least once/never). However, epidemiologists are usually more interested in comparing multiple degrees of exposure. Thus, they typically prefer to use multi-modal categorizations, when possible and practical (see the coding sketch after Table 1).

Table 1 Different ways to categorize exposure
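As a minimal illustration of these categorization schemes, the following Python sketch codes a continuous exposure both dichotomously and by tertiles; the measurements and cut-points are hypothetical, chosen only to make the two schemes concrete.

```python
import numpy as np

# Hypothetical daily exposure measurements (e.g., mg/day) for ten subjects
exposure = np.array([0.0, 0.0, 0.4, 1.2, 2.5, 3.1, 4.8, 6.0, 7.7, 9.3])

# Dichotomous categorization: exposed at least once vs. never
dichotomous = np.where(exposure > 0, "exposed", "never")

# Multi-modal categorization: never, then tertiles of the exposed subjects
tertiles = np.quantile(exposure[exposure > 0], [1 / 3, 2 / 3])
labels = ["low", "medium", "high"]
multimodal = ["never" if x == 0 else labels[np.searchsorted(tertiles, x)]
              for x in exposure]

print(list(dichotomous))
print(multimodal)
```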

After an association is found, it is necessary to determine the extent of causality between an exposure (cause) and the occurrence of an event (effect). This determination requires a great deal of effort from the epidemiologist. The Bradford Hill criteria are used to assess evidence of a causal relationship between an exposure and an event (Table 2). In particular, it is important to consider the temporal relationship: the cause (i.e., the exposure) must precede the occurrence of the disease or the event of interest. Although the timing might appear to be self-evident at first glance, difficulties arise when exposures and outcomes are measured at the same time. Finally, the exposed population (i.e., the “population at risk”) must be clearly defined.

Table 2 The Bradford Hill criteria for causation

When an association is thought or proven to be causal, epidemiologists use the term “risk factor”. A risk factor is any product characteristic, individual characteristic, or behavior that increases the likelihood of an event. Risk factors are categorized as modifiable (e.g., behavior) or non-modifiable (e.g., gender, age, ethnicity, genetics, or environment). Age is a risk factor for many diseases, but some of the strongest risk factors are behavioral. Examples include an unhealthy diet, smoking, alcohol abuse, and lack of physical activity.

It is important to note that, typically, there is not a one-to-one relationship between a risk factor and a particular disease. A given risk factor may cause multiple diseases, and a disease may have multiple causes. Finally, exposure to some factors may promote good health by preventing adverse outcomes. In such cases, the term “protective factor” is preferred.

2.3 Measures of Risk

Several measures are used by epidemiologists to quantify risk (Table 3). The measures most commonly used are the incidence and the relative risk (i.e., the ratio of the incidences in exposed and non-exposed individuals). The incidence must be distinguished from the prevalence, another commonly reported measure in epidemiology. The incidence is the rate at which new cases occur in a population during a specified period, whereas the prevalence is the proportion of individuals with a specific health condition at a given point in time.

Table 3 Measures of risk in epidemiology
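To make the distinction concrete, a small worked sketch with hypothetical figures (not taken from the source) contrasts the two measures:

```python
# Hypothetical figures, for illustration only
at_risk = 10_000           # disease-free individuals at the start of the year
new_cases = 150            # new cases arising during the year
total_population = 10_400  # everyone surveyed at mid-year
cases_at_mid_year = 400    # all existing cases found at the mid-year survey

# Incidence: rate at which NEW cases occur over a specified period
incidence = new_cases / at_risk                    # 0.015 per year

# Prevalence: proportion with the disease at one point in time
prevalence = cases_at_mid_year / total_population  # ~0.038

print(f"One-year cumulative incidence: {incidence:.3f}")
print(f"Mid-year point prevalence:     {prevalence:.3f}")
```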

Many medical endpoints are reported as binary outcomes; i.e., outcomes that reflect the occurrence or non-occurrence of a particular event or disease. A convenient way to represent and compare binary outcomes across two groups is to use a 2 × 2 contingency table (Fig. 1). Typically, a group of individuals exposed to a risk (or protective) factor, such as smoking (or a healthy diet), is compared to a group of individuals that are not exposed to this risk (or protective) factor. The relative risk can then be readily computed to measure the association between exposure and the risk of occurrence of the event or disease of interest (Fig. 1).

Fig. 1

Medical outcomes in a contingency table can be used to compute the relative risk (RR) and the odds ratio (OR). (Top) The contingency table shows how the formulas are derived; (bottom) the formulas and their possible values are shown with standard interpretations. Note: \( N_T = a + b + c + d;\; I_E = \frac{a}{a+b};\; I_{NE} = \frac{c}{c+d};\; I_T = \frac{a+c}{a+b+c+d} \ne I_E + I_{NE} \)

The odds ratio (OR) is another measure of the association between a risk factor and the occurrence of disease, and it is also readily computed from a contingency table. The OR is the ratio of the odds of the event in the exposed group to the odds in the non-exposed group (Fig. 1). Because the odds of an event are numerically close to its probability when the event rate is low, the OR approximates the RR when the event rate is low (typically below 10%). As the event rate increases, the OR provides an increasingly extreme estimate of the effect (i.e., further from 1) relative to the RR. Generally, ORs are used for cross-sectional and retrospective studies, whereas RRs can be calculated directly in prospective studies.
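Following the layout of Fig. 1, the short sketch below computes the RR and OR from a 2 × 2 table; the counts are hypothetical and serve only to illustrate the formulas:

```python
# 2 x 2 contingency table, laid out as in Fig. 1
#                 event   no event
# exposed           a         b
# non-exposed       c         d
a, b, c, d = 30, 70, 10, 90  # hypothetical counts

incidence_exposed = a / (a + b)      # I_E  = 0.30
incidence_nonexposed = c / (c + d)   # I_NE = 0.10

relative_risk = incidence_exposed / incidence_nonexposed  # RR = 3.00
odds_ratio = (a / b) / (c / d)                            # OR ~ 3.86

print(f"RR = {relative_risk:.2f}, OR = {odds_ratio:.2f}")
# With a low event rate (e.g., a=3, b=97, c=1, d=99), the OR (~3.06)
# nearly coincides with the RR (3.00), as noted above.
```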

3 Epidemiological Studies and Risk Assessment

A number of different study designs can be used to assess causal relationships between exposure to risk factors and the occurrence of an event or disease. In all instances, it is essential to have clear definitions of the event or disease of interest and the exposure. In the absence of clear definitions, it can be difficult to design and interpret an epidemiological study.

3.1 Cross-Sectional, Retrospective, and Prospective Designs

Epidemiological studies can be cross-sectional, retrospective, or prospective (Fig. 2). A cross-sectional study measures exposure and disease in a specific population at a particular point in time. A survey is a typical example of a cross-sectional study. With survey information, concurrently exposed and non-exposed groups can be compared for their disease status, or vice versa (Silman 1995). The main purpose of surveys is to determine prevalence, both of exposure and of outcomes. Although it is the simplest design, a cross-sectional study cannot properly discern whether an exposure is a potential cause of a disease. Ideally, exposure status should be documented before disease onset; in retrospective studies, exposure is recorded after the outcome.

Fig. 2

Cross-sectional, retrospective, and prospective study designs (Adapted from Silman 1995)

In a retrospective study, the event or disease of interest has already occurred before the start of the study, but epidemiologists look backward in time to determine the exposure status. Case-control studies and retrospective cohort studies are typical examples of retrospective studies (Fig. 2). In a prospective study, the event or disease of interest has not yet occurred. Individuals free of the event or disease are followed forward in time. Prospective cohort studies, such as clinical trials, are typical examples (Fig. 2).

In cohort studies, individuals are followed to see how the subsequent occurrence of an event or the development of new disease cases differs between exposure and non-exposure groups. Attributable and relative risks can be estimated. This type of study provides the best evidence to support the causation of disease. Although conceptually simple, cohort studies represent a major undertaking. They may require long follow-up periods as many exposures are long-term in nature. The difficulty is further increased when there is a long induction period between the first exposure to a hazard and the eventual manifestation of a disease, as with most carcinogens, for instance.

Case-control studies provide another way to investigate the causes of diseases. They recruit individuals with the disease of interest and a comparable control group of individuals without the disease. The study then compares the extent of past exposure to the suspected risk factor between groups. An important consideration in case-control studies is the identification of an appropriate and comparable control group. The cases and controls should belong to the same general population. Exposures should be measurable to the same degree of accuracy in controls and cases.

Absolute risk and relative risk cannot be determined directly from case-control studies, because the incidence of disease is not known in either the exposed or the unexposed population. However, as mentioned above, ORs can be calculated to measure the association between exposure and the risk of disease. Note that the case-control approach is preferred for studying rare diseases, because a cohort study would have to follow a very large number of individuals to accrue enough cases to draw conclusions.

Finally, it should be noted that retrospective studies are typically much less expensive and time consuming than prospective studies. The costs of retrospective studies can occasionally be further reduced by using historical cohorts, identified on the basis of records of previous exposure. This type of investigation is then called a historical cohort study, because all exposure and disease status data have been collected before the study was planned. This sort of design is relatively common for studies on cancers related to occupational exposures.

3.2 Observational Versus Interventional (Experimental) Studies

Observational studies allow nature to take its course. The researchers observe, measure, and analyze, but they do not intervene. Observational studies are generally descriptive or analytical. A descriptive study documents the occurrence of a disease in a population. It is often the first step in an epidemiological investigation. For instance, descriptive epidemiology determines the distribution over time of health outcomes, in individuals grouped by age, gender, socioeconomic status, levels of exposure, etc. An analytical study goes a step further. It examines the potential relationships between health outcomes and other variables. The aim is to investigate which factors might be responsible for increasing or decreasing the risk of occurrence of a specific event or disease. In other words, descriptive studies are concerned with the prevailing distribution of variables. They do not test hypotheses or make inferences concerning possible causality. In contrast, analytical studies test for a hypothesized causal relationship and focus on the identification and quantification of specific risk factors.

In experimental (or interventional) studies, the researchers intend to assess the effect of a specific intervention on health outcomes. The researchers define the nature of the intervention, the selection/exclusion criteria for enrollment into the study, the length of follow-up, and a plan for proper management of the study population during the follow-up period. Experimental studies are necessarily prospective cohort studies. They are more controlled and managed than cohort studies. They are usually referred to as clinical trials, when the exposure is to a treatment that is designed to protect individuals against the occurrence of the event of interest, such as premature death, myocardial infarction, or cancer relapse.

A randomized controlled trial is an interventional epidemiological experiment, where the enrolled individuals are randomly allocated to the intervention group or the control group (e.g., placebo or active control). Randomization promotes a balance between groups with regard to both known and unknown confounding and prognostic factors. Therefore, randomization ensures that, apart from chance imbalances, the only systematic difference between the groups lies in the treatment given; other characteristics can be assumed to be evenly distributed by the randomization process. Thus, a causal relationship between outcome and treatment can be established. When randomization is accompanied by double-blinding (neither the investigator nor the subject knows to which group the subject belongs), the study is less subject to “noise” or bias.
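As an illustration of the allocation step, a minimal block-randomization sketch in Python (not a production randomization system) keeps group sizes balanced while leaving each individual assignment unpredictable:

```python
import random

def block_randomize(n_subjects, block_size=4, seed=None):
    """Assign subjects to 'treatment' or 'control' in shuffled balanced blocks."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_subjects:
        block = (["treatment"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        rng.shuffle(block)  # the order within each block is unpredictable
        assignments.extend(block)
    return assignments[:n_subjects]

print(block_randomize(10, seed=42))
```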

Randomized clinical trials are the gold standard among study designs for assessing intervention effects. When well designed and conducted, they provide the most compelling evidence of cause and effect. However, they are subject to extra constraints. Ethical considerations are of paramount importance. It is not acceptable to expose subjects deliberately to potentially serious hazards, and no one should be denied appropriate intervention as a result of participation in an experiment. The intervention tested must be acceptable in the light of current knowledge. Finally, properly informed consent from participants must be sought and obtained.

Of note, in the context of consumer products, randomized trials are mostly conducted with healthy individuals. In these cases, the goal is to assess risk or prevention effects. Nevertheless, randomizing individuals to the use of a consumer product is often very difficult, impractical, or even totally unrealistic.

3.3 Meta-Analysis

The term ‘meta-analysis’ refers to an analysis in which a collection of pooling and weighting methods is applied to the results of two or more independent studies to provide an overall combined estimate. The result is essentially the quantitative component of a systematic review of the relevant literature. The rationale behind a meta-analysis is to provide an estimate with more power than the estimates provided by the separate studies. Simply put, a meta-analysis increases the statistical power through an increased sample size.

A critical question is which studies should be included in or excluded from the meta-analysis. In fact, the quality of a meta-analysis depends on the quality of the individual studies and the integrity of the process used to combine them. Another important point to consider is the potential heterogeneity across the selected studies. Studies typically differ in design, population, degree of exposure, etc., and when data are combined under the assumption of a single common effect, the results can be misleading. One approach to this problem is to use random-effects models, which take the heterogeneity across studies into account.
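A minimal sketch of this approach, assuming hypothetical per-study log relative risks and standard errors, pools the studies by inverse-variance weighting and applies the DerSimonian–Laird random-effects adjustment:

```python
import math

# Hypothetical per-study log relative risks and their standard errors
effects = [0.10, 0.35, -0.05, 0.25]
ses = [0.10, 0.15, 0.12, 0.20]

w = [1 / se**2 for se in ses]  # fixed-effect (inverse-variance) weights
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# DerSimonian-Laird estimate of the between-study variance tau^2
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random-effects weights add tau^2 to each study's variance
w_re = [1 / (se**2 + tau2) for se in ses]
pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))

print(f"Pooled log-RR = {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se_pooled:.3f} "
      f"to {pooled + 1.96 * se_pooled:.3f})")
```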

The use of meta-analyses in epidemiology has increased in recent years, for ethical reasons, cost reasons, and the need to estimate the overall effect of a particular intervention or factor. These considerations apply particularly to clinical trials, where the sample size of an individual trial is often too small to permit a robust conclusion from that trial alone. In addition, results from multiple studies may sometimes conflict. Thus, a meta-analysis can increase statistical power, improve the precision of the effect estimate, and provide an overall summary measure.

3.4 Sources of Concern

There are a few sources of concern to be aware of when designing and interpreting epidemiological studies.

3.4.1 Confounding

When studying the association between an exposure and the risk of a disease, confounding can occur when another exposure is present in the studied population and is associated with both the disease and the exposure being studied. Confounding can have very profound effects; it can even change the apparent direction of an association. A variable that appears to be protective may in fact be harmful after controlling for confounding factors. Confounding might also create the appearance of a causal relationship that does not actually exist. For instance, antioxidant supplementation is relatively popular among the lay population. Laboratory experiments and studies on individuals that take antioxidants on a regular basis have suggested that antioxidants can prevent cardiovascular disease and even certain cancers. However, careful randomized studies, which are able to avoid confounding factors, have routinely found little effect of antioxidants. In fact, results showed that, compared to individuals that do not take antioxidants, individuals that regularly take antioxidants are more health-conscious in general: they exercise more frequently, tend to watch their weight, eat more vegetables, and avoid smoking. It might well be that all of these activities, not solely the intake of antioxidants, lead to the better health outcomes seen in non-randomized studies on antioxidants.

Several methods are available to control for confounding factors (Table 4). These methods can be applied either during the study design and conduct (randomization, matching, restriction) or during the data analysis (standardization, stratification, statistical modeling).

Table 4 Methods of controlling for confounding factors in epidemiological studies
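As an illustration of stratification, one of the analysis-stage methods listed in Table 4, the following sketch pools stratum-specific 2 × 2 tables with the Mantel–Haenszel relative risk; the counts are hypothetical:

```python
# One hypothetical 2 x 2 table (a, b, c, d) per stratum of a confounder
# (e.g., age group):
#   a: exposed cases, b: exposed non-cases,
#   c: unexposed cases, d: unexposed non-cases
strata = [
    (20, 80, 10, 90),  # younger stratum
    (40, 60, 25, 75),  # older stratum
]

num = den = 0.0
for a, b, c, d in strata:
    n = a + b + c + d
    num += a * (c + d) / n  # Mantel-Haenszel numerator term
    den += c * (a + b) / n  # Mantel-Haenszel denominator term

rr_mh = num / den  # relative risk adjusted for the stratifying confounder
print(f"Mantel-Haenszel RR: {rr_mh:.2f}")
```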

3.4.2 Bias

Bias (or systematic error) occurs when results differ in a systematic manner from the true values. Bias has been defined as “an error in the conception and design of a study or in the collection, analysis, interpretation, publication, or review of data, leading to results or conclusions that are systematically (as opposed to randomly) different from the truth” (Porta 2008). The possible sources of bias in epidemiology are many and varied. Over 30 specific types of bias have been identified. The principal biases are confounding (see above), selection bias, and measurement (or classification) bias.

Selection bias occurs when the groups being compared differ systematically in characteristics other than the one under study. That is, two groups that differ in a specific characteristic of interest (e.g., the degree of exposure) might also differ in other characteristics. When those other characteristics are related to the outcome, the comparison is biased (Fletcher et al. 2014). Thus, the independent effect of the characteristic of interest cannot be properly assessed, because an observed difference might actually be due to differences in the other characteristics. For example, not all subjects selected for a study will necessarily complete or return a questionnaire. This is a potential source of selection bias, because the study might only evaluate individuals that fully participated. Another source of selection bias arises when participants volunteer for a study because they feel unwell or are particularly worried about their exposure to a risk factor. The possibility of selection bias should always be considered when defining a study sample.

A measurement bias occurs when individual measurements of a disease or exposure are inaccurate; i.e., when the instruments do not correctly measure what they are intended to measure. There are many sources of measurement bias, and the importance of the effect is variable. For instance, biochemical or physiological measurements are never completely accurate, and different laboratories often produce different results on the same specimen. When specimens from exposed and control groups are analyzed randomly by different laboratories, the chance of a systematic measurement bias is lower than when all specimens from the exposed group are analyzed in one laboratory and all those from the control group are analyzed in another laboratory.

Another type of measurement bias, recall bias, is a particular concern in retrospective case-control studies, because the ability to recall information may differ between the case and control groups. For example, diseased individuals might be more likely to recall past exposure than healthy individuals, particularly when they have a disease that is clearly suspected to be associated with the exposure. Recall bias can either exaggerate or minimize the degree of association between exposure and disease, depending on whether affected subjects are, respectively, more or less likely than controls to admit or recall past exposure.

Finally, it should be kept in mind that meta-analyses are sensitive to publication bias. Publication bias is a form of selection bias, because some results have a higher probability of being published than others (Ioannidis 2008). For example, studies that show a statistically significant effect have a higher likelihood of getting published than studies that show no significant effect. Publication bias can be addressed with a funnel plot (i.e., the plot of each study effect against its respective level of precision). The funnel plot should be symmetric, and it should converge to the true effect size in the absence of publication bias.
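A funnel plot can be sketched in a few lines; the simulated study estimates below are purely illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_effect = 0.2
ses = rng.uniform(0.05, 0.4, size=40)   # simulated standard errors
effects = rng.normal(true_effect, ses)  # simulated study effect estimates

plt.scatter(effects, 1 / ses, s=15)     # precision = 1 / standard error
plt.axvline(true_effect, linestyle="--", label="true effect")
plt.xlabel("Study effect estimate")
plt.ylabel("Precision (1 / SE)")
plt.title("Funnel plot: symmetric in the absence of publication bias")
plt.legend()
plt.show()
```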

Nearly all epidemiological studies are subject to bias of one sort or another. This does not mean that they are scientifically unacceptable, or that they should be disregarded. However, it is important to be aware of biases and to assess their potential impact when drawing conclusions from a study.

3.4.3 Statistical Power

One problem that often arises in epidemiological investigations is determining an adequate sample size for a specific research question. The sample size must be large enough to provide appropriate statistical power (i.e., the ability to demonstrate a significant association, if one exists). Sample size calculations are based on a number of study design factors, such as the expected prevalence of the outcome, the acceptable type I and type II error rates, and the smallest difference considered meaningful to detect.
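As a concrete example, a standard two-proportion formula gives the required sample size per group; the inputs below (a 5% two-sided type I error, 80% power, and event rates of 10% versus 15%) are hypothetical:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group to detect p1 vs p2 with a two-sided z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# e.g., 10% event rate in the unexposed vs. 15% in the exposed
print(n_per_group(0.10, 0.15))  # ~686 subjects per group
```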

3.4.4 Representativeness

Observations about exposure and disease are based on groups of individuals sampled from the population of interest. Thus, epidemiologists must ensure that the selected individuals are representative of the population. The sample characteristics must correspond, as much as possible, to the characteristics of the original population. Ideally, each member of the population should have an equal chance of being selected in the study sample.

3.4.5 Generalizability

The findings of a study should be applicable, and thus generalizable, to individuals elsewhere. It is important to define precisely how the studied subjects were selected and from what population of interest. Detailing the baseline characteristics of the studied subjects (such as age, gender, or the duration and severity of symptoms) is a prerequisite in a study report. With this information, the extent of similarity between the studied population and the original population can be gauged.

4 Case Studies

This section presents a selected series of case studies that illustrate how findings from epidemiological reports can be used to assess the risk associated with the consumption of consumer products.

4.1 Health Products

Oral contraceptives are known to reduce the incidence of endometrial cancer (Collaborative Group on Epidemiological Studies on Endometrial Cancer 2015). However, it was uncertain how long this effect lasts after use ceases or whether it is modified by other factors. The Collaborative Group on Epidemiological Studies on Endometrial Cancer investigated the association between the use of oral contraceptives and the subsequent risk of endometrial cancer. The Group used data from 36 epidemiological studies on endometrial cancer. In all, 27,276 women with endometrial cancer and 115,743 controls were analyzed. The proportion of women that had used oral contraceptives was comparable in the two groups (35% of cases versus 39% of controls). The protective effect of oral contraceptives was confirmed. Women that had used oral contraceptives had a relative risk (RR) of 0.69 (95% confidence interval [CI]: 0.67–0.72) for endometrial cancer, compared to women that had never used them. Moreover, this study showed a positive association between the duration of oral contraceptive use and protection from endometrial cancer: the longer women had used oral contraceptives, the greater the reduction in risk. Every 5 years of use was associated with a risk ratio of 0.76 (95% CI: 0.73–0.78). The study also showed that, at 75 years of age, women that had never used oral contraceptives had a cumulative incidence of endometrial cancer of 2.3 per 100 women. This cumulative incidence decreased to 1.7, 1.3, and 1.0 per 100 women among women that had used oral contraceptives for 5, 10, and 15 years, respectively. The authors concluded, by extrapolation, that oral contraceptive use could have prevented more than 400,000 endometrial cancers in 21 countries around the world between 1965 and 2014, about half of them during the last 10 years of that period.

4.2 Food Products

Butter is known to have a cholesterol-raising effect, and it has often been included as a negative control in dietary studies. Nonetheless, the effect of moderate butter intake was unclear until the study by Engel and Tholstrup (2015). The authors compared the effects of moderate butter intake, moderate olive oil intake, and a habitual diet on blood lipids, high-sensitivity C-reactive protein (hsCRP), glucose, and insulin. The study was a controlled, double-blinded, randomized, 2 × 5-week crossover dietary intervention with a 14-day run-in period, during which subjects consumed their habitual diets. The study included 47 healthy men and women who replaced part of their habitual diets with butter or refined olive oil providing 4.5% of total energy intake. Compared to the run-in period, butter intake increased total cholesterol and LDL cholesterol levels more than olive oil intake did; butter also increased HDL cholesterol. No effects were observed on triacylglycerol, hsCRP, insulin, or glucose concentrations. The intake of saturated fatty acids was significantly higher in the butter period than in the olive oil and run-in periods. The authors concluded that individuals with hypercholesterolemia should keep butter consumption to a minimum, whereas individuals with normocholesterolemia may consider including moderate amounts of butter in their diet.

Fractures during childhood are common, and the risk of fracture can be influenced by both genetic and environmental factors. The identification of detrimental dietary patterns early in life may contribute to reducing the high incidence of fractures among healthy children. To test this hypothesis, Danish and Australian researchers conducted a systematic review and meta-analysis of observational studies that examined the association between dietary intake or serum nutritional concentrations and childhood fractures (Händel et al. 2015). The authors identified 18 observational studies, primarily case-control in design. Randomized controlled trials were absent, potentially because it would be unethical to randomly assign children to dietary exposures that could increase later fracture rates. The authors found that the absence of breastfeeding, the non-consumption of milk, and the consumption of high-fat cheeses and high-calorie soft drinks may be risk factors for sustaining fractures between 2 and 13 years of age. The authors speculated that the effect of calcium intake on the risk of fracture would follow a U-shaped curve, with increased risk at both low and high calcium intakes.

4.3 Internet Usage

The internet has become part of our daily life. It is widely available, often unregulated, and it provides ready access to a broad range of information and communication with strangers around the world. The high intensity of internet usage has given rise to concerns about how it may negatively impact vulnerable individuals, notably those with suicidal tendencies (Mok et al. 2015). In this context, Mok et al. (2015) reviewed the literature to assess the use of the internet for suicide-related issues. Those authors reported that many individuals used the internet to search for suicide-related information and to discuss suicide-related problems with others. However, the causal link between suicide-related internet use and suicidal thoughts and behaviors remains unclear. There is a lack of studies that focus on internet users with suicidal tendencies. Only case studies are available that have examined the influence of suicide-related internet use on suicidal behaviors. No studies have specifically assessed the influence of pro-suicide or suicide prevention websites. Although online professional services might be useful for reinforcing suicide prevention, more work is required to demonstrate their efficacy. Currently, further research is needed, particularly research involving direct contact with internet users, to improve our understanding of the impact of both informal and professionally moderated suicide-related internet use.

4.4 Psychoactive Drugs

Over the past 20 years, epidemiological studies have provided ample information on how regular cannabis use in young adulthood has adverse effects on mental health and psychosocial outcomes. The Christchurch Health and Development Study (CHDS) made a particularly valuable contribution to this field (Wayne 2015). That study followed the life course, from birth, of 1000 New Zealanders, and found that 80% of those individuals had used cannabis by their mid-20s. Nearly a third had consumed cannabis regularly and for long periods. That number was sufficient to enable an assessment of potential associations between regular cannabis use and adverse psychosocial and mental health outcomes. Daily cannabis consumers consistently attained lower levels of education and employment in young adulthood, compared to non-consumers. Compared to non-consumers, daily consumers were also more likely to consume other illicit drugs, to report symptoms of psychosis or depression, and to commit suicide. Many of these risks increased with the intensity of cannabis use. Moreover, these risks persisted after statistically adjusting for plausible confounding factors. The study also showed that the adverse health effects of cannabis were mostly concentrated among daily users, which comprised nearly 20% of all individuals that had ever consumed cannabis. This risk pattern was most common among individuals that began cannabis consumption in their mid-teens and continued to consume it daily throughout young adulthood.

In the US, the Centers for Disease Control and Prevention (CDC) has published noteworthy data on polysubstance abuse trends involving alcohol, opioid pain relievers, and benzodiazepines (Ogbu et al. 2015). The CDC report was based on 2010 data retrieved from the Drug Abuse Warning Network (DAWN). DAWN randomly sampled 237 hospitals to collect data on alcohol use, illegal drug use, prescription and over-the-counter medication use, emergency department (ED) visits, and deaths. The report showed that, in 2010, 438,718 ED visits in the US had been associated with opioid abuse, and of these, 18.5% had also involved alcohol consumption. Alcohol involvement was even higher for ED visits related to benzodiazepine abuse: of the 408,021 ED visits associated with benzodiazepines, 27.2% also involved alcohol consumption. Opioid-related ED visits involving alcohol were highest (20.6%) among individuals aged 30–44 years. Benzodiazepine-related ED visits involving alcohol were highest (31.1%) among individuals aged 45–54 years. Of the 3833 opioid-related deaths and 1512 benzodiazepine-related deaths, 22.1% and 26.1%, respectively, involved alcohol. Opioid-related deaths involving alcohol were highest among those aged 40–49 years (25.2%) and 50–59 years (25.3%). Benzodiazepine-related deaths involving alcohol were highest among individuals aged 60 years and older (27.7%). However, the DAWN data had a number of limitations. The most important were the lack of accurate drug identification, the lack of accurate quantification of alcohol consumption, and the failure to distinguish between medical and non-medical use.

4.5 Food Supplements

An increasing number of individuals use dietary supplements to promote health. For instance, calcium-collagen chelate (CC) is a dietary supplement that can contribute to preventing osteoporosis among postmenopausal women with osteopenia (Elam et al. 2015). Elam et al. (2015) randomly assigned 39 women to receive either 5 g of CC containing 500 mg of elemental calcium + 200 IU of vitamin D or 500 mg of calcium + 200 IU vitamin D. Both groups received the dietary supplement daily over a 12-month period. The loss of whole body bone mineral density in women that received CC was substantially lower than that of the control group at 12 months. Moreover, the CC group had significantly better results in bone biomarker assessments compared to the control group. The authors concluded that the CC supplement improved bone health in terms of bone density and bone turnover, in postmenopausal women with osteopenia.

4.6 Injuries

Head injuries are relatively common among alpine skiers and snowboarders. It was hypothesized that helmets might prevent these injuries. However, helmets might also increase head injuries by reducing the field of vision, impairing hearing, and giving skiers a false sense of security. To obtain more definitive evidence of the actual effects of helmet use, investigators in Norway conducted a case-control study (Sulheim et al. 2006). Both cases and controls were selected from visitors to eight major Norwegian alpine ski resorts during the 2002 winter season. The cases comprised 578 individuals that had sustained head injuries, according to ski patrol reports. The controls comprised a sample of individuals that were waiting in line at the bottom of the main ski lift at each of the eight resorts. For both cases and controls, investigators recorded other factors that might confound the relationship between helmet use and head injury, including age, gender, nationality, type of equipment, previous ski school attendance, rented or owned equipment, and skiing ability. After these confounders were taken into account, helmet use was associated with a 60% reduction in the risk of head injury.

5 Perspectives

5.1 Summary of Key Messages

Currently, people are exposed daily to a multitude of potentially hazardous agents from consumer products. In the evolving policy and regulatory landscape, concerns are being raised about the health risks associated with these exposures. An essential component in evaluating health risks is to estimate the magnitude, frequency, and duration of exposure. This task is challenging, because many exposures are mixed and long-term in nature. The amount of product used (or misused) by individuals (i.e., how much, how frequently, and under what conditions) is often either unknown or varies substantially among individuals. An individual’s exposure to a risk factor may vary with the setting (e.g., the workplace vs. home) and the timing (e.g., variations from season to season or from day to day).

This contribution has reviewed the main epidemiological concepts and methods employed to establish a potential causal relationship between exposure and the occurrence of disease, injury, or adverse outcomes. Selecting the appropriate study design is critical for epidemiological investigations. It should be kept in mind that each study design has different strengths and limitations. Prospective randomized trials remain the gold standard for therapeutic research. However, in the field of consumer products, they may not be practical or feasible.

Prospective, non-randomized cohort studies can provide valuable information about the causation of diseases from specific exposures. However, a large number of individuals must be followed over long periods of time to accrue sufficient cases for statistically meaningful results. This is particularly true when investigating the causation of chronic diseases, such as cancer, coronary heart disease, or diabetes. The difficulty is intensified when there is a long induction period between the first exposure and the clinical manifestation of disease.

Case-control studies can also be valuable, when designed effectively. In this approach, one of the most difficult tasks is to identify an appropriate control group. The degree of exposure should be determined in the same manner for both groups. Case-control studies can estimate the relative risk of disease, but they cannot determine the absolute incidence of disease.

5.2 Recommendations for Future Epidemiological Product Assessments

It is critical for epidemiological research to assess, in a reliable manner, exposure to the risks posed by consumer products. However, this is probably the greatest challenge that epidemiologists face. Professionals from relevant disciplines (e.g., chemists, engineers, toxicologists, and even behavioral scientists and sociologists) should be involved in the design of monitoring programs for epidemiological studies to ensure they provide suitable exposure assessments. We also encourage greater collaboration between epidemiologists and regulators. Indeed, it is worthwhile to present the results of epidemiological studies in a form that can be readily utilized by regulators, and in turn by policymakers, to support the establishment of consumption policies and safeguards.

5.3 Implications for Consumer Risk Perceptions, Behaviors, and Decisions

Many people actually use epidemiologic information, consciously or unconsciously, in daily life to reduce their health risks. For instance, when we decide to quit smoking, take the stairs instead of the elevator, or order vegetables instead of fries, we are influenced by epidemiologists’ assessments of risks to our health. Findings from epidemiological studies are directly relevant to the choices we make every day to promote our health and well-being. In other words, knowledge of epidemiologically identified risk factors can steer our lifestyles, health-related decisions, and behaviors. Concerns about health risk reduction are currently publicized through a multitude of channels, including television, newspapers, magazines, and a myriad of websites. The emergence of the internet has provided consumers with nearly unlimited access to product information, usage recommendations, and cautionary statements. There is little doubt that all this information increasingly drives our consumer decision-making processes.

6 Conclusion

Regulators are increasingly faced with the necessity of correctly informing consumers about the potential hazards of consumer products and protecting them from those hazards. Accurate characterizations of exposure to risk factors are essential for guiding policies and safety recommendations. However, it is challenging to assess the effects of exposure to the multitude of risk factors embedded in consumer products. Human behavior, social factors, and complex product characteristics all play important roles in exposure. Improving the reliability of individual exposure assessments will enhance the evidence that can be generated through epidemiological studies. In turn, this evidence can provide a basis for consumers and policymakers to make better-informed consumption choices and policies.