1 Introduction

Disease burden estimates provide the foundation for evidence-informed policy making and are critical to public health priority setting around food safety. Several efforts have recently been undertaken to better quantify the burden of foodborne disease, as presented in Chaps. 7 and 8, but there is still much work to be done. This chapter outlines areas where improvement would lead to better estimates, such as enhancing foodborne disease surveillance infrastructure and deepening our understanding of the burden of chronic sequelae associated with foodborne disease. We also give an overview of attribution studies, which increase the usefulness of disease burden estimates by identifying the most important (groups of) foods or reservoirs that contribute to the disease burden.

2 Foodborne Disease Surveillance

Many studies use data from public health surveillance to estimate the overall burden of foodborne disease (Flint et al. 2005; Haagsma et al. 2013; Scallan et al. 2011a, b). Public health surveillance systems for foodborne disease are largely passive and often require a laboratory-confirmed diagnosis; therefore, only a relatively small number of cases are actually reported to public health agencies (Fig. 9.1). To estimate the overall burden of foodborne disease using data from public health surveillance, investigators must have a good understanding of how many cases of illness are lost at each stage of the surveillance pyramid due to underdiagnosis (i.e., medical care seeking, specimen submission, laboratory testing practices, laboratory test sensitivity) or underreporting (i.e., diagnosed illness not reported to surveillance). By estimating the degree of underdiagnosis (e.g., only 20% of people seek medical care) and underreporting (e.g., only 90% of diagnosed illnesses were reported to public health), investigators adjust for undercounts by creating multipliers (e.g., the inverse of the proportion (1/0.20) equates to a multiplier of 5 for medical care seeking) to scale up the number of illnesses reported in public health surveillance to estimate the overall number of illnesses in the community. An example is provided in Fig. 9.2.

Fig. 9.1 The burden of illness pyramid (adapted from Centers for Disease Control and Prevention (CDC) 2015)

Fig. 9.2 Example of the use of multipliers in estimating the number of illnesses
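To make the multiplier arithmetic concrete, the sketch below chains hypothetical proportions for each step of the surveillance pyramid into an overall scaling factor; all values are illustrative assumptions, not estimates from any of the studies cited above.

```python
# A minimal sketch of surveillance-pyramid multipliers (all proportions are
# hypothetical assumptions for illustration).

reported_cases = 1_000          # laboratory-confirmed cases reported to surveillance

# Proportion of cases retained at each step of the pyramid:
p_care_seeking   = 0.20         # ill people who seek medical care
p_stool_sample   = 0.50         # care seekers who submit a specimen
p_lab_tested     = 0.80         # specimens routinely tested for the pathogen
p_test_sensitive = 0.70         # "real-world" laboratory test sensitivity
p_reported       = 0.90         # diagnosed illnesses actually reported

# Each proportion becomes a multiplier (its inverse); their product scales
# reported cases up to an estimate of all illnesses in the community.
overall_multiplier = 1.0
for p in (p_care_seeking, p_stool_sample, p_lab_tested, p_test_sensitive, p_reported):
    overall_multiplier *= 1.0 / p

estimated_illnesses = reported_cases * overall_multiplier
print(f"Overall multiplier: {overall_multiplier:.1f}")               # ~19.8
print(f"Estimated community illnesses: {estimated_illnesses:,.0f}")  # ~19,841
```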

Surveys of the general population have been used to estimate the number of people with a diarrheal illness who seek medical care and submit a stool sample for testing (Jones et al. 2007; Scallan et al. 2005). Limitations of these retrospective surveys include their reliance on self-report and the fact that people with a diarrheal illness in the community may not be representative of those with an enteric infection reported to surveillance, given that those with more severe symptoms may be more likely to seek medical care and submit a stool sample for testing (O’Brien et al. 2010; Scallan et al. 2006). Some investigators have tried to account for severity by estimating medical care seeking and stool sample submission separately for those with mild and severe illness, using symptoms such as bloody diarrhea or duration of illness as a marker for severity (Haagsma et al. 2013; Scallan et al. 2011a, b; Kirk et al. 2014). Most surveys estimating the rate of medical care seeking and specimen submission focus on diarrheal illness, so estimates of underdiagnosis are often lacking for foodborne diseases that do not have diarrhea as a primary symptom (e.g., brucellosis, listeriosis) or that are not associated with diarrhea (e.g., toxoplasmosis and most diseases associated with foodborne chemical hazards (Gibb et al. 2015)).
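Where severity-specific rates are available, the same arithmetic can be stratified, as in the sketch below; the reported counts and stratum-specific rates are hypothetical.

```python
# A sketch of severity stratification: reported cases are split by a severity
# marker (e.g., bloody diarrhea) and scaled by stratum-specific rates.
# All counts and proportions are hypothetical assumptions.

reported = {"severe": 400, "mild": 600}     # reported cases by severity marker
p_care   = {"severe": 0.35, "mild": 0.15}   # medical care seeking
p_stool  = {"severe": 0.60, "mild": 0.30}   # stool sample submission

community_total = sum(
    n / (p_care[s] * p_stool[s])            # stratum-specific multiplier
    for s, n in reported.items()
)
print(f"Estimated community illnesses: {community_total:,.0f}")  # ~15,238
```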

Because laboratory confirmation is often required for a foodborne disease to be diagnosed and reported to public health agencies, investigators must determine how often laboratories routinely test for specific pathogens as well as the sensitivity and specificity of the laboratory test that was used. Laboratory test sensitivity can be challenging to estimate because it encapsulates more than the sensitivity of the test in a controlled setting. Rather, it is meant to capture the “real-world” laboratory test sensitivity, which includes reductions in sensitivity caused by issues with transportation and transport media, timeliness of specimen collection and testing, and other factors. Studies have derived estimates of laboratory test sensitivity from a variety of sources, including quality assurance surveys (Hall et al. 2008), outbreaks (Chalker and Blaser 1998), and expert opinion (Ingram et al. 2013).

The increased use of culture-independent diagnostic testing (CIDT) for foodborne pathogens poses a number of challenges for accurately estimating the burden of foodborne disease (Cronquist et al. 2012). CIDTs for bacterial enteric pathogens include nucleic acid amplification tests (such as PCR) and antigen-based methods (such as enzyme immunoassays and lateral flow assays) and are being increasingly used by clinical laboratories. While there are many advantages to CIDTs, including more rapid diagnosis and testing for pathogens not previously tested for routinely (e.g., enterotoxigenic E. coli), any change in laboratory tests or practices requires investigators estimating the burden of foodborne disease to reassess the multipliers used to adjust for laboratory testing and laboratory test sensitivity when estimating total illnesses. The sensitivity and specificity of CIDTs differ from those of culture, which has been the standard for many decades, and performance varies considerably across tests. In addition, the demographic characteristics of patients with detected infections have shifted, suggesting that testing practices have changed with the introduction of new tests. To account for the increased use of CIDTs, more information is needed on laboratory testing practices, test sensitivity and specificity, and changes in clinician testing practices.

An alternative approach to obtaining population-level incidence estimates of diarrheal disease and attribution to specific pathogens is through (1) prospective cohort studies with community and etiologic components and (2) cross-sectional surveys with or without supporting targeted studies (Flint et al. 2005). Prospective cohort studies invite participants from the general population and/or patients presenting at general practices to provide detailed information on their health status during a pre-defined follow-up period. Participants meeting a case definition of acute gastroenteritis are invited to submit stool specimens for pathogen detection and to complete questionnaires on health, risk factors, and other relevant factors. Healthy controls may be invited to strengthen etiologic and risk factor analysis. Such prospective cohort studies are relatively expensive and complex and have been organized by only a few countries. Yet, these studies have the advantage of providing pathogen-specific community incidence rates. Key examples are the IID-1 and IID-2 studies in the United Kingdom (Tam et al. 2012; Wheeler et al. 1999) and the Sensor/NIVEL studies in the Netherlands (De Wit et al. 2001a, b, c). Prospective cohort studies have also been implemented in major, recent international studies on the incidence and etiology of enteric disease in low- and middle-income countries, although these studies typically included patients presenting to health care and therefore do not provide population-based incidence estimates. Key examples are the GEMS (Kotloff et al. 2013) and MAL-ED studies (The MAL-ED Network Investigators 2014; Platts-Mills et al. 2015).

Cross-sectional surveys, also known as prevalence studies, examine the association between risk factor(s) and a disease by collecting data on both exposures and outcomes at a specific point in time rather than by following a group of patients over time, as is done in a prospective cohort study. In food safety, cross-sectional surveys are typically based on random-digit-dialing telephone surveys and provide information on the (self-reported) incidence of gastrointestinal illness and, depending on questionnaire design, other variables of interest for burden estimation and risk factor analysis. While these types of studies are faster and cheaper to conduct than prospective cohort studies, they cannot prove causality, and, as such, etiological information often must be obtained from other sources. Flint et al. (2005) provide examples of studies implemented in different high-income countries, including the population surveys, discussed previously, used to estimate the number of people with a diarrheal illness who seek medical care and submit a stool sample for testing.

Many foodborne pathogens are not captured as part of routine surveillance and may only be reported to public health agencies as part of a recognized outbreak. Therefore, outbreak reports may provide the only source of data for some pathogens. Because only a fraction of diagnosed cases are associated with an outbreak, studies apply an “outbreak multiplier” (in addition to any adjustments for underdiagnosis) to estimate the number of cases that would have been reported had all outbreak-associated cases been captured by routine disease surveillance. Studies have derived an outbreak multiplier by comparing the number of cases reported to national surveillance with the number of cases reported as part of outbreaks for pathogens with both types of data available (e.g., Salmonella) (Scallan et al. 2011a, b; Kirk et al. 2014); however, it is not clear how representative these extrapolations are.
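A sketch of this calculation, with hypothetical case counts in place of real surveillance data:

```python
# Deriving an "outbreak multiplier" from a reference pathogen that appears in
# both routine surveillance and outbreak reports (hypothetical counts).

ref_surveillance_cases = 40_000   # reference pathogen, routine surveillance
ref_outbreak_cases     = 4_000    # reference pathogen, outbreak reports

outbreak_multiplier = ref_surveillance_cases / ref_outbreak_cases   # = 10.0

# For a pathogen reported only through outbreaks, scale its outbreak cases up
# to the number routine surveillance would plausibly have captured.
outbreak_only_cases = 250
estimated_reported = outbreak_only_cases * outbreak_multiplier
print(f"Surveillance-equivalent cases: {estimated_reported:,.0f}")   # 2,500
# Underdiagnosis multipliers (care seeking, testing, sensitivity) would then
# be applied on top of this figure to estimate total community illnesses.
```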

Outbreak reports also provide information on the routes of transmission and the foods responsible for illness, and these data have been used to attribute the burden of illness to specific sources (Adak et al. 2005; Painter et al. 2013). While data from outbreak reports can provide extremely valuable information on foods, there are several limitations. First, it is not known how representative outbreak-associated cases are of all cases of illness with regard to the implicated product. For example, chicken is thought to be the most important cause of Campylobacter infections, but most detected Campylobacter outbreaks have been linked to unpasteurized milk (Adak et al. 2005; Painter et al. 2013). Second, many outbreak investigations do not implicate a food vehicle, so information may be missing, or food vehicles may be reported as a “complex food” (e.g., lasagna) without a clear ingredient being identified as the culprit. Finally, outbreak data may be lacking for some pathogens of interest; for example, Campylobacter is rarely associated with outbreaks but causes a significant number of illnesses annually.

Public health surveillance and outbreak reports are important sources of data for estimating the overall burden of disease and attributing the burden of illness to specific sources. More complete surveillance data accompanied by supplemental studies that illuminate different points in the surveillance pyramid increase the accuracy of burden of disease estimates based on public health surveillance data. In particular, more work is needed to understand the surveillance pyramid for non-diarrheal foodborne pathogens. Understanding laboratory testing practices, laboratory test sensitivity and specificity, and changes in physician testing practices in the age of CIDTs is also of critical importance. Outbreak reports provide critical information on pathogens not routinely reported to public health surveillance and provide data needed to attribute illness to specific foods. This underscores the importance of investigating outbreaks, identifying the causative pathogen and implicating a food vehicle, and systematically collecting these data in a central location.

3 Disease Burden of Chronic Sequelae

Traditionally, burden of disease estimates have focused on the incidence of acute foodborne illness, hospitalization, and death (Scallan et al. 2011a, b; Mead et al. 1999). However, foodborne disease (FBD) has been associated with several chronic diseases, including functional gastrointestinal disorders, renal dysfunction, reactive arthritis, neurologic disorders, cognitive and developmental deficits (Table 9.1) (Batz et al. 2013; Keithlin et al. 2014a, b, 2015; Kowalcyk et al. 2013; Roberts et al. 2009), and increased long-term mortality (Helms et al. 2003). These long-term health outcomes (LTHOs), which are described below, are major drivers of disease burden and cost (Havelaar et al. 2012; Mangen et al. 2014), but few long-term follow-up studies of FBD have been conducted, and most that have been conducted have significant limitations that restrict their generalizability (Roberts et al. 2009). As a result, there are significant gaps in our understanding of the strength and consistency of effect, temporality, dose response, burden of disease, and clinical management of the LTHOs associated with foodborne illness (Deising et al. 2013). Due to the lack of data and conclusive evidence on causality, many chronic sequelae associated with FBD have not been systematically included in burden of disease estimates. For example, Scharff (2012) included the burden associated with irritable bowel syndrome (IBS) but did not include the burden associated with reactive arthritis (ReA), while other researchers included ReA but excluded IBS from their burden estimates (Batz et al. 2012). When such discrepancies exist, it is difficult to compare burden estimates and/or make recommendations to decision-makers. Research is needed to address these important epidemiologic gaps, which would lead to improved burden of disease estimates.

Table 9.1 Selected health outcomes associated with foodborne pathogens (Batz et al. 2013)

3.1 Functional Gastrointestinal Disorders and Inflammatory Bowel Disease

Exposure to foodborne pathogens has been associated with several functional gastrointestinal disorders (FGDs) that cause chronic or recurrent gastrointestinal symptoms. While the biological mechanism is not fully understood, it is hypothesized that exposure to the foodborne pathogen alters the gut flora, alters intestinal permeability and/or motility, and increases the number of intraepithelial lymphocytes, lamina propria T cells, and mast cells, triggering an immune response (Barbara et al. 2009; Marshall et al. 2004; Dunlop et al. 2003; DuPont 2008; Smith and Bayles 2007). Post-infectious irritable bowel syndrome (PI-IBS) has been associated with exposure to Campylobacter, Salmonella, Shiga toxin-producing E. coli (STEC), Shigella, Yersinia, Giardia, Trichinella, and norovirus, with the incidence varying by pathogen from 3% to 36% (Dai and Jiang 2012; Halvorson et al. 2006; Ilnyckyj et al. 2003; Marshall et al. 2006; Pitzurra et al. 2011; Porter et al. 2011, 2013a; Thabane et al. 2007). For example, patients from the 2000 Walkerton, Ontario, waterborne outbreak of E. coli O157:H7 and Campylobacter had an increased risk of PI-IBS (odds ratio (OR): 3.12; 95% confidence interval (CI): 1.99–5.04) 8 years after the outbreak when compared to controls (Marshall et al. 2010). Increased risk of Crohn’s disease (CD) and ulcerative colitis (UC) has also been associated with acute gastroenteritis generally (Garcia Rodriguez et al. 2006; Gradel et al. 2009; Jess et al. 2011; Porter et al. 2008; Ternhag et al. 2008) as well as with specific enteric pathogens, such as Campylobacter and Salmonella (Gradel et al. 2009; Jess et al. 2011; Ternhag et al. 2008). A meta-analysis of nine studies found a twofold increase in the risk of developing functional dyspepsia (FD) following infectious gastroenteritis (Pike et al. 2013). Celiac disease (CeD), an autoimmune disorder triggered by the protein epitopes of gluten, has been associated with Campylobacter, but the epidemiologic evidence is limited (Riddle et al. 2012, 2013). Identified risk factors for developing FGDs following acute gastroenteritis vary by FGD but generally include family history, age, gender, severity of acute infection, prior antibiotic use, smoking, education level, psychosocial factors (e.g., stress, neuroses, hypochondriasis), and health-care-seeking behaviors (Dunlop et al. 2003; Riddle et al. 2012; Gwee et al. 1999; Locke 3rd et al. 2000; Marshall et al. 2007; Neal et al. 1997; Nicholl et al. 2008; Ruigómez et al. 2007).

3.2 Autoimmune Disorders

Exposure to foodborne pathogens can also trigger autoimmune responses, such as reactive arthritis (ReA) and Guillain-Barré syndrome (GBS). Several studies have found an association between infectious gastroenteritis and ReA, a painful form of inflammatory arthritis that is triggered by an infection in another part of the body (Keithlin et al. 2014a, b; Ajene et al. 2013; Hannu 2011; Pope et al. 2007; Porter et al. 2013b). For example, a review of 14 cohort studies estimated the weighted mean incidence of ReA following Campylobacter, Salmonella, and Shigella infection to be 9, 12, and 12 per 1,000 cases, respectively (Ajene et al. 2013). Estimates, however, vary across studies and reviews, likely due to variability in measuring exposure and outcomes and/or differences in host/pathogen factors. Similarly, several studies and reviews have found an association between GBS, a rare but serious autoimmune disorder that causes paralysis and is fatal in 4–15% of patients, and infectious pathogens such as Cytomegalovirus, Epstein-Barr virus, Zika virus, Salmonella, and Campylobacter (Keithlin et al. 2015; Esan et al. 2017; Frenzen 2008; McCarthy and Giesecke 2001; McGrogan et al. 2009; Moore et al. 2005; Mori et al. 2000; Tam et al. 2006, 2007; Winer 2001). Campylobacter jejuni infection, in particular, has been identified in 20–40% of GBS cases, making it the predominant antecedent infection for GBS (Hughes and Cornblath 2005; Nyati and Nyati 2013; Poropatich et al. 2010). GBS patients often develop long-term chronic sequelae, with 31% showing moderate to severe neurological sequelae and 38% and 44% reporting changes in their work situation and leisure activities, respectively, 2.6–6.4 years post-GBS (Bernsen et al. 2002).

3.3 Hemolytic Uremic Syndrome

Hemolytic uremic syndrome (HUS) is a severe and potentially fatal complication characterized by acute hemolytic anemia (destruction of red blood cells), nephropathy (kidney failure), and thrombocytopenia (reduced platelets) that can occur during or immediately following the acute phase of foodborne illness. It is most commonly associated with Shiga toxin-producing E. coli (STEC), although cases following Shigella and Salmonella infection have been reported (Keithlin et al. 2014a; Garg et al. 2003; Karpman et al. 1998; Mayer et al. 2012; Siegler and Oakes 2005). The proportion of cases that develop HUS varies by pathogen species, serotype, and virulence factors. A meta-analysis of 82 studies found that 4.2–17.2% of E. coli O157:H7 cases develop HUS (Keithlin et al. 2014a), which is consistent with data collected through national surveillance systems in the United States and the United Kingdom (Byrne et al. 2015; Gould et al. 2013, 2009), with an estimated 3–5% of cases being fatal (Andreoli et al. 2002; Mody et al. 2015; Siegler 1995). Children below 5 years of age and the elderly are at higher risk of developing HUS following STEC infection (Wong et al. 2012); bloody diarrhea, fever, treatment with β-lactam antibiotics, and serotypes with class 2 Shiga toxin and eae (intimin-encoding) virulence genes are additional risk factors (Brandal et al. 2015; Ethelberg et al. 2004; Gianantonio et al. 1968; Launders et al. 2016; Siegler et al. 1994; Werber et al. 2003). Chronic sequelae, including renal impairment, hypertension, diabetes mellitus, cardiovascular disease, and neurological sequelae such as seizures, hemiparesis, epilepsy, and developmental delay, have been associated with HUS (Garg et al. 2003; Bale Jr. et al. 1980; Bauer et al. 2014; Magnus et al. 2012; Nathanson et al. 2010; Buder et al. 2015; Clark et al. 2010; Eriksson et al. 2001; Gagnadoux et al. 1996; Kelles et al. 1994; Suri et al. 2005). The Walkerton Health Study found that, 8 years after the Walkerton outbreak, patients with moderate to severe acute symptoms were significantly more likely to develop hypertension, renal impairment, and self-reported cardiovascular disease than asymptomatic or mildly ill cases (Clark et al. 2010). A large meta-analysis found similar results, with 25% of HUS cases suffering renal sequelae (95% CI: 20–30%), 10% hypertension (95% CI: 8–12%), and 15% proteinuria (95% CI: 10–20%) (Garg et al. 2003). Another meta-analysis found that 3.2% (95% CI: 1.3–5.1%) of children with HUS develop diabetes during the acute illness and 38% (95% CI: 24–55%) develop persistent diabetes (Suri et al. 2005). It is important to note that most studies of the long-term health impacts of HUS are retrospective, have small sample sizes and/or short follow-up, and often do not include neurological sequelae. Consequently, renal impairment is commonly the only chronic sequela included in burden estimates.

3.4 Neurological Dysfunction

Foodborne disease has been associated with neurological sequelae such as impaired/delayed cognitive development, motor impairment, seizures, palsies, and vision/hearing loss (Roberts et al. 2009). While several pathogens (Salmonella, Campylobacter, Shigella, Brucella, and several parasites, including Taenia solium, Trichinella, Echinococcus, Diphyllobothrium, Paragonimus, Spirometra, and Toxocara) have been associated with neurological sequelae, the most notable are Listeria monocytogenes and Toxoplasma gondii (Batz et al. 2013; Schlech 3rd 2000). L. monocytogenes can cause sepsis, meningoencephalitis, or acute respiratory distress syndrome, particularly in fetuses, newborns, the elderly, and those with compromised immune systems, resulting in residual neurological deficits and sometimes death (Lomonaco et al. 2015; Mylonakis et al. 1998, 2002; Swaminathan and Gerner-Smidt 2007). In a meta-analysis of 87 studies, 43.8% of perinatal and 13.7% of non-perinatal listeriosis cases with central nervous system infections subsequently developed neurological sequelae, including long-term hearing/vision loss and stroke (Maertens de Noordhout et al. 2014). While most infected individuals experience no symptoms, T. gondii can cause serious illness and significant neurological sequelae, particularly in fetuses, newborns, and individuals with compromised immune systems. Congenital toxoplasmosis is usually more severe than acquired toxoplasmosis and has been associated with vision/hearing impairment, cognitive impairment, psychomotor deficiencies, and seizures (Havelaar et al. 2007a, b).

3.5 Psychological Disorders

There is emerging evidence that infection with foodborne pathogens may increase the risk of psychological disorders, such as depression, chronic fatigue, anxiety, bipolar disorder, schizophrenia, and post-traumatic stress disorder (Bolton and Robertson 2016). For example, a follow-up study of 389 patients sickened in the 2011 E. coli O104 outbreak in Germany found that, 6 months after the infection, 43% of patients had clinically relevant fatigue and 3% suffered from post-traumatic stress syndrome (Löwe et al. 2014). Of all the foodborne pathogens, the links between psychological disorders and T. gondii have been the most comprehensively studied. A meta-analysis of 50 case-control studies found significant differences in the seroprevalence of T. gondii between healthy controls and patients with schizophrenia (OR: 1.81; 95% CI: 1.51–2.16; p < 0.001), bipolar disorder (OR: 1.52; 95% CI: 1.06–2.18; p = 0.02), addiction (OR: 1.91; 95% CI: 1.49–2.44; p < 0.001), and obsessive-compulsive disorder (OR: 3.4; 95% CI: 1.73–6.68; p < 0.001) (Sutterland et al. 2015). It has also been suggested that exposure to infectious agents could be associated, through gut-brain interactions, with autism spectrum disorder, although, as with many of the chronic sequelae discussed here, more research is needed to establish a conclusive link (Bolton and Robertson 2016). The mechanisms by which bacterial and parasitic pathogens affect mental health are not well understood, but the hypothesis is that the pathogens either directly infect the brain, as with T. gondii, or indirectly impact the brain by activating the peripheral nervous system (Sutterland et al. 2015; Torrey and Yolken 2003). More research is needed in this emerging area, which may greatly contribute to the burden of foodborne disease.

3.6 Urinary Tract Infections

The association between urinary tract infections (UTIs) and extraintestinal pathogenic E. coli (ExPEC) is well established, but there is emerging evidence that UTIs may also be associated with foodborne pathogens (Sutterland et al. 2015; Nordstrom et al. 2013; Toval et al. 2014). In one study, isolates of E. coli strains from retail meats and ready-to-eat foods were found to be genetically related to strains from women with UTIs, suggesting that foodborne transmission may play a role (Vincent et al. 2010). In another study, women with UTIs caused by antimicrobial-resistant E. coli reported consuming poultry and pork more frequently than women with UTIs caused by fully susceptible E. coli, suggesting meat as a potential reservoir (Manges et al. 2007). Based on this evidence, it has been hypothesized that, in foodborne UTIs (FUTIs), the patient is exposed to ExPEC through food, the gut is colonized, and the pathogen is subsequently transferred to the urinary tract (Nordstrom et al. 2013); however, additional studies are needed.

3.7 Malnutrition and Growth Impairment

Childhood growth impairment is a topic of major concern given the high prevalence of stunting among children under 5 years of age; a 2017 report estimated that 155 million children worldwide are stunted (UNICEF, WHO, World Bank Group 2017). Previously published findings suggest that childhood stunting is associated with poor cognitive development (Grantham-McGregor et al. 2007; Walker et al. 2011; Prendergast and Humphrey 2014; Prendergast et al. 2015), increased morbidity and mortality from infectious and chronic diseases (Caulfield et al. 2004; De Boer et al. 2012; Guerrant et al. 2008; Prendergast and Humphrey 2014), and reduced incomes throughout life (Prendergast and Humphrey 2014). However, the pathogenesis of childhood stunting is poorly understood (Owino et al. 2016). In the last several decades, various epidemiological and intervention studies have extensively explored the relationships between malnutrition and growth/stunting and between infection/diarrheal disease and growth/stunting (Bhutta et al. 2013; Dewey and Adu-Afarwuah 2008; Richard et al. 2013, 2014). However, the modest associations with stunting suggest that, while nutrition and diarrheal disease are important factors for linear growth, they are not the only factors. This realization has encouraged researchers to delve deeper into potential pathways, such as chronic gut injury with systemic inflammation and immunostimulation, that can ultimately impair growth (Campbell et al. 2003; Mbuya and Humphrey 2016). Of particular interest is the hypothesis that exposure to poor sanitation and hygiene causes enteropathy in the gut that leads to stunting (Humphrey 2009). This enteropathy, recently termed environmental enteric dysfunction (EED), has been associated with increased intestinal permeability, impaired gut immune function, recurrent/persistent diarrhea, nutrient malabsorption, and stunting (Owino et al. 2016; Crane et al. 2015; Keusch et al. 2013; McCormick and Lang 2016). Multiple factors seem to contribute to EED, including nutritional deficiencies, (asymptomatic) colonization by enteric pathogens, and environmental toxins such as mycotoxins (Prendergast et al. 2015). However, the relative contribution of each of these factors is unknown (Prendergast et al. 2015; Kelly et al. 2004). Recently, the MAL-ED study found that the high prevalence of Campylobacter in primarily asymptomatic children in eight low-resource settings was associated with a lower length-for-age Z score, increased intestinal permeability, and intestinal and systemic inflammation at 24 months of age (Amour et al. 2016).

Many chemical hazards are assumed to increase the risk of chronic diseases, including malnutrition and growth impairment. For example, mycotoxin contamination causes various health issues and economic losses worldwide. Mycotoxins are toxic secondary metabolites produced by fungi that commonly contaminate foods such as maize, peanuts, and cereal grains (Wu 2013); 25% of the world’s crops are contaminated with mycotoxins (Reddy et al. 2010), with high levels reported for sub-Saharan Africa, Asia, and Central America. Developing countries with tropical climates (high temperature and humidity) are particularly affected by mycotoxin contamination (Reddy et al. 2010), and over 4.5 billion people are at risk of chronic aflatoxin exposure through food (Centers for Disease Control and Prevention (CDC) 2012). Despite the significant public health impact (Wild and Gong 2010), very few epidemiological studies have explored the longitudinal relationship between mycotoxin exposure and health outcomes, particularly childhood growth impairment. Current findings suggest that exposure to mycotoxins, including aflatoxins and fumonisins, is associated with several serious health outcomes, including adverse birth outcomes, childhood stunting, impaired nutrient absorption, immune suppression, mental impairment, liver disease, and cancer (Wu 2013; Alborzi et al. 2006; Food and Drug Administration (FDA) 2012; International Agency for Research in Cancer (IARC) 1993; Shuaib et al. 2010; Smith et al. 2012; Turner et al. 2007; Turner 2013). The biological mechanisms/pathways linking mycotoxin exposure and child growth impairment are less well understood. Hence, well-characterized epidemiological studies with multiple exposures/biomarkers and in multi-country settings (such as MAL-ED) can provide valuable insights into the contribution of mycotoxins and EED, along with other factors, to the pathogenesis of childhood stunting and to burden of disease calculations.

4 Exploring the Association of Health Outcomes

As Sect. 9.3 has shown, there are a large number of health outcomes that are potentially associated with foodborne hazards. Establishing this association is, however, not always straightforward. In this section, we describe and discuss the methods that are used for establishing such associations and argue for scenario analyses to evaluate potential uncertainties and knowledge gaps.

The first, and most straightforward, method for causal attribution is categorical attribution . This approach can be used when a foodborne hazard results in an outcome (death or a specific symptom) that is identifiable as caused by the hazard (and only the hazard) in individual cases (Devleesschauwer et al. 2015). For instance, an individual diarrhea case may be attributed to Salmonella based on laboratory confirmation, or an anaphylactic reaction may be attributed to peanut exposure based on anamnesis.

When the foodborne hazard elevates the risk of an outcome that also occurs from other causes, causal attribution can no longer be made on a case-by-case basis but only statistically, at a population level (Devleesschauwer et al. 2015). For instance, T. gondii is reported to increase the risk of schizophrenia and other psychological disorders (Sutterland et al. 2015), but it is not possible to attribute an individual case of schizophrenia to T. gondii infection. Likewise, aflatoxin may increase the risk of hepatocellular carcinoma, but it is not possible to specify that a specific liver cancer case was caused by aflatoxin since (1) there is a long latency period between the exposure and the development of cancer and (2) many other exposures and/or genetic risk factors could have caused the liver cancer. In this situation, the standard approach for calculating the burden of foodborne disease is to use a counterfactual analysis in which the disease outcomes under current exposure are statistically compared to the disease outcomes under an alternative exposure (a minimum risk exposure, which could be zero or some accepted background level) (Prüss-Üstün et al. 2003). This allows calculation of the relative risk and the population attributable fraction, which are population-level metrics of the association between the foodborne hazard and the associated outcome. Specifically, the relative risk (RR) is defined as the ratio of the outcome incidence among exposed individuals to the outcome incidence among non-exposed individuals. The population attributable fraction is a function of the relative risk and the exposure distribution and is defined as the proportion of incident cases that would be prevented if exposure in the population could be reduced to the minimum risk exposure level. However, these metrics are generally obtained through observational studies, which demonstrate association but not necessarily causation. Information on the causal attribution between the concerned hazard-outcome pairs is therefore often limited. Furthermore, estimation of the relative risk, and thus the population attributable fraction, may be done under the competing assumptions of an additive versus a multiplicative model. The additive model assumes that RR_AB, the expected RR for a person exposed to both risk factor A and risk factor B, equals RR_A + RR_B − 1, while the multiplicative model assumes that RR_AB equals RR_A × RR_B. The two assumptions can lead to widely varying estimates, resulting in important methodological uncertainty. Finally, to calculate the burden of the concerned hazard-outcome pair, the population attributable fraction must be multiplied by the all-cause burden estimate for the relevant disease outcome (the so-called burden envelope); for instance, the burden of T. gondii-associated schizophrenia is obtained by multiplying an all-cause schizophrenia burden estimate by the T. gondii population attributable fraction. The counterfactual approach therefore depends not only on estimates of the population attributable fraction but also on the availability and quality of the concerned burden envelopes.
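As an illustration of these metrics, the sketch below computes a population attributable fraction from an exposure prevalence and a relative risk (Levin's formula) and contrasts the additive and multiplicative joint-risk assumptions; all input values are hypothetical.

```python
# Counterfactual metrics: PAF and attributable burden (hypothetical inputs).

def paf(prevalence: float, rr: float) -> float:
    """Population attributable fraction via Levin's formula."""
    return prevalence * (rr - 1.0) / (prevalence * (rr - 1.0) + 1.0)

p_exposed = 0.30     # hypothetical exposure prevalence (e.g., seropositivity)
rr = 1.81            # relative risk of the outcome among the exposed

fraction = paf(p_exposed, rr)                 # ~0.195

# Attributable burden = PAF x all-cause burden envelope (hypothetical DALYs).
envelope_dalys = 100_000
print(f"PAF: {fraction:.3f}")
print(f"Attributable burden: {fraction * envelope_dalys:,.0f} DALYs")

# Joint relative risk for two factors under the competing assumptions:
rr_a, rr_b = 1.8, 1.5
rr_additive       = rr_a + rr_b - 1.0         # 2.3
rr_multiplicative = rr_a * rr_b               # 2.7 -- can diverge widely
```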

In cases where there are insufficient data for categorical attribution and counterfactual analysis (considered top-down approaches), as is the case for many foodborne chemical hazards, a risk assessment approach (considered a bottom-up approach) is often used (Devleesschauwer et al. 2015). The risk assessment approach is the standard methodology applied to assess the safety of human exposure to foodborne chemicals and is increasingly used for microbial risks. In this approach, the incidences of the hazard-associated outcomes (e.g., diarrhea due to Salmonella exposure or liver cancer due to aflatoxin exposure) are estimated by combining exposure and dose-response data. The dose-response model may, for instance, define the probability of illness at a given exposure level, which can then be translated into an estimate of the number of incident cases expected to occur in the exposed population (Prüss-Üstün et al. 2003). As this approach does not involve burden attribution, it does not necessarily ensure consistency with existing health statistics. Furthermore, the risk assessment approach is often limited by uncertainty about the dose-response relationship. For instance, when dose-response data are extracted from animal models, a tenfold correction factor is generally included to account for potential differences between animals and humans, and another tenfold factor for variability among humans. This strategy is relevant when estimating maximum allowable intake levels but might lead to overestimation when the aim is to assess the true disease burden. Even when human dose-response data are used, these are not necessarily representative of the general population that is of interest in burden of disease studies. For example, when dose-response relationships are based only on data from high-exposure events, there may remain important uncertainty in the lower end of the dose-response curve, which may be most relevant for the general population (Teunis and Havelaar 2000). For instance, Teunis et al. (2012) developed a dose-response model for Trichinella spp. in humans based on published outbreaks of human trichinellosis; likewise, Crump et al. (2003) developed a dose-response model for dioxin and cancer based on data from three occupationally exposed cohorts. Since these dose-response models were developed using data from high-exposure events, they may overestimate risk at the lower exposure levels that are more representative of exposure in the general population. When microbial dose-response relationships are based on data from human or animal feeding trials, the virulence and pathogenicity of the applied isolates or their physiological state may not be representative of the isolates circulating in foods. For example, Teunis et al. (2002) explored strain differences in available Cryptosporidium dose-response models, and Chen et al. (2006) demonstrated that fresh (animal-passaged) isolates of Campylobacter jejuni showed higher colonization potential in chickens and less within-isolate variation than isolates that had been repeatedly subcultured in the laboratory.
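The bottom-up calculation can be sketched as follows, using a simple exponential dose-response model; the parameter value and exposure groups are hypothetical, not taken from any of the models cited above.

```python
import math

# Bottom-up risk assessment sketch: exposure distribution x dose-response
# model -> expected incident cases (all inputs are hypothetical).

def p_ill_exponential(dose: float, r: float) -> float:
    """Exponential dose-response model: P(ill) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

r = 5e-5   # hypothetical pathogen-specific dose-response parameter

# Hypothetical exposure groups: (mean dose per serving, servings/year, people)
exposure_groups = [
    (10.0,  50, 800_000),    # low-exposure consumers
    (200.0, 20, 100_000),    # high-exposure consumers
]

expected_cases = sum(
    people * servings * p_ill_exponential(dose, r)
    for dose, servings, people in exposure_groups
)
print(f"Expected annual cases: {expected_cases:,.0f}")   # ~39,900
```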

In addition to the methodological issues that arise when modeling the association between foodborne hazards and health outcomes, causal attribution may also be hampered by ethical controversies. For instance, whether or not to include miscarriage and stillbirth in burden of disease calculations implies ethical and moral discussions on how the life, and death, of an embryo or fetus compares to that of a human after birth (Jamison et al. 2006; Phillips and Millum 2015). For this reason, many burden estimates exclude miscarriages and stillbirths.

Given the various sources of methodological and structural uncertainty regarding the association of health outcomes with foodborne hazards, a valid approach is to generate estimates based on different, well-defined scenarios. Such scenario analyses allow the reader to assess the impact of alternative methodological and structural choices and to adopt the estimates that correspond to what is deemed the most acceptable scenario. For instance, estimates could be generated using both a counterfactual and a risk assessment approach, to assess the impact of different methodological approaches (Jakobsen et al. 2015). Likewise, estimates could be generated that either include or exclude an uncertain health outcome, allowing the reader to assess the impact of this uncertainty. For instance, Smit et al. (2017) showed that the disease burden of congenital toxoplasmosis in Belgium would be twice as high if fetal losses at ≥22 weeks of gestational age were included.
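A minimal sketch of such a scenario analysis, with hypothetical DALY figures loosely patterned on the toxoplasmosis example above:

```python
# Burden estimates under two structural scenarios (hypothetical DALYs/year).

scenarios = {
    "exclude fetal losses": {"congenital cases": 120.0},
    "include fetal losses": {"congenital cases": 120.0, "fetal losses": 130.0},
}

for name, outcomes in scenarios.items():
    total = sum(outcomes.values())
    print(f"{name}: {total:,.0f} DALYs")
# Presenting both totals side by side shows how much a single structural
# choice (here, roughly a doubling) drives the final estimate.
```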

5 Attributing the Burden of Foodborne Diseases to Specific Foods, Food Groups, or Reservoirs

While burden of disease estimates are crucial to raising awareness of foodborne diseases, estimating their public health impact, and ranking diseases according to their importance, they may be insufficient for policy making. To identify and prioritize food safety intervention strategies to prevent and reduce the burden of disease in a population, knowledge of the most important sources of the causative foodborne hazards is needed.

Several source attribution methods are available, including approaches based on the analysis of data on the occurrence of hazards in foods and humans, epidemiological studies, intervention studies, and expert elicitations. All methods present both advantages and limitations, and their utility and applicability depend on the public health questions being addressed and on the characteristics and distribution of the hazard (Table 9.2). For example, epidemiological studies may be useful for source attribution of disease caused by microbiological hazards, which lead mostly to acute disease and thus enable an association of exposure to specific contaminated foods with the onset of symptoms; in contrast, they are usually insufficient to attribute disease caused by chemical hazards, which is typically chronic and appears a long time after exposure. Additionally, methods have different data requirements and attribute human illness at either the point of production (reservoir) or the point of exposure to the food; their utility will therefore vary depending on the hazard and/or the country or region in question (Pires 2013).

Table 9.2 Strengths and limitations of source attribution methods (adapted from (Pires 2013))

5.1 Overview of Source Attribution Methods

Approaches to source attribution can be grouped broadly into four categories: microbiological, epidemiological, expert elicitation, and intervention studies (Batz et al. 2005; Pires et al. 2009). Methods in all categories have been used to estimate the sources of several pathogens in different subpopulations (e.g., Salmonella, Campylobacter, L. monocytogenes). For chemical hazards, source attribution has been done mostly implicitly, i.e., as part of methods applied for risk assessment or burden of disease studies.

Microbiological approaches to source attribution include the subtyping approach and the comparative exposure assessment approach. Both involve the use of data on the occurrence of foodborne hazards in animal, food, and/or environmental sources. These data are ideally available from surveillance or monitoring programs in a country but may also be obtained through, e.g., targeted projects or literature review. The subtyping approach was designed to attribute human cases at the reservoir level, i.e., the closest possible to the origin of the pathogen, and gives no information on the relative contribution of different exposure routes to humans. In contrast, the comparative exposure assessment approach estimates the relative importance of different routes of exposure, including several routes from the same reservoir.
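A heavily simplified sketch of the subtyping idea, in the spirit of the frequency-matching ("Dutch") model that preceded fully Bayesian approaches such as the Hald model, is shown below; all subtype occurrence and case counts are hypothetical.

```python
# Frequency-matching attribution: each human case of a subtype is attributed
# to reservoirs in proportion to how often that subtype occurs in each
# reservoir (hypothetical occurrence and case counts).

reservoir_subtypes = {
    "broilers": {"ST1": 60, "ST2": 30, "ST3": 10},
    "cattle":   {"ST1": 10, "ST2": 20, "ST3": 70},
}
human_cases = {"ST1": 500, "ST2": 300, "ST3": 200}

attributed = {source: 0.0 for source in reservoir_subtypes}
for subtype, cases in human_cases.items():
    total_occurrence = sum(src[subtype] for src in reservoir_subtypes.values())
    for source, src in reservoir_subtypes.items():
        attributed[source] += cases * src[subtype] / total_occurrence

for source, n in attributed.items():
    print(f"{source}: {n:,.0f} cases")   # broilers ~634, cattle ~366
```

A full Hald-type model would additionally weight these fractions by source-specific and subtype-specific factors estimated in a Bayesian framework.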

Epidemiological approaches comprise case-control studies of sporadic infections and analyses of data from outbreak investigations. Case-control studies are useful to identify sources and risk factors for a disease, as well as the fraction of human cases attributable to these (by estimating population attributable fractions, PAFs). Even though case-control studies are not often conducted and are insufficient for extrapolating source attribution estimates at the national level, a meta-analysis of several case-control studies (i.e., combining studies conducted in several countries) can be used to estimate the number of illnesses attributable to each exposure at the regional and global levels. In contrast, foodborne outbreak data are widely available from most world regions. Outbreak investigations are often able to identify the contaminated source or ingredient that caused infections, and an analysis of these data can show the relative contribution of the most important sources of disease. These analyses can be done at national, regional, and global levels, and, despite the limitation of assuming that outbreak data are representative of all cases in the population (i.e., also of sporadic cases of disease), outbreak attribution analyses are useful evidence for source prioritization.
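As a sketch of outbreak-based attribution, the snippet below tallies outbreak-associated cases with an implicated vehicle by food category and converts the tallies into attribution fractions; the counts are hypothetical.

```python
# Outbreak-based source attribution (hypothetical outbreak case counts).

outbreak_cases_by_food = {
    "poultry": 1_200,
    "eggs":      900,
    "produce":   600,
    "dairy":     300,
}

total = sum(outbreak_cases_by_food.values())
attribution = {food: cases / total for food, cases in outbreak_cases_by_food.items()}

for food, fraction in sorted(attribution.items(), key=lambda kv: -kv[1]):
    print(f"{food}: {fraction:.0%}")     # poultry 40%, eggs 30%, ...
# These fractions can be applied to the estimated total burden of a pathogen,
# under the assumption that outbreak cases are representative of all cases.
```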

Expert elicitations can be used to estimate the proportion of illnesses attributable to foodborne, environmental, animal contact, or human-to-human transmission pathways (Hald et al. 2016).

Source attribution can take place at different points along the food chain (points of attribution), including at the origin of the pathogen, i.e., the point of reservoir, such as the animal production stage, or at the point of exposure, such as the food consumption stage. The different source attribution methods attribute disease at different points and will, as mentioned, depend on the availability of data and on the risk management question being addressed.

5.2 Attribution to Main Types of Transmission

The first step in the source attribution process is to estimate the overall proportion of the burden of disease that can be attributed to the four main transmission routes, i.e., foodborne, environmental, direct contact with animals, and person-to-person. For most foodborne hazards, data-driven methods, based, for example, on surveillance and monitoring data, would require an exhaustive review and inclusion of all potential sources and pathways within these main routes and consequently are not the most appropriate tool for this initial step when applied individually. A combination of epidemiological methods, namely, an analysis of outbreak data together with studies of sporadic cases, could provide a more adequate picture of the relative importance of the types of transmission. For hazards that are transmitted through a limited number of routes (e.g., Brucella spp.), the application of one epidemiological approach for source attribution may be sufficient. Alternatively, two methods are currently available to attribute disease to these main routes: expert elicitations and intervention studies.

Attribution of foodborne disease to food and other transmission routes can be undertaken for individual foodborne hazards or for syndromic groups, e.g., diarrheal disease. In both cases, expert elicitations can be conducted at a country or regional level, whereas interventions are optimally designed as small-scale population-based studies, which are, moreover, expensive and difficult to implement.

The WHO-FERG has undertaken a large-scale expert elicitation to attribute disease caused by 19 foodborne hazards to main transmission groups at the global, regional, and subregional levels (Hald et al. 2016; Havelaar et al. 2015). The study applied structured expert judgment using Cooke’s Classical Model (Cooke 1991) to obtain estimates of the relative contributions of different transmission pathways for 11 diarrheal diseases, 7 other infectious diseases, and 1 chemical (lead). Experts were selected on the basis of their experience, including international working experience, and assigned to ten global panels or nine subregional panels. This study presented the first worldwide estimates of the proportion of specific diseases attributable to food and other major transmission routes. Other expert elicitations have been conducted to deliver similar estimates at a national level, in several countries including the Netherlands and Canada (Davidson et al. 2011; Havelaar et al. 2008; Lake et al. 2010; Vally et al. 2014). Similar country-specific initiatives will be useful to improve estimates and reduce uncertainties.
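The sketch below shows only the final pooling step in a heavily simplified form: a performance-weighted linear pool of point estimates. Cooke's Classical Model derives the weights from experts' calibration and information scores on seed questions; here both the weights and the estimates are hypothetical inputs.

```python
# Weighted linear pooling of expert estimates of the foodborne proportion
# of a disease (weights and estimates are hypothetical).

experts = [
    # (performance-based weight, estimated foodborne proportion)
    (0.50, 0.60),
    (0.30, 0.45),
    (0.20, 0.75),
]

pooled = sum(w * x for w, x in experts) / sum(w for w, _ in experts)
print(f"Pooled foodborne proportion: {pooled:.3f}")   # 0.585
```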

5.3 Attribution to Specific Foods and Exposure Routes

As mentioned before, the risk management question, the characteristics of the hazard causing the disease, and the data available influence the utility of source attribution methods. When more than one source attribution method proves useful, the final choice of method will be determined by the question that needs answering and will be influenced by the analytical capacity in a country and the level of data sharing between agencies.

The type of reservoir of the hazard will influence the applicability of some source attribution methods, particularly the subtyping approach. This approach applies to hazards with one or more animal reservoirs, to which disease can be traced back and where the hazard can potentially be controlled. All other approaches are, in principle, applicable regardless of the origin of the hazard, since they focus on routes of transmission or the point of exposure.

There may also be differences in the utility of methods at the regional or national level. In general, epidemiological approaches, specifically analysis of outbreak data and systematic review and meta-analysis of case-control studies of sporadic infections, are useful for source attribution at a regional level when data are not available at the country level.

The applicability and usefulness of the source attribution methods vary for enteric, parasitic, and chemical hazards. The subtyping approach is appropriate to attribute human disease for an enteric pathogen if that pathogen has mainly an animal reservoir, can be subtyped by appropriately discriminatory methods, and has subtyping data available. This has been verified for only two pathogens (Salmonella spp. and Campylobacter spp.) (Pires 2013). For the majority of the remaining enteric hazards, source attribution by an analysis of data from outbreak investigations is appropriate. The comparative exposure assessment approach has been shown to be useful for attributing infections by pathogens that are mostly transmitted by a limited number of food routes, namely, STEC, L. monocytogenes, and Brucella (Food and Drug Administration (FDA) 2003; Kosmider et al. 2010); it has also been applied to other pathogens, e.g., Campylobacter (Evers et al. 2008; Pintar et al. 2017). A systematic review of epidemiological studies of sporadic infections can be useful for enteric hazards that have been extensively studied throughout the world (Domingues et al. 2012a, b).

For chemical hazards, the comparative exposure assessment approach is the most appropriate method to attribute disease and is often applied as part of the method used to estimate the burden of disease caused by exposure to the hazard through multiple food routes. When data are available, this approach is simple to apply. Epidemiological studies, particularly cohort studies, have been undertaken for some of these chemicals, and a review of these could be useful for source attribution. However, because disease caused by chemicals often appears a long time after exposure, epidemiological studies may face challenges in identifying cases and sources.

5.4 Challenges and Future Directions in Source Attribution

Controlling foodborne diseases and thus improving food safety requires efforts at several levels. All research and risk management initiatives, including those relying on source attribution studies, depend on efficient surveillance, which has been the target of improvements and investments throughout the world through national, regional, and capacity-building initiatives. Multinational organizations such as WHO at the international level and the European Food Safety Authority (EFSA) and the European Centre for Disease Prevention and Control (ECDC) at the regional level play an increasingly important role in the harmonization of surveillance standards across countries and will be crucial in encouraging countries to invest in the integration of food safety components.

In developed countries, improvements in surveillance have been largely focused on the development and use of sophisticated typing methods (e.g., molecular techniques), which have substantially increased the opportunities for research and the production of scientific evidence for interventions. Recently, whole genome sequencing (WGS) has opened yet another spectrum of possibilities, providing new and faster ways to diagnose, monitor, and track foodborne pathogens. We are now witnessing extensive research on the applications of these methods, particularly on how to best use WGS in surveillance and how to translate these data into useful epidemiological evidence.

Several factors have favored the use of such techniques in foodborne disease surveillance: (1) WGS has matured and has been increasingly introduced in routine laboratories; (2) the price of WGS has fallen dramatically, in some cases below the price of traditional identification; (3) vast IT resources and fast Internet connectivity have become available; and (4) the idea that, via a One Health approach, infectious diseases could be better controlled and prevented (Global Microbial Identifier (GMI) 2013). In this context, initiatives to harmonize methodologies and data collection and sharing are crucial. An example is the Global Microbial Identifier, a genomic epidemiological database for the global identification of microorganisms: a platform for storing WGS data of microorganisms, identifying relevant genes, and comparing genomes to detect outbreaks and emerging pathogens (http://www.globalmicrobialidentifier.org).

Traditional microbiological foodborne disease surveillance systems have relied on the collection of samples at different stages of the food production chain, isolation and quantification of foodborne pathogens in these samples, and typing of the isolates with different methods of phenotypic or genotypic characterization. The recent development of molecular typing methods is changing the way surveillance systems work. These changes may be particularly relevant in developing countries, where surveillance of foodborne diseases still lags behind in the ability to diagnose and identify specific causes of disease. In these countries, where systems are not yet entrenched, affordable WGS may represent a significant technological shortcut.

In the context of burden of disease and source attribution, the opportunities are immense but still to be explored. Along with pathogen characterization techniques and surveillance, the scientific methods available to produce evidence for food safety interventions are also likely to change. This will require extensive research. A major challenge of using data generated from molecular typing methods, and in particular WGS, will be to define meaningful subtypes that provide an appropriate level of discrimination for source attribution models (European Food Safety Authority Panel on Biological Hazards (EFSA) 2013). Such research will also depend on access to the potentially enormous amounts of data that need to be compiled, analyzed, and shared among the scientific community. Developing such a coordinated system is timely and should be carried out at a global level.

6 Conclusion

In a world of limited resources, policy makers are constantly being asked to prioritize the allocation of resources across competing efforts. Should they allocate resources to preventing this disease or another one? Which intervention strategies should they invest in? Burden of disease estimates provide policy makers with a quantitative measure of public health impact, while source attribution estimates provide information on where to intervene. Significant advances have recently been made in understanding the burden and sources of foodborne illness, but there is still room for improvement. Public health surveillance is an important source of data for disease burden and attribution studies, but few countries have the infrastructure needed to reliably provide such data. Even in countries that do have strong surveillance systems, there are still significant gaps in understanding and a need for constant improvement as clinical and laboratory practices evolve (e.g., CIDTs, WGS). Important epidemiologic gaps also remain in our knowledge of the burden of foodborne disease, particularly for chemical foodborne hazards and the long-term health impacts of all foodborne pathogens. As a result, these health impacts are often not included in estimates, leading to underestimation of the burden of disease. There are significant opportunities to improve the ability of policy makers to effectively allocate resources by expanding our understanding of the burden and sources of foodborne disease, but this will require substantial investments in surveillance and research.