Recently, an analysis of Google search queries suggested eroding public interest in environmental topics (McCallum and Bury 2013). This is critical because indifference to a topic often leads to illiteracy (Fraknoi 1996), which reduces participation in environmentally responsible behavior (Pe’er et al. 2007), including promotion of environmentally beneficial government policies and laws (Howell 1993). Our study (McCallum and Bury 2013) built on previous findings of falling interest in the environment from 1975 to 2005 among pre-college students (Wray-Lake et al. 2010). These findings were supported by four additional investigations. Two reported reduced interest in fishing and angling from 2004 to the present (Martin et al. 2012; Wilde and Pope 2013). A third, more limited study showed declines in two terms and growth in a third, newly coined term, “ecosystem services” (Proulx et al. 2013). A fourth revealed that the use of environmental topics in books remained stable through the 1800s, peaked in the 1960s, and began declining around 1990 (Richards 2013). Further, there is strong evidence of declining interest in math and science in general (Lederman 2008; Nixon et al. 2013). In fact, a dated study revealed that only 28 % of surveyed American adults knew the Earth revolves around the Sun (Fraknoi 1996).

Ficetola (2013) contested this evidence by suggesting that “Google Trends is a measure of relative search patterns … so public interest may not be declining.” He argued that the decreasing relative importance of search terms resulted from the broadening of internet use. This challenge is understandable in isolation, but it stems from apparent unfamiliarity with the above studies, misunderstanding of the nature and validity of Google Trends, inadequate methodology, and a failure to distinguish between “the public” and “members of the public.” Further, it misconstrues the point of our paper as well as the significance of Google Trends results. Herein, we address these problems with Ficetola’s (2013) critique.

Validity of Google Trends

The validity of Google Trends data is clear and well supported in the literature. Since the release of Google Trends in 2004, its data have appeared in such high-profile journals as Philosophical Transactions of the Royal Society (Preis et al. 2010), Nature (Ginsberg et al. 2009), and Proceedings of the National Academy of Sciences (Goel et al. 2010). Data obtained from Google Trends are accepted by the scholarly community for the study of public attitudes and behavior (Choi and Varian 2012). In fact, a query of Google Scholar for “Google Trends” on 6 September 2013 recovered 827 papers involving Google Trends published in 2013. A battery of visual and statistical tests for the convergent validity of Google Trends demonstrated remarkable accuracy across a range of global, health, and environmental policy issues (Ripberger 2011). Others found clear connections between Google Trends data and participation in ballot measures (Reilly et al. 2012), and correlations with donations to a conservation charity (Clements 2013). Further, query data may displace public opinion polling (Zhu et al. 2012) because they closely mirror Gallup Poll findings (Scheitle 2011). If investigators choose terms carefully, exclude confounding and confusing synonyms, and use several related search terms for a topic of interest, the results are robust indicators of public interest (Scharkow and Vogelgesang 2011).
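For investigators who wish to assemble such multi-term queries programmatically, the sketch below shows one possible retrieval step. It is a minimal sketch in Python assuming the unofficial pytrends client; the term list and timeframe are illustrative choices, not the query set of any study cited here.

# Minimal sketch: pull Google Trends series for several related terms.
# Assumes the unofficial "pytrends" client; terms and timeframe are
# illustrative, not those used by McCallum and Bury (2013).
from pytrends.request import TrendReq

pytrends = TrendReq(hl='en-US', tz=360)          # locale and timezone offset
terms = ['endangered species', 'biodiversity', 'conservation']
pytrends.build_payload(terms, timeframe='2004-01-01 2013-09-06')
trends = pytrends.interest_over_time()           # pandas DataFrame: one
print(trends.head())                             # scaled column per term

Because all terms submitted in one payload are scaled against the same peak, a grouped query of this kind supports the term-by-term comparisons recommended above.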

Terms, terminology, and methodology issues

There is a clear misunderstanding about Google Trends output that extends beyond McCallum and Bury (2013) and Ficetola (2013). The term “search volume” is commonly used as a synonym for “share of the total scaled proportional search volume,” both by Google Trends investigators and by Google on its website. This causes confusion when those new to the field read the literature and the Google Trends website.

Google Trends does not report the actual number of searches done on Google. Google Keyword Planner provides an estimated monthly search volume for terms, and is a useful reference point against the larger trends provided by Google Trends. Google Trends provides aggregate data to show fluctuations in the proportional search volume for any keyword(s) used since 2004 (for terms with >50 searches per week; Varian pers. comm.; Zimmer 2010; Trivesan 2013). Google normalizes (see “Is the data normalized?” at https://support.google.com/trends/answer/87284?hl=en&ref_topic=13975) and scales (see “How is the data scaled?” at https://support.google.com/trends/answer/87282?hl=en&ref_topic=13975) the search volume data to avoid the problems proposed by Ficetola (2013). Simply put, Google reports scaled rankings of percentages (Scheitle 2011; Choi and Varian 2012). Further, Google Trends accounts for the contribution of unrelated terms to the overall search volume in its computations, so the control terms suggested by Ficetola (2013) are not needed.
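The arithmetic behind this normalization and scaling can be made concrete with a short sketch in Python. Every count below is invented for illustration; Google’s actual pipeline additionally involves sampling and geographic and category filtering that we do not reproduce.

# Sketch of Google Trends-style normalization and scaling with
# hypothetical weekly counts (all values invented for illustration).
raw_term = [120, 130, 125, 110, 100]                  # searches for a term
total = [10_000, 14_000, 18_000, 22_000, 26_000]      # all searches that week

# Step 1 (normalize): express the term as a share of total search volume.
shares = [r / t for r, t in zip(raw_term, total)]

# Step 2 (scale): rescale so the peak share in the window equals 100,
# which is the index Google Trends reports.
peak = max(shares)
index = [round(100 * s / peak) for s in shares]

print(index)   # [100, 77, 58, 42, 32]

Note that the raw counts barely move, yet the index falls steeply: because the denominator (total searches) grows, the term’s proportional share of public attention declines. This is precisely why a growing internet user base does not, by itself, explain the trends we reported.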

However, it is essential to query an array of related terms that might affect scores or provide important information about the results for the subject of interest. The body of evidence from the terms as a group is critical, not the behavior of individual terms in isolation (Zhu et al. 2012; McCallum and Bury 2013). Ficetola (2013) repeatedly used isolated terms for different subjects without adequately investigating possible reasons for his results. An examination of computer-related terms (computers: PC, Mac, Macintosh, IBM, iPad, iPhone, laptop, HP; software: Adobe, Microsoft, Norton, email, antivirus), entertainment-related terms [movies (n = 9), dating (n = 5), sporting events (n = 9), online shopping (n = 6)], and weather-related terms (e.g., weather channel, weather forecast, the weather, radar weather, weather underground) shows how this is problematic.
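To make the group-level approach explicit, the sketch below classifies each term’s series by the sign of a fitted linear slope and tallies the group verdict. The series are hypothetical stand-ins for exported Trends data, and the tolerance threshold is an assumption.

# Sketch: classify each related term's Trends series as rising, falling,
# or flat, then read the verdict from the group rather than one term.
# Series values are hypothetical stand-ins for exported Trends data.
from collections import Counter
import numpy as np

term_series = {
    'weather forecast': [60, 62, 65, 70, 74, 80],
    'radar weather':    [40, 45, 50, 58, 66, 75],
    'the weather':      [55, 54, 56, 55, 54, 55],
}

def classify(series, tol=0.5):
    # Label a series by the sign of its fitted linear slope.
    slope = np.polyfit(np.arange(len(series)), series, 1)[0]
    return 'up' if slope > tol else 'down' if slope < -tol else 'flat'

labels = {term: classify(vals) for term, vals in term_series.items()}
print(labels)                     # per-term directions
print(Counter(labels.values()))   # the aggregate is the evidence,
                                  # e.g. Counter({'up': 2, 'flat': 1})

Tallies of this kind underlie the up/down/no-change percentages reported below for the computer, entertainment, and weather term groups.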

The aggregate of results from computer-related terms follows market growth for these different products (Heath 2012; Siegal 2013; Sterling 2013; Ripley 2013; Worstall 2013) and declining interest in engineering (Oon and Subramaniam 2011; Goossens 2007; Johnson and Jones 2006) and computer science careers (Maillet and Porta 2010; Porta et al. 2011; Frieze et al. 2012). The aggregate of entertainment terms was inconclusive. Interest in online shopping grew (83 % up, 17 % down), but movies (33 % up, 33 % down, 33 % no change) and online dating (60 % no change, 20 % up, 20 % down) showed no net change. Entertainment and computer terms followed known shifts in vocabulary patterns from general to specific terminology (Beck et al. 1982; Knight 1994; Levitzky-Aviad and Laufer 2013). Unlike these subjects, there is little evidence that the public is more informed about, or has a significantly larger vocabulary in, environmental topics today than in 2004. In fact, media outlets spreading misinformation (Johansen and Joslyn 2008; Singer 2006; see Bushman and Anderson 2001) have probably reduced this vocabulary (Lăzăroiu 2012; Mason 2007; Jerit et al. 2006).

Interest in weather rose (four terms went up, three had no change), supporting Ficetola’s (2013) observation but not his conclusion. All weather-related terms appeared relatively stable until mid-2008, when Landmark sold the Weather Channel to General Electric (Desmond 2008). This was followed by new promotions (Stelter 2008), changes in leadership and elimination of the environmental desk (Freedman 2008), and expansion of the Weather Channel’s web presence (Stelter 2008). Interest in weather rose because of actions by the Weather Channel intended to increase searches for, and revenue from, its weather-related websites. In fact, removal of the Weather Channel terms “radar weather” and “weather underground” drops the volume below the minimum search volume needed for retrieval of trends. This means that weather topics unrelated to the Weather Channel drew fewer than 50 searches per week from 2004 onward. Hence, public interest in weather did not change after 2004, just as Ficetola (2013) expected.

Studies using specialized research tools like Google Trends require careful attention to confounding variables (Morgenstern 1982; Greenland and Morgenstern 2001; Ewers and Raphael 2006). Term selection, the number of terms queried, and the social landscape surrounding them must be carefully examined to ensure the dependability of results from Google Trends.

The public versus members of the public: why the distinction?

Our paper (McCallum and Bury 2013) discussed the public as a whole, not members of the public. The contention that “more people are searching” is not borne out by the data; in fact, the internet user population has grown little (<5 %) since 2006, and <10 % over the full scope of our study, which began in 2004 (Rainie 2010). The annual number of searches performed by Google users did, however, grow from ~1 billion in 2007 to ~5 billion in 2012 (http://www.statisticbrain.com/google-searches/). Google Trends’ normalization accounts for this growth within the limitations of overall trends (discussed earlier).

The idea that “expansion of search topics negates the importance of our findings” deflects attention from the real problem. The scaled proportional share of the search volume on Google Trends and supporting studies demonstrate declining public interest in the environment (McCallum and Bury 2013; see earlier discussion). If the proportion of individuals interested in the environment goes down (a decline in public interest), the influence of the interested portion of the public falls whether absolute search volumes change or not. Policy is determined by proportional share of influence over issues, not by absolute numbers of interested individuals. For example, George Washington was elected President of the United States unanimously in 1789 with all 69 electoral votes, and again in 1792 with 100 % of all 132 electoral votes. In the 2012 U.S. Presidential election, 100 % of the electoral votes (538) would still get a candidate elected; however, it would require very unusual circumstances for someone with 69 or 132 electoral votes to win. In fact, Mitt Romney received 206 electoral votes, more than enough to elect many previous presidents, but only 47.2 % of the popular vote, so he lost. It is doubtful that anyone would challenge this loss on the grounds that Romney earned more electoral votes than past winners needed, yet this is the logic used by Ficetola (2013) to discount falling public interest in the environment.
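The same share-versus-absolute-numbers arithmetic applies to search behavior. Using the total-search estimates cited above and invented share figures (the environmental shares below are hypothetical, chosen only to illustrate the point), absolute interest can grow even while proportional interest, and hence influence, falls.

# Illustration: absolute interest can rise while proportional interest
# falls. Totals follow the estimates cited above; the environmental
# shares are hypothetical values chosen purely for illustration.
searches_2007 = 1_000_000_000      # ~1 billion Google searches (2007)
searches_2012 = 5_000_000_000      # ~5 billion Google searches (2012)

env_share_2007 = 0.004             # hypothetical: 0.4 % of all queries
env_share_2012 = 0.002             # hypothetical: share halves by 2012

env_2007 = searches_2007 * env_share_2007    # 4,000,000 searches
env_2012 = searches_2012 * env_share_2012    # 10,000,000 searches

print(env_2012 > env_2007)                   # True: absolute interest grew
print(env_share_2012 / env_share_2007)       # 0.5: proportional interest,
                                             # and hence influence, halved

Google Trends reports the second quantity, the share, which is the one that matters for proportional influence over policy.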

In policy setting, the proportional share of influence is critical. If other issues increase their share of the public interest, then those issues gain influence over policymakers. To effect change, growth in numbers must be sufficient to increase the percentage who are interested. Growth in the number of interested members of the public without concurrent growth in the interested portion of the public does not exert influence on policy. Google Trends data are representative of what is important to the public (see previous citations and discussion). So, by reporting that the environment’s proportional share of Google searches fell, we showed that interest in the environment declined. Interest implies engagement, and without engagement, policy does not change (McCabe 2010; Munson 2008; Jerit 2008; Lorenzoni et al. 2007). If interest in the environment falls, then environmental policies and programs become more susceptible to actions that weaken their effectiveness. Policy changes target what interests the public, often at the expense of things it is less interested in. So, contrary to Ficetola’s (2013) suggestion, fluctuations in total search volume due to usage changes in unrelated terms do not affect the validity of Google Trends data as an indicator of changing public interest (Trivesan 2013; Google Trends Support), and the implications of that declining interest for environmental policy are a serious problem for environmentalists.