Ubiquity of Fake News Today

Fake news abounds today on social media. Ideologues and conspiracy theorists, as well as the merely misinformed, regularly post claims that are misleading at best and outright lies at worst. But how can people tell whether an online claim is valid? Does cognitive ability play a role here, possibly immunizing the so-called cognitive elite—highly educated, high-IQ people—from believing and sharing false claims?

The short answer is that one cannot discern the truth value of an online statement from the information and details contained within the post itself, if the post is deliberately intended to mislead readers. Humans are notoriously poor lie detectors, as is demonstrated every time people believe a dubious online claim. Consider a recent claim that the Saudi Arabian women’s judo champion, Tahani Al-Qahtani, died of a heart attack brought on by the bullying she suffered after losing her Olympic match to her Israeli opponent. It is true that Al-Qahtani lost the match to her Israeli opponent in the 2020 Olympics (held in July 2021). But she never had a heart attack and is still very much alive, leading Twitter to label the claim of her death as false. This is not something that even a highly intelligent or well-educated person could discern from the tweet itself; to recognize its falsity, readers had to be privy to information that went beyond the tweet. That is, nothing in the tweet itself would allow a smart, educated person to outperform someone with less cognitive ability in detecting its falsity.

A screenshot from Twitter showing the false tweet, written in Arabic with an English translation below, claiming that the athlete had died.

An internet search of other 2020 Olympic athletes reveals similarly false assertions, including the claim that the renowned gymnast Simone Biles lost her concentration because the Olympic Committee’s governing board forbade her from taking her medication for attention-deficit/hyperactivity disorder (ADHD). This claim was later shown to be false. (She had not taken ADHD medication for five years, and it had no bearing on her performance.) There is no validated textual analysis that would reveal the falsity of this claim.

A screenshot of internet search results containing the false claim that Simone Biles made poor moves because she was unable to focus without her ADHD medication.

This phenomenon extends into virtually every domain. A search for terms such as vaccine risk, Sandy Hook, election fraud, and climate change will yield thousands of similar claims that not only lack scientific support but often contradict all available scientific evidence: See, for example, Alex Jones’ notorious claim that the massacre of 26 children and their teachers at Sandy Hook Elementary School in Newtown, CT, was a “giant hoax” staged by actors; or the claim that during the 2020 presidential election the Dominion voting system was rigged in favor of Joe Biden; or the claim that stem-cell therapy cured famed ice hockey star Gordie Howe. Regarding this last example of reliance on unproven medical therapies, only 3 out of 2783 tweets following Howe’s treatment with stem cells after his stroke acknowledged the absence of scientific support for direct-to-consumer stem-cell treatments. Such false claims are no easier to detect than other types of falsehoods and, as already noted, humans are very poor lie detectors. Psychometric intelligence does not appear to afford an advantage in lie detection, regardless of whether lies are encountered offline or online. This is consistent with views that distinguish the cognitive ability required to perform well on intelligence tests from the skills required to excel at rational reasoning—they are quite different (see Stanovich & Stanovich, 2010).

A screenshot of a tweet from the account Shayne in the Blockchayne claiming that hockey legend Gordie Howe recovered from his strokes at age 86.

Detecting Fake News

The problem of detecting fake claims is rooted in our ancestrally essential, recurrent need to identify honest agents and reciprocators. The resulting ability is therefore quite limited and domain-specific, because it searches for signs of dishonesty that might benefit one side in a social exchange. Whatever advantage this ability confers, it has no value in detecting falsity in situations in which incorrect information has no obvious motivational value for its purveyor (Cosmides et al., 2010). Of course, there may be less-obvious advantages associated with crafting and disseminating inaccurate information, such as indirect benefits for the authors of deliberately false claims.

So, for instance, fake news that has no obvious value for its purveyor is difficult to recognize (e.g., only 40% of respondents correctly detected the falsity of the claim that “chemosynthesis is the name of the process by which plants make their food”). In contrast, consider fake news for which readers’ real-world knowledge can help them decide the truth value, and for which they can draw on a battery of inferential tools that include inferences about the motives of the purveyor. For example, the false claim “Trump to Ban All TV Shows that Promote Gay Activity Starting with Empire as President” was correctly recognized as fake news by 82.2% of respondents (Pennycook et al., 2018).

Fake claims on social media are not limited to sporting news or trivia, of course. The majority of Americans now rely on social media for their science news, as we show below. This represents an enormous shift away from traditional sources of news that are filtered and curated by the mainstream media. And it presents a formidable challenge when trying to sort the wheat from the chaff, lest scientific findings be misconstrued. For instance, several of the claims below assert that the Pfizer mRNA vaccine leads to female infertility, and several make the opposite claim. The same is true of claims and counterclaims about vaccine safety for children. Deciding which claims are supported by scientific or medical evidence is not as easy as one might hope. Once again, there is little in the tweets themselves that can help with this task.

It is tempting to conclude that those who uncritically accept misinformation are lower on measures of general intelligence, but the evidence for such a conclusion is weak at best, and examples abound of highly educated readers falling prey to online financial and medical scams. Consider the following posts that were generated by recent searches for COVID safety, and multiply their number by several orders of magnitude to get a sense of how widespread posts containing misinformation are.

A screenshot of internet search results listing websites that discuss whether the COVID-19 vaccine causes side effects.

Cognitive and Non-Cognitive Factors Influencing Detection of Fake News

How do readers decide which claims are based on solid evidence and which are demonstrably false? And what role, if any, do cognitive and non-cognitive factors play in their decision process? Do readers downrate sources that do not sound reputable (e.g., by preferring scientific or medical sources)? Are they skeptical of sources that contain extremist-sounding language, sources that fail to conform to their prior beliefs and ideology, or sources that evoke doubts about ulterior motives?

Before delving into these questions, we note that there are many studies showing that the challenges of recognizing fake news are not limited to those who are poorly educated. Even members of the professoriate are not immune to making these errors, including social scientists, who have been shown to be prone to committing ideologically motivated reasoning errors when they are asked to judge the accuracy of claims (Ceci et al., 2021). For example, social scientists downrate the validity of research proposals or scientific articles when they are led to believe that the authors of the proposals and articles hold ideological beliefs contrary to their own, even when the actual content is identical except for the authors’ alleged political alignment. Relatedly, scientists downrate the quality of findings that are methodologically identical, except that one purports to find in favor of a liberal aim, and the other purports to find in favor of a conservative aim (Clark & Winegard, 2020; Ceci et al., 2021). This finding is noted in recent reviews based on decision-making about scientific findings from hypothetical experiments that are identical except for their ideological tilt: “people are more critical of scientific evidence (e.g., Campbell & Kay, 2014; Munro & Munro, 2014), and, at a computational level, make more mistakes with numeric (Kahan et al., 2017) and logical reasoning (Gampa et al., 2019) when the conclusions, outcomes, or consequences are politically inconvenient than when they are ideologically desirable” (Clark & Winegard, 2020).

Thus, even highly educated individuals are not immune to the lure of false postings on social media. Social scientists are often members of homogeneous social networks and exchange information primarily with those who share their sociopolitical orientation; they are also motivated to be particularly skeptical of comments that run counter to their ideology. Sternberg (2005) has argued that smart people are in fact more susceptible to being foolish, because they do not believe that they can be. More about this below.

Misinformation, Disinformation, Gullibility, and Suggestibility

To address the issue of vulnerability to fake news, there are four constructs that need to be distinguished: misinformation, disinformation, gullibility, and suggestibility. Cognitive scientists have studied these constructs for many decades, including in our lab at Cornell University. The first two refer to the nature of information that is presented, whereas the third and fourth constructs describe a listener’s or reader’s proneness to incorporate information into their reports and possibly into their belief systems.

Misinformation is the presentation of invalid information. This presentation may or may not be intentional on the part of the speaker or writer. For example, a speaker may unwittingly convey misinformation to a listener, repeating invalid information that is honestly believed to be valid. Misinformation exists independently of the proneness of the listener to believe it. On the other hand, a speaker may knowingly present invalid information in an attempt to influence a listener; this is referred to as disinformation. Disinformation is the deliberate provision of false information in an attempt to mislead others, and it too exists independently of a listener’s gullibility. Thus, misinformation encompasses all forms of inaccurate information, regardless of the beliefs of the writer or speaker, whereas disinformation is restricted to the subtype of misinformation that is deliberately and knowingly false.

In contrast to distinctions based on the nature of information (valid vs. invalid claims) and motives (claims that are deliberately inaccurate or not), the construct of suggestibility has to do with the listener’s or reader’s likelihood of adopting claims made by others, regardless of their validity or the motives of their source. Some individuals are more suggestible than others when confronted with claims, regardless of whether the claim is accurate, unknowingly invalid, or deliberately invalid. This can be seen in numerous experiments that show individual differences in incorporating information from a wide range of sources, from subtle suggestions and leading questions to blatantly false claims. But IQ and education are not the drivers of this vulnerability, because it is largely an automatic, non-cognitive process, what Kahneman (2011) refers to as System 1 processing, which does not benefit from the conscious attention or limited-capacity resources that the cognitive elite might possess in abundance. Even if the initial exposure to a false claim raises some suspicion, once it has been encountered it can take root in our belief system (Corneille et al., 2020; Pennycook et al., 2020).

An example of suggestibility comes from Loftus and Palmer’s (1974) classic experiment, in which witnesses watched a film of an auto accident and were then asked to recollect the vehicles’ speed prior to impact with questions containing various suggestive verbs: “About how fast were the cars going when they (smashed/collided/bumped/hit/contacted) each other?” Those who were asked how fast the cars were traveling when they smashed into each other estimated that they were going significantly faster than witnesses who were questioned about how fast they were traveling prior to contacting or hitting each other. More suggestible individuals were especially likely to give higher speed estimates when a verb like smashed was used; this was an automatic response influenced by the semantics of the verb. Not all individuals are equally suggestible, but measures of cognitive aptitude are not good predictors, and in many studies there is no correlation at all once the lowest-functioning individuals are excluded.

Gullibility is a related construct with an important difference. It is a failure of social intelligence in which a person is easily manipulated into an ill-advised act. It is closely related to credulity, which is the tendency to believe unlikely propositions that are unsupported by evidence.

The willingness to believe fake news may in some cases result from non-cognitive factors like credulity. Pennycook et al. (2015) showed that endorsement of fake news headlines that appeared on Facebook was related to individuals’ credulity. For example, when presented with meaningless statements, some people believe they reflect deep insights. (Anyone familiar with Deepak Chopra’s writings—see examples below—may have wondered whether they somehow missed the deeper meaning of what appear to be nonsensical statements that others seem to appreciate.) Consider the following statements taken from Pennycook et al.’s (2015) Bullshit Receptivity Index.

A screenshot of a meaningless, pseudo-profound statement of the kind that credulous readers judge to reflect deep insight.

Source: wisdomofchopra.com

A screenshot of a nonsensical, pseudo-profound statement of the kind that credulous readers judge to reflect deep insight.

Source: twitter.com/deepakchopra

The type of credulity that accepts the cogency of vapid statements is a component of Pennycook et al.’s Bullshit Receptivity Index. This type of credulity might also be instrumental in persuading some people to accept unwarranted medical and scientific assertions that ignore base rates, lack appropriate control groups, and fail to consider contrary evidence. Such flawed reasoning has been omnipresent during the COVID-19 pandemic, with tragic consequences.

Intelligence, Cognitive Biases, and Online Fake News

A great deal of research on cognitive biases reveals that members of the so-called cognitive elite (i.e., those who are highly educated or have high IQs; we note that some people find this term elitist in itself) are spared from the worst cognitive ravages caused by many forms of bias. These include hindsight bias, confirmation bias, anchoring, framing, the conjunction fallacy, overconfidence, and the gambler’s fallacy (see Kahneman, 2011, for descriptions of these biases): “We have repeatedly observed this tendency in our lab for over two decades now (see Stanovich, West, and Toplak 2016 for a review of the evidence), and our finding has been replicated in numerous experiments conducted by other researchers” (Stanovich, 2021; see also Ceci, 1996). In short, resistance to most types of bias is correlated with individual differences in cognitive ability (Aczel et al., 2015; Bruine de Bruin et al., 2007; Finucane & Gullion, 2010; Klaczynski, 2014; Parker & Fischhoff, 2005; Parker et al., 2018; Stanovich, 2021; Weaver & Stewart, 2012; Weller et al., 2018).

However, there is an interesting and important disjunction in this work. Notwithstanding the role that intelligence plays in resisting the above forms of cognitive bias, there is one type of bias for which intelligence seems to play a very limited role: The cognitive elite are not spared when it comes to what is known as “myside bias.” This bias is important for recognizing fake news and resisting misinformation. Myside bias is related to confirmation bias but goes beyond it. It reflects a biased tendency when (a) searching for, (b) assimilating, and (c) evaluating evidence, as well as (d) biased reconstruction of these undertakings (e.g., Clark et al., 2019; Ditto et al., 2019; Epley & Gilovich, 2016; Taber & Lodge, 2016). Thus, myside bias encompasses the biased search and idea-generation process, which is not central to confirmation bias, while also including the biased evaluation that is central to it. Unlike other forms of cognitive bias that largely spare the cognitive elite, myside bias affects them just as much, including social scientists: “(myside bias) is the bias where the cognitive elites most often think they are unbiased when in fact they are just as biased as everyone else” (Stanovich, 2021, p. xi).

On its face, myside bias would seem to be involved in succumbing to invalid claims. As noted, Stanovich et al. (2013) report that myside bias, in which people evaluate evidence, generate evidence, and test hypotheses in a manner that favors their own opinions and attitudes, is unrelated to intelligence—correlations approach zero across the wide ability range found on a public college campus (see also Klaczynski, 1997; Klaczynski & Lavallee, 2005). Even eminent scientists can fall prey to ideologically driven myside bias, as we show below. Myside bias operates in an insidious manner. It influences reasoners’ “intuitive likelihood” estimation, which in turn influences their decision to accept or reject fake news. For example, those who believe that promoting the use of condoms is immoral are less likely to believe that condoms are effective at preventing pregnancy and sexually transmitted diseases. Similarly, the more strongly people believe that coercive interrogation of terrorists is immoral, the less likely they are to believe that it leads to accurate disclosures by those being brutally interrogated.

In other words, there is a human tendency to lower the costs of the moral commitments we endorse, which in turn leads to acceptance of false claims and fake news consistent with our beliefs. This explains why, when Stanovich and Toplak (2019) asked participants to generate arguments about controversial issues such as the sale of human organs, participants gave many more reasons that aligned with their position than reasons that ran counter to it; this form of myside bias was uncorrelated with cognitive aptitude. Perkins et al. (1991) had earlier published a related finding: although subjects with higher intelligence generated more arguments overall during an argument-generation task, they did not generate more arguments against their personal position, an early demonstration that myside bias is uncorrelated with intelligence. This is one of myriad studies using an argument-generation paradigm to reveal a myside bias in which participants rated arguments aligned with their personal views as superior to those that were misaligned. As Stanovich (2021) points out in his review, in all of these studies myside bias was just as evident in participants possessing high intellectual ability as in less-intelligent participants.

Readers may be surprised to learn that myside bias is uncorrelated with cognitive ability, given that the latter is a strong predictor of a wide range of cognitive outcomes. The key to understanding why can be seen in Klaczynski and colleagues’ experiments (Klaczynski, 1997; Klaczynski & Lavallee, 2005; Klaczynski & Robinson, 2000). Their subjects were given hypothetical experiments that contained reasoning flaws (e.g., base-rate neglect, sample limitations) and conclusions that were either opinion-consistent or opinion-inconsistent. Klaczynski and his colleagues examined the quality of reasoning when subjects critiqued the flaws in these hypothetical experiments and showed, as might be expected from the larger literature, that cognitive ability was a significant predictor of subjects’ overall quality of reasoning in both the opinion-consistent and opinion-inconsistent conditions. However, subjects critiqued opinion-inconsistent results far more vigorously than opinion-consistent results; this gap is the myside bias. So, it is not that cognitive ability plays no role in reasoning tasks; rather, myside bias is found across the full cognitive spectrum of participants in typical psychology experiments on state college campuses. It is also highly domain-specific, surfacing in some domains but not in others. Thus, myside bias in one situation is not a reliable predictor of myside bias in another.
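To make this disjunction concrete, here is a minimal simulation sketch in Python. All parameters are invented for illustration (this is not Klaczynski’s data): ability drives overall critique quality in both conditions, while the gap between conditions, the myside bias, varies independently of ability.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented, illustrative parameters (not Klaczynski's data).
ability = rng.normal(100, 15, n)        # IQ-like cognitive ability scores
myside_gap = rng.normal(1.0, 0.5, n)    # extra scrutiny applied to opinion-inconsistent results

# Critique quality in the two conditions: ability helps in both,
# but opinion-inconsistent results draw additional (biased) scrutiny.
quality_consistent = 0.05 * ability + rng.normal(0, 1, n)
quality_inconsistent = quality_consistent + myside_gap

# Ability predicts reasoning quality in BOTH conditions (r around .55-.60)...
print(np.corrcoef(ability, quality_consistent)[0, 1])
print(np.corrcoef(ability, quality_inconsistent)[0, 1])

# ...yet the myside bias itself (the gap between conditions) is essentially
# uncorrelated with ability (r near 0).
print(np.corrcoef(ability, quality_inconsistent - quality_consistent)[0, 1])
```

Under these assumptions, the difference score that indexes myside bias is statistically independent of ability even though ability strongly predicts performance in each condition, mirroring the pattern described above.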

Suggestibility Versus Gullibility

When it comes to suggestibility, researchers have not consistently found significant correlations between cognitive measures (IQ, the Cognitive Failures Questionnaire) and suggestibility. The most suggestible individuals tend to perform as well as the least suggestible ones on intelligence measures, at least if we exclude very low-IQ individuals (Merckelbach et al., 1998). The late intelligence researcher James Flynn described a legal case he consulted on in which a low-functioning young man was extremely gullible (Flynn, informal talk at Cornell University, 2008): other men showed him how to hot-wire a car and asked him to bring a neighbor’s car to them so they could test it, and he complied with what he took to be an innocent request. In their seminal work, Gudjonsson and Clark (1986) found that intelligence does not affect suggestibility when the participant’s IQ score is within the low-average to above-average range, a finding that has been replicated by others. For example, Richardson and Kelly (1994) found that among average and above-average adolescent offenders there was no correlation between suggestibility and IQ scores; the only group showing a significant correlation were those who scored below average. Sondenaa et al. (2010) did report significant correlations between the various Wechsler scales and total suggestibility. As seen in Table 14.1, the lowest-IQ individuals tend to have the highest suggestibility scores, hence the negative sign of the correlations between intelligence and suggestibility. But the magnitude of the effect was small and driven by the lowest-IQ individuals.

Table 14.1 Data from Sondenaa et al. (2010). All correlations are significant at p < 0.05
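The role of the lowest-scoring individuals can be illustrated with a brief simulation. The following Python sketch uses invented parameters (not Sondenaa et al.’s data) to show how a modest negative IQ–suggestibility correlation can be produced entirely by an elevated low-IQ tail, and how it vanishes once that tail is excluded.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical parameters (not Sondenaa et al.'s data): suggestibility is flat
# across most of the IQ range but elevated in the lowest-functioning group.
iq = rng.normal(100, 15, n)
suggestibility = rng.normal(10, 2, n)
suggestibility[iq < 70] += 4  # only the lowest-IQ group is more suggestible

# Full sample: a small but genuine negative correlation...
print(np.corrcoef(iq, suggestibility)[0, 1])  # roughly -0.10

# ...which disappears once the lowest-IQ individuals are excluded.
mask = iq >= 70
print(np.corrcoef(iq[mask], suggestibility[mask])[0, 1])  # close to 0
```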

Is Actively Open-Minded Thinking (AOT) Protective?

Stanovich and Toplak (2019) reviewed the evidence supporting the value of what they term Actively Open-Minded Thinking (AOT) in resisting biased claims. AOT refers to the tendency of subjects to consider evidence that goes against their beliefs, to delay closure during problem-solving, and to engage in reflective thought: “Actively open-minded thinking (AOT) is … the willingness to consider alternative opinions, the sensitivity to evidence contradictory to current beliefs, the willingness to postpone closure, and reflective thought.” They reported that AOT is a strong predictor of performance on various reasoning and biases tasks, including rejection of superstitious thinking and avoidance of conspiracy theories, both of which characterize much of fake news.

The absence of AOT is correlated with various types of reasoning fallacies, the most relevant of which is the acceptance of fake news. Pennycook et al. (2018) demonstrated a fascinating but troubling phenomenon: Participants in their large-scale experiments were willing to believe fake news headlines taken from actual Facebook posts as long as they were not outrageously false (e.g., claims that the earth was a perfect square). A single presentation of fake news increased its perceived plausibility a week later. Notifying participants of its falsity was not sufficient to dissuade them. These were well-educated and otherwise smart individuals, yet they readily succumbed to false information.

Concluding Thoughts

The situation facing users of social media is that online posts often contain misinformation that is not readily detected, even by highly educated, cognitively sophisticated users. And this problem is not confined to news about athletics and entertainment; it includes medical and scientific news as well. In a 2015 Pew survey of 2000 Twitter and Facebook users, Barthel et al. (2015) found that 63% of these users obtained news, including science news, through social media, with many reporting that they get their news exclusively from social media rather than through the more responsibly curated news coverage of the traditional mainstream media: “The rise in the share of social media users getting news on Facebook or Twitter cuts across nearly every demographic group” (Barthel et al., 2015, p. 2). Reddit’s Ask Me Anything, which has over 11 million readers, is now the single most likely place for non-scientists to learn about breaking medical and scientific news.

Thus, the search for interventions to resist fake news has high stakes and significant implications for society. Given the Actively Open-Minded Thinking (AOT) findings, interventions that teach readers how to avoid early closure and consider alternative views are a promising place to start. A key aspect of dodging reasoning traps must be to encourage readers to challenge their own views and consider views that differ from their own, especially ideologically divergent ones. There is no demonstrated way to do this at present, because such openness emerges from long periods of reflection. However, one place to start is with the awareness that liberals and conservatives have been shown to rely on different moral foundations when reasoning (Graham et al., 2009); there may even be a biological component to reliance on specific moral foundations (Haidt, 2012). Research demonstrates the value of appealing to opponents’ moral foundations when attempting to persuade them (Haidt, 2012).

What does this discussion imply for our conceptions of intelligence today, and for how these ideas should evolve to encompass the novel demands of life in our rapidly changing era? Few would argue with the statement that succumbing to fake news, and then sharing and acting upon it, is a widespread and substantial threat to democracy and well-being. Stanovich has argued for AOT as a way to address myside bias, which afflicts the cognitively able (and others) and reduces the quality of their reasoning. The degree to which myside bias affects the highly educated versus, say, people with high practical intelligence who are less educated is an open and interesting question, and intelligence can and should be distinguished from rational thinking (see Stanovich & Stanovich, 2010, for a conceptual analysis of the difference between psychometric intelligence and rational thinking). The key point is that myside bias affects everyone.

Acting with intelligence requires that we modify our beliefs in the face of credible data, and thus it falls upon us all to engage in Actively Open-Minded Thinking or related techniques to combat our own biases and develop our ability to see both sides and even-handedly assess the content of potential fake news. In addition, it behooves us to reconsider our definitions of intelligence to include the ability to avoid myside bias and to fully appreciate all sides of an argument or position, even a politically charged one for which the “correct” side seems obvious. One unfortunate aspect of life within the modern university is that its faculty often reflexively believe that because they have high levels of intelligence, they are unqualifiedly excellent at detecting fake news. However, as we have shown, the research does not support this belief, and consequently we find ourselves with centers of learning led by individuals who may unwittingly be a key part of the problem itself.