Journalistic fact-checking is an important new form of political news coverage (e.g., Spivak 2011; Graves 2016). However, little is known about its effects on citizens. Do they accept fact-checks that conflict with their political affiliations or shrug off those that contradict the claims of their preferred candidates? These questions have important implications for debates over citizen competence and the quality of governance in democracies (e.g., Hochschild and Einstein 2015; Jamieson 2015).

Concerns about people’s willingness to accept unwelcome factual information like counter-attitudinal fact-checks have become so widespread that Oxford Dictionaries named “post-truth” the word of the year after the 2016 U.S. elections (BBC 2016). These concerns are well justified. Some research indicates, for instance, that people can be highly resistant to journalistic fact-checks. Nyhan and Reifler (2010) find that corrective information in mock news articles frequently fails to reduce salient misperceptions and can even increase the prevalence of misperceptions among ideologically vulnerable groups compared to those who read an article with no correction—a “backfire effect.” Other studies using relatively balanced formats have also found stiff resistance to uncongenial journalistic fact-checks (e.g., Nyhan et al. 2013; Garrett et al. 2013; Jarman 2016). Citizens may be especially resistant to unwelcome fact-checks during campaigns, which frequently stimulate motivated reasoning (e.g., Lenz 2012).

By contrast, other studies find that fact-checking and other types of factual information can partly overcome directionally motivated reasoning and reduce, or “debunk,” misperceptions (e.g., Weeks 2015; Nyhan and Reifler N.d.; Wood and Porter 2018; Chan et al. 2017). Notably, Wood and Porter (2018) examine 52 issue areas and observe no evidence of backfire effects. They do, however, find widespread evidence of motivated reasoning—for approximately 80% of issues tested, responsiveness to corrective information varied by ideology (unlike Nyhan and Reifler N.d.).

We thus confront conflicting expectations about how people might respond to journalistic fact-checks during a general election campaign. Citizens may resist fact-checks that conflict with their partisan or ideological commitments and maintain (or even strengthen) their misperceptions. Alternatively, people might accept journalistic fact-checks and update their beliefs to be at least somewhat more accurate.

We present results of two studies conducted during the 2016 U.S. presidential campaign that help illuminate this debate. Both studies examine actual misstatements that were made by candidates and their proxies and fact-checked by the media during the campaign—a time when partisan commitments are activated and the influence of partisan leaders is likely to be especially strong (e.g., Zaller 1992; Lenz 2012). Specifically, Study 1 is a preregistered survey experiment that evaluates the effects of a journalistic fact-check of misleading claims about crime made by Donald Trump at the GOP convention. To increase the realism of the study’s evaluation of the effects of journalistic fact-checking in a campaign, Study 1 also includes experimental conditions in which a political elite attempts to denigrate and undermine the fact-check in question. Study 2 tests the effect of fact-checking a claim Trump made about unemployment during the first presidential debate among subjects experimentally induced to have watched the debate.

The design of these studies was intended to address two potential explanations for conflicting findings in the literature. In many cases, respondents in survey experiments may lack motivation to engage in effortful resistance to unwelcome factual information about relatively obscure topics or lack sufficient context to form counter-arguments. The lack of such motivations or context could explain the null or mixed results described above.

To address the first concern, we administered both studies at the height of a presidential general election campaign and designed them to maximize partisan directional motivations. In Study 1, we tested corrections of a presidential candidate’s convention acceptance speech shortly after it was delivered; in Study 2, we tested corrections of a candidate on the night of a presidential debate among a sample that was encouraged to watch the debate. In addition, while many facts are not the subject of political controversy, we tested fact-checks of claims that one candidate (Trump) used to criticize the other (Clinton), a context in which directionally motivated reasoning may be common.

Second, given that counter-arguing of unwelcome political information may be greater when relevant considerations are available, the first study experimentally manipulates the availability of messages denigrating the fact-check. Specifically, we randomly paired a fact-check of a Trump statement with a statement by Trump campaign chairman Paul Manafort denying the correction and, in a further condition, an additional Manafort statement ascribing a political motivation to the FBI, the source of the data in the fact-check. These messages should reduce the cognitive demands of counter-arguing for low-information respondents and provide additional considerations on which high-information Trump supporters can draw, increasing the realism of the information environment in which the fact-check is delivered and the likelihood of observing motivated resistance.

The design of these studies also allows us to investigate other important theoretical questions about the effects of fact-checking. First, we consider whether people are willing not only to revise their factual beliefs in response to a fact-check but also to change their attitudes toward the candidate whose claim has been fact-checked—an effect that would likely increase the reputational threat that fact-checking poses to politicians (see, e.g., Nyhan and Reifler 2015).Footnote 1 In addition, our first study tests whether people accept attitude-inconsistent information but instead shift their interpretations of that information in a directionally motivated manner. For instance, Gaines et al. (2007) find that Democrats and Republicans updated their beliefs about the Iraq war relatively accurately over time; it was their interpretations of those facts that diverged along partisan lines. Khanna and Sood (2018) similarly find that incentivized respondents provide more correct answers but perceive greater unfairness in the study when doing so.

Our results indicate that exposure to fact-checks reduced misperceptions among supporters of both major party presidential candidates but did not affect attitudes toward those candidates. In Study 1, exposure to fact-checking reduced misperceptions about crime rates even when respondents were provided a message by a Trump staffer disparaging the fact-check. Similarly, providing a fact-check of Trump just after a debate in Study 2 reduced misperceptions about unemployment in Michigan and Ohio, even among Trump supporters. In short, journalistic fact-checks can overcome directionally motivated reasoning and bring people’s beliefs more in line with the facts even when the counter-attitudinal information is disparaged by a co-partisan. However, neither Clinton nor Trump supporters changed their attitudes towards either candidate after receiving fact-checks, suggesting that voters’ preferences during a presidential election are not contingent on their perceptions of the factual accuracy of the candidates. In other words, factual corrections can achieve the limited objective of creating a more informed citizenry but struggle to change citizens’ minds about whom to support.

Theory and Hypotheses

Our theoretical approach builds on research suggesting that people have competing goals in information processing (e.g., Kunda 1990; Molden and Higgins 2005). When accuracy goals are stronger (e.g., when respondents are rewarded for each correct answer, as in Hill 2017), people process information more dispassionately and seek to maximize the accuracy of their beliefs. When directional motivations are more salient (e.g., party identity or candidate preference), respondents instead tend to process information in a manner consistent with that preference rather than maximizing accuracy or weighing information dispassionately (Taber and Lodge 2006; Bolsen et al. 2014).

These findings suggest that directionally motivated reasoning may be an especially salient factor when people process factual information about controversial political issues in a highly partisan context such as a presidential election campaign. As discussed above, however, researchers have reached mixed conclusions about the extent to which directionally motivated reasoning affects belief updating in response to fact-checking in such contexts (e.g., Nyhan and Reifler 2010; Nyhan et al. 2013; Garrett et al. 2013 vs. Weeks 2015; Nyhan and Reifler N.d.; Wood and Porter 2018; Young et al. 2017; Porter et al. 2018). Other research suggests that directional motivations may have greater influence on interpretations of contested facts (Gaines et al. 2007) than on factual beliefs themselves; we also consider this possibility.

To adjudicate among these claims, we test three preregistered hypotheses based on the empirical literature and theoretical considerations discussed above.Footnote 2 In each case, we determine whether information is pro- or counter-attitudinal by whether respondents indicate supporting Trump or Hillary Clinton on a pre-treatment vote choice measure.

First, we propose to test whether fact-checking is ineffective at increasing the accuracy of factual beliefs among people for whom it is counter-attitudinal (H1; see Nyhan and Reifler 2010) or whether fact-checks increase belief accuracy but acceptance is greater among those for whom the information is pro-attitudinal (H2; see Wood and Porter 2018).Footnote 3

H1 (motivated resistance):

Respondents will resist unwelcome facts about controversial issues. As a result, people exposed to journalistic fact-checks that are counter-attitudinal will not come to hold more accurate views. In some cases, their views could even become more inaccurate. (Evaluated in Studies 1 and 2.)

H2 (differential acceptance):

Respondents will accept journalistic fact-checks that are counter-attitudinal and update their beliefs to become more accurate, though the extent to which they update their beliefs may vary based on their prior attitude. (Evaluated in Studies 1 and 2.)

We also consider the effects of fact-checks on “interpretations” of factual claims. We define the term consistently with Gaines et al. (2007), who use it to include evaluations (“the crime rate is high”), explanations (“crime has increased because of a decline in moral values”), and inferences (“Obama’s acceptance of this crime rate reveals indifference to our plight”). In this context, respondents may be willing to accept a fact-check that contradicts a favored politician but form an attitude-consistent interpretation of the information they have accepted. For instance, if conservative respondents accept a correction of Trump’s claim about an increase in violent crime, they might ascribe the decline in crime to a factor consistent with their beliefs (e.g., tougher policing and longer sentences). We therefore propose to test whether interpretations of factual claims are formed and updated in a directionally motivated manner (H3; see Gaines et al. 2007).

H3 (attitude-consistent interpretation):

Respondents will accept journalistic fact-checks that are counter-attitudinal, but interpret them in an attitude-consistent manner. (Evaluated in Study 1 only.)

Finally, as noted above, we also evaluate a key research question—will people not only revise their factual beliefs, but also alter their attitudes toward a candidate who has made a false or unsupported claim? We therefore also measure attitudes toward the candidate, including vote preference.Footnote 4 The voluminous literature on candidate evaluation and vote choice in presidential elections provides unclear expectations. For instance, Funk (1999) finds that perceived traits such as honesty condition overall evaluations of candidates in the expected direction—candidates who are perceived as having integrity are more warmly regarded. However, Rahn et al. (1990) and Pierce (1993) find no relationship between perceived traits and vote choice, even among sophisticated participants who are thought to be capable of connecting potentially distal considerations. Accordingly, we have no clear expectation about the effect of fact-check exposure on subsequent candidate evaluations.

Stimuli

To maximize the external validity of our experiments, we used actual candidate statements from the 2016 presidential election. We focus on misleading claims made by Donald Trump in two high-profile appearances—a suggestion of dramatically rising crime in his nomination acceptance speech (Study 1) and a claim about the loss of manufacturing jobs during the first presidential debate (Study 2). Both claims were fact-checked by journalistic outlets at the time, and the treatments we use in the studies are designed to match the fact-checks published for the corresponding misstatements.Footnote 5

For instance, our fact-check of Trump’s convention statements about the prevalence of violent crime in Study 1 closely follows the approach used by journalistic fact-checkers. Here is the Associated Press fact-check of the crime claims Trump made in his convention speech (Sullivan and Chad 2016):

THE FACTS: Violent crime has dropped dramatically since the early 1990s.

According to FBI data, the national violent crime rate last peaked in 1991 at 758 reported violent crimes per 100,000 people. In 2014, the latest year for which full data is available, the rate was 366 per 100,000 people.

Our fact-check for the same issue in Study 1 is very similar:

According to FBI’s Bureau of Justice Statistics, the violent crime rate has fallen dramatically and consistently over time. According to their estimates, the homicide rate in the U.S. in 2015 was half that recorded in 1991.

Study 2 considers the effect of fact-checking Trump’s claims about job loss in Michigan and Ohio during the first presidential debate. Politico’s “wrongometer” wrote the following in response to that claim:

In fact, the state’s unemployment rate has declined in recent years, according to the Bureau of Labor Statistics. That figure now stands around 4.5 percent, down from a 14.9 percent unemployment rate in June 2009 (Politico 2016).

Similarly, the New York Times wrote the following about Trump’s claim:

Ohio and Michigan have, indeed, suffered major manufacturing job losses over the past generation. But in the past year, Ohio has gained 78,300 jobs, and Michigan has gained 75,800 jobs. In August, the unemployment rate was 4.9 percent in Michigan and 4.7 percent in Ohio, both in line with the national rate (New York Times 2016).

Our fact-check for this same issue (described further below) reads as follows:

In fact, according to the Bureau of Labor Statistics, unemployment has fallen in both Michigan and Ohio. Both states each saw 70,000 new jobs over the last year.

We believe our treatments are faithful representations of the type of journalistic fact-checking that is now widely disseminated by the media. In other words, our experiments are not a test of authoritative and logically infallible attempts to debunk false claims; they are instead an analysis of how journalistic fact-checks affect mass beliefs and attitudes.Footnote 6

Study 1: Crime Perceptions

Study 1 tested the effects of journalistic fact-checks about changes in levels of crime over time. Respondents were randomly assigned to one of five conditions. In a control condition, respondents read an article without any political content. Participants in the treatment conditions read a mock news article featuring misleading claims by Donald Trump about crime rates based on an actual article that originally appeared on CNN.com. In one treatment condition, participants read an article that omits any corrective information, allowing us to test the effect of exposure to a candidate’s statement alone. Others were assigned to read versions of the article that include neutral corrective information in the style of journalistic fact-checking, allowing us to test its effects versus a no-correction version of the article as well as the control condition. One condition tested the fact-check alone, while two other conditions tested whether elite political actors can cultivate resistance to factual information. In these conditions, the fact-check was followed by a statement from a Trump surrogate disparaging the validity of the information or a statement by the surrogate disparaging the validity of the information and attributing a political motive to the source of the information.

Experimental Design and Instrument

After a series of demographic and attitudinal questions, participants were assigned to one of five conditions.Footnote 7 The treatments were based on actual news events during the 2016 Republican National Convention. In his speech, Trump described an America beset by rising crime (uncorrected claim). As the media pointed out, however, these depictions were contradicted by FBI crime data showing a long-term secular decline (journalistic fact-check). When Paul Manafort, Trump’s then-campaign chairman, was pressed about this discrepancy (Schleifer 2016), he questioned the validity of FBI statistics (rejection of the fact-check) and suggested the FBI could not be trusted because it had not recommended indicting Hillary Clinton in her email scandal (conspiracy theory/fact-check source derogation).

The specific treatments shown to participants are as follows:

  • Control: A birdwatching article.

  • Rising crime message: A news article summarizing Trump’s claims.

  • Rising crime message + fact-check: A news article summarizing Trump’s claims with a fact-check citing FBI statistics.

  • Rising crime message + fact-check + denial: A news article summarizing Trump’s claims with a fact-check citing FBI statistics and a statement from Manafort rejecting the statistics.

  • Rising crime message + fact-check + denial + source derogation: A news article summarizing Trump’s claims about crime with a fact-check citing FBI statistics, a statement from Manafort rejecting the statistics, and Manafort’s contention that the FBI was not to be trusted.

Outcome Variables

After treatment, subjects were asked two article recall questions to measure receipt of treatment. We then measured several outcome variables of interest. To assess motivated resistance, we measured people’s beliefs about changes in the crime rate. We also asked about perceptions of the accuracy of federal crime statistics and of the treatment article, and whether (and how) the treatment article was biased. In addition, respondents were asked to choose among interpretations of crime trends, which were coded for belief consistency.Footnote 8 Finally, we measured evaluations of Trump and other politicians.

Results

Responses were collected on September 30, 2016 by Morning Consult (\(n=1,203\)) and on Mechanical Turk (\(n=2,983\)).Footnote 9 To assess our hypotheses, we estimated a series of OLS regressions with robust standard errors.Footnote 10 All treatment effect estimates are unweighted intent-to-treat effects.Footnote 11 A small percentage of respondents supported neither Trump nor Clinton on the pre-treatment vote choice measure; per our preregistration, we exclude them from subsequent analyses.Footnote 12

To evaluate motivated resistance, we test whether the marginal effect of exposure to the fact-check conditions on misperceptions about crime is null or positive for Trump supporters and whether the difference in effects relative to Clinton supporters is significant. To evaluate differential acceptance, we test whether the marginal effect of exposure to the conditions including a fact-check is negative for Trump supporters and whether the difference in treatment effects versus Clinton supporters is significant. We also measure source derogation and counter-arguing to understand how respondents react to fact-checks.
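Stated in regression terms, these tests correspond to the coefficients of an interaction specification. As a stylized sketch (collapsing the separate fact-check conditions into a single exposure indicator for exposition; the notation here is ours and simplifies the models reported below):

\(Y_i = \beta_0 + \beta_1 \text{FactCheck}_i + \beta_2 \text{TrumpSupport}_i + \beta_3 (\text{FactCheck}_i \times \text{TrumpSupport}_i) + \varepsilon_i\)

where \(Y_i\) is the crime misperception measure. The marginal effect of fact-check exposure is \(\beta_1\) for Clinton supporters and \(\beta_1 + \beta_3\) for Trump supporters. Motivated resistance (H1) implies \(\beta_1 + \beta_3 \ge 0\) among Trump supporters, while differential acceptance (H2) implies \(\beta_1 + \beta_3 < 0\) with \(\beta_3 > 0\).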

Table 1 presents the effects of the manipulation on beliefs about changes in crime where the control condition is the excluded category. These estimates are calculated among Trump and Clinton supporters (the latter are thus the excluded category for the Trump support indicator). The table also reports auxiliary quantities representing differences versus the uncorrected statement condition.

Table 1 Message exposure effects on beliefs about changes in crime

Our results indicate that journalistic fact-checking had a pronounced effect on factual beliefs. Though Trump’s supporters were more likely than Clinton’s to believe that crime had increased or not declined significantly over the previous ten years, corrective information reduced misperceptions among supporters of both candidates. Specifically, respondents exposed to FBI statistics about decreased crime reported significantly lower misperceptions compared to the uncorrected statement conditions in both samples regardless of the candidate they supported (\(p<.01\)). Exposure to Trump’s claim did not further increase misperceptions among his supporters.

Our results are inconsistent with motivated resistance. We do observe some evidence of differential acceptance, however. Both the fact-check denial on Mechanical Turk (\(p<.05\)) and the denial/source derogation in both samples (\(p<.05\) in Morning Consult, \(p<.10\) in Turk) reduce crime misperceptions less among Trump supporters than Clinton supporters.Footnote 13

These findings are illustrated in Fig. 1, which presents mean crime perceptions by condition and candidate support from the Morning Consult data. Mean beliefs about crime change among Trump supporters declined from 4.17 (out of 5) in the control and uncorrected statement conditions to 3.31 in the fact-check condition, 3.62 in the fact-check and denial condition, and 3.65 in the fact-check/denial/source derogation condition. Similar declines are observed among Clinton supporters.

Fig. 1 Crime perceptions by treatment condition and candidate support. Mean beliefs about crime change by candidate preference and experimental condition. Survey data from Morning Consult

To understand these responses, we examine how the treatments affect judgments of the accuracy and fairness of the articles. Table 2 demonstrates that fact-checks provoke different perceptions of accuracy and fairness among Clinton and Trump supporters. More specifically, this table presents results from statistical models available in Table C6 in Online Appendix C that estimate the effect of the manipulation on the perceived fairness and accuracy of the stimulus article and the perceived accuracy of federal crime statistics for Clinton and Trump supporters relative to the uncorrected condition.

Table 2 Message exposure effects on perceptions of accuracy and fairness

Relative to the uncorrected statement condition, Clinton supporters view the article as more accurate and fair when a fact-check is present. In three of four comparisons, they even view federal crime statistics as more accurate when Trump’s staffer questions them. By contrast, Trump supporters view the article as less accurate and fair when it includes a fact-check—a contrast with their reported factual beliefs, which became more accurate. Trump supporters are also less likely to view federal crime statistics as accurate when they are invoked in a fact-check, especially when questioned by a Trump staffer (\(p<.01\) in each denial condition vs. the uncorrected condition).

To illustrate these findings, we plot the means of the perceived accuracy of the stimulus article and federal crime statistics by condition and candidate support for the Morning Consult data in Fig. 2. When Trump supporters receive a fact-check, they are less likely to see the article or federal crime statistics as accurate (mean of 2.7 for both) compared to when they receive Trump’s uncorrected statement (means of 2.8 and 3.0, respectively). The opposite pattern is frequently observed among Clinton supporters, who view the article as more accurate when a fact-check is included (mean of 2.9 vs. 2.5). These reactions do not vary with the presence of fact-check denial or source derogation; they appear to be driven by the presence of a fact-check itself.Footnote 14

Fig. 2 Perceived accuracy of article and federal crime statistics by treatment condition and candidate support. Mean perceived accuracy and fairness of the stimulus article and perceived accuracy of federal crime statistics by candidate preference and experimental condition. Survey data from Morning Consult

The differences in perceptions of the articles that we observe can be interpreted as consistent with H3. However, as Table C11 in Online Appendix C shows, we do not find evidence that people interpret the changes in crime they perceive in a viewpoint-consistent manner (e.g., “[t]ougher policing and longer prison sentences” for Trump supporters who think crime has decreased).

Finally, we estimate the marginal effect of fact-checking on evaluations of Trump among Clinton and Trump supporters in Table 3. Specifically, we asked subjects to evaluate both candidates on a 1–5 scale ranging from “very unfavorable” to “very favorable.”

Table 3 Message exposure effects on Trump favorability

We find no significant effects of the fact-check on favorability toward Trump regardless of respondents’ candidate preference.Footnote 15

In sum, the evidence from Study 1 shows that, even if people are inclined to take a skeptical view of a fact-checking article and the data underlying it, fact-checks can still spur people to hold more factually accurate beliefs. However, these changes in belief accuracy do not seem to lead to corresponding changes in attitudes toward the candidate being fact-checked.

Study 2

One limitation of Study 1 is that we examined the effects of a fact-check several weeks after the misstatement it targeted was made. It is possible that subjects had already been exposed to fact-checking of the misstatements we studied and that this exposure limited the effects of the fact-check on attitudes toward the candidate. The design of Study 2 addresses this limitation because it was conducted immediately after the first 2016 presidential debate.

In the first wave of Study 2, 1,546 participants from Mechanical Turk were asked standard political and demographic questions as well as questions about their access to cable television and their media consumption. They were then instructed to watch the debate and told they would be invited to take a survey immediately after its conclusion.Footnote 16 As soon as the debate ended, participants were invited to complete a survey that included questions about the candidates’ debate performances and respondents’ general attitudes towards the candidates (Wave 2). It included the following statement from Trump:

Our jobs are fleeing the country to Mexico... they’re building some of the biggest, most sophisticated plants. Not so much in America. Thousands of jobs leaving Michigan, Ohio...their companies are just leaving, they’re gone.Footnote 17

The claim that large numbers of jobs were leaving Michigan and Ohio at the time due to factories being moved abroad is inaccurate. As fact-checkers pointed out on the night of the debate, employment had not fallen in either state. The New York Times fact-check pointed to the number of new jobs created in each state over the previous year (New York Times 2016). A fact-check by National Public Radio (NPR) of this Trump statement directed readers to the Bureau of Labor Statistics (BLS) for data confirming the fact check—and rebutting Trump (National Public Radio 2016).

After seeing Trump’s claim, respondents were randomly assigned with probability .5 to receive the following fact-check: “In fact, according to the Bureau of Labor Statistics, unemployment has fallen in both states. Both states each saw 70,000 new jobs over the last year.” Like The New York Times’ fact-check on the night of the debate, ours pointed to the number of new jobs created in each state over the previous year; and like NPR’s fact-check, it explicitly based the claim on BLS data.

All respondents then were asked: “Over the last few years, has unemployment gone up or down in Michigan and Ohio?” Subjects could respond on a five-point scale, from “Gone down a lot” to “Gone up a lot,” with “Stayed the same” as the middle category. (See Online Appendix A for exact wording.) This wave was closed by noon the next day. Five days later, we recontacted participants and measured perceptions of the debate’s winner and attitudes toward the candidates (Wave 3).Footnote 18

Study 2 allows us to test for both motivated resistance (H1) and differential acceptance (H2). Under H1 (motivated resistance), Trump supporters exposed to the fact-check would not only resist it, but come to hold more inaccurate views than uncorrected Trump supporters. The fact-check treatment undermined Trump’s claims about the economy and his opposition to foreign trade, which were central to his candidacy. The claim’s political importance and its clear contradiction by government data make it a useful test of partisans’ responsiveness to fact-checking.Footnote 19 Under H2 (differential acceptance), Trump and Clinton supporters exposed to the fact-check would both integrate this information, but Trump supporters would be less accepting of it.

Analysis

We begin with respondents’ beliefs about unemployment in Michigan and Ohio after the fact-check in Wave 2. Following Study 1, we estimate OLS models with robust standard errors that include indicators for fact-check exposure and Trump support in Wave 1 and an interaction term.Footnote 20 As in Study 1, we restrict our analysis to respondents who reported supporting Clinton or Trump in Wave 1. The outcome measure is coded so that higher values indicate belief that unemployment stayed the same or increased rather than decreased.
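As an illustration of this specification, the sketch below estimates such a model in Python using statsmodels on simulated data. The variable names and data are hypothetical, and the HC2 robust-variance estimator is an assumption chosen for illustration rather than a description of our exact replication code:

```python
# Illustrative sketch of the Study 2 analysis: OLS with an interaction
# between fact-check exposure and Trump support, heteroskedasticity-robust
# standard errors, and a test of the marginal effect among Trump supporters.
# All variable names and the simulated data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "fact_check": rng.integers(0, 2, n),     # 1 = randomly shown the fact-check
    "trump_support": rng.integers(0, 2, n),  # 1 = supported Trump in Wave 1
})
# Outcome: higher values = greater misperception (belief that unemployment
# stayed the same or rose). Simulated so the correction reduces misperceptions,
# slightly less so among Trump supporters.
df["misperception"] = (
    3.0
    - 0.8 * df["fact_check"]
    + 0.5 * df["trump_support"]
    + 0.2 * df["fact_check"] * df["trump_support"]
    + rng.normal(0, 1, n)
)

# OLS with robust (HC2) standard errors; '*' expands to both main effects
# plus the interaction term.
res = smf.ols("misperception ~ fact_check * trump_support",
              data=df).fit(cov_type="HC2")
print(res.summary())

# Marginal effect of the fact-check among Trump supporters: beta_1 + beta_3.
print(res.t_test("fact_check + fact_check:trump_support = 0"))
```

A significantly negative linear combination in the final test would indicate that the fact-check reduced misperceptions even among supporters of the candidate being corrected; a significant interaction coefficient alone would indicate differential acceptance.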

We present results in Table 4. The first model shows that the fact-check decreased misperceptions about unemployment in Michigan and Ohio among both Clinton and Trump supporters (\(p<.01\)). As with Study 1, the evidence from Study 2 does not support motivated resistance (H1). We also find no evidence of differential acceptance, contradicting H2. The second model shows that these results are consistent when we control for the orthogonal media manipulation.

Table 4 Message exposure effects on unemployment beliefs

Finally, Table 5 considers fact-check effects five days later (Wave 3). We again use OLS models with robust standard errors to estimate whether exposure to a fact-check affected perceptions that Trump won the debate, evaluations of his debate performance, and vote choice. We again include indicators for fact-check exposure and Trump support and an interaction term. We see no evidence that the fact-check affected these outcomes except for perceptions that Trump “won” the debate, which declined slightly among his supporters. However, vote choice was not affected. Echoing Study 1, fact-checking reduced misperceptions but had no discernible effects on participants’ candidate preferences, including supporters of the candidate who had been fact-checked.

Table 5 Fact-check effects at Wave 3

Conclusion

In two studies of fact-checking conducted during the 2016 presidential election, we find that people express more factually accurate beliefs after exposure to fact-checks. These effects hold even when fact-checks target their preferred candidate. In Study 1, we exposed participants to variants of an article covering claims Donald Trump made about crime. Trump supporters were willing to accept a fact-check of those claims and to update their beliefs, though we observe some evidence of differential acceptance (i.e., they revised their beliefs less than Clinton supporters did). Similarly, the accuracy of Trump supporters’ beliefs about unemployment increased in Study 2 after they saw a fact-check of a claim Trump made during the first debate. However, exposure to journalistic fact-checks did not affect attitudes toward him in either study. Ultimately, we find no evidence that changes in factual beliefs about a claim made by a candidate affect voter preferences during a presidential election.

Our results on interpretations were mixed. Study 1 participants evaluated the fairness and accuracy of the stimulus article and the accuracy of the federal crime statistics it cited in a directionally motivated fashion. However, they did not adopt viewpoint-consistent interpretations of the change in crime. Further research is needed to understand how respondents interpret counter-attitudinal fact-checks and other forms of corrective information.

These results inform our theoretical understanding of both motivated reasoning and factual belief updating. Like other recent research (Wood and Porter 2018), we find little evidence of a backfire effect on respondents’ factual beliefs. However, our results also do not suggest that respondents accept fact-checks uncritically; exposure to counter-attitudinal information decreased perceptions of the accuracy of our stimulus article and of the source of the counter-attitudinal information in Study 1. These findings suggest that motivated reasoning can coexist with belief updating, a possible explanation for divergent findings in the literature (e.g., Flynn 2016); the two need not be mutually exclusive phenomena (see also Guess and Coppock 2018 and Khanna and Sood 2018).

Of course, these studies have several limitations. First, we did not test a fact-check of a Clinton misstatement and so cannot evaluate how her supporters would have reacted to a correction of their own candidate. Second, Trump was infamous for extreme exaggerations and misstatements, which may have made some respondents receptive to fact-checking but also prepared his supporters to rationalize their continued support for him. Finally, we measured feelings toward the candidates post-treatment and used them as an outcome measure. It is therefore possible that the strongest Trump supporters respond to fact-checks differently than less ardent supporters, but because we lack a pre-treatment measure of support intensity, we are unable to test this conjecture.

Further research is also necessary to determine the extent to which our results generalize to other contexts or forms of fact-checking. Our results suggest that fact-checks are unlikely to meaningfully diminish the strong attachments people have to their party’s candidate in a campaign context with strong partisan cues. Their effects may be greater, however, in elections with weaker partisan cues and less well-known figures on the ballot (e.g., Wintersieck 2017), and fact-checks could have stronger attitudinal effects on feelings toward politicians the more temporally distant those fact-checks are from an election. The scope of our findings is also limited to fact-checks of a single misstatement, the format most often employed by journalists; correcting a series of inaccurate claims might have larger effects on candidate evaluations. Other research might consider the effects of joint fact-checking of both major-party candidates or of positive fact-checks corroborating the accuracy of a candidate’s claim, as well as affective responses to candidate misstatements of fact and to the fact-checks that set the record straight.

Finally, as with all studies of this sort, we cannot completely rule out acquiescence bias or demand effects. Recent research indicates that demand effects in Internet-based survey experiments are relatively rare, however (Mummolo and Peterson 2018).

Despite these limitations, our results provide compelling evidence that citizens can accept the conclusions of journalistic fact-checks of misstatements even when those misstatements are made by their preferred candidate during a presidential election. However, this information had little effect on people’s attitudes toward the candidate being corrected. In other words, Trump supporters took fact-checks literally, but not seriously enough to affect how they felt toward their preferred candidate.