Mandatory information disclosure programs — which require public or private institutions to disclose factual information about their products, services, operations, or performance to the public — have become increasingly popular. These programs aim to use information to empower stakeholders such as citizens, non-governmental organizations, and the media to hold governments and private entities accountable. Embodying the widely shared democratic values of transparency and participation, information disclosure has become one of the most common components of public policy (Fung et al., 2007; Loewenstein et al., 2014).

The virtues of information disclosure rest partly on the assumption that citizens will respond to disclosed information and use it to make better choices, but this assumption is not always upheld (e.g., Healy & Malhotra 2013). While many studies have shown that information provision can change attitudes and preferences (e.g., Boudreau & MacKenzie 2018; Holbein, 2016; Larsen & Olsen, 2020; Pianzola et al., 2019), others have found no such effect (e.g., Barnes et al., 2018; Hopkins et al., 2019), and informational effects often vary significantly across people with different attributes (e.g., Alt et al., 2016; Gaines et al., 2007; Nyhan & Reifler, 2010). Citizens may fail to connect information with specific public policy positions (e.g., Bartels 2005; Kuziemko et al., 2015), and people may also engage in confirmation bias and motivated reasoning (e.g., Taber & Lodge 2006), which can lead to biased interpretations of information.

This study evaluates this assumption by examining citizens’ responses to the provision of correct local environmental information through a survey experiment. The provided information measures the relative environmental risk from toxic emissions at the zip code level and is derived from the Toxics Release Inventory (TRI). In the experiment, I first assess respondents’ prior knowledge of the local environmental risk and randomly assign half of them to receive correct information. All respondents are then asked to answer a series of questions about their environmental concerns, attitudes, and preferences and behavioral intentions.

The results suggest that individuals can correctly process information to update beliefs and attitudes, but the information does not further change preferences and behavioral intentions. Information provision leads those who underestimate the risk to become more concerned about the risk to themselves and their family and to develop a stronger sense of personal obligation to act, and vice versa for those who overestimate the risk. Moreover, the effect on concern for self and family seems to be greater for liberals than for conservatives. However, the provided information fails to move policy preferences and the intentions to change consumption behaviors, participate in groups, or take political actions, for either the overall sample or subgroups. These results attest to the potential of information disclosure, as people do not completely reject new information, but they also highlight the challenges of using it to spur meaningful participatory and behavioral changes.

This study expands on the current literature and makes several contributions to the understanding of the relationships between information, knowledge, attitudes, and preferences. First, many previous studies on this topic have been observational, but knowledge, attitudes, and preferences often mutually affect each other (e.g., Acharya et al., 2018). The experimental research design allows a causal assessment of the impact of information. Second, this study examines the impact of information provision based on misperception (the difference between prior knowledge and the provided correct information), which most existing studies have not done. Instead of treating information provision as a uniform treatment, considering misperception clarifies how individuals respond to different information shocks, which is especially important for comparing the effects on subgroups, as respondents with different attributes often hold different misperceptions and therefore experience different information shocks from information provision (Li & Konisky, 2022). Third, the information used in this study is personalized at the zip code level, whereas most other studies have focused on national-level information. Lastly, unlike most extant studies that use general political information, this study tests the impact of information from an authentic information disclosure program in the environmental area, where information tools are widely adopted to inform citizens and deal with thorny environmental challenges.

Literature Review

A large body of literature has assessed the impact of information on attitudes, preferences, and behaviors. The retrospective voting literature implicitly addresses this question, and it shows that citizens respond to economic conditions (e.g., Rudolph 2003), school performance (e.g., Berry & Howell 2007), and disaster responses (e.g., Healy & Malhotra 2009). However, this literature does not provide direct evidence regarding how citizens respond to new information, especially information from specific public policies, that is independent of their existing experience and knowledge. These studies, mostly observational, also face the obstacle of endogeneity (Healy & Malhotra, 2013), as exogenous variations in access to relevant information in the real world are rare (Holbein, 2016).

Many recent studies on the impact of information attempt to address endogeneity through experiments, but the results are mixed. While some research suggests that citizens’ policy preferences respond to information about unemployment (Alt et al., 2016), federal welfare programs (e.g., Kuklinski et al., 2000 [study 1]), crime (e.g., Gilens 2001), police violence (e.g., Boudreau et al., 2019), foreign aid (e.g., Gilens 2001), income inequality (e.g., Boudreau & MacKenzie 2018), and social security (e.g., Cook et al., 2010), other studies find muted effects on preferences despite increased knowledge. For example, Hopkins et al. (2019) demonstrated that providing information on the number of immigrants has no measurable effect on attitudes towards immigration, even though it can sometimes reduce the perceived size of the foreign-born population. Similar results have been found for income inequality (e.g., Kuziemko et al., 2015), government spending (e.g., Barnes et al., 2018), civic education (e.g., Green et al., 2011), and political statements (e.g., Nyhan et al., 2019). Moreover, many studies show that how citizens process new information is moderated by personal attributes such as ideology (e.g., Nyhan & Reifler 2010).

The evidence is also mixed in the environmental area. On the one hand, it is commonly assumed that knowledge of environmental risk has strong positive effects on environmental attitudes and behaviors (e.g., Li 2021; Siegrist & Árvai, 2020). In support of this assumption, many studies have demonstrated that personal experiences of environmental events that may affect risk perception, such as wildfire and flooding, have significant impacts on environmental concerns, attitudes, and behaviors (e.g., Bergquist & Warshaw 2019; Bishop, 2014; Egan & Mullin, 2012, 2017; Hazlett & Mildenberger, 2020; Konisky et al., 2016; Spence et al., 2011). In survey experiments, scholars (e.g., Lacroix & Gifford 2018; Scannell & Gifford, 2013) have also found that providing individuals with information about environmental risk can affect their environmental attitudes and behaviors.

On the other hand, many studies suggest a limited relationship between risk perception and environmental attitudes and behaviors. For example, Javeline et al. (2019) found that knowledge of climate change is not correlated with homeowners’ intention to reduce the structural vulnerability of their homes. Similarly, studies based on survey experiments have shown that environmental information sometimes has no effect on attitudes (e.g., Shwom et al., 2008) and can even lower individuals’ environmental concerns (e.g., Mildenberger et al., 2019).

One reason for the conflicting results in the literature may be that many studies have not accounted for the direction and magnitude of misperception. Misperception is important because it determines the shock that correct new information delivers to citizens. (Information shock, which equals the difference between the provided correct information and prior risk perception, is opposite in direction but equivalent in magnitude to misperception.) Information may have a greater impact when it is novel; if it merely tells people something they already know, we would expect it to have smaller effects, if any. Similarly, the direction of misperception has strong implications for the direction of informational effects. Further complicating the issue is the non-randomness of misperception. For example, conservatives and liberals may hold different misperceptions; hence, the information shock from information provision may be very different for them. Without considering misperception, we cannot tell whether subgroups’ different responses are due to different information shocks or to different interpretations of the same shock.

This study explicitly models misperception in evaluating citizens’ responses to information provision. Specifically, I assess respondents’ prior knowledge of local environmental risk and provide correct information to the treatment group (I have the correct information for the control group as well, but it is withheld from respondents). This allows me to measure misperception (prior perceived risk – actual risk) and to examine the effects of information provision conditional on it. In doing so, this study provides further clarity on how citizens internalize new information.

Theory and Hypotheses

How do people process information? The classical model assumes that new information updates prior beliefs in accordance with Bayes’ rule. The implication is that after receiving enough new information, people’s beliefs will eventually converge (Blackwell & Dubins, 1962). There are reasons to doubt the explanatory power of the basic Bayesian model, as gaps in public opinion among different groups persist over long periods of time (Bartels, 2002).

Increasingly, the basic Bayesian model has been challenged and complemented by insights from psychology. Most notably, scholars have found that people’s responses to new information depend on their prior beliefs, values, and preferences. Specifically, people exhibit confirmation bias, tending to accept new information that reinforces their existing views and to reject information that contradicts those views (Nickerson, 1998). In addition, people engage in motivated reasoning, which causes the same information to have different impacts on different people (Edwards & Smith, 1996; Kunda, 1990). The phenomena of confirmation bias and motivated reasoning are well documented in political science research (e.g., Gaines et al., 2007; Khanna & Sood, 2018; Taber & Lodge, 2006).

Information may also have distinct impacts on beliefs, attitudes, and preferences. While beliefs come from assimilation of information, attitudes derive from the evaluation of beliefs, and preferences involve applying that evaluation to assess individual or government actions (Barnes et al., 2018). Recent studies found that provision of information about government spending and income inequality has impacted relevant beliefs and attitudes, but has hardly changed policy preferences (Barnes et al., 2018; Kuziemko et al., 2015). In this study, I evaluate the effects of information provision on environmental concerns, attitudes, and preferences and behavioral intentions separately.

Following the earlier discussion, I expect information provision to affect environmental concerns by updating individuals’ risk perception. Moreover, the informational effects will negatively correlate with misperception (i.e., information provision will increase the concerns of those who underestimate the risk and decrease the concerns of those who overestimate it).

H1: The provision of information will affect environmental concerns.

H1a: The effects of information provision on environmental concerns will negatively correlate with misperception.

I also expect the effects on environmental concerns to vary based on personal attributes, as people tend to accept or interpret information in ways that are consistent with their existing values and preferences. I focus on ideology as a moderator. The ideological divides on environmental issues are strong and escalating, with liberals being more supportive of actions to address environmental problems than conservatives (Dunlap, 2014; Gray et al., 2019). Given the large differences in their preferences, the way people interpret the same information may differ: liberals will interpret the information more pro-environmentally than conservatives.

H2: The provision of information will have larger positive effects or smaller negative effects on environmental concerns for liberals than for conservatives.

I present hypotheses only for environmental concerns in the text; the hypotheses for environmental attitudes and preferences and behavioral intentions follow the same structure. Although some studies indicate that information provision may face greater barriers to changing attitudes and behaviors, I still hypothesize that its impacts will be similar to those on environmental concerns, given the large body of literature showing strong correlations among risk perception, attitudes, and behaviors (Lacroix & Gifford, 2018; Li, 2021; Scannell & Gifford, 2013).

I also include multiple measures (details in the survey design and implementation section) within each concept (concerns, attitudes, and behaviors). Since the provided information is at the zip code level, I expect it to have especially strong effects on outcomes at the local level, such as concern for self and family. However, this is not to say that it will have no effect on measures at the regional or national level. The literature suggests that perception of local risk can often predict a wide set of attitudes and behaviors at higher levels that aim to address relevant issues collectively, including policy support (O’Connor et al., 1999), political activities (Hazlett & Mildenberger, 2020), and personal consumption behaviors (Lacroix & Gifford, 2018).

Toxics Release Inventory and Risk Screening Environmental Indicators

The information used in the experiment is from the Toxics Release Inventory (TRI). Created by the Environmental Protection Agency (EPA) in 1986 under the provisions of the Emergency Planning and Community Right-to-Know Act, the TRI tracks the management of toxic chemicals that may threaten human health or harm the environment across the country. Every year, more than 20,000 industrial facilities report how much of each listed chemical is released into the environment and managed through recycling, energy recovery, and treatment. The reported information is compiled in the TRI and made available to the public.

To use TRI data to better understand the risk from toxic emissions, the EPA has developed the Risk-Screening Environmental Indicators (RSEI) model. The model incorporates information about the amounts of toxic chemicals released, chemicals’ fate and transport through the environment, and each chemical’s relative toxicity to calculate a variety of risk measures at different geographical levels. From the RSEI microdata, which include risk measures for 810-meter-by-810-meter grid cells that cover all of the U.S., I calculate the toxicity-weighted RSEI scores for zip codes in the contiguous U.S., which measure their relative risk from toxic emissions. The calculation is based on air releases in 2018, which is the latest year with available RSEI microdata at the time of the survey. Figure A1 in Appendix A shows the scores for zip codes in the contiguous U.S.
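The exact aggregation procedure is not detailed here, but the basic logic of turning grid-cell RSEI microdata into zip-code percentile rankings can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual pipeline: it presumes the microdata have already been joined to a grid-cell-to-zip-code crosswalk, ignores cells that straddle zip code boundaries, and uses hypothetical file and column names (zip_code, rsei_score).

```python
# Minimal sketch (hypothetical file and column names), not the paper's pipeline:
# aggregate grid-cell RSEI scores to zip codes and convert them to national
# percentile rankings.
import pandas as pd

# 2018 air-release RSEI microdata, assumed pre-joined to a zip code crosswalk
cells = pd.read_csv("rsei_microdata_2018_air.csv")

# Sum the toxicity-weighted scores of all grid cells within each zip code
zip_scores = (cells.groupby("zip_code")["rsei_score"]
                    .sum()
                    .reset_index(name="rsei_zip_score"))

# Percentile ranking of each zip code among all contiguous-U.S. zip codes
zip_scores["risk_percentile"] = (zip_scores["rsei_zip_score"]
                                 .rank(pct=True) * 100).round()
```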

In this study, I assess respondents’ knowledge of how the risk from toxic emissions in their zip codes compares to other zip codes in the contiguous U.S. The information provided to the treatment group is the actual percentile rankings of their zip codes based on the RSEI indicator described above. I use a comparative risk measure because the RSEI score, which is based on a screening-level model, is comparative in nature. The score itself is unitless and cannot be translated directly into tangible health impacts, such as mortality, life expectancy, or rates of various diseases. The percentile ranking makes the RSEI score concrete and intuitive. In addition, perception based on social comparison is common, as people often measure themselves (or their neighborhoods) against others. The comparative format is similar to those used in recent empirical studies, such as Kuziemko et al. (2015), which allowed respondents to explore the percentile rankings of their income, and Condon & Wichowsky (2020), which manipulated respondents’ perception of their relative socioeconomic conditions.

The comparative risk measure is based on zip code instead of population. As the distribution of population is uneven across zip codes, the zip code-based rankings will differ from the population-based rankings. For environmental risk, it is common to compare neighborhoods even when describing personal exposure, as environmental risk is often understood through place. For example, people often say that someone lives in one of “the most polluted neighborhoods.”

The comparative risk measure could be constructed in different ways: the comparison can be based on different geographic units (census blocks, census tracts, zip codes, counties, or states), and the level of the comparison can also differ (national level, within states, within counties, or neighboring geographic units). Nationally ranked risk at the zip code level is a reasonable and meaningful choice. A zip code is a common way for the public to conceptualize a neighborhood, and it is small enough to differentiate different levels of individual exposure to environmental risk.

My approach asks respondents to compare their zip codes with all other zip codes in the country, which in essence asks them to compare the conditions of where they live to the same conditions elsewhere in the U.S. This approach is consistent with how the TRI and RSEI information is often used in practice by the EPA to educate citizens and communicate risk. For example, the EPA’s RSEI outreach application presents information in a similarly comparative format.

Survey Design and Implementation

The survey experiment proceeds in three parts (key questions are listed in Appendix B). In the first part, I collect pre-treatment baseline information about the respondents. Most importantly, I assess their prior knowledge of the local risk. I provide background information about the RSEI score to all respondents and ask them to answer the question “If we rank all zip codes in the contiguous U.S. from the lowest risk to the highest risk from toxic chemicals, how do you think your zip code compares to other zip codes?” Respondents answer the question on a scale (Fig. 1). In addition to prior knowledge, I also measure their environmental values with the New Ecological Paradigm (NEP) developed by Dunlap et al. (2000). The NEP scale, the most widely used measure of pro-environmental orientation, consists of 15 items; following Stern et al. (1999), I use 5 items from the longer scale.

Fig. 1 Assessment of Prior Knowledge. Source: Snapshot from the survey

In the second part, I provide correct information to the treatment group. Specifically, I show respondents in the treatment group the actual percentile rankings of their zip codes based on the RSEI data, along with their own prior estimates. To ensure that these respondents have actually received the treatment, they are also required to complete a fill-in-the-blank question about the actual rankings of their zip codes (all of them answered it correctly). Respondents in the control group are shown only their own prior estimates. I present the information in a straightforward way, instead of using indirect approaches such as embedding it in a news article, because I am primarily interested in how people process information after receiving it. It is important to provide information in a way that “hits them between the eyes” (Kuklinski et al., 2000) and to ensure that the respondents have received the information.

After the treatment, in the third part, I use a battery of questions to measure issue-specific environmental concerns, attitudes, and preferences and behavioral intentions for all respondents.

Environmental Concerns. Two items measure how serious a problem respondents think toxic chemicals in the environment are (1) for themselves and their family and (2) for the nation, respectively. The first item is more relevant, as the provided information is localized at the zip code level.

Attitudes. Two items measure norms and attitudes. The first item measures how much respondents agree that “the government should take stronger action to clean up toxic chemicals in the environment”; the second item measures their sense of “personal obligation to take action.”

Preferences and Behavioral Intentions. I ask a battery of questions to gauge respondents’ preferences and behavioral intentions. A factor analysis suggests that they load on four factors (I consider all questions/items with loading values above 0.4 on a factor as components of the factor). The first factor — consumer behaviors — consists of (1) the intention to avoid buying products from bad polluters and (2) the intention to buy environmentally friendly household chemicals such as detergents and cleaning solutions (Cronbach’s \(\alpha\): 0.76). The second factor — willingness to pay — includes two items that measure the willingness to pay (1) higher tax and (2) higher prices, respectively (Cronbach’s \(\alpha\): 0.93). The willingness-to-pay measure is also the policy preferences measure. Policy instruments, whether a pollution tax, technical or performance-based standards, or pollution cleanup programs, would increase the cost of final products and/or government spending. The willingness-to-pay measure accounts for the consequences of public policy and is a more realistic evaluation of policy options. The third factor — group participation and contribution — comprises two items that measure the intention to (1) join and (2) contribute time and money to relevant groups (Cronbach’s \(\alpha\): 0.86). The fourth factor — political activities — consists of three items that measure the willingness to (1) sign petitions, (2) contact government officials, and (3) participate in protests (Cronbach’s \(\alpha\): 0.84).

For all concepts/factors that include more than one item, I calculate their scale by averaging the scores of all their items (with reverse coding adjusted). Since all items are answered on 5-point Likert scales, this approach measures all concepts/factors on the original scale of the items (1–5), which facilitates interpretation.
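To make the scale construction concrete, the sketch below shows how the reverse coding, the internal-consistency check, and the averaging could be implemented. It is only an illustration under stated assumptions: the file and item names (e.g., wtp_higher_taxes) are hypothetical, and the factor analysis that groups the items is omitted.

```python
# Sketch of the scale construction: reverse-code where needed, check internal
# consistency with Cronbach's alpha, and average items into a 1-5 scale.
# File and column names are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def reverse_code(item: pd.Series, scale_max: int = 5) -> pd.Series:
    """Reverse a 1..scale_max Likert item (1 -> 5, 5 -> 1)."""
    return scale_max + 1 - item

df = pd.read_csv("survey_responses.csv")                    # hypothetical file
df["nep_item3"] = reverse_code(df["nep_item3"])             # hypothetical reverse-worded item

wtp_items = df[["wtp_higher_taxes", "wtp_higher_prices"]]   # hypothetical item names
print(f"Willingness to pay, Cronbach's alpha: {cronbach_alpha(wtp_items):.2f}")
df["willingness_to_pay"] = wtp_items.mean(axis=1)           # stays on the 1-5 scale
```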

The survey was administered to a representative sample of 1,000 adult respondents (age > 18) in the contiguous U.S. by YouGov, an Internet-based market research firm, in February 2020. YouGov created the sample by drawing respondents from its opt-in panel to match a target sample based on the 2018 American Community Survey. YouGov’s methodology for generating representative samples has been validated extensively by previous research (e.g., Ansolabehere & Schaffner 2014; Liu et al., 2010), and its service is widely used in political science research (e.g., Boudreau & MacKenzie 2018; Konisky et al., 2020).

The 1,000 respondents were randomly assigned to either a treatment or a control group with equal probability. The randomization was successful: all demographic and pre-treatment measures are similar between the treatment and control groups, and none of the differences are statistically significant (Table A1 in Appendix A).
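A balance check of this kind can be run with simple two-sample tests. The sketch below is illustrative only; the file name and covariate names are hypothetical placeholders rather than the variables actually reported in Table A1.

```python
# Illustrative randomization/balance check: compare pre-treatment covariates
# across treatment and control with two-sample t-tests (hypothetical names).
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")       # hypothetical file
covariates = ["age", "female", "education", "income", "ideology", "nep"]

treated = df[df["treatment"] == 1]
control = df[df["treatment"] == 0]

for var in covariates:
    t_stat, p_val = stats.ttest_ind(treated[var], control[var], nan_policy="omit")
    print(f"{var:>10}: treated mean = {treated[var].mean():.2f}, "
          f"control mean = {control[var].mean():.2f}, p = {p_val:.2f}")
```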

Descriptive Statistics of Perceived Risk, Actual Risk, and Misperception

Respondents do not seem to have good knowledge of the risk in their neighborhoods. The correlation coefficient between the perceived and actual risks is only 0.13. (Figure A2 in Appendix A also shows a scatter plot of the perceived and actual risks.) More specifically, respondents tend to underestimate the risk (summary statistics presented in Table A1 in Appendix A). On average, respondents estimate the risk in their zip codes to be 43, while the average actual risk is 64. The average actual risk is above 50 because urban areas, which tend to be more polluted, also have a larger share of the population and therefore of the respondents.

Figure 2 shows the distributions of respondents’ perceived risk, actual risk, and misperception. The distribution of perceived risk has a cluster around 50, which indicates a tendency of many people to assess their neighborhoods as about average — similar to the phenomenon that a disproportionately large share of the population consider themselves middle class (Shenker-Osorio, 2013). Another possible reason is survey satisficing, whereby respondents who do not have strong prior beliefs choose a convenient answer at the middle point. The latter case, if true, poses challenges for the analyses that center on the role of misperception in explaining the informational effects. I address this concern by conducting sensitivity analyses that exclude respondents at high risk of survey satisficing. In one analysis, I exclude respondents who indicate that they have no confidence in their assessment of the local risk. In another, I exclude respondents who rate the local risk to be in the range of [48, 52].

Fig. 2 Distributions of Perceived Risk, Actual Risk, and Misperception

The actual risk is fairly similar across respondents with different attributes, but their perceived risk differs (Figure A3 in Appendix A shows the means of perceived risk, actual risk, and misperception by attributes). Democrats, liberals, and people with a strong pro-environmental orientation perceive the risk in their zip codes to be larger and, as a result, underestimate the actual risk less than Republicans, conservatives, and people with a weak pro-environmental orientation, respectively. For instance, liberals on average estimate the risk to be 48, while conservatives’ estimate is 37. The difference extends into misperception, with conservatives underestimating by 25 and liberals underestimating by 16.

The difference in misperception between subgroups means that the information shock from the information provision will differ for them. Thus, it is critical to consider the magnitude and direction of misperception to understand the informational effects, especially in subgroup analyses, which aim to compare how different groups process the same information differently.

Methods

The model to test the hypotheses is straightforward. I estimate the OLS model

$$Y = \beta_0 + \beta_1 \cdot Treatment + \beta_2 \cdot NEP + \epsilon$$
(1)

where the dependent variables Y are measures of concerns, attitudes, and preferences. I include NEP, the strongest pre-treatment predictor of environmental attitudes and preferences, to control for potential imbalances between the treatment and control groups. For subgroups, I estimate Eq. (1) using only respondents from the relevant subgroup (i.e., liberals or conservatives). The OLS model treats the dependent variables as continuous. I use it because many dependent variables are constructed from multiple items, so their measurements are no longer categorical, and because it facilitates the interpretation of the results. As a robustness check, I estimate an ordered logit model for each individual question/item of the outcome concepts. The results, which are included in Appendix C, do not change substantively.
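For concreteness, the sketch below shows how Eq. (1) could be estimated with off-the-shelf regression tools; it is not the paper’s replication code, and the variable names (treatment, nep, concern_self_family, liberal) are hypothetical.

```python
# Illustrative estimation of Eq. (1) for one outcome (hypothetical names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")       # hypothetical file

# Overall sample: Y ~ treatment + NEP
m1 = smf.ols("concern_self_family ~ treatment + nep", data=df).fit()
print(m1.params["treatment"], m1.bse["treatment"])

# Subgroup version: the same model estimated on liberals only
m1_lib = smf.ols("concern_self_family ~ treatment + nep",
                 data=df[df["liberal"] == 1]).fit()
print(m1_lib.params["treatment"], m1_lib.bse["treatment"])
```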

To consider the effects based on misperception, I modify Eq. (1) to

$$Y = \beta_0 + \beta_1 \cdot Treatment + \beta_2 \cdot Misperception + \beta_3 \cdot Treatment \cdot Misperception + \beta_4 \cdot NEP + \epsilon$$
(2)

The new term, misperception, which is opposite in direction but equivalent in magnitude to information shock, is measured as the difference between respondents’ prior knowledge and the actual risk. Negative misperception means that respondents underestimate the risk, and vice versa for positive misperception. Equation (2) estimates the effects of information provision conditional on misperception, and the treatment effect at a given level of misperception equals \(\beta_1 + \beta_3 \cdot Misperception\).
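The sketch below illustrates Eq. (2) and the conditional effect \(\beta_1 + \beta_3 \cdot Misperception\), including a standard error for the effect at a chosen level of misperception (the expression is linear in the coefficients, so the variance follows directly from the coefficient covariance matrix). File and variable names are hypothetical.

```python
# Illustrative estimation of Eq. (2) and the treatment effect conditional on
# misperception (hypothetical file and variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")
df["misperception"] = df["perceived_risk"] - df["actual_risk"]

m2 = smf.ols("concern_self_family ~ treatment * misperception + nep",
             data=df).fit()

def effect_at(model, m):
    """Treatment effect beta_1 + beta_3 * m and its standard error."""
    b, V = model.params, model.cov_params()
    est = b["treatment"] + b["treatment:misperception"] * m
    var = (V.loc["treatment", "treatment"]
           + m**2 * V.loc["treatment:misperception", "treatment:misperception"]
           + 2 * m * V.loc["treatment", "treatment:misperception"])
    return est, np.sqrt(var)

est, se = effect_at(m2, -40)   # e.g., a respondent who underestimates by 40
print(f"Effect at misperception = -40: {est:.2f} (SE {se:.2f})")
```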

Equation (2) assumes a linear interaction effect that changes at a constant rate with misperception. The assumption is supported by a semiparametric kernel estimator (Figure A4 in Appendix A) that characterizes the marginal effect of the treatment across the full range of the moderator (misperception) (Hainmueller et al., 2019) and suggests strong linearity of the interaction effect. In addition, for models that consider misperception, I include only respondents whose misperception falls within the (-80, 40) range, as under- or over-estimation beyond this range is rare. Because of the lack of observations, it is unclear whether the linear functional form applies beyond this range, and including the outlier observations could make the estimated interaction effect misleading due to over-extrapolation of the linear form (Hainmueller et al., 2019). This trims about 5% of the observations.
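The published diagnostic is the kernel estimate reported in Figure A4. A simpler check in the same spirit, also discussed by Hainmueller et al. (2019), is to estimate the treatment effect separately within bins of the moderator and see whether it changes roughly linearly; the sketch below does this with terciles of misperception, again with hypothetical file and variable names.

```python
# Rough linearity check: estimate the treatment effect within terciles of
# misperception after trimming, and compare across bins (hypothetical names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")
df["misperception"] = df["perceived_risk"] - df["actual_risk"]

# Trim rare, extreme misperception (roughly the (-80, 40) restriction)
df = df[df["misperception"].between(-80, 40)]

df["bin"] = pd.qcut(df["misperception"], q=3, labels=["low", "mid", "high"])
for label, grp in df.groupby("bin", observed=True):
    m = smf.ols("concern_self_family ~ treatment + nep", data=grp).fit()
    print(f"{label}: effect = {m.params['treatment']:.2f} "
          f"(SE {m.bse['treatment']:.2f}), "
          f"median misperception = {grp['misperception'].median():.0f}")
```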

At any given level of misperception, the effects of information provision are identified by comparing the differences in the outcomes between respondents in the treatment and control groups with the same misperception. Since the assignment of the treatment/control status is random (independent of misperception), the effects at each level of misperception are causal. However, caution is needed when comparing the effects of information provision across the range of misperception. Respondents with different levels of misperception may differ in other attributes as well. For instance, respondents at the misperception level −50 may be more likely to be conservatives than respondents at −30. If we observe a larger treatment effect at −50 than at −30, we cannot be sure whether it is because of a greater information shock or because conservatives are more responsive to information. In this sense, the comparison of effect sizes across the range of misperception is correlational.

Despite being correlational, comparing the effects across the range of misperception offers important insights. The ultimate goals of information and communication campaigns are often to correct misperception (and the attitudes and preferences based on it). Thus, a key question in practice is how information provision works on people with different degrees of misperception, regardless of other possible underlying differences in their personal attributes. In addition, since misperception tends to differ across subgroups, it is important to take it into consideration to understand how subgroups respond to information provision. This advances the existing literature, which mostly does not consider misperception when comparing subgroup responses.

Nevertheless, to alleviate this concern, I conduct an additional analysis that adds potential confounders correlated with misperception, along with their interactions with the treatment, as covariates to Model (2). By holding these factors constant, the analysis increases confidence that the differences in the effects across the range of misperception are indeed due to different information shocks rather than to differences in these confounders. Specifically, I include gender, education, ideology, income, a rural/urban indicator, and their interactions with the treatment.

A second concern is the threat of survey satisficing, whereby respondents who have no strong beliefs about the local risk pick a random answer or the middle point (hence the cluster around 50). To address this concern, I test whether results from the main analysis still hold after excluding respondents at high risk of survey satisficing. Specifically, in two separate analyses, I exclude (1) respondents who indicate that they have no confidence in their assessment of the local risk and (2) respondents who assess the local risk to be around 50 ([48, 52]).
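These exclusions amount to simple filters applied before re-estimating Eq. (2), as in the sketch below; the file name, variable names, and the risk_confidence response label are hypothetical.

```python
# Illustrative satisficing robustness checks: re-estimate Eq. (2) after
# dropping likely satisficers (hypothetical file, variable, and label names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")
df["misperception"] = df["perceived_risk"] - df["actual_risk"]
formula = "concern_self_family ~ treatment * misperception + nep"

# Check 1: exclude respondents reporting no confidence in their risk assessment
confident = df[df["risk_confidence"] != "no confidence"]
print(smf.ols(formula, data=confident).fit().params["treatment:misperception"])

# Check 2: exclude respondents who rated the local risk in the [48, 52] band
no_midpoint = df[~df["perceived_risk"].between(48, 52)]
print(smf.ols(formula, data=no_midpoint).fit().params["treatment:misperception"])
```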

Results

Informational Effects: Overall Sample

Figure 3 presents the effects of the information provision. Panel A shows that the effects on all outcome variables are largely insignificant except for concern for self and family, which is significant only at the 0.1 level. These results could be due to the effects on respondents with misperceptions in opposite directions offsetting each other. In addition, a significant portion of respondents possess prior knowledge that is close to the actual risk (e.g., 26% of respondents have misperception between −15 and 15), and I do not expect the provided information to have great effects on them because it offers little that is new.

Fig. 3 Effects of Treatment (Overall Sample). Notes: 1. All dependent variables are measured with 5-point Likert scales. 2. Panel A: markers are point estimates; thick and thin bars are 90% and 95% confidence intervals, respectively. 3. Panels B – D: solid lines are point estimates; shaded areas are 95% confidence intervals.

Panels B – D of Fig. 3 report the treatment effects conditional on misperception. They show that misperception plays a critical role in explaining the effects of information provision. The treatment increases concern for self and family and the sense of personal obligation to act for those who underestimate the risk and decreases personal concern and obligation for those who overestimate the risk. In addition, as the magnitude of misperception grows, so do the effects. When misperception is large enough, the effects become statistically significant. In my sample, the effects are statistically significant for those who heavily underestimate the risk.

Panels B – D of Fig. 3 are based on the regression results in Table 1. In Table 1, the coefficients on “treatment” for all outcomes are not significantly different from zero, indicating that when there is no misperception (misperception = 0), the provision of information does not have much impact. The coefficients on the interaction term “treatment * misperception” are negative and statistically significant for concern for self and family and for personal obligation to act. For a respondent who underestimates the risk by 40 (the middle point of the range of underestimation), information provision increases concern for self and family by 0.18 and the sense of personal obligation to act by 0.16, which represent 6% and 5% increases from the average baseline scores of 3.06 and 3.46, respectively.

In contrast with the impact on individual-level concern and attitude, the results show that information provision has no effect on concern for the country or on the attitude that government should do more, across the range of misperception. A possible reason is that the provided information is personalized at the zip code level. Thus, it does not update respondents’ knowledge of the severity of the problem for the country as a whole, and/or respondents do not think group-level collective actions are viable or appropriate to address the issue. The inclination to treat the issue as a personal concern and responsibility may also explain the null effects on policy preferences and behavioral intentions, all of which, as measured in this study, attempt to address the underlying problem collectively.

Table 1 Effects of Treatment Conditional on Misperception (Overall Sample)

As discussed in the methods section, in an additional analysis I include variables that are potentially correlated with misperception, such as gender, education, ideology, and the urban/rural indicator, along with their interactions with the treatment, as covariates. Results from the analysis (Table A3 in Appendix A) show that the marginal effects of information shock on concern for self and family and on personal obligation to act (i.e., the coefficients on “treatment * misperception”) become slightly larger and more significant than those in the main analysis, alleviating the concern that the differential effects of the treatment along misperception are driven by differences in these attributes rather than by differences in information shock.

In two additional analyses, I explore the robustness of the findings to the threat of survey satisficing by excluding respondents who have no confidence in their assessment of the local risk and respondents who rate the local risk to be around 50, respectively. Results from the two analyses (Tables A4 and A5 in Appendix A) are substantively the same as those in the main analysis, suggesting that survey satisficing is unlikely to play a large role in driving the results.

Informational Effects: Liberals vs. Conservatives

To contrast the effects between liberals and conservatives, I conduct a subsample analysis in the main text (estimates based on an interactive model are reported in the Appendix). Panel A of Fig. 4 presents the results that do not consider misperception. It shows that the treatment leads liberals to significantly increase their concern for self and family: the concern increases by about 0.4 points on a scale with a 4-point range. The effect represents a 12% increase from their average baseline score of 3.37. In contrast, information provision has no significant effect on conservatives. Results from an interactive model (Table A6 in Appendix A) show that the difference between the effects on liberals and conservatives is statistically significant at the 0.05 level. Besides concern for self and family, the effects on other outcomes are largely insignificant for both liberals and conservatives.

Fig. 4 Effects of Treatment (Liberals vs. Conservatives). Notes: 1. All dependent variables are measured with 5-point Likert scales. 2. Panel A: markers are point estimates; thick and thin bars are 90% and 95% confidence intervals, respectively. 3. Panels B – D: solid lines are point estimates; dashed lines are 95% confidence intervals.

Panels B – D of Fig. 4 report the effects when misperception is considered. They also suggest that the impact on concern for self and family differs between liberals and conservatives. While the interaction effects of the treatment with misperception (the slopes) are similar for both groups, across the range of misperception liberals tend to interpret the information in ways that result in higher levels of concern for self and family than conservatives do. The effects on the sense of personal obligation to act, however, are similar for both groups; they follow the same pattern as the overall sample but fall short of statistical significance, probably because of smaller sample sizes. As for other outcomes, after considering misperception, the effects of the treatment are still largely insignificant for both liberals and conservatives.

The subsample analysis is equivalent to estimating an interactive model with indicators of the subgroups interacting with the other explanatory variables. However, the interactive model makes it more convenient to examine whether differences in the effects between subgroups are statistically significant. Table A7 in Appendix A reports results from the interactive model. Regarding concern for self and family, they confirm that the marginal effects of information shock (the slopes) are not statistically different between liberals and conservatives, but there is a statistically significant difference (at the 0.1 level) in the effects when misperception equals zero (i.e., the intercepts). For other outcomes, neither the slopes nor the intercepts are statistically different between liberals and conservatives.
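A minimal sketch of such an interactive model appears below: the treatment and misperception terms are fully interacted with a liberal indicator so that the intercept and slope differences between the two groups can be tested directly. The file and variable names are hypothetical, and the exact specification in Table A7 may differ.

```python
# Illustrative interactive model for the liberal-conservative comparison
# (hypothetical file and variable names; liberal = 1 for liberals, 0 for
# conservatives, with other respondents dropped).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")
df["misperception"] = df["perceived_risk"] - df["actual_risk"]
ideol = df[df["liberal"].isin([0, 1])]

m = smf.ols("concern_self_family ~ treatment * misperception * liberal + nep",
            data=ideol).fit()

# 'treatment:liberal' tests the intercept difference (effect at misperception = 0);
# 'treatment:misperception:liberal' tests the slope difference between the groups.
print(m.params[["treatment:liberal", "treatment:misperception:liberal"]])
print(m.pvalues[["treatment:liberal", "treatment:misperception:liberal"]])
```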

Discussion and Conclusion

In this study, I examine the effects of the provision of correct local environmental information on individuals’ beliefs, attitudes, and preferences and behavioral intentions through a survey experiment. I find that individuals can correctly process and interpret new information to update their beliefs and attitudes. For those who underestimate the risk, the information provision increases their concern for self and family and their sense of personal obligation to act; vice versa for those who overestimate the risk. The effects on concern for self and family also seem to be greater for liberals than for conservatives. The results are consistent with recent findings by Wood & Porter (2019) that citizens largely heed factual information and by Hill (2017) that people learn political facts that are inconsistent with their prior preferences more cautiously and slowly than facts that are consistent with them.

Despite the effects on concern for self and family and personal obligation to act, the provided information mostly has no meaningful impact on policy preferences and behavioral intentions. The results are similar to those of recent studies finding that information provision can change concern about inequality (Kuziemko et al., 2015) and knowledge of government spending (Barnes et al., 2018), yet fails to change preferences on these issues. I identify three possible explanations for the null effects: (1) the provided information is not strong enough to motivate behavioral changes, (2) respondents fail to make the connection between the information and the behaviors measured in the study, and (3) respondents do not believe their behavioral changes could improve the situation.

First, the literature suggests that vivid information is more powerful than dry, statistical information (Loewenstein et al., 2014). The information used in the experiment has not provided respondents with a concrete and vivid picture of the consequences — for example, in terms of health outcomes. Thus, while it may affect respondents’ beliefs and attitudes, it fails to induce changes in preferences and behaviors.

A second potential explanation is that respondents have not connected the provided information with the behaviors measured in the study. The information is personalized at the zip code level. Knowledge of local risk does not necessarily translate into preferences for actions that address the problem collectively at higher levels. Instead of changing their collective group-level actions (e.g., policy preferences, political activities, donation, or group participation), respondents may opt for avoidance behaviors (e.g., moving away or installing air purifiers). This explanation receives some support from the fact that the information provision has not had any impact on the attitude that government should do more, but has instead increased the sense of personal obligation to act.

Third, respondents may lack the political trust or efficacy needed to engage in the behaviors measured in this study. Political trust and efficacy may moderate the effects of information provision. We would not expect someone who does not trust the government to resort to public policy as a solution. Similarly, people without strong political efficacy may be less likely to engage in political activities, group participation, and donation, as they believe these efforts would be futile. I test this explanation with subgroup analyses based on respondents’ political trust and efficacy. The results (Figures A5 and A6 in Appendix A) suggest that the null effects are not due to respondents’ low political trust and efficacy, as the information has no effect even on respondents with high political trust or high political efficacy.

The above explanations are by no means exhaustive. More research is needed to investigate these and other explanations of the muted effects of information provision on preferences and behaviors. For example, researchers could manipulate the treatment in ways such as making the information vivid and concrete, appealing to emotions, or demonstrating the link between policy/actions and outcomes to explore the effects of different forms of information on different types of behavior.

In interpreting these results, it is important to bear in mind a few caveats. First, while the study highlights the critical role of misperception, the different effects at different levels of misperception may not be fully attributable to differences in information shock, as respondents with different degrees of misperception may vary in other characteristics as well. Second, some respondents might have had difficulty understanding the provided information and the question that assesses their prior knowledge. If they did not understand the question and the information, I would expect the information provision to have had no impact on their concerns, attitudes, or behaviors; this would dampen the effects of the treatment and flatten the slopes of the effects of information provision along misperception. Third, the relatively small samples for the subgroup analysis may limit the power of the study to detect the distinctive impact of information provision on individuals with different characteristics. Fourth, this study examines only the immediate impact of information provision; its long-term impact is unclear. The immediate impact could dissipate over time, but it may also strengthen and expand if the information and heightened concern prompt follow-up investigations.

Despite these limitations, this study makes several contributions. It underscores the central role of misperception in understanding the impact of information provision. The analysis that treats information provision as a uniform treatment shows almost no effect on any outcome, but when misperception is taken into consideration, the results demonstrate that information provision affects respondents’ environmental concerns and attitudes, and that the effects depend on misperception. By focusing on misperception, this study provides more clarity on how individuals respond to new information.

This study also highlights some of the potential and perils of information disclosure as a policy tool. The finding that individuals can correctly process new information relative to their prior knowledge to update their beliefs and attitudes is encouraging for advocates of information disclosure. However, this study also shows that the provided information, even when misperception is considered, does not change policy preferences or the intentions to change consumption behaviors, participate in groups, or take political actions. The findings suggest that merely providing factual information may not be adequate to spur meaningful behavioral changes. Dry, statistical information may not be strong enough to motivate action, and people may have limited capacities and resources to act on provided information. Future studies that examine these potential constraining factors are needed to better understand individuals’ responses to information.

This study also raises questions about the limitations of providing personalized and localized information. While scholars have advocated the use of more personalized information (e.g., Loewenstein et al., 2014), this study shows that individualized information seems mostly to increase concern for self and family and the sense of personal obligation to act, with no effect on concern for the country or on attitudes about government responsibility. The implication is that personalized information may be effective at encouraging private-sphere avoidance behavior (e.g., moving away, installing air purifiers, or drinking bottled water) yet fail to prompt responses in the public sphere (e.g., complaints, citizen suits, signing petitions, contacting officials, or voting). The venues of the responses have important implications for information disclosure policy. While encouraging behaviors that avoid or adapt to adverse conditions should be an important goal of information disclosure, avoidance behaviors alone often cannot address the underlying problems that information disclosure policy attempts to solve, and those who are unable to adapt are usually the poor and the powerless. Further investigation is needed to better understand how information of different natures and formats affects different types of attitudes and behaviors.