1 Risk, Uncertainty, and Hazard

In everyday life, using the terms “risk” and “uncertainty” interchangeably is acceptable. Both terms convey the impression that the behaviour, situation, or object being discussed has unknown elements that have some level of potential danger, threat, or loss.

From a technical perspective, however, the difference can be critical. A century-old economist’s comparison of risk and uncertainty explains the distinction as well as any circulating today (DeGroot and Thurik 2018, pp. 1–2).

In the case of risk, the outcome is unknown, but the probability distribution governing that outcome is known. Uncertainty, on the other hand, is characterised by both an unknown outcome and an unknown probability distribution. In both cases, preferences are defined across chance distributions of outcomes. For risk, these chances are taken to be objective, whereas for uncertainty, they are subjective.

In practice, everyday use of the term “risk” is likely referring to what scientists would call “uncertainty”. That is, not only is the outcome unknown, so too is the probability of that outcome.

Technically, risk is calculated using a formula whose conceptual relationships are beguilingly simple: risk = probability × consequence. Here “probability” is the likelihood of a hazard coming to pass, and “consequence” reflects how severe the effects of realising the hazard would be.
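
To make the arithmetic concrete, here is a minimal Python sketch of the formula. All hazards, probabilities, and consequence scores below are hypothetical values invented purely for illustration.

```python
# A minimal sketch of risk = probability x consequence.
# All hazards, probabilities, and consequence scores are
# hypothetical values invented purely for illustration.

hazards = {
    # hazard: (probability of occurring in a year, consequence score 0-100)
    "minor flooding": (0.20, 10),
    "major flooding": (0.01, 90),
}

for name, (probability, consequence) in hazards.items():
    risk = probability * consequence
    print(f"{name}: risk = {probability} x {consequence} = {risk:.1f}")
```

Note that under this formula a frequent, low-consequence hazard (risk = 2.0) can outscore a rare, high-consequence one (risk = 0.9): one reason purely technical risk rankings can surprise intuition.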

So, what is a “hazard”? If “risk” is the likelihood, or probability, of harm occurring should something happen, “hazard” refers to the object, situation, or behaviour itself. For example, smoking cigarettes is a hazard; the likelihood of getting lung cancer as a result of smoking is the risk.

But the meaning of “hazard” expands when the technical expression of a risk meets alternative perceptions of risk in communication exchanges that include non-technical participants.

When risks are communicated in situations beyond the realm of technical domain experts, people’s relationship with, and responses to, them almost always encompass more than objective calculations.

As Peter Sandman puts it, in these contexts: risk = hazard + outrage (Sandman 1989).

Here “hazard” stands for both elements of the technical risk equation above, and “outrage” represents people’s perceptions of, and reactions to, the hazard should it actually manifest. Outrage need not have any relationship to the technical realities of the risk being realised (Sandman 2003).

For the purposes of this brief introduction to elements of risk communication, unless the technical distinction between risk, uncertainty, and hazard is critical to appreciating a specific concept, the word “risk” will be used.

2 Assumptions, Goals, and Context

It makes little sense to get involved in risk communication without having a reason for doing so. The most common reasons I have heard people say they want to “do” risk communication are:

  • To inform people or “raise awareness” of a risk,

  • To persuade people to “do something” about a risk, and/or,

  • To learn about a risk from others.

Regardless of motivation, all risk communication efforts will benefit from starting with an explicit examination of the assumptions that motivate the effort, the goals to be realised, and the context in which the efforts will operate.

For more than two decades, I have interacted with all manner of sciences and scientists. In almost every interaction, I am struck by how passionate these people are about their work and its value. In the context of risk communication, none are more fervent about this than climate scientists.

A composite example of many of my interactions with climate scientists offers a neat illustration of the importance of having a clear appreciation of assumptions, goals, and the centrality of context in risk communication.

  • Composite climate scientist: The public need to understand (more) climate science.

  • Me: OK, and why do you say that?

  • Composite climate scientist: Because the climate situation is looking really bleak, and we need people to do something about it!

2.1 Assumptions

We all make assumptions every day, often unwittingly, and usually without profound consequences. However, in risk communication (which I approach as a branch of science communication), acting under the influence of unconscious assumptions can be problematic.

In the example above, the composite climate scientist assumes that people do not “do something” about climate change because they do not know enough about it. This is not to impugn the scientist: it is easy to appreciate why they would assume knowing more about the dire situation would motivate action. Unfortunately, we know that merely increasing science knowledge does not guarantee people will act (Simis et al. 2016).

But how can we identify our implicit risk communication assumptions? One simple strategy is to check our language for phrases like “people should really…” or “what everyone needs to know is…”. Assertions like these can flag where we are making implicit assumptions about the effects our communication efforts will have. Once identified, these assumptions can be tested.

2.2 Goals

It is impossible to estimate the success or failure of a risk communication effort without first being clear about what you are aiming to do, and having ways to tell the extent to which you have done it. This needs to be done explicitly if you want to be confident about how far your communication efforts have succeeded.

Let us imagine here that communicating more climate science in fact does increase climate-positive behaviours. What exactly are these climate-positive behaviours, and how do we know if they have increased?

Goals such as “improving the climate” are noble, but ambiguous and seductive. It is easy to agree on such goals with like-minded people without exploring if they are realistic or measurable. Explicitly articulating goals and their indicators might not guarantee success, but it certainly helps clarify the task upfront and identify if there was any effect afterwards.

2.3 Context

An enormous number of potential contextual factors affect risk communication efforts. Appreciating the context in which you and your audiences will engage in risk communication activities is critical to maximising your likelihood of success.

For example, climate scientists know that phasing out coal is an essential part of climate change action. For them, phasing out coal as fast as possible is not just desirable, it is essential. But for people who rely on the coal industry for their income, the life-changing consequences of shutting down mining could represent a far greater, and more immediate, threat. Risk communication enterprises need to be tailored to the contexts in which they will be conducted, and the most influential elements in one context may be of little concern in another.

3 Perspective Is Everything

In the early 2000s, a UNESCO science advisor told me a story about a dietician working in Samoa to help address the country’s high incidence of obesity, diabetes, and heart disease. As in many countries around the world, whenever there is a celebration in Samoa, there is feasting. In this case, people were celebrating the opening of a new school. At the ceremony, there were tables covered in all manner of foods, including one that was stacked with cans of preserved meat. The canned meat had been a common staple in many Samoans’ diets for decades. It was also very high in fat and salt—key contributors to the diseases the dietician was there to help address.

When the dietician suggested that these meats were harmful and should be cut back, one local asked if she was suggesting they give up a traditional food. The dietician was surprised! The meat had been introduced some decades before by Anglo visitors to the country: not something an outsider might have thought of as “traditional”.

In this example, the risk the local saw (giving up a traditional food) outweighed the risk the dietician saw (the harms of an unhealthy food). Both positions were legitimate from each individual’s perspective, but the risks they saw were quite different. Years later I told this story in New Zealand, and afterwards a Samoan man told me that it was less that the canned meat was a traditional food than that it was a filling, affordable meal addition that would not spoil in villages with no refrigeration. For him, removing canned meats meant people might go hungry or be in danger of eating spoiled food: threats much more immediate than the diseases of obesity, and a third perspective on the risk issue.

In risk communication, perspectives matter, though they may not be immediately obvious to the various parties involved. When investigating the potential impact of perspective, here are four key questions to consider:

  1. Relevance—is this risk relevant to people’s day-to-day lives, and if so, how (and how do you know)?

  2. Pre-existing biases—do people in the intended audience already have a position on the risk, and if so, is this position (a) aligned with or opposed to our own, and (b) strongly held?

  3. Threat to status quo—would becoming (more) aware of the risk we want to communicate unacceptably jeopardise or threaten audience members’ existing beliefs, values, social systems, livelihoods, or lives?

  4. Ability to act—even if they accept and want to mitigate the risk being communicated, do they have the time, knowledge and resources to do so, and what trade-offs are required?

4 Choosing Between Risks

It would be impossible to gather and weigh up the evidence behind every choice we make each day. This is why we accept the instructions of doctors, mimic our friends, and watch the movies suggested by Netflix.

All of us unconsciously use shortcuts (or heuristics) daily when choosing between competing options. For example, many of us make a decision every morning before we leave the house: we look at a weather forecast and decide whether to take rain protection or risk leaving it behind.

Usually, weather forecasts (or “rain risk communications”) present the likelihood of rain as a percentage, such as “the chance of rain this morning is 25%”. But what does that mean when making a practical decision? Do you hang your washing outside if there is a 25% chance of rain? What about at 35%, or 60%?

According to Gigerenzer et al. (2005), we are likely to pay less attention to the specific percentage, and choose depending on how close it is to one of three values: 0%, 50%, or 100%. A 0% prediction leaves us confident that it will not rain, but so does a 10% or 20% prediction. Similarly, 80% or 90% is close enough to 100% that we are likely to see the risks as roughly equivalent.

When the prediction is closer to 50%, it becomes harder to decide, and people are also more likely to think the forecasters do not know what they are talking about.
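
To make this rounding heuristic concrete, here is a small Python sketch. It illustrates the rounding idea only, not Gigerenzer et al.’s actual model, and the function name perceived_chance is invented for this example.

```python
# Illustrative sketch only: snap a forecast percentage to the nearest
# of the three anchor values people tend to reason with.

def perceived_chance(forecast_pct: float) -> int:
    """Return the closest of 0%, 50%, or 100% to the forecast."""
    anchors = (0, 50, 100)
    return min(anchors, key=lambda a: abs(a - forecast_pct))

for pct in (10, 20, 45, 60, 80, 90):
    print(f"forecast {pct}% -> read as roughly {perceived_chance(pct)}%")
```

On this reading, a 60% forecast is treated as roughly a coin flip, while 80% or 90% is read as near-certain rain, matching the pattern described above.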

The mere presence of numbers can also affect our estimates about facts concerning our own lives, as this brief example of the phenomenon of anchoring and adjustment demonstrates (West and Meserve 2012). In this study, people were asked one of two questions: “How many headaches do you have a month—0, 1, 2—how many?” or “How many headaches do you have a month—5, 10, 15—how many?”

Estimates were routinely higher when the question prompt included larger numbers, even though those numbers were entirely arbitrary. This simple example elegantly reveals the influence of context on choice.

It is also common for people to judge how well a risk has been handled based on how things turned out after the risk has passed, rather than on what information was available to the decision-maker at the time they were determining what to do. This is referred to as ‘outcome bias’, and I used to see it happen regularly when I worked as a security guard (aka a ‘bouncer’) at a large concert venue in my undergraduate days.

The head bouncer had to estimate how “risky” the evening would be, prioritising the prevention of harm to concert-goers, band members, the building, and the staff, and roster on security staff accordingly. For the manager of the venue, a core priority was profit, so he would always look to cut costs.

If a concert finished and there was no overt violence or damage, the security team thought it had been a good night: security risks had been successfully managed. But on those same nights, the venue manager would often complain that the head bouncer had put on too many staff. For him, the fact that nothing went wrong was evidence that the head bouncer had mismanaged the “real” risk: spending more than necessary. The manager thought the lack of trouble meant there was a superfluous security presence: classic outcome bias.

Next, we turn to a simple set of risk-decision dichotomies from Fischhoff et al. (1981). Here the authors summarise findings from many studies and practitioner experiences into simple pairings of risk preferences: in essence, a summary of shortcuts.

This summary reveals that, in general, we judge risks befalling children as worse than those affecting adults. We favour risks that are voluntary over those imposed upon us by others. Risks from familiar causes seem less threatening than those from exotic sources, and risks that have little or no benefit to us are judged as worse than those with clear benefits.

It has been easy to see instances of Fischhoff et al.’s dichotomies at play during the COVID pandemic. For example, among people who prioritise “freedom of choice” above all, the perception that they may be “forced” into having a COVID vaccine involuntarily can render other facts about the vaccine irrelevant.

This example also provides an excellent case for demonstrating the “hazard plus outrage” interpretation of risk perception introduced earlier. Some of those who are resistant to having a COVID vaccination argue that the vaccine is unproven and could be dangerous. For them, the potential consequences of taking it represent an unacceptably high level of personal risk. However, a number of these same people also express strong resentment at being told what to do by government authorities in many aspects of their lives.

From a “hazard plus outrage” perspective, their expressed outrage about the hazards of the vaccine may well be driven by their stronger outrage at being compelled to take it. Here, the issue people are upset about (the threat of involuntary vaccination) and the hazard they overtly express outrage about (the possible dangers posed by the vaccine itself) are quite different.

The lesson here is that you cannot effectively engage in risk communication with people about the realities of a hazard (vaccine side effects) if you do not address their outrage (forced vaccination).

5 Values and Tribalism

No primer on communicating uncertainty and risk would be complete without noting the profound influence of shared values, sometimes called “tribalism”, on how we appreciate and relate to risk perception and communication. Kahan and his colleagues refer to this as “cultural cognition of risk”, which is grounded in “the tendency of individuals to form risk perceptions that are congenial to their values” (Kahan et al. 2011, p. 147).

This theory goes on to assert that humans “endorse whichever position reinforces their connection to others with whom they share important commitments” (Kahan 2010, p. 296). The implications of this for risk perception and communication can be profound.

Turning once more to the COVID vaccine example, the depictions of iconic anti-vaccination, anti-authority, libertarian American citizens that flooded our screens throughout 2020 epitomise the cultural cognition of risk idea. Viewed through this theoretical lens, their anti-vaccination position can be characterised as one that reinforces the anti-authority values of their group. A cultural cognition of risk perspective enhances our capacity to make sense of their outrage.

6 What Next?

This chapter is really a snapshot of a summary of an overview of the myriad factors that may be at play when making sense of, and attempting to communicate, risk and uncertainty. As such, the material presented here should be seen as a launchpad from which interested readers can explore the enormous body of related scholarly and practice-oriented literature.

With this in mind, perhaps the most important message to take away for now is this: the first step in successful risk communication is reflecting on your own position.

Explicitly and honestly articulating your motivations and exploring your assumptions within a risk communication context should help identify potential complications and illuminate fruitful ways to move forward.

7 Activities

7.1 Activity One

Write two opinion pieces (500–1000 words) on a single, controversial, science-based topic about which you hold strong, partisan views (e.g. climate change, vaccination, GM crops, AI). Piece one should align with your views; piece two should argue the opposite.

Purpose—to interrogate personal values and then actively consider why and how others might oppose them.

7.2 Activity Two

  1. Choose one controversial, science-based risk topic.

  2. Have everyone draw a mind-map of the issue and all its relevant aspects (see Morgan et al. 2002).

  3. Compare yours with other people’s.