
1 Teaser (In Lieu of an Introduction)

Think of a time 4,000 or 5,000 years ago. Here is a little anecdote, which is, of course, fictitious because nobody was there to observe it. Imagine three cavemen sitting in front of their cave, talking with each other about life. The first caveman says, “Look, I think we have a wonderful, safe life. If you look outside, we have clean air. There are no pollutants anywhere around, and we are in the fresh air all the time. We work outside, and it seems to be a very safe environment.” The second caveman replies: “Well, even more so, if you think about our water, it is all clean, fresh water. We take it directly from the springs in front of us. All clean and fresh!” The third person enters the conversation: “Well, and our food is all organic. We eat only food that Mother Nature has given us.” They continue to contemplate their life until, after a while, one of them scratches his head and remarks: “Well, there is only one question I have. Why, on average, do we get no older than 30 years?”

2 The Increase of Life Expectancy and the Reduction of Accidents

The conditions of human existence in terms of life expectancy and health have improved considerably, especially over the last 150 years. From 1950 to today, we have witnessed another dramatic increase. Life expectancy in Germany has risen over the last 30 years by around 12 years, and German newborns now have a life expectancy of around 79 years for males and 83 years for females [32]. That is unheard of in human history. Compared with life expectancies over the centuries, but also across different countries, this is a spectacular accomplishment. We have succeeded in making life safer, more secure, and much less dangerous than it used to be. In that sense, risk is a paradox. Life has become safer year after year, but, as many surveys reveal, our impression leads us to infer the opposite: most people believe that we face more risks to health and life today than during previous decades [21: 44f.]. However, measured by the usual risk indicators of premature death, health losses, and accidents and other hazards, there is a huge and very impressive record of success. This success is even more stunning in view of the following fact. If one asks how many Germans will die prematurely, deliberately defining “prematurely” as before the age of 60, the answer is that out of 10,000 people in Germany, 9,315 will reach their 60th birthday [21: 51]. That, again, is a very impressive number, and it is something that should not be taken for granted. Let me choose another country as a contrasting example. Zambia, for instance, is an interesting case because no civil war or anything comparable has blurred the statistics there. Out of 10,000 Zambians, only 4,300 will reach their 60th birthday; more than half will die prematurely. Hence, there are dramatic differences between countries. However, in nearly all OECD countries, risks to life and health have been significantly reduced.

When we talk about risks, we tend to forget these success stories. Take occupational accidents. In 1960, 4,893 work-related fatal accidents occurred in West Germany (East Germany not included). Now, including East Germany, the number is down to 420 as of 2018 [Footnote 1]. The number of people who actually die at work has thus been reduced by more than a factor of 10. This statistic also includes traffic accidents during work. Great progress! And again, other countries do not fare so well. In Brazil, for instance, which has three times as many inhabitants as Germany, around 5,000 people are killed at work annually [30], a number more than one order of magnitude larger than in Germany. Safety cannot be taken for granted. Many threshold countries in the phase of rapid industrialization face many more accidents than the OECD countries. Therefore, a very strong impetus is required to ensure that these countries achieve the institutional and organizational preconditions for reducing the number of work-related accidents and fatalities.

Another example is the dramatic reduction in fatal car accidents. In 1972, Germany experienced close to 22,000 fatal accidents. Nowadays, we are down to 3,059 as of 2019 [Footnote 2]. Furthermore, we now drive around 2.6 times as much as in 1972 [14: 106]. Taking the ratio of accidents per kilometre driven by car, the reduction amounts to a factor of about 16. These are all dramatic improvements.
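As a back-of-the-envelope check (a sketch using only the rounded figures quoted above, not the exact underlying statistics), the per-kilometre reduction factor can be decomposed as

$$
\frac{F_{1972}/K_{1972}}{F_{2019}/K_{2019}}
= \frac{F_{1972}}{F_{2019}} \cdot \frac{K_{2019}}{K_{1972}}
\approx \frac{22{,}000}{3{,}059} \cdot 2.6 \approx 19,
$$

where $F$ denotes fatal accidents and $K$ kilometres driven. With these rounded inputs the result lands in the same range as the factor of 16 given above; the exact factor depends on the precise underlying statistics.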

These examples all refer to conventional risks: risks that we can regulate within a specific regime, that can be contained in time and space, and that are linked to a specific sector, such as workplace or car accidents, technological incidents, or other safety failures. At least the wealthy countries have apparently been successful in developing public regulations and institutions that reduce the general risk so considerably that they still experience increasing life expectancies from one year to the next. It is a public prejudice that life expectancy is stabilizing or even decreasing. At some point, it will, but not yet. Hence, the perception that life is getting riskier every day does not fit the statistical reality.

3 Systemic Risks and the Risk Paradox

About 78% of the German population believes that life has become riskier over the last two decades [21: 24]. They believe that we face more threats and that life has become more dangerous. However, that does not fit the overall epidemiological results derived from reliable data sets by established statistical methods. I have called this discrepancy the risk paradox, but, at the same time, we also witness a phenomenon that we have framed as risk attenuation. This leads back to a theoretical concept that Roger Kasperson, Rob Goble, and others, including myself, developed in the late 1980s [10, 18, 26]. It claims that risk apprehensions are either amplified through the social processing of information, communication, and perception, or they are attenuated.

Either the magnitude and likelihood of a risk might be augmented or amplified, or, conversely, some risks that have the potential to do great harm may appear more or less attenuated. Such seemingly attenuated risks are neither visible in the public sphere nor frequently discussed in public debate. We refer to these risks as systemic risks [22].

Systemic risks have a couple of features that make them likely to be apprehended as attenuated. But before coming to these features, it is necessary to define systemic risks. Systemic risks have the potential to threaten the functionality of a vital system on which society relies. The services associated with such a system, for example energy supply or internet access, are crucial for people. These risks can still be assessed in terms of lives lost, health impacts, or impediments to wellbeing, but the focus on functionality provides a different perspective on what is at risk. These risks may pertain to crucial social services such as energy, water, health, food security, and education, or to technological services such as the internet and its cybersecurity. They have the potential to endanger, threaten, or even destroy the functionality of those systems in such a way that recovery, at least fast recovery, is not possible [13]. That is the first major aspect of the concept of systemic risk.

The word itself has been used frequently when referring to financial risk. During the financial crisis of 2008 and 2009, the term systemic risk was used because the chain of events acted like a row of dominoes (see the early definition by Kaufman and Scott [11]): if one falls, all the others collapse, and in the end, the whole system loses functionality. We know the financial system was close to collapse. So, when referring to systemic risks, we think of the potential that a crucial system may be threatened by an entire set of potential activities or events that could trigger dysfunctionality or even collapse [9].
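To make the domino metaphor concrete, the following minimal sketch (not taken from the literature cited here; the network size, connectivity, and failure threshold are purely illustrative assumptions) simulates a simple threshold cascade: each institution fails once a quarter of its counterparties have failed, and a single initial failure can bring down nearly the whole network.

```python
# Minimal threshold-cascade sketch of the "domino" character of systemic risk.
# All parameters are illustrative assumptions, not calibrated to any real system.
import random

random.seed(1)

N = 50            # number of institutions (hypothetical)
K = 4             # counterparties per institution (hypothetical)
THRESHOLD = 0.25  # an institution fails once 25% of its counterparties have failed

# Random counterparty network: each institution is exposed to K others.
neighbours = {i: random.sample([j for j in range(N) if j != i], K)
              for i in range(N)}

failed = {0}      # a single initial failure: the first domino
changed = True
while changed:    # propagate failures until the system settles
    changed = False
    for i in range(N):
        if i in failed:
            continue
        if sum(j in failed for j in neighbours[i]) / K >= THRESHOLD:
            failed.add(i)  # local failure spreads through the network
            changed = True

print(f"Initial failures: 1 of {N}; final failures: {len(failed)} of {N}")
```

The point of the sketch is only the qualitative pattern: below the threshold nothing happens, above it the loss of functionality becomes system-wide.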

4 The Characteristics of Systemic Risks

What is it that makes risks systemic as compared to conventional risks? First of all, these risks are very complex [14]. While “complexity” is a rather fashionable word that is frequently used imprecisely, the attribute has a clear meaning when describing systemic risks. Here, it does not merely mean that things are complicated. Rather, with regard to the relationship between triggers and consequences, between causes and effects, so many variables intervene in the causal chain that it is either impossible or at least extremely difficult to reconstruct a valid causal structure that allows one to discern the triggers, the consequences, and the impacts of these risks. Often, there is only a vague representation of all the relationships and interdependencies.

Complexity means we have a whole web of intervening factors that interact with each other, reinforce each other, and attenuate or amplify the given causal relationships. Very often, we can understand retrospectively what has happened; however, we cannot predict what will happen. This gives rise to large uncertainty, which is the second major characteristic of systemic risks [25]. It is not just that we need to consider the usual statistical confidence intervals; there are always probability distributions with confidence intervals once we enter the stochastic world. But with systemic risks, we enter the world of genuine uncertainty. In this world, identical causes may lead to different effects in different situations, even if the observer is cognizant of these situations and knows perfectly well in which way they differ. This feature is familiar from health physics, for example in the case of cancer: we know the overall distribution over time, but we cannot say which individual will be affected. Often, we are even uncertain about population risks, particularly if context conditions are changing. This kind of second-order uncertainty is typical of systemic risks.
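A minimal numerical sketch of this distinction (all rates and population sizes below are invented for illustration): with a known risk rate, only the identity of the affected individuals is uncertain; with second-order uncertainty, the rate itself is unknown, so even the expected number of cases varies widely.

```python
# Sketch of first-order vs. second-order uncertainty (illustrative numbers only).
import random

random.seed(7)
POPULATION = 10_000

# First-order (stochastic) uncertainty: the individual risk rate is known exactly.
known_rate = 0.01
cases = sum(random.random() < known_rate for _ in range(POPULATION))
print(f"Known rate 1.0%: {cases} cases; who is affected stays unpredictable")

# Second-order uncertainty: we only know the rate lies somewhere in [0.5%, 5%],
# e.g. because context conditions are changing.
for _ in range(3):
    rate = random.uniform(0.005, 0.05)  # hypothetical, context-dependent rate
    cases = sum(random.random() < rate for _ in range(POPULATION))
    print(f"Uncertain rate {rate:.1%}: {cases} cases")
```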

The third major characteristic of systemic risks, due to their complexity and their uncertainty, is that they trespass boundaries: national boundaries as well as sectoral boundaries [28]. A good example is the risk of mad cow disease, or more accurately variant Creutzfeldt–Jakob disease (vCJD; [36]). This is an example from the past, but it is mentioned here because the major risk was not the health threat but the risk to the institutions dealing with the threat. In all of Europe, only about 174 additional cases of the disease occurred, which is not a major threat considering that about 300 million people were exposed. Yet the event had many repercussions: ministers had to resign, and the UK suffered a major economic loss for its agricultural products. Due to the loss of trust, many agencies were remodelled, among them the European, German, and British food protection agencies. Obviously, one type of risk caused ripple effects from one sector to the next: from the health sector to the economic sector, from the economic sector to the political sector, and from the political sector to the institutional sector. Each time it extended into the next ripple, the perceived risk increased in intensity and impact.

For many of these systemic risks, we do not know what triggers them, and there might be tiny instances that trigger major impacts. That makes it difficult, for example, for regulatory bodies to anticipate them. Conventional risks, in comparison, are very clear: cars at high speed, for instance, can have accidents. Consequently, we make sure that the cars become technologically better and the drivers better trained. In the field of systemic risks, however, there may be impacts from a very different domain that spill over into another domain and create havoc there. Systemic risks transcend boundaries of jurisdiction, nationality, and sectoral responsibility. Consequently, it is extremely difficult to regulate such risks. Global risks in particular, such as Climate Change, worldwide water pollution, or agriculture and nutrition, cannot be confined to one sector, country, or legal domain.

The fourth characteristic, probably one of the most problematic in terms of human learning, refers to nonlinear cause–effect functions with thresholds or tipping points [24]. The tipping point problem is extremely difficult to handle because those who take risks get positive feedback for what they are doing up to a specific point. As soon as this point is reached, however, it is too late. We have seen this pattern evolve during the financial crisis. Everybody was very confident that they could handle the risks and go on forever. Everybody in the financial world was very much aware that they should not inflate virtual assets without any real value behind them. However, if everybody thinks, “I will be out before it collapses,” then the system is bound to collapse. In fact, even those people who felt very confident about being ahead of the financial lottery lost a lot of money. In the end, the governments had to bail out the financial sector and put a lot of taxpayers’ money into protecting the functionality of the banking system.

If we are confronted with nonlinear systems that have tipping points, we do not get enough feedback to learn when these thresholds have been reached. Once the thresholds have been surpassed, we may experience irreversible effects that are very difficult to undo. That is a situation for which our learning capacity is not well prepared, since we learn by trial and error [34]. However, with such risks, trial and error is not a good strategy: having reached the tipping point, it is too late to learn. Societies need to make changes before negative feedback arrives. That is one of the big challenges of dealing with systemic risks.
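The learning problem can be illustrated with a toy payoff function (the threshold value and payoffs below are invented for illustration): every small trial returns a better result than the last, so trial-and-error learners receive no warning before the collapse.

```python
# Toy model of trial-and-error learning against a hidden tipping point.
# The threshold and payoff values are illustrative assumptions.

def payoff(stress: float) -> float:
    """Payoff rises with risk-taking until a hidden threshold, then collapses."""
    TIPPING_POINT = 0.8      # hidden from the actors (hypothetical value)
    if stress < TIPPING_POINT:
        return stress        # positive feedback: more risk, more reward
    return -10.0             # abrupt, hard-to-reverse collapse

# Actors ratchet up risk-taking in small steps; feedback stays positive
# right up to the step that crosses the threshold.
for step in range(1, 11):
    stress = step / 10
    print(f"stress {stress:.1f}: payoff {payoff(stress):+.1f}")
```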

To sum up: systemic risks tend to be transboundary, and they are stochastic in nature, which means they do not follow deterministic cause–effect chains. They can occur under specific circumstances, but we do not know exactly what these circumstances are. These risks hide behind positive feedback to our activities for a long time, but if we continue to act in the same way, we reach a point of no return: the switch from positive to negative feedback happens almost instantly. Systemic risks are so complex that we feel overtaxed in understanding them. As a consequence of these features, we tend to go into denial. Most systemic risks tend to be apprehended as attenuated, even if we are fully aware of them, such as Climate Change, for instance [20].

Looking back at conventional risks, we can learn that awareness is not enough. Significant risk reduction also requires effective governmental regulation together with behavioural change. If awareness, collective rules and institutions, and behavioural adaptations proceed in line, one is able to reduce these conventional risks to a point where they are partially marginalized.

Interestingly enough, if we ask people what they are most concerned about, many of these marginal risks are mentioned, because we still have cultural memories of all the hazards and perils that endangered our grandmothers and grandfathers, mostly threats that are readily available in our minds [21: 193ff.]. The new type of systemic risks appears to be more distant, but in the end, they are much more dangerous for modern people than the conventional risks that we have largely mastered during the last decades. This is another paradox: not just the paradox between public risk awareness and the results of statistical analysis, but also the paradox that some of the risks that exert a strong impact on the functionality of our systems are likely to be perceived as attenuated, in spite of the fact that people know perfectly well about their real magnitude [27]. It is not an issue of knowledge. It may be an issue of apprehension to grasp the proportionality of these risks compared to conventional risks, but the mechanisms of systemic risk are widely known to many institutions and individuals. However, they tend not to take them seriously enough and fail to engage in serious efforts to reduce the risks to a degree that we would all feel comfortable with. The best example of this is Climate Change. Until 2018, CO2 emissions increased year after year (the only exception being 2009, after the financial crisis). In 2019, this spiral to the worse stopped, yet it is still too early to claim a break or even a shift in the overall trend. Global CO2 emissions from coal use declined by almost 200 million tonnes (Mt), or 1.3%, from 2018 levels, offsetting increases in emissions from oil and natural gas [Footnote 3]. The overall CO2 burden remained more or less the same as in 2018.

In spite of all the conferences, summits, and meetings that we have organized on Climate Change, we are not making progress here, at least at the global level. Considering renewable energy, one might argue that there is more renewable energy in the world than ever before. This is true, but if we look at the numbers, the increase is far from impressive: the global share of renewable energy has grown from 7% in 1998 to just 11% today [Footnote 4]. Given all the hype around renewable energy, increasing its share from 7 to 11 percent within 20 years is not dramatic. Compared to the risk reductions mentioned at the beginning of this article, i.e., traffic accidents, occupational health and safety, and technical accidents, this increase is modest, to say the least. Therefore, we need to raise the question: Why are we much more hesitant to reduce these systemic risks than to reduce the conventional risks, where we have experienced a lot of success?

5 Temporal and Spatial Connection—Issues of Risk Perceptions

Why is it that we are not so serious about systemic risks? That question leads to the psychological domain of risk perception. In 2010, more than 67% of the German population expressed concerns about genetically modified food [21: 90]. However, there are hardly any genetically modified organisms for sale in Germany; they are simply not on the market. Why do people think they are threatened by a risk to which they are not exposed?

I would like to give a little background on the perception of risk, i.e., how people intuitively assess and evaluate risks (more details in [21: 301ff., 23: 43ff.]). We should first be aware that individuals intuitively associate causation strongly with proximity in time and space. In terms of anthropology, this is very prudent: normally, if something happens to us, it makes sense to look for causes in the vicinity of where it happened. So, we ask ourselves what happened just before the event in our vicinity. If I eat something that contains a poisonous chemical, I will experience health problems within minutes or hours after consumption. For complex systems, that reasoning does not work. If an expert talks about Climate Change and states that “the exhaust gases of your car may have an impact on a flood in Bangladesh,” such a statement seems far-fetched. It is temporally and spatially not connected to what people experience. There is very strong doubt that these complex relationships have any plausibility. It is rather clear that many advocates of populist movements take advantage of the implausibility of complex relationships. They offer simple, seemingly plausible explanations. All kinds of conspiracy theories appear to be much more plausible than the complex web of Climate Change triggers. Denying the threat of Climate Change is fortunately not a powerful movement in all countries, but we can see that specific groups in society do not believe in Climate Change as something that is caused by human action [1]. And they gain momentum because the relationships appear to be so implausible. My little car should have an impact on a natural disaster in East Asia? If you trust scientists or the science behind the claims of Climate Change, trust can overcome counter-intuition, but if you do not trust them, you fall back on intuition. Systemic risks are complex by nature. Their causal structure defies the mechanisms of plausibility. That is the first reason for the likelihood of attenuation, or even denial, when it comes to complex, systemic risks.

6 The Stochastic Nondeterministic World

The second reason for attenuation refers to the experience of stochastic relationships. Systemic risks can hardly be characterized by deterministic relationships [14]. There are only a few “if A, then and only then B” causal connections between drivers and consequences in the context of systemic risks. The best we can do is to calculate the probability distribution over outcomes when the effects of one or several drivers are assessed. However, when scientists communicate these stochastic relationships, many people are confused. They think: “Oh, even the scientists do not know for sure. They are also ignorant about this complex issue.” Or, even more to the point, they say, “if the scientists are not certain, then I can just as well rely on my intuition.” Unfortunately, much of this knowledge relativism is allegedly supported by the social science concept of social constructivism, i.e., the belief that all knowledge is a product of social communication and exchange and not of observing external cues from nature or society [6]. The confusion about what truth means and how scientific claims are substantiated has given rise to a sense of insecurity and irritation: “If the scientists do not know for sure, then we are free to take whatever truth claim fits our interest.” And soon, society ends up in the post-factual era [15: 128ff.]. People go out and bluntly lie about factual relationships, because nobody can distinguish anymore what is truth, what is a lie, and what is an error. In extreme cases, people take all their prejudices as valid truth claims.

We may complain about this post-factual abuse of truth claims, but there is no way back to the conventional scientific concept of determinism [23: 30]. Scientists have learned that there is much more complexity and stochasticity in the world than we had previously assumed. However, I think we have failed to make these new visions of the world better understood by the public at large. Truth claims from science are far from arbitrary or representations of wishful thinking; rather, they reflect the complexity of the phenomena that we want to understand better. These phenomena can be characterized and described much more accurately by stochastic models than by deterministic relationships.

Furthermore, stochastic modelling is also a reason for people to attenuate the seriousness of a risk. If we do not have certainty that all these bad consequences will happen, we take an optimistic view and assume that they will not happen. If one observes some of the debates on Climate Change in the United States, one is confronted with many statements saying, “if the scientists are not 100 percent sure about the anthropogenic nature of Climate Change, I do not believe it.” In a stochastic world, we will never be 100 percent sure. It is inherently impossible. This basic message is not easy to convey to a society that has been educated to believe in deterministic natural laws. And as pointed out before, this scepticism towards stochastic reasoning leads to attenuation in the apprehension of risks.

The third element lies in trust. I first mentioned the post-intuition world, then the post-truth world; now I turn to the post-trust world. The post-trust world sheds some light on the relationship between science and the wider public [23: 73ff.]. Most of the threats that we envision and face do not come from our personal experience. Most modern hazards, such as ionizing or non-ionizing radiation, the destruction of the ozone layer by CFCs, Climate Change caused by greenhouse gases, and health threats caused by mixtures of chemicals, cannot be seen by our eyes or perceived through our own senses or personal experience. None of us has seen the ozone hole above us; when eating, we do not know whether the beef in the food contains prions, as prions cannot be detected by taste. This list of examples can be extended almost endlessly. Take the debate about the herbicide glyphosate (Roundup®). Is it carcinogenic or not? Except for the toxicologists, nobody has any idea. In that sense, we are all relying on second-hand information. That is something that is psychologically difficult to deal with. If nobody has a proper way of proving who is right or wrong, then we all need to rely on trust. If we lose trust, we go back to intuition. And then we are again in the vicious cycle of what appears plausible. But let us stick to the topic of trust. There are three major routes by which we can resolve the issue of trust [3, 19].

The first route is that someone has confidence in a reference group, say scientists, trusting that they will tell him or her the truth. Under this condition, the individual will adopt whatever they say, assuming that “they know better than me.” Interestingly enough, if we look at the statistical evidence, the group of people who are loyal to a reference group is dramatically decreasing [23: 81]. That is true for almost all sectors of society, as we can observe from recent voting behaviour in Europe and elsewhere. Established parties that had millions of devoted voters behind them lost the support of their followers almost overnight. The unattached voter now dominates the political scenery.

Scientists still belong to a category of people that receives the best grades on trustworthiness in almost all surveys in Europe, Japan, and the United States [4]. However, if a scientist works not in a university but in a factory or in a lab for genetically modified organisms, trust declines dramatically [35]. Overall, loyalty towards the reference groups that used to dominate the trust landscape in Germany and in most OECD countries is declining. So, what do people do when they have lost trust in their previously preferred reference groups? They have two choices. The first possibility is to say, “I trust nobody.” That means that whatever experts or others may say, they are likely to be in error or to be lying; all statements are allegedly driven by interest. In this case, people demand zero risk [23: 81]: “Since I do not trust anybody, I would rather leave everything as it is now. No desire for change or innovation!” People in this camp develop and maintain a structurally conservative attitude that tends to glorify the past and be sceptical about the future. Again, we can see that populists from the right take advantage of this structural conservatism and promise to bring the golden days back to the people.

Then we have the third route, which is pursued by the majority of people. We call this strategy “vagabond trust”. Because people cannot evaluate the validity of arguments, they look for peripheral cues to assign credibility. Take as an example the usual talk shows aired on German TV. In most talk shows, there are four participants in addition to the host [23: 73ff.]. One defends the activity that is planned or given, for example the use of glyphosate for weed control. So, the industry spokesperson is going to say, “Glyphosate is safe and does not cause cancer. We have tested all of this.” Then there is the opponent; this might be a spokesperson from Greenpeace saying, “This is the worst thing that we have ever used on our land. All the bees are killed, and of course, many citizens get cancer.” The third participant comes from a regulatory agency, in this case the Federal Institute for Risk Assessment: “It is all a question of dose, and we regulate exposure so that the critical dose is never reached.” Then we have a fourth person, normally an actor or an actress, representing common sense and usually saying something like, “I did not know that it was all that bad!” This is the typical composition of a talk show in Germany. If the audience is asked after the show about the arguments exchanged by the participants, most people are unable to remember any of them. But they can tell whom they found trustworthy and whom they tend to believe. So, somebody from the audience might say: “I liked the lady from Greenpeace the most. First, she was very alert and attentive. Secondly, she had this elegant way of articulating herself, and I appreciated that she really had good answers all the time. I do not remember what they were, but they sounded good. I think she is right.” People tend to judge the truthfulness of statements by peripheral cues of credibility [2, 16]. Such judgments bear no real relationship to what might scientifically be true or false; they are driven by the impression that viewers associate with each participant. Needless to say, such cues are also connected to the plausibility of what is being said. And again, we are back to the problem of intuition versus complex knowledge.

However, the vagabond assignment of trust has another problematic consequence. In the first week, the spokesperson from Greenpeace may be the person who gets the most trust credits, but a week later, this may shift towards another participant, maybe the representative from industry or the regulatory agency. Then people reconsider what they thought was right or wrong and might change their judgment. Changing judgements is not pleasant; psychologists call this the pain of cognitive dissonance [5]. Most people get very angry if that happens to them, and out of frustration and insecurity about what is right and what is wrong, they tend to develop a feeling of anxiety and sometimes aggression.

7 Uncertainty and Insecurity

Thus, people change: they trust first this person, then the next week another person, and so on. More and more, they get nervous about this and feel increasingly insecure about the issue. Insecurity leads to heightened risk perception. The more insecure people feel about the severity of a risk, the higher they will rate it compared to risks that are more familiar to them. Thinking and re-thinking about threats and being torn between competing cues affects risk ratings [8]. First, you get annoyed; secondly, in order to get over this cognitive dissonance, you start to see the risk as more pronounced than you would have if you had either delegated the judgment, regardless to whom, or had factual insight into the argumentation. That has major impacts, for example, on the perception of crime. Individuals who have the least experience with crime tend to have the highest anxiety about crime, because they rely on the contesting testimonials of crime commentators on TV, other media, or social media [23: 118f.]. The same is true for refugees [31]: in areas with the lowest percentage of refugees, we observe the highest anxiety about refugees committing crimes. This mechanism of vagabond trust, in which trust becomes a currency that is changed and exchanged from time to time, heightens the anxiety and the preoccupation with that specific risk. In the end, it may lead to high attention to some of the rather well-managed conventional risks (which may still raise controversies) and leave no room for dealing with the complex systemic risks that are less attractive for TV talk shows.

8 Cognitive Dissonance in a Post-communication Environment

The last cause of the attenuated apprehension of systemic risks is related to the topic of post-communication. This does not mean people have ceased to communicate; they communicate more than ever, but in a different form. Now we are in the domain of media communication, specifically of social media [12]. Special attention should be given to virtual spaces in which people exchange their views and ideas. These virtual spaces are optimal opportunities to avoid cognitive dissonance. That is less prevalent in Germany than, for example, in the United States, but the appearance of so-called echo chambers is a serious problem [15: 96]. In these echo chambers, people want affirmation and confirmation of what they already believe. When we engage in physical communication or use conventional media such as newspapers, we are always confronted with judgments and opinions that differ from our own positions. Under these conditions, we are more or less forced to reconsider our own position. Cognitive dissonance is a driver of learning. If individuals avoid cognitive dissonance, first, they no longer learn, and secondly, they believe that anybody who shares their opinion is their friend and anybody who disagrees is their enemy, with nothing in between. This is very prevalent in social media, where users can get really upset if someone says something opposite to what they believe. The structure of social media facilitates this kind of avoidance of cognitive dissonance. It creates polarization [7].

A couple of years ago, we conducted research on two focus groups at the same time and in the same location. One group assembled individuals who strongly believed that an expansion of mobile communication would be dangerous for their health. The second group was convinced that there is a need for more powerful infrastructure for mobile telephony. The two groups met separately in two different rooms, and I was commuting between them. At one point, I heard a person from the first group saying: “Well, if you go to Google, you get immediate proof that electromagnetic fields are very dangerous for your health.” When I entered the other room, I heard somebody saying, “When we go to Google, they say there is no problem.” So, we asked both groups to convene in one room, took one laptop from each group, and asked the owners to type in: “What are the health risks of electromagnetic fields?” The first group started the Google search and got as their first hit a paper entitled “Even cows get cancer from electromagnetic fields,” a Bavarian study conducted several years earlier that reported on cows near transmission lines. The second group entered the same question into the Google search engine. The top result of that search was a paper entitled “WHO foresees no problem with cancer when expanding networks based on electromagnetic fields.”

What happened? Very clearly, both groups had set the search engine’s learning mode in motion when conducting their searches. The first time they consulted Google, they looked for something that confirmed their view, and they did this many times. Over half a year, Google learned what they liked to hear and made sure that entries with negative comments on mobile telephony were placed among the top five of the search list in the first group and, vice versa, that the most positive entries came out on top in the second group. Most people do not go further than the first three entries, and those confirmed what each group had already believed. Since, over time, the Google search produced more and more confirming statements, the users were left with the impression that slowly but surely the world had learned that they were right in the first place. However, this was true for both groups. Having no experience of cognitive dissonance, the only conclusion for both groups was that non-believers are either dumb, bribed, or cynical. And if someone is bribed, dumb, or cynical, you do not have to talk to that person anymore; communication is considered a waste of time. Polarization will take place, and we can see this right now in the United States between the adherents of the two major parties. They do not see any need for further conversation, deliberation, or negotiation. There is only right or wrong, black or white. This tendency is a real danger for democracy.

What does post-communication tell us about systemic risks? These risks cannot be adequately described by a polarization into right and wrong. The stochastic nature of the issue, the nonlinear features of the causal effects, and the complex structure require shades of grey between the two extremes. In those countries where polarization has evolved strongly during the last decades, the governance of systemic risks has led to a paralysis of the political regulatory system, since there is no way to compromise in a polarized world [27]. Climate Change advocates and Climate Change deniers stand irreconcilably against each other and make Climate Change an issue of almost religious belief. The new Fridays for Future movement also tends to use science as an ultimate stronghold against the inactivity of politics and economics. Those who believe in Climate Change blame others for not doing anything, while those who do not believe have no reason to adjust. Non-action is the consequence. Again, the apprehension of the risk tends to be attenuated rather than amplified, even if verbally the fight for Climate Change protection has increased in intensity.

9 What Have We Learned About Systemic Risks: A Summary

Given these effects, let me conclude in a few words what I have tried to point out in this article. First, it is helpful for the discussion on risk governance and risk management to distinguish between conventional and systemic risks. Conventional risks can be contained in time and space, are fairly easy to assess using scientific methods, and can be managed by introducing effective interventions at the right places in the known cause–effect chain. So far, we have been extremely successful in dealing with conventional risks in most of the OECD countries (the picture is quite different in many threshold and developing countries). These conventional risks need to be distinguished from systemic risks, which are characterized by complex relationships within the cause–effect chain as well as in their interaction with external systems. They follow stochastic rather than deterministic patterns, include sudden tipping points, and tend to transcend traditional geographic, political, or sectoral boundaries. In this field of risk, all our management and governance approaches are less successful. It is also less obvious what science can do to assist risk managers and policymakers in reducing systemic risks. One major obstacle to bridging the gap between the acknowledgment of systemic risks as a serious challenge and the lack of actions required to deal with these risks effectively is the likelihood of attenuated risk apprehension in the public discourse. This is due to specific heuristics of how most people perceive these risks.

In this article, I identified four major reasons that may trigger the likelihood of attenuation. The first reason is that most complex systemic risks run counter to our intuition that serious dangers are caused by factors close in space and time. Anything that appears “far-fetched” is also seen as less plausible and less obvious than risks whose drivers we can immediately observe in our own neighbourhood. Secondly, science cannot provide deterministic and unambiguous models of systemic risks. Although such models are far from arbitrary, people tend to withdraw trust and credibility from information that is associated with uncertainty and ambiguity. Public perception often oscillates between the belief in determinism on the one hand, which is scientifically problematic, and arbitrariness on the other, which is far from what science can indeed offer. The third reason refers to the need to trust in scientific assessments even if they are not plausible, visible, or reconfirmed by personal experience. Relying on information that only others can provide and that we cannot prove right or wrong creates a lot of tension. Distrust in science is still not widespread but clearly increasing. Furthermore, as soon as scientific dissent is openly recognized, most people refer to so-called peripheral cues to assign trustworthiness or credibility. Since these cues change over time and are often contradictory, people feel irritated and frustrated and usually prefer inaction rather than risk doing the wrong or inappropriate thing. Fourth, this confusion is reinforced by the new communication tools of the IT world, in which everything that we believe, every prejudice we have, finds support in social media and assembles enough followers to confirm whatever we believe is true. As a consequence, knowledge camps become polarized, and the differentiated approaches that are crucial for dealing with systemic risks become marginalized.

10 What Can We Do?

Last but not least, I want to address the question: What can scientists and science institutions do to deal with systemic risks and their attenuation in public perception? As I pointed out, our usual learning mode of trial and error is totally inapt for dealing with nonlinear cause–effect chains with sudden tipping points. However, trial and error as a heuristic is deeply engrained in our learning process [34]. So, we need to create a virtual environment in which we can simulate trial and error. If the virtual error occurs, people can experience what it means to cross these tipping points. Fortunately, these negative experiences are then only simulations, not real events. But they can sensitize people not to wait for negative feedback before changing behaviour and lifestyles. This method of virtual preparation, relying on anticipation rather than trial and error, is effective only when the simulations are framed in the form of a plausible, easy-to-grasp, and credible narrative. It has to be so convincing that people conclude, “Oh, if that is really happening, I had better change now before this kind of disaster approaches.” The simulations need not only to be scientifically well designed; they also need to be so well visualized that people feel as if they were real. This task is not trivial and requires a joint effort of excellent modellers, natural scientists, social scientists, communication specialists, and psychologists. It may even be wise to include professional writers and science fiction authors.

Beyond encouraging anticipation, it is crucial to include people more in collective decision-making. Once they are engaged in making decisions collectively for their community, they are much more willing and determined to learn about the complexities in which they operate [33]. If they sit around a regulars’ table in a pub, they will not care much about facts and complexities; they know what is right for the world, and nobody can make them change their opinion. However, if they are invited to join a Round Table with other citizens, the situation transforms dramatically. People become aware that their opinion and their judgment will have an impact on the wellbeing of the community in which they live. They feel more accountable for the preferences that they articulate [23: 165]. We have accumulated good evidence that people in situations of collective decision-making are, first, more willing to look into complex relationships and deal prudently with uncertainties and ambiguities. Secondly, they are willing to resolve conflict by looking into the trade-offs between different options and to consider the consequences not only for themselves but also for others, who ideally are all represented at the Round Table. For this to happen, we need excellent opportunities and open spaces that provide such a catalytic service to communities. Social scientists are capable of investigating and designing the appropriate institutional structures and processes in which people are encouraged to develop these civic virtues of evidence-informed and value-based collective decision-making.

The last point that I would like to raise may be more contested than the two I just elaborated on. The recent development in the sociology of science and knowledge towards a postmodern understanding of science as one narrative among others does society a disservice, in my eyes [15: 123]. My main argument is that all our efforts to explicitly mention and characterize uncertainty, to stress the stochastic nature of what we know, and to point out the various ambiguities in the interpretation of complex relationships help society to get a more accurate and more truthful representation of what we frame as reality than gut feelings or intuition can provide. We should make it very clear that through the sciences we are able to set boundaries of legitimate knowledge [29]. To step outside these boundaries means accepting knowledge claims that are absurd, without any evidence, or mere wishful thinking. That is where scientists are really needed, because normal intuition is not a good guide for inferences about complex systems. Scientists should be encouraged to make these boundaries more visible and pronounced in public discourse. “Anything goes” is not an adequate response to complex challenges, and even less so for dealing with complex risks. It is also true that to resolve complex problems we cannot rely only on systematic scientific knowledge; we also need experiential or tacit knowledge. But without scientific knowledge and its rigorous methodological approach, we are likely to fall prey to “comfortable” illusions or to the manifestations of special interests and value camps. We need science as a watchdog over what we really know about a phenomenon and the relationships between phenomena. Only on that premise can science meet its role as an honest broker in societal discourse [17]. If we talk about complex systems and their impacts, nothing is better than a very good, rigorous scientific analysis. We should be proud of what science has offered and still can offer to society. Science is not the only actor, but it is an indispensable actor when it comes to the identification, analysis, and governance of systemic risks.
