
1 Introduction

In Chap. 9, it was highlighted how various behavioural conditions can affect how effectively, or not, individuals and groups make decisions. Subjectivity was seen to override objectivity, undermining the efficacy of decision-making as people fall back on biases when confronted with complex situations. In this chapter, we shall examine a variety of approaches for challenging the worst of the behaviour governed by these traits and mitigating its impact.

2 Counteracting Biases

It is extremely difficult, if not nigh on impossible, to completely eradicate biased behaviour in ourselves and in others. The main defence is for individuals, analysts, and decision-makers to be, at least, aware of how cognitive bias can influence decision-making—identification can help mitigate the worst effects.

Pherson (2019), a former CIA intelligence analyst, confirms that engrained mind-sets are a major contributor to analytic failures. Although the problem is recognised, past experience shows that analytic traps and mind-sets are easy to form but surprisingly difficult to change. There are a myriad of reasons why mind-sets are difficult to dislodge. Most often, time pressures lead analysts to jump to conclusions and to head down the wrong path. As more information becomes available, analysts are increasingly inclined to select that which supports their lead hypothesis and to ignore or reject information that is inconsistent. Contradictory information becomes lost in the noise.

Kahneman et al. (1982) suggest there are three questions to ask in order to reduce the impact of cognitive biases when making decisions:

  1. Is there any reason to suspect the people making the recommendation of biases based on self-interest, overconfidence, or attachment to past experiences?

  2. Have the people making the recommendation become overcommitted to it, such that failing to follow it up would cause them some discomfort?

  3. Was there groupthink, or were there dissenting opinions within the decision-making team?

Taylor (2013) identifies four practical steps to mitigate such cognitive bias:

  1. Awareness that such biases exist and influence decision-making. Such awareness acts as an initial buffer when faced with behaviours such as groupthink, silo thinking, and hubris. Self-reflection is key here.

  2. Collaboration can help mitigate cognitive biases, as one can observe biased behaviour more easily in others than in oneself. The self-awareness identified in 1 above can be enhanced by such external observations of others.

  3. Continuous and iterative inquiry is vital if one is to challenge perceptions and judgements that can be tainted by cognitive biases.

  4. Though brainstorming-type activities are useful introductory techniques, they can hide the presence of biases, especially where a dominant member pushes their particular agenda. More structured frameworks and processes help increase the identification of cognitive biases before they are internalised into the decision-making activity.

What Pherson calls “structured analytic techniques” can help decision-makers and analysts avoid, or at least mitigate, many of these biases, helping them to:

  • Reduce error rates

  • Avoid intelligence and other analytic failures

  • Embrace more collaborative work practices

  • Increase accountability

  • Make the analysis more transparent to other analysts and decision-makers.

All the above approaches are valid, yet in so many instances humans remain content to be cocooned within their entrenched biases and established thought processes. If individuals, groups, and organisations are unwilling to examine their thought processes and value systems consistently, then there is little hope that behavioural change can take place, and old habits will continue to contaminate objective decision analysis and decision-making.

Pherson believes diagnostic and reframing techniques can help mitigate the worst of this behaviour, saying that experience shows how difficult it is to overcome the tendency to reach premature closure and embrace “groupthink”, and to avoid analytic traps. Overcoming mind-sets relies on employing structured forcing mechanisms that require analysts to seek out new perspectives and possibilities. Without the use of structured analytic techniques, analysts are less likely to identify and challenge key assumptions, think critically about the evidence, reframe analysis, and, most importantly, avoid surprise. The techniques also impose a greater degree of transparency, consistency, and accountability. They work most robustly with a diverse set of participants bringing a variety of perspectives to the table.

Diagnostic techniques include:

  • Key assumptions check: Makes explicit and questions the assumptions that guide an analyst’s interpretation of evidence and the reasoning underlying any particular judgement or conclusion.

  • Multiple hypothesis generation: Generates multiple alternatives for explaining an issue, activity, or event. It is done in a variety of ways, ranging from a form of structured brainstorming to the development of complex permutation trees.

  • Diagnostic reasoning: Applies hypothesis testing to the evaluation of significant new information in the context of all plausible explanations. It forces analysts to challenge their existing mental mind-sets.

  • Analysis of competing hypotheses (ACH): Applies Karl Popper’s theory of science to intelligence analysis. It involves weighting the available information against a set of alternative explanations and selecting the explanation that fits best by focusing on the information that tends to disconfirm the other explanations. Note: Chap. 8 introduces ACH as a key MTT in more detail.

  • Inconsistencies finder: Uses a simplified version of ACH that evaluates the relative credibility of a set of hypotheses based on the amount of disconfirming information that has been identified (a minimal scoring sketch follows this list).

  • Deception detection: Employs a set of checklists analysts can use to determine when to anticipate deception, how to recognise its actual presence (including fake news), and what to do to avoid being deceived.

  • Chronologies and timelines: Organises data on events or actions when it is important to understand the timing and sequence of relevant events or identify key gaps.
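
To make the underlying logic of ACH and the inconsistencies finder concrete, the sketch below scores a small evidence-versus-hypothesis matrix by counting disconfirming entries. It is a minimal illustration only: the hypotheses, evidence items, ratings, and weights are invented for demonstration and are not drawn from Pherson’s own material.

```python
# Minimal sketch of an ACH-style "inconsistencies finder": each piece of
# evidence is rated against each hypothesis, and hypotheses are ranked by
# how much evidence is inconsistent with them. All names, ratings, and
# weights below are hypothetical illustrations.

# Ratings: "C" consistent, "NA" not applicable, "I" inconsistent, "II" strongly inconsistent
EVIDENCE = {
    "Increased logistics activity":  {"H1: exercise": "C",  "H2: attack": "C"},
    "No mobilisation of reserves":   {"H1: exercise": "C",  "H2: attack": "I"},
    "Leadership travel cancelled":   {"H1: exercise": "I",  "H2: attack": "C"},
    "Hospital beds being cleared":   {"H1: exercise": "II", "H2: attack": "C"},
}

WEIGHTS = {"C": 0, "NA": 0, "I": 1, "II": 2}  # only disconfirming evidence counts

def inconsistency_scores(evidence):
    """Sum the disconfirmation weight of every evidence item for each hypothesis."""
    scores = {}
    for ratings in evidence.values():
        for hypothesis, rating in ratings.items():
            scores[hypothesis] = scores.get(hypothesis, 0) + WEIGHTS[rating]
    return scores

if __name__ == "__main__":
    # The hypothesis with the LEAST disconfirming evidence fits best.
    for hypothesis, score in sorted(inconsistency_scores(EVIDENCE).items(), key=lambda kv: kv[1]):
        print(f"{hypothesis}: inconsistency score {score}")
```

Ranking hypotheses by how much evidence disconfirms them, rather than by how much evidence appears to support a favourite, is precisely what counters the confirmation bias described earlier.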

Intelligence failures surrounding the 9/11 attacks and the erroneous analysis overstating Iraq’s weapons of mass destruction forced US intelligence agencies to focus on alternative forms of analysis that reduced the impact of cognitive biases via the use of “reframing techniques”, including:

  • Outside-in thinking: Focuses on the broader forces that can influence an issue of concern.

  • Structured analogies: Applies analytic rigour to reasoning by analogy.

  • High impact/low probability analysis: Warns a decision-maker of the possibility a low probability event may happen even if the evidential base for making such a conclusion is weak.

  • What if? Analysis: Alerts a decision-maker to an event that could happen, or could be happening, even if it may seem unlikely at the time.

  • Classic quadrant crunching: Uses key assumptions and their opposites as a starting point for systematically identifying and considering all possible relationships in a multidimensional, highly complex, and usually non-quantifiable problem space.

  • Pre-mortem analysis: Reduces the risk of analytic failure by identifying and analysing a potential failure before it occurs.

  • Structured self-critique: Employs a checklist process to review all the possible ways an analysis could turn out to be incorrect.

  • Red hat analysis: Marshals the expertise, culture, and analytic skills required for a team to explore how an adversary or competitor would think about an issue.

The main issue here is, of course, whether policy-makers are willing to avail themselves of such techniques rather than fall into the trap of hubris, whereby alternative approaches are dismissed as detrimental to more ideological forms of policy development.

The diagnostic and reframing techniques described above provide a systematic and rigorous check for analysts to assure themselves that their assessment about “what is” is as accurate as possible. They are designed to uncover untested assumptions, examine alternative explanations and perspectives, and uncover hidden analytic traps. Armed with such indicators, the analyst can warn policy-makers and decision-makers of possible futures and alert them in advance, based on the evidence. Turning such messages into action is of course another issue.

None of these diagnostic or reframing techniques guarantee that all unforeseen events will be anticipated. Intelligence surprises are inevitable, but the use of these techniques will bring greater rigour to the analysis and reduce the chances of surprise. More important is that such techniques are applied on a regular, if not continuing, basis and are integrated into operational activity. If analysts continually test, probe, and indeed attack their assumptions and mind-sets, they will be more capable of knowing what they know and discovering what they did not realise they did not know. The use of these techniques helps analysts anticipate what might occur in the future and better prepare themselves to track developments that presage dramatic change. In the end, decision-makers will benefit from the more thoughtful, comprehensive analysis that results from employing these techniques (Pherson & Pyrik, 2018).

More recently, following the polarising impact of the Trump era and issues such as Brexit, Pherson (2021) turned his attention to how such polarisation could be addressed. Such polarisation is itself a manifestation of ingrained cognitive biases and cognitive dissonance amongst both individuals and groups. He encourages the process of “constructive dialogues”, which includes:

  • Spending more time talking to each other—not arguing with each other. The focus when we speak should be to inform, not persuade. He continues saying:

A good way to start a conversation is to ask where someone gets their information. If it is a different set of sources than yours then consider this a great opportunity to learn what data they are relying on to form their opinions. Later you can reflect on whether that data is valid. If it can be challenged, then send them reports or information that points out the factual errors in their data or the faults in their judgment that they can read privately without feeling challenged.

  • Stop arguing about “facts” and reframe discussions around positive narratives. Focus attention and energy on the future and on listening for or seeking out positive solutions.

  • Let the parties concerned be aware that cognitive bias is extremely powerful and that mind-sets are extraordinarily hard to change.

  • Establish an authoritative set of objective standards for what is appropriate and inappropriate to post on social media. This, however, may require considerable heavy lifting when it comes to lobbying various institutions and vested interests.

  • Craft your own positive personal narrative of what needs to be done to make things better. Identify who needs to be engaged and what resources are required to make it happen. Pherson adds that you should “Join and/or build a network connecting you with others who want to promote constructive narratives and forge fair and balanced solutions. Make sure your group is inclusive of all views on the topic. Once your ‘team’ has agreed on a preferred, consensus outcome, construct an action plan and generate some indicators to track your progress”.

In the previous chapter, we referred to how expert opinion can also be prone to bias (Tetlock, 2005). A recent academic paper entitled “Expert biases in technology foresight. Why they are a problem and how to mitigate them” by Bonaccorsi, Apreda, and Fantoni (2020) states that it is extremely difficult to “formulate foresight in new technologies by relying exclusively on quantitative methods, without the support of human experts …”.

They continue:

It is common knowledge in the technology foresight literature that human experts are subject to a number of biases and distortions in their judgments. It can be said that the impressive development of methodologies in the last half century is an effort to mitigate these distortions, particularly with Delphi techniques and their variants.

They go on to propose a number of newly developed techniques which are more promising for addressing the limitations of experts. However, they also observe that only a few studies have explored the role of cognitive biases, recognising that Delphi techniques may mitigate some biases, such as overconfidence, but not all.

A number of mitigation approaches are highlighted, namely:

Mitigation by diversity—by enlarging the perspective of individual experts and combining their opinions with those of non-experts, the aim is that the increased diversity might mitigate cognitive biases and prevent the discussion being dominated by one or a few individuals.

Mitigation by negation—this encourages experts to systematically consider an opposite view or counter-argument. In this way, framing and anchoring biases can be mitigated.

Mitigation by abstraction—it is argued here that the reasoning of experts can be deeply embedded in their specific domain knowledge. Bonaccorsi et al. state that as a result experts “are less cognitively loaded when they reason in terms of domain knowledge, that is, in terms of known solutions to problems. On the contrary, it is very demanding to keep the reasoning active for several hours in an abstract space, in which, to make an example, drawings or calculations are not concretely available. Therefore what is needed is a strategy to alleviate the cognitive load of abstraction, helping experts to keep in their mind several, possibly conflicting, high level technological options, while exploring all potential implications” (Bonaccorsi et al., 2020).

In line with Pherson’s view, Bonaccorsi et al. also see post-mortem exercises as a useful format for identifying biases, which can in turn help establish which methods were most effective in reducing them. Their final call is for more research on cognitive biases to be carried out in relation to expert opinion.

3 Digital Disinformation, Media Literacy, and Fact-checking

We have seen in the chapter on the evidence base (Chap. 5) how the latest and increasing trends in the dissemination of “fake news” are a clear and present danger to rational argument and balanced objectivity. Wardle and Derakhshan (2017) identified that the purveyors of disinformation tap into our biases, conscious or otherwise, and our deep-seated fears. Truth therefore needs to be more resonant if it is not to be drowned out. To reiterate what was said in Chap. 7, if such false information is to be challenged, then our brains need to replace the falsehood with an alternative narrative. It would appear that much greater resources, neutrally funded, need to be made available to fact-checking organisations, since, as has been identified earlier, “fact-checking” costs money whereas lies are cheap. The cost of mounting a “counter-insurgency campaign” against the increasing hegemony of fake news will be a high one.

The challenge for those individuals, groups, organisations and even nations wishing to maintain and secure the veracity of their evidence bases will be to continually seek out and deploy technology-driven strategies that will counteract “bad actors”—a complex, daunting, and, probably, never-ending task.

“Fake news” is not a new phenomenon—false propaganda has been around for centuries, albeit in different guises. Wherever there is diversity of opinion, biased opinion, based on questionable sources of information, can prevail—especially when the means of communication are tightly controlled by governments and/or powerful vested interests.

Although the current and growing spate of disinformation has relied heavily on the application of technology to media-based dissemination, those same groups of technologies can also be deployed to challenge such threats and increasingly identify fake news. The challenge is to ensure that such counter platforms have a voice which is louder than the “bad actors”.

Defense One (2019), an online news platform specialising in national security issues, recently stated that:

Thanks to social media, fake news can now be disseminated at breakneck pace to vast audiences that are often unable or unwilling to separate fact from fiction. Studies suggest that fake news spreads up to six times faster on social media than genuine stories, while false news stories are 70 percent more likely to be shared on Twitter. Observers call it “spam on steroids”.

Pertinently the article observed:

Put another way, it is difficult to consume fake news free from the influence of personal opinion. That’s where technology can help.

The article goes on to introduce two real-life approaches to combating disinformation and fake news, especially when channelled via social media. The first goes under the name “Tanbih”, a Qatari-based operation which looks at specific pieces of content, searching for common propaganda techniques, including loaded language, stereotyping, and stretched facts. It uses AI to train users to spot the use of propaganda techniques in texts and to develop critical thinking when interacting with news.

A more formalised approach has been adopted by the Finnish government in its battle against digital disinformation, and a number of commentators have pointed to it as a template. Sources include an extensive 2019 CNN report, Defense One’s online comments, and Pherson Associates’ May 2021 issue of “The Analytic Insider” newsletter. In the next section, we shall examine in greater detail how the Finnish approach operates.

3.1 The Finnish Approach

In 2015, Finland launched a concerted campaign to train officials and help prepare its citizens to identify fake news and the narratives designed to sow division within the country, to understand why such material goes viral, and to develop strategies to combat it. This approach was integrated into the education system curriculum so that it paid greater attention to critical thinking. Another strategy that proved highly effective was to develop a strong, positive national narrative, rather than trying to debunk false claims.

According to a CNN Special Report (2019), the campaign has been successful, and in 2018 in a study measuring resilience to the “post-truth” phenomenon, Finland was placed first out of some 35 countries.

Through its critical thinking curriculum, Finland encourages children to examine YouTube videos, social media, and news articles for factual and statistical errors. A fact-checking organisation “Faktabaari” has since 2017 adapted professional fact-checking methods for Finnish schools. A paper prepared by the Faktabaari team (2018) provides extensive detail as to the scheme’s modus operandi.

It appears that Finland’s strong position in the battle against fake news is based on a number of factors such as:

  • A national narrative that places a high premium on the rule of law and belonging.

  • A high education profile, helping to create an environment in which media literacy can flourish.

  • A high standard of living, spread relatively equally across its population.

  • A largely homogenous society free from social fragmentation.

Specific tools deployed by the Finns, especially amongst highly literate school-age and higher education student cohorts, include:

  • A checklist of methods used to deceive readers on social media: image and video manipulations, half-truths, intimidation, and false profiles.

  • How to identify bots: look for stock photos, assess the volume of posts per day, and check for inconsistent translations and a lack of personal information (a crude illustrative scoring of these indicators follows this list).

  • Exercises examining claims found in YouTube videos and social media posts, comparing media bias in an array of different “clickbait” articles, probing how misinformation preys on readers’ emotions, and even getting students to try their hand at writing fake news stories themselves (CNN Report, 2019).

  • Encouraging students to think twice before liking or sharing social media content, and to ask “who has written this?”, “where has it been published?”, and “can I find the same information from another source?”—aka validation.
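
As a purely illustrative companion to the bot-spotting checklist above, the short sketch below turns those indicators into a crude heuristic score. The profile fields, the posting threshold, and the example account are assumptions made for demonstration; they are not part of the Finnish teaching material, and real bot detection would need far richer signals.

```python
# Illustrative heuristic scoring of the bot indicators listed above.
# The profile fields and the posting threshold are hypothetical choices.

def bot_likelihood(profile: dict) -> int:
    """Return a rough 0-4 score: one point per suspicious indicator."""
    score = 0
    if profile.get("uses_stock_photo", False):
        score += 1                              # stock or stolen profile image
    if profile.get("posts_per_day", 0) > 50:
        score += 1                              # implausibly high posting volume
    if profile.get("inconsistent_translations", False):
        score += 1                              # clumsy, machine-translated language
    if not profile.get("personal_details", ""):
        score += 1                              # little or no personal information
    return score

# Hypothetical account tripping three of the four indicators.
suspect = {"uses_stock_photo": True, "posts_per_day": 120,
           "inconsistent_translations": False, "personal_details": ""}
print(bot_likelihood(suspect))  # -> 3
```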

However, it is accepted that Finland has a number of advantages which make it especially well placed to combat the tsunami of fake news and disinformation. It is a small and largely homogenous country consistently ranked at or near the top of almost every index—happiness, press freedom, gender equality, social justice, transparency, education, and literacy. This makes it difficult for external actors to find cracks within society to force open and exploit.

Even within Finland, some commentators state that the social media companies themselves (Facebook, Twitter, Google, YouTube) need to be regulated, as they are regularly seen as enablers of hostile actors and trolls. The journalist Jessikka Aro suggests that:

Just like any polluting companies or factories should be and are already regulated, for polluting the air and the forests, the waters, these companies are polluting the minds of people. So, they also have to pay for it and take responsibility for it. (CNN, 2019).

Finally, even the Finns acknowledge that the battle against fake news and disinformation is a never-ending one, as “bad actors” continually seek new ways and means to contaminate the “airwaves”. The battle will not be won by just the Finns of this world. Far greater international coordination between nations, the social media companies themselves, NGOs, and international regulatory bodies needs to be enacted if those bad actors who exploit cognitive biases are ever to be challenged and eventually defeated. It is one of the world’s most wicked problems!

4 Filter Bubbles and Echo Chambers: The Curse of the Selective Algorithm

A major criticism levelled at various social media platforms and search engines is that the algorithms they use help create filter bubbles. The bubbles isolate an individual’s ideas and views by inferring what information a user wants to see and then serving information to that user according to this inference. The website algorithms track user behaviour such as former click preferences, browsing and search history, as well as location. This means that websites will tend to present only information that reflects the user’s past activity. A filter bubble, therefore, can cause users to receive significantly less contact with differing or contradicting viewpoints, so that the user becomes intellectually isolated. It is argued that filter bubbles can lead to ideological polarisation, with users failing to receive balanced information and seeing only information that reinforces their established interests and existing worldviews. It should be said that further research needs to be carried out to ascertain how much filter bubbles actually do constrict access to alternative views.
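
To illustrate the mechanism being criticised, the toy sketch below ranks candidate items purely by their topical overlap with a user’s past clicks. Every topic, title, and score is invented for demonstration; real recommender systems are far more sophisticated, but the basic effect of naive personalisation is the same: more of the same rises to the top, and contrary material sinks.

```python
# Toy illustration of a filter bubble: items are ranked solely by topic
# overlap with the user's click history, so unfamiliar or contrary
# material ends up at the bottom. All topics and titles are hypothetical.

from collections import Counter

past_clicks = ["brexit-pro", "brexit-pro", "immigration", "brexit-pro"]

candidates = {
    "Five reasons Brexit is working":  ["brexit-pro"],
    "Economic cost of leaving the EU": ["brexit-anti", "economics"],
    "Immigration figures explained":   ["immigration", "economics"],
}

interest = Counter(past_clicks)  # how often each topic was clicked before

def score(topics):
    """Higher when an item's topics match what the user already clicks on."""
    return sum(interest[t] for t in topics)

# Print the personalised ranking: the dissenting item comes last.
for title, topics in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{score(topics):>2}  {title}")
```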

4.1 A New Tool to Help Mitigate the Impact of Filter Bubbles

Je Hyun Kim, a student carrying out a research project at Imperial College London and the Royal College of Art, has developed an app aimed at exploring opposing views and echo chambers in order to help mitigate the impact of polarisation caused by machine learning algorithms.

The objective of Je Hyun’s project was to see whether people changed their initial opinion once they were given both sides of the story. After a number of trials using the app he developed, he noticed that people did in fact hold less extreme opinions when they heard the opposing point of view, notably when it came from individuals of a similar educational and social/economic background to their own.

A key insight from this phase of the project was that participants were not aware of how these recommendations later influenced their opinion. To prevent individuals from holding extreme opinions, and to help them understand other people’s point of view (POV), the user first needs to realise that their own opinion is one-sided.

Matching user A, holding one set of strong opinions, with the opposing views of user B introduced doubt in each user as to their own biases, so that they started to rethink their own POV. The biggest challenge was to design a user interface that highlighted the opposing view in the most convincing way.

His research identified that this required:

  • Using a personal recommendation system that shows the opposite point of view, as both the business and its users benefit from it.

  • The opposing view should come from someone similar in terms of social profile, age, etc.

  • The ability to see directly the other user’s POV

  • Making the app design highly interactive.

The process is illustrated in Fig. 10.1 below.

Fig. 10.1 Profile of design research: a user interface that shows the opposing viewpoint in the most convincing way, combining personal recommendation, an opposite POV from someone similar to you, experiencing the other’s POV, and interaction design.

The algorithm behind the app consists of two main parts. The first part recommends similar users. This is done with a clustering method using unsupervised machine learning, similar to how dating apps recommend a date. Past behaviour such as subscriptions and viewing history is included in the data to be clustered. The second part finds the opposing videos. Using natural language processing, keywords can be spotted, and the algorithm can then find videos that are not related to the user’s keywords, so that a real alternative is surfaced.
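
The app’s algorithm is described above only in outline, so the sketch below is a minimal reconstruction of its two stages for illustration: k-means clustering over simple behaviour vectors to find similar users, and TF-IDF keyword extraction to flag catalogue videos that share few terms with what the user already watches. The data, feature choices, libraries, and parameters are all assumptions, not the project’s actual implementation.

```python
# Stage 1: cluster users on past-behaviour features to find "similar" users.
# Stage 2: extract keywords from a user's watched titles, then surface
# catalogue videos sharing few of those keywords as candidate alternatives.
# All data and parameter choices are hypothetical.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Stage 1: toy behaviour vectors (e.g. subscription counts per topic).
user_behaviour = np.array([
    [5, 0, 1],   # user 0
    [4, 1, 0],   # user 1 (similar to user 0)
    [0, 6, 2],   # user 2 (a different cluster)
])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(user_behaviour)
print("cluster labels:", clusters)

# Stage 2: keywords from the titles user 0 has already watched.
watched = ["Cons of Brexit", "Economic crisis Brexit is causing"]
catalogue = ["5 reasons why the UK should leave the EU",
             "Pros of Brexit",
             "Machine learning explained"]

vectoriser = TfidfVectorizer(stop_words="english")
vectoriser.fit(watched + catalogue)
terms = vectoriser.get_feature_names_out()

def keywords(title):
    """Return the non-stop-word terms the vectoriser finds in a title."""
    return {terms[i] for i in vectoriser.transform([title]).nonzero()[1]}

user_keywords = set().union(*(keywords(t) for t in watched))

# Candidate "alternative" videos are those sharing few of the user's keywords.
for title in catalogue:
    overlap = len(keywords(title) & user_keywords)
    print(f"{overlap} shared terms  {title}")
```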

The initial interface is illustrated in Fig. 10.2.

Fig. 10.2 Opening interface, showing recommended videos such as “Cons of Brexit” and “Economic crisis Brexit is causing”.

The app then displays the opposing viewpoint from someone similar on the right-hand side—see Fig. 10.3.

Fig. 10.3 Presentation of opposing views, driven by the clustering and natural language processing stages.

The user then rotates the screen to experience the other user’s POV, as shown in Fig. 10.4.

Fig. 10.4 Alternative point of view via screen rotation, showing the video “5 reasons why the UK should leave the EU”.

Fig. 10.5 shows the full interface and both pros and cons of the argument.

Fig. 10.5 Full array of pros and cons of the argument.

The app has the advantage of being a new type of interface that can be applied to various platforms, such as Netflix and Facebook. These platforms already collect data via machine learning algorithms to recommend content, and that data could be reused to add value to the app. It can also be used in multiple environments, such as on a smartphone.

Although Je Hyun’s app is only a student project, it does demonstrate how the younger generation themselves, as prime users of social media, are aware of the limitations and biases it can reinforce and are seeking to mitigate the worst of those biases. This must augur well for the future and suggests that technology itself can be used to mitigate the worst excesses of social media echo chambers. The user base itself is becoming increasingly aware of how data can be manipulated by false premises. (Note: additional academic research is being undertaken for further product development.)

The main challenge here, of course, is how to get people to voluntarily seek out alternative views. Perhaps the best way to use such an app is as part of a recognised training programme promoting positive narratives, such as the one employed in Finland.

5 How to Reduce Cognitive Dissonance

In a world where we are bombarded with vast volumes of data, much of it made up of very different points of view, it is very difficult to avoid cognitive dissonance. So, on the assumption that an individual recognises they are being exposed to dissonant arguments (a big assumption, by the way), how can he or she reduce the mental stress of such dissonance?

The three most common approaches to mitigate such stress are:

  1. Change your beliefs

  2. Change your actions

  3. Change the way you see your actions so as to make them less contradictory.

All this sounds quite reasonable from a logical point of view—yet we know our own biases can act as powerful barriers to allowing us to adopt such changed behaviour and perceptions.

There is very little the individual can do to confront cognitive dissonance unless he or she is aware of it in the first place (being more mindful), and that is part of the problem: it is partly ignorance of the need for personal introspection or mindfulness, and partly the stress of holding dissonant views, which the individual prefers to deflect or subsume. An additional barrier to reducing cognitive dissonance is that people simply do not like being told they are suffering from it, in much the same way that many people bridle when told they are sexist, homophobic, racist, or ageist; they prefer to seek out information that provides cognitive support for their pre-existing attitudes and beliefs and reassures them that they are acting reasonably.

Given such a behavioural challenge, we may have to accept that this is a cognitive condition we simply have to live with. That may be so, but there is no reason not to publicise and, indeed, evangelise about the existence of such a mental phenomenon: a message that needs to be repeated regularly and continuously so as to increase awareness of the condition if we are to mitigate the impact of such biases.