1.1 Introduction

Decisions, decisions, decisions: we make them all the time, probably thousands each day. Most are part of daily living, such as moving about our environment; others need more thought but are not particularly critical, such as what coffee to buy. However, some decisions are important, even having life implications, from deciding whether it is safe to cross the road to a doctor deciding what cancer treatment to suggest for a patient. We might imagine that all these decisions, whether trivial or not, are based on sound reasoning using our senses and our experience stored in memory. However, it is generally agreed that the majority of decisions are made unconsciously using heuristics - strategies that use only a fraction of the available information. This makes sense in evolutionary terms [32]: to survive approaching danger, for instance, decisions had to be made rapidly. Humans do not have the time or brain processing power to do much else than use heuristics, and are, in fact, inherently lazy in order to conserve precious energy resources [22]. Fortunately, most of the time the results of these very fast, automatic heuristic strategies are “good enough”; however, in certain situations they are not, leading to poor judgments. It is these “errors in judgment” or this “irrational behavior” that are commonly referred to as cognitive biases.

During this decade, interest in cognitive biases has increased markedly, with several large research projects [38, 57] starting in 2012, as well as mentions in popular on-line publications [8] and even in the press. In addition to an increase in scholarly articles (see Footnote 1), the biggest change has been in media interest, especially in the business world. A recent Google search for “cognitive bias” returns many business-oriented items which either aim to sell (e.g. Cognitive Biases: How to Use Them to Sell More) or to warn (e.g. Hidden Cognitive Biases That Cost You Big Money). Other search results are generally pessimistic about cognitive biases, with titles such as The 17 Worst Cognitive Biases Ruining Your Life!

More recently, implicit or unconscious bias has been in the media in the context of equality and anti-discrimination. This is often the result of stereotyping, which is influenced by our background, culture and experience. In this sense “unconscious” means that humans make the judgment without realizing it, as with heuristic processing. And lest we think that cognitive biases only affect humans, there are studies on rats [6], sheep [69], bees [62], chickens [72] and many other animals that use cognitive bias as an indicator of animal emotion [59]. However, these uses of the term “cognitive bias” differ from the more traditional one which we discuss in this book.

Before considering cognitive biases (in humans) in the context of visualization and visual analytics tools, the next sections provide some examples of common cognitive biases and a brief history of their ‘discovery’ and subsequent research.

1.1.1 Examples

A recent classification of cognitive biases, the Cognitive Bias Codex by Benson [47], lists 187 biases (see Footnote 2). Biases have been added to this list since the 1970s and the trend seems to be continuing, although a ‘new’ bias is sometimes just an existing one by another name. There are, of course, similarities, which various classification schemes over the years have attempted to tease out [3, 4, 9, 34, 37, 58, 66, 70], although most of this work has been in the area of decision support. In Chap. 2, Calero Valdez et al. propose a framework specifically for the study of cognitive biases in visualization, and contrast this with the aforementioned Cognitive Bias Codex.

For those readers not familiar with cognitive biases, here are four examples of common biases:

Familiarity/availability bias is where people tend to estimate the likelihood of an event by how easily similar events can be recalled. For instance, people will generally think that travel by airplane is significantly more dangerous in the aftermath of a plane crash being reported in the media (see Chap. 6).

Confirmation bias is where people tend to search for confirming rather than disconfirming evidence with regard to their own prior assumptions. For example, if you think that eating chocolate makes you lose weight, then a Google search for “lose weight by eating chocolate” will confirm this, provided you ignore articles to the contrary (see Chap. 5).

Representational bias in visualization involves constraints and salience. For example, a matrix representation is not good at showing network data (a constraint) but can highlight missing relationships in its table view (salience) (see Chap. 10).

Overconfidence bias is where people tend to assess the accuracy of their answers or performance as greater than it actually is. There are many related cognitive biases, such as the illusion of control and the planning fallacy (see Chap. 9).

1.2 A Brief History of Cognitive Biases

Early research on decision-making was founded on the theory of rational choice, in which a person carefully assesses all the alternatives and any errors they make are not systematic. However, in the 1950s and 60s, experiments found that people are generally poor at applying even basic probability rules and often make sub-optimal judgments when measured against an ‘ideal’ standard derived from Bayesian analysis [19]. Even experts, such as physicians, were found to make biased judgments [48]. Simon proposed bounded rationality [63], suggesting that humans are too limited in their data-processing abilities to make truly rational decisions but employ simplifying heuristics or rules to cope with these limitations.
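To make this ‘ideal’ standard concrete, consider a classic base-rate problem of the kind used in these experiments (the numbers here are illustrative). Suppose a disease has a prevalence of 1%, and a test detects it with 90% sensitivity but also returns a false positive for 9% of healthy people. Bayes’ theorem gives the probability of disease given a positive test:

\[
P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
= \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.09 \times 0.99} \approx 0.09
\]

People typically neglect the 1% base rate and answer close to 90% rather than the normative 9%; it is systematic deviations of this kind that were measured against the Bayesian standard.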

In the early 70s, Tversky and Kahneman developed this idea with their heuristics–biases program, paying particular attention to judgments involving uncertainty. Systematic deviations from ‘normative’ behavior were referred to as cognitive biases, and this was backed up by a series of experiments which illustrated 15 biases [66]. Heuer [34] also promoted the idea of cognitive bias errors being due to irrationality in human judgment with his work amongst intelligence analysts. Over the years many more cognitive biases were proposed, based mostly on laboratory experiments. However, in the 80s, researchers began to question the notion that people are error-prone, and a lively debate has ensued over the years, typified by the exchange between Gigerenzer [26] and Kahneman and Tversky [41]. One of the opposing arguments poses the question “Are humans really so bad at making decisions, especially where uncertainty is involved?”. Gigerenzer [28] suggests that the use of heuristics can in fact produce accurate judgments rather than cognitive biases, and describes such heuristics as “fast and frugal” (see Chap. 13).

It has been suggested that the success of the heuristics–biases program is partly due to the persuasive nature of the experimental scenarios, often laboratory-based, which can easily be imagined by the reader [42]. However, many of the studies have clearly involved domain experts in the workplace. Another criticism of the heuristics–and–biases approach is the resultant long list of biases and heuristics, with no unifying concepts other than the methods used to discover them [4]. So the focus of later work has been to propose decision-making mechanisms rather than just looking for departures from normative (ideal) models [40]. To this end, dual-process models have been put forward, for example the two-system theories of reasoning which feature System 1 (involuntary, rapid, rule-based) and System 2 (conscious, slower, reasoning) decision making [22, 65]. Kahneman’s book “Thinking, Fast and Slow” [39] also adopts this dual-process model and gives a very readable account of heuristics and biases.

Other developments include the Swiss Army Knife approach [29], which proposes that there are discrete modules in the brain performing specific functions, and that deviations occur when an inappropriate module is chosen or when no such module exists, so the next best one is used. Formalizing heuristics [27] and modeling cognitive biases [36] are other approaches to understanding what is going on in our heads when we make decisions. A useful discussion of the impact of Tversky and Kahneman’s work can be found in [24]. But as research continues in this area, Norman provides a word of warning, especially in medical diagnosis, that there is bias in researching cognitive bias [51].

1.3 Impact of Biases

Notwithstanding the debate amongst researchers as to the underlying cognitive processes, there is little doubt that in particular circumstances, systematic behavior patterns can lead to worse decisions. Making a poor decision when buying a car by overrating the opinion of a person you have recently met (vividness bias) is often not a major problem, but in other realms, such as medical judgments and intelligence analysis, the implications can be damaging. For instance, a number of reports and studies have implicated cognitive biases as having played a significant role in a number of high-profile intelligence failures (see Chap. 9). Although uncertainty is a factor, a person’s lack of knowledge or expertise is not the overriding consideration. Cognitive biases such as overconfidence and confirmation are often associated with poor judgments among people in senior roles, as in a realistic study where all twelve experienced intelligence analysts were led astray by confirmation bias, leaving only the inexperienced analyst with the correct answer [5].

In addition to Chap. 9, which focuses on intelligence analysis, many of the chapters in this book describe the impact of various cognitive biases, especially in relation to interpreting visualizations or using visualization tools. For instance, Chap. 6 details the impact of familiarity-related biases, especially with experts from the physical sciences, and Chap. 10 discusses potential problems with representational biases when viewing visualizations. The case study described in Chap. 12 reveals the likelihood of numerous cognitive biases seriously affecting decision making in a college admissions process. Chapters 3 and 4 discuss the notion that various aspects of computer systems, as well as humans, can also exhibit biases.

1.4 Cognitive Biases in Visualization

Interest in cognitive bias research has grown considerably, both at the cognitive science level and in relation to the visual analytics and decision-making tools that we build. The DECISIVe workshops (see Footnote 3) have focused on two main issues related to visualization: (i) whether the interpretation of visualizations is subject to cognitive biases, and (ii) whether we can adapt visualization tools to reduce the impact of cognitive biases.

1.4.1 Interpretation of Visualizations

There is evidence from people’s susceptibility to optical illusions that systematic errors can occur due to simplifying heuristics, such as grouping graphic items together, as set out in the Gestalt principles [1, 53, 55, 67]. It has also been demonstrated that different visual representations of common abstract forms, or the appearance of the visualization itself, can affect the interpretation of the data [12, 16, 54, 74, 75, 77]. In relation to the comprehension of images, Fendley [23] discusses cognitive biases in detail and proposes a decision support system to mitigate a selection of biases. Ellis and Dix [21] proposed that cognitive biases can occur in the process of viewing visualizations and presented examples of situations where particular cognitive biases could affect the user’s decision making. Recent studies into priming and anchoring [68], the curse of knowledge [73] and the attraction effect [17] demonstrate these cognitive bias effects when interpreting visualizations, but, as their authors point out, much more work needs to be done in this area.

1.4.2 Visualization Tools

In visual analytics, user interaction plays a significant role in producing insightful visual representations of data. People interact with such systems to steer and modify parameters of the visualization and the underlying analytical model. While such human-in-the-loop systems have proven advantages over automated approaches, there is the potential for the innate biases of people to propagate through the analytic tools [61]. However, if the system is able to monitor the actions of users and their use of the data resources, then it may be possible to guide them and reduce the impact of particular cognitive biases. This requires ways to effectively detect and measure the occurrence of a range of cognitive biases in users [10, 45, 46, 71]. Work towards this is the subject of Chaps. 5, 7 and 9 in particular. Researchers point out that novel corrective actions, ideally tailored to the user, are then required.
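As a simple illustration of what such monitoring might involve, the sketch below computes two measures from a hypothetical interaction log, loosely inspired by the interaction-based bias metrics of [71]: how much of the dataset a user has examined, and how evenly their attention is spread across it. The log format and function names are invented for this example; a real system would need validated metrics and baselines.

```python
from collections import Counter
from math import log

def coverage(interactions, n_points):
    """Fraction of the dataset the user has touched at least once."""
    return len(set(interactions)) / n_points

def attention_spread(interactions, n_points):
    """Normalized entropy of attention over data points: near 1.0 means
    attention is spread evenly; near 0 means fixation on a few points,
    a possible warning sign of anchoring or confirmation bias."""
    counts = Counter(interactions)
    total = len(interactions)
    entropy = -sum((c / total) * log(c / total) for c in counts.values())
    return entropy / log(n_points) if n_points > 1 else 1.0

# Hypothetical log of data-point IDs: the user keeps revisiting 3 and 7.
log_ids = [3, 7, 3, 3, 7, 7, 3, 2, 3, 7]
print(coverage(log_ids, n_points=100))          # 0.03  -> low coverage
print(attention_spread(log_ids, n_points=100))  # ~0.20 -> narrow focus
```

Low, skewed scores do not prove that a bias has occurred; they merely flag behavior that may warrant a corrective nudge, such as highlighting unvisited data.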

1.5 Debiasing

Reducing the negative impact of cognitive biases is a challenge, due to the inherent nature of biases and the indirect ways in which they must be observed. Early work generally focused on developing user training, typically scenario-based, in an attempt to mitigate the effect of a small number of cognitive biases. However, this approach has met with little convincing, generalizable and lasting success. Research shows that even if users are made aware of a particular cognitive bias, they are often unwilling to accept that their decisions could be affected by it, which itself constitutes the bias blind spot [56]. Structured analytical techniques (SATs), as discussed in [35], such as ‘argument mapping’ and Analysis of Competing Hypotheses (ACH), have been used in intelligence analysis to reduce the impact of cognitive biases. Few of these techniques have been evaluated in empirical studies, apart from ACH, which, for realistic complex problems, has proved unsatisfactory, often due to time pressures (see Chap. 9).
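To make the mechanics of ACH concrete, here is a minimal sketch; the hypotheses, evidence items and scores are invented for illustration and are not taken from any particular ACH tool. The debiasing idea is that hypotheses are ranked by how much evidence contradicts them, steering the analyst towards disconfirmation rather than confirmation.

```python
# Minimal ACH-style consistency matrix (illustrative data only).
# Scores: -1 = evidence inconsistent with hypothesis,
#          0 = neutral, +1 = consistent.
hypotheses = ["H1: insider leak", "H2: external hack", "H3: accident"]
evidence = {
    "server logs wiped":   [ 0, +1, -1],
    "badge access at 2am": [+1, -1, -1],
    "no malware found":    [ 0, -1, +1],
}

# Rank hypotheses by how much evidence CONTRADICTS them, rather than
# by how much supports them - the core debiasing move of ACH.
inconsistency = [
    sum(1 for scores in evidence.values() if scores[i] < 0)
    for i in range(len(hypotheses))
]
for h, n in sorted(zip(hypotheses, inconsistency), key=lambda p: p[1]):
    print(f"{h}: {n} inconsistent evidence item(s)")
```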

There has been appreciable effort in the medical field to identify cognitive bias effects and reduce prevalent diagnostic errors [14, 15, 30], with interventions (checklists) to increase clinicians’ knowledge, improve clinical reasoning and decision-making skills [11], or assist clinicians with selected tools. According to Croskerry [13], progress is being made, but it is hindered by the general lack of education in critical thinking amongst clinicians.

Bias-Reducing Analytic Techniques (BRATs) are another way of investigating bias mitigation. They employ minimally intrusive cognitive interventions [44] based on prior work on cognitive disfluency [33]. While results were mixed, opportunities for further research show promise. Another method involves the application of serious games to improve critical thinking, as in MACBETH [18] and HEURISTICA [2], games developed as part of IARPA’s Sirius program [7].

A common challenge across all these methods is the difficulty of shaping an individual’s cognitive behavior. Therefore, research is shifting toward modifying and improving the decision environment (i.e. the tools themselves). Recent work investigates how visualizations can reduce base-rate bias in probabilistic reasoning [43, 49]. Other visualization research focuses on the cognitive biases that affect judgments under uncertainty [78]: for example, in finance, helping investors to overcome uncertainty aversion and diversification bias [60] or loss aversion and conservatism [76]; assisting fantasy baseball experts in mitigating the regression bias in their predictions [50]; or countering the anchoring and adjustment bias in decision support systems [25].
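Part of the intuition behind such visual debiasing, at least for base-rate bias, is that frequency formats make the base rate visible rather than leaving it implicit in a percentage. The sketch below renders the illustrative Bayes example from Sect. 1.2 as a crude text icon array; it is only a stand-in for the richer visualizations studied in [43, 49].

```python
# Natural-frequency view of the illustrative Bayes example:
# out of 1000 people, who is sick and who tests positive?
population = 1000
sick = 10                          # 1% prevalence
true_pos = 9                       # 90% sensitivity
false_pos = round(0.09 * (population - sick))  # 9% false positives -> 89

rows = [
    ("sick, test +",    true_pos,        "#"),
    ("sick, test -",    sick - true_pos, "x"),
    ("healthy, test +", false_pos,       "o"),
]
for label, n, mark in rows:
    print(f"{label:>16}: {mark * n} ({n})")
print(f"P(sick | test +) = {true_pos}/{true_pos + false_pos} "
      f"= {true_pos / (true_pos + false_pos):.1%}")
```

Seen this way, it is immediately apparent that the positive tests are dominated by healthy people, which is exactly what the percentage format obscures.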

Researchers further propose frameworks, integrated into visual analytic systems, that provide support for mitigating some cognitive biases through measures such as the use of appropriate visualization types, uncertainty awareness, the use of statistical information and feedback from evidence-based reasoning [52, 61]. Other approaches attempt to “externalize the thinking” of the decision-maker [45] or improve hypothesis generation [31], in this case to avoid confirmation bias.

1.6 Conclusion

Cognitive biases are still somewhat intriguing. How humans actually make decisions is still largely a mystery, but we do know that most of this goes on at an unconscious level. Indeed, neuroscience experiments suggest that human decisions for physical movement are made well before the person is consciously aware of them [64]. From a survival-of-the-species point of view, the evolutionary argument for very quick decisions is compelling, and we often cannot say how we arrived at a particular judgment other than to say it was a ‘gut feeling’. The popular classification of cognitive biases as errors brought about by heuristics - the unconscious decision-making processes in the brain - is more a matter of academic than practical interest. The important point is that better decisions can be made if we are more aware of the circumstances in which cognitive biases can occur and devise ways of countering this unhelpful behavior. Both of these factors, bias detection and mitigation, pose serious challenges to the research community, as is apparent from the limited progress so far on both counts. However, the DECISIVe workshops have stimulated research into dealing with cognitive biases in visualization, and I hope that readers of this book will find help and inspiration in its chapters.