Definition

A heuristic is a ‘rule of thumb’, or shortcut, that helps people make quick or intuitive judgements without apparent deliberation or calculation. A bias is a systematically incorrect outcome generated by the use of a heuristic. It differs from the correct, unbiased outcome that would result from the use of a normative rule to solve the same problem.

Tversky and Kahneman popularized the idea of heuristics and biases in a series of articles. The first of these, ‘Belief in the law of small numbers’ (Tversky and Kahneman 1971), demonstrated that people rely on small samples when estimating probabilities, a bias later categorized as an exemplar of the representativeness heuristic. In their 1974 article, Tversky and Kahneman stated: ‘people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgemental operations’ (1974: 1124). In this entry, we review the main heuristics rather than the long and growing list of biases they generate. We describe the three judgemental heuristics identified in the Tversky and Kahneman (1974) article, and then two exemplars of the category of choice heuristics.

Judgemental Heuristics

Availability

Tversky and Kahneman identified three principal judgemental heuristics. The first of these, availability, refers to the ease with which an occurrence is brought to mind (Tversky and Kahneman 1973). The more ‘available’ an outcome is, the higher the probability an individual will assign to it. For example, a person might judge the probability of a car accident by the frequency of notable car accidents among her own acquaintances. The availability heuristic leads decision-makers to overestimate the probability of salient events. In general, more recent events can be more easily recalled than distant ones.

The use of the availability heuristic produces several biases. The ‘retrievability’ bias applies when the memory structure of individuals affects their judgement of frequency: given a list containing equal numbers of men and women on which the women are more famous, an individual is likely to assert that there are more women on the list. The ‘effectiveness of search set’ bias arises when, asked to assess the frequency of outcomes, an individual overestimates the likelihood of those that are more easily searched for in memory. For example, when asked whether there are more words starting with the letter ‘r’ or with ‘r’ as the third letter, most people choose the former, because it is easier to retrieve words by their first letter than by their third, even though words with ‘r’ in the third position are in fact more common. The ‘imaginability’ bias results in overestimation of the probabilities of easily imagined events, such as earthquakes or market crashes.

Representativeness

This heuristic describes the process of judging whether an item or outcome belongs to a particular category or class by the degree to which it is representative of that class. You might determine whether an animal is a giraffe by observing the length of its neck and whether or not it has stripes.

Reliance on the representativeness heuristic generates the ‘insensitivity to prior probabilities’ bias: decision-makers violate Bayes’ rule, ignoring base rates whenever other information, even if irrelevant, is given. A classic example is the librarian/farmer experiment. Subjects are informed that librarians are far less frequent in the population than farmers. Nonetheless, when told that an individual is shy and withdrawn, most subjects judge that the person must be a librarian, because librarians are stereotypically shy. The ‘insensitivity to sample size’ bias leads individuals to expect small samples to show the same dispersion of outcomes as large ones (Tversky and Kahneman 1974). One might expect a large and a small hospital to be equally likely to record a day on which more than 60% of the babies born are boys, but statistically the small hospital is far more likely to experience such an extreme outcome. The ‘misconception of chance’ bias describes the fact that individuals expect data generated by a random process to look random. This is epitomized by the ‘gambler’s fallacy’, that is, the belief that black is likelier to come up on the roulette wheel after a long run of red.
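The sample-size point can be checked with a short Python calculation. This is a minimal sketch; the daily birth counts of 15 and 45 are illustrative assumptions for the small and large hospital, not figures stated in this entry.

    from math import comb

    def prob_more_than_60_percent_boys(n_births):
        """Probability that more than 60% of n_births are boys, each birth being a boy with probability 0.5."""
        threshold = (3 * n_births) // 5  # largest count not exceeding 60% of n_births
        return sum(comb(n_births, k) for k in range(threshold + 1, n_births + 1)) / 2 ** n_births

    print(prob_more_than_60_percent_boys(15))  # small hospital: roughly 0.15
    print(prob_more_than_60_percent_boys(45))  # large hospital: roughly 0.07

The smaller sample deviates from the 50% base rate far more often, which is precisely what this bias leads decision-makers to overlook.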

Anchoring and Adjustment

Decision-makers estimate the value of an outcome by adjusting from an initial value, or anchor. In determining a reasonable price for an item, for example, we might anchor on the price we paid the last time we bought it. The ‘insufficient adjustment’ bias describes the common tendency to fail to shift sufficiently far from the initial value of the variable to be estimated. For example, decision-makers asked to estimate the product 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8 tend to extrapolate from the first few partial products, which are naturally much smaller than the final result, and therefore underestimate it. The ‘conjunctive and disjunctive events’ bias refers to the overestimation of the probability of conjunctive events, in which several components must all occur, and the underestimation of the probability of disjunctive events, in which at least one of several components must occur. The probability of a conjunctive event is lower than that of the simple event – the anchor – and the probability of a disjunctive event is higher, but individuals fail to adjust sufficiently from the anchor in either direction.
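Both effects can be made concrete with a brief numerical sketch in Python; the component probabilities of 0.9 and 0.1 and the seven components are illustrative assumptions rather than figures from the original experiments.

    from math import prod

    # Anchoring: the true product is 40,320, far larger than the small
    # partial products (1, 2, 6, 24, ...) encountered at the start.
    print(prod(range(1, 9)))       # 40320

    # Conjunctive event: all 7 components occur, each with probability 0.9.
    p_high = 0.9
    print(p_high ** 7)             # about 0.48, below the 0.9 anchor

    # Disjunctive event: at least one of 7 components occurs, each with probability 0.1.
    p_low = 0.1
    print(1 - (1 - p_low) ** 7)    # about 0.52, above the 0.1 anchor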

Exemplar Choice Heuristics

Satisficing

In a pioneering article, Simon (1955) proposed that for many of their most important choices, people do not pursue the rational model – one based on precise calculations ending in maximization. Rather, Simon argued that aspirations guide the way people make choices. Because the human mind is not equipped to make the very complex calculations often needed for maximization, decision-makers instead set targets and choose the first alternative they encounter that meets the target – that is, they ‘satisfice’. Obviously, satisficing can often lead to suboptimal choices as compared with maximization; however, it provides a satisfactory criterion for ending search and deliberation.
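A minimal sketch in Python, with payoffs and an aspiration level chosen purely for illustration, contrasts the satisficing stopping rule with maximization:

    def satisfice(payoffs, aspiration):
        """Return the first payoff that meets the aspiration level (a stylized satisficing rule)."""
        for payoff in payoffs:
            if payoff >= aspiration:
                return payoff      # search stops here; later alternatives are never examined
        return None                # no alternative met the target

    alternatives = [42, 55, 61, 90, 58]             # payoffs in the order they are encountered
    print(satisfice(alternatives, aspiration=60))   # -> 61, the first acceptable alternative
    print(max(alternatives))                        # -> 90, maximization must examine them all

The satisficer accepts 61 and stops searching, whereas the maximizer must evaluate every alternative before settling on 90.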

The Affect Heuristic

Slovic and colleagues (2002) defined a new heuristic, the affect heuristic, which describes an individual’s reliance on feelings derived from instinctive responses to a stimulus. Such responses are automatic and rapid, and they can be either positive or negative. The affect heuristic differs from those defined by Tversky and Kahneman in that it is not a cognitive phenomenon. Slovic and colleagues argued that it is often more efficient, and certainly simpler, for decision-makers to use affect and emotion, rather than rational analysis, when faced with complex problems. Essentially, negative affective responses to a stimulus will encourage the decision-maker to take actions to avoid repeat exposure to it; positive responses will motivate her to seek it out.

Slovic and colleagues’ definition of the affect heuristic includes an ‘affect pool’ that contains all the positive and negative ‘tags’, or associations, that decision-makers have with objects or events. These references then act as cues in complex decisions involving the objects or events tagged in the pool. Reactions to others’ facial expressions – and the subsequent conclusions we draw with respect to their intentions and credibility – are one example. The affect heuristic also implies that initial positive or negative associations with a stimulus can prove especially strong, continuing to condition an individual’s responses to this stimulus even in the face of contrary evidence in the future. In such situations, affect clearly dominates rational cognitive updating.

The Dual Processes Approach

Research in cognitive psychology has long shown that judgement and decision-making can be described, on the one hand, by automatic, associative, quick processes and, on the other, by deliberate, rule-based processes (Evans and Over 1996; Sloman 1996). Kahneman and Frederick (2005) follow Stanovich and West (2002) in labelling these processes system 1 and system 2, respectively, to denote intuitive and deliberative modes of reasoning. The first system – fast, automatic, effortless and habit-driven – is marked by high accessibility, the ease with which mental content comes to mind, and it facilitates intuitive judgements. The second is slow, serial, effortful and rule-governed; its deliberations often produce conclusions at odds with those of the intuitive system. The two systems operate simultaneously, and current research on reasoning attempts to understand their interactions (cf. Kahneman 2011).

Heuristics, Biases and Strategy Research

Schwenk (1984) was one of the pioneers in pointing researchers to the importance of heuristics and biases in strategic decision-making. He discussed representativeness and anchoring and adjustment, along with other psychological mechanisms, to demonstrate the role of cognitive simplification in strategic decisions. A more recent example is Garbuio et al.’s (2010) analysis of the role heuristics and biases play in merger and acquisition decisions. Other recent work demonstrates how reliance on small samples can lead to erroneous estimates of the probability of strategic surprises (Lampel and Shapira 2001), and how anchoring biases insurance decisions (Shapira and Venezia 2008) and capital budgeting decisions (Shapira and Shaver 2011).

While cognitive biases inspired a flurry of research in marketing and economics, strategy researchers have tended to adopt a broader notion of cognition, one that acknowledges the major role of context in strategic decisions. Thus, mechanisms such as cognitive maps and sense-making at times appear more relevant to managerial decision-making than heuristics and biases. When heuristics are combined with framing, however, behavioural decision-making becomes more relevant. We feel that the dual processes approach, with its focus on the intuitive system on the one hand and the deliberative system on the other, can prove especially useful for analysing strategic decision-making in situations where expertise, ignorance, skill and uncertainty play a major role. Indeed, Mintzberg’s (1973) pioneering study demonstrated how these ingredients characterize managerial decision-making.

See Also