1 Background

One of the fundamental research themes of economics is how people allocate accessible but limited resources to accomplish goals and optimize the utility they obtain (Mankiw, 2014). Classical rational models proposed in economics are practically flexible and have been applied not only in economics but also in a variety of areas that are not considered traditional economics research problems. Based on the idea of analyzing costs and optimizing utility, information seeking and retrieval researchers have also applied rational economic models in developing testable hypotheses regarding search interactions, explaining users’ actions under different cost and gain scenarios (e.g., Azzopardi, 2014; Pirolli & Card, 1999), and developing basic components of evaluation metrics, such as rank-based discounted utility and cost budget (cf. Zhang et al., 2017). However, a variety of formal rational models leave it unclear how people make boundedly rational decisions with limited resources and incomplete information, especially in complex problematic situations. Their idealized assumptions and simulated conditions, such as complete information about available options, unlimited computational resources, and the goal of optimizing measurable utility, are difficult to satisfy in real-life decision-making scenarios, which often leads to significant individual differences and systematic deviations from expected optimal options and outcomes.

In contrast to classical economic theories and associated formal models, behavioral economics researchers seek to (1) build the analysis of the rules employed in decision-making under uncertainty on a more realistic behavioral and psychological basis and (2) differentiate the rational man’s simulated optimal behavior from people’s real-life behavior under various human biases, cognitive limits, and situational constraints (Kahneman, 2003; Thaler, 2016). Furthermore, based on this accumulated knowledge about bounded rationality in decision-making, researchers also seek to design and develop cognitive debiasing tools that mitigate the negative impacts of human biases and heuristics in varying application areas, such as clinical diagnosis, hiring, financial services, and crowdsourcing tasks (Croskerry, 2003; Draws et al., 2021; Ludolph & Schulz, 2018; O’Sullivan & Schofield, 2018). Leveraging the knowledge about human biases and limits accumulated in behavioral economics research could help us better understand and explain the search decision-making and judgments of boundedly rational users.

Previous chapters introduce different types of user models underpinning search models and evaluation metrics and describe the gaps between a series of empirically confirmed human biases and simulated rational user models in varying experimental settings. In particular, Chap. 3 aims to offer readers a preliminary understanding of how boundedly rational users deviate from simulated formal users. The identified gaps highlight the importance of reflecting on the assumptions and limitations of formal IR models and encourage us to further explore the cognitive roots, behavioral patterns, and nuances hidden in boundedly rational decisions. To better introduce the behavioral economics approach and reinforce the theoretical basis supporting bias-aware user modeling and evaluation, we need a deeper understanding of the concepts, theories, recent progress, and empirical findings on users and their biased, non-optimal decisions in varying scenarios.

To achieve this, this chapter takes a step back from specific computational IR models and focuses on explaining the fundamental frameworks (e.g., theories of two systems), research progress, and practical implications of behavioral economics research on boundedly rational decision-making. A comprehensive overview of all human biases identified in behavioral experiments is beyond the scope of this book. Moreover, a large portion of the identified biases are not mutually exclusive and are hard to differentiate from each other in naturalistic settings. Therefore, we focus on the major human biases and heuristics that are both widely examined in behavioral economics studies and clearly contradict one or more assumptions explicitly or implicitly made in formal IR models (see Chap. 3 for a preliminary discussion of the gaps between formal model assumptions and empirical findings on human biases). A more comprehensive list of over 175 individual cognitive biases and mental shortcuts can be found in Benson (2016).

Research on human biases tends to be individualized and is sometimes difficult to quantify or formalize based on a set of axioms. Although theories of bounded rationality may not match the precision and quantifiability of formal economic models, as Kahneman argued, this criticism from the side of classical economics is “just another way of saying that rational models are psychologically unrealistic” (Kahneman, 2003, p. 1449). This argument also serves as part of the motivation for this book on IR research.

2 Two Systems of Human Cognition: Which One Are We Using?

Depending on the nature of the task, individuals often engage in different modes of thinking, judging, and deciding (Sloman, 1996). To characterize the basic structure of human cognition, Kahneman proposed the framework of Two Systems, which offers a theoretical umbrella under which various specific decision-making strategies, habits, and heuristics can be categorized, analyzed, and grouped together (Kahneman, 2003; Thaler, 2016). System 1 operates in an automatic, fast, and effortless manner. Its operations are often defined by habits, biases, and heuristics, as these allow individuals to act quickly without consuming many cognitive resources or relying on rich new information. Also, when System 1 operates in decision-making activities, the process is usually difficult to explicitly control or modify. The decision maker’s preferences over different options are often established quickly and unconsciously and are heavily affected by in situ emotional responses. In contrast, the operations of System 2 are often associated with careful reasoning and are usually slower, effortful, and under the individual’s control. Compared to fast decision-making processes governed by habits and mental shortcuts, System 2 tends to be more flexible and can incorporate externally obtained rules and predefined plans that are independent of the individual’s prior beliefs and knowledge. As a result, System 2 consumes more cognitive effort, slows down decision-making and evaluation processes, and is sometimes perceived as unaffordable when quick decisions are needed on seemingly simple tasks.

Based on the relevant theories and empirical observations (e.g., Evans, 2003; Kahneman, 2003, 2011; Neys, 2006; Tversky & Kahneman, 1992), Table 4.1 summarizes the features of the dual systems and their respective roles in different cognitive activities. Among the different indicators of cognitive activities, the difference in effort is the most useful in separating the tasks assigned to System 1 from those processed under System 2 (Kahneman, 2003; Tversky & Kahneman, 1992). Effortful processes that operate under System 2 tend to disrupt each other. For instance, it is difficult to read a book while monitoring trending events and news updates on TV. In contrast, effortless processes that do not involve much reasoning or intentionally controlled action cause little or no interference to other ongoing tasks. For example, a driver can sometimes hold a conversation with a passenger while driving on a highway, as the driving task may not consume much attention when traffic is light.

Table 4.1 Two Systems

Note that many real-life work tasks involve the operations of both System 1 and System 2 at multiple stages. According to Kahneman (2011), the perceptual system and intuitive operations under System 1 can generate initial impressions of the features of encountered items or objects. The impressions of objects do not need to be verbally explicit and are not controlled by decision makers. For instance, when searching for information relevant to climate change, people may automatically notice and engage with salient vertical results first (e.g., short videos, trending news and images about natural disasters and economic losses related to climate issues, answer boxes about frequently asked climate questions, social media messages about new scientific experiments on climate effects), rather than regular organic results and Web pages (e.g., Wikipedia page about ongoing climate issues). At this stage, users’ impressions and perceptions are heavily influenced by visually salient factors, such as color, vertical boundaries, and font sizes, and this process is not voluntary or effortful. This superficial processing of incoming information may also intensify clickbait issues and biased decisions under unbalanced information exposure (e.g., Jung et al., 2022; Molyneux & Coddington, 2020; Wang et al., 2021).

However, System 2 may operate as the dominant force when users need to evaluate the relevance and usefulness of contents and synthesize them into usable answers or supporting materials for subsequent decision-making activities. Under this circumstance, the impressions generated by System 1 have less impact on users’ decision-making, and the operation of System 2 supports the explicit judgments and slow reasoning needed for the information evaluation task. In addition to operating on complex, cognitively demanding, or intellectually challenging tasks, System 2 can on some occasions (when doubts regarding the current decision come to mind) monitor the impressions and intuitive judgments generated by System 1 and proactively correct potential errors (Kahneman, 2003). For instance, a user who has a well-defined evaluation task in mind may voluntarily compare the content of different information items presented on SERPs and check their respective credibility and thus may have a better chance of overcoming the potential biases caused by visual factors and clickbait.

According to Tversky and Kahneman (1974) and Kahneman (2011), another key feature shared by the two systems is that both can deal with stored concepts and prior beliefs, and both can be evoked by language. In contrast to perception, the operation of System 1 is not limited to the processing of current stimulation. This broad scope, compared to human perception, enables System 1 to operate on a wider range of tasks and processes where people can leverage accessible heuristics and rules to save cognitive resources and make immediate decisions. However, it also leaves more room for human cognitive biases (cf. Azzopardi, 2021) to operate in decision-making processes, which may interact with biased information presentation and lead to inaccurate understanding and undesired outcomes. While System 2 can monitor intuitive judgments in some scenarios, the superficial processing of incomplete information under System 1 often plays the dominant role; System 2 monitors immediate judgments and quick decision-making tasks quite lightly (Kahneman, 2003). Thus, it is critical for researchers to study the operations of both systems and leverage the knowledge about dual systems in developing adaptive and useful debiasing interventions and nudging to help people mitigate the negative impacts of various biases (Battaglio et al., 2019; Draws et al., 2021).

Boundedly rational decisions under System 1 are often made quickly through simple rules and mental shortcuts, allowing individuals to save cognitive resources across a wide range of tasks. However, due to the intrinsic limitations of System 1, surprising errors can occur when even a simple form of deliberate reasoning is required to complete a task correctly. Kahneman (2003) presents a simple puzzle used for studying cognitive self-monitoring and errors from System 1 in immediate judgments. The original question is:

A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?

According to the results reported in the original study, in both groups of college student participants, over 50% yielded to the immediate impulse of answering “10 cents.” Errors in simple calculation are not unique to this task: researchers have obtained unexpectedly high error rates in other similar tasks and behavioral experiments (e.g., Frederick & Fischhoff, 1998; Kahneman, 2011; Thaler, 2016). The surprisingly high error rate in the bat-and-ball question (and similar puzzles) demonstrates that (1) errors can arise from the immediate impressions or impulses generated by the operations of System 1 and (2) intuitive quick judgments are only lightly monitored by System 2. People are not used to thinking hard, and they are often satisfied with a seemingly plausible and straightforward judgment that is immediately accessible in their mind. Under simplified assumptions and simulated rational models, it would be challenging to predict or prevent these errors and deviations (which seem easy to avoid) from optimal or correct answers at the individual level.
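
For completeness, a short worked check shows why the intuitive answer fails. Letting b denote the price of the ball in dollars, the two stated conditions imply

$$ b+\left(b+1.00\right)=1.10\Rightarrow 2b=0.10\Rightarrow b=0.05 $$

The ball costs 5 cents, not 10; the intuitive answer would make the total $1.20 rather than $1.10. Catching the error requires only this one step of System 2 reasoning, which most respondents did not engage.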

The dual-system architecture and associated empirical experiments demonstrate the complexity of human cognition behind seemingly simple and straightforward decision-making strategies. As discussed above, different systems are associated with different decision-making processes and are subject to the impacts of different external factors and internal biases. The operations of System 1 and System 2 may be triggered by different task types (e.g., simple factual retrieval or open-ended complex retrieval tasks), situational factors (e.g., task urgency), and individual characteristics (e.g., emotional state, prior knowledge on certain domains). Also, at different stages of a motivating task, the two systems may interact with each other (e.g., System 2 may monitor the quick decisions made based on the impressions from System 1).

Although we can differentiate the features and operational processes of the two systems at the theoretical level and in highly controlled experimental settings (Evans, 2003; Frederick & Fischhoff, 1998; Kahneman, 2011), it is difficult to clearly separate them and model their operations in relatively complex, uncontrolled task settings, such as real-life information seeking and retrieval scenarios. For instance, a user may plan to learn more about the seriousness and actual impacts of heat waves in different regions of the world. Although the user may rely on the operations of System 2 to carefully examine statistics on the impacts on the economy, public health, and transportation, their judgments of the seriousness of the situation in different countries may still be skewed by the biased presentation of information on SERPs under related queries: depending on the user’s past search history and geographical location and the specific personalization algorithms behind the search engine in use, the retrieved search results may be heavily biased toward certain regions and populations. As a result, the user may overestimate the seriousness of the situation in local areas, for which more relevant reports, images, and news stories are available on retrieved SERPs, and underestimate the gravity of the related problems in other countries and regions.

Another example concerns financial decision-making and information processing. Although people who invest in stock markets can sometimes analyze the current situation rationally based on the available information (past stock prices, the overall trend of the economy, ongoing and pending policies), this does not mean that System 2 is always in charge when critical financial decisions need to be made. In cases where decision makers are unclear about the overall trend of the stock market and have difficulty predicting the price changes of purchased stocks, they may decide to hold on to the stocks in their accounts and stick to the current status, despite uncertain but alarming signals (e.g., ongoing price drops, fluctuations in interest rates). This status quo bias (cf. Fleming et al., 2010; Samuelson & Zeckhauser, 1988) and aversion to risk and ambiguity (cf. Holt & Laury, 2002) allow people to make quick decisions under uncertainty and incomplete information (usually under the operation of System 1), but they may also lead to inaccurate and irrational decisions regarding “sunk costs” in stock investments and cause even bigger financial losses. Note that people are more likely to rely on the intuitive judgments enabled by System 1 and on biased decisions when they are required to make choices under high levels of uncertainty and pressure.

Methodologically, although some progress has been made on studying human reasoning and biases in simple crowdsourcing tasks (e.g., Draws et al., 2021; Eickhoff, 2018; Saab et al., 2019), the existing tools (e.g., interfaces, standard tasks, questions, and scales) may not suffice for capturing the different aspects of task-based information search interactions. In particular, it is difficult to simulate the contexts, motivations, and situational limits that often trigger the operations of System 1 and create the conditions for human biases to play their roles. Although System 1 offers more room for human biases and rule-of-thumb heuristics to operate, decisions made under the operations of System 2 can still be biased and boundedly rational due to the limited capacity of mental effort and the restrictions imposed by specific problematic situations (e.g., time limits, constraints and biases of available information, prior beliefs, and public opinions). Thus, to develop useful, reproducible, bias-aware user models in IR and related fields, researchers need to not only integrate theories on bounded rationality and human biases with formal models and evaluation metrics but also overcome the methodological challenges of investigating and simulating human biases in user studies and controlled experiments, especially under complex information retrieval tasks.

As discussed above, individuals’ decision-making activities are affected by the operations of both System 1 and System 2 in varying ways. The tension and interaction between the two systems often trigger human biases of varying types (Kahneman, 2011), which bring both positive and negative impacts on the process and quality of decisions. Figure 4.1 summarizes the operations of both systems in decision-making activities and illustrates the role of individual human biases within the whole process. As shown in Fig. 4.1, although human biases can generate behavioral impacts under both systems, they usually have a higher chance of causing frequent and significant deviations from optimal results under the quick, automatic operations of System 1, or in situations where the impressions generated by System 1 conflict with the judgments made under System 2.

Fig. 4.1 Operations of System 1 and System 2 in decision-making activities

The dual-system framework provides an overall conceptual structure for characterizing and explaining different forms of human decision-making under varying environments, including local and global decisions in information seeking and retrieval. Within the operations of the two systems, different cognitive and perceptual biases arise and affect different aspects and stages of decision-making and task performance. Knowledge regarding human biases and the operations of System 1 may allow researchers to better understand, explain, and predict the deviations of real-life boundedly rational decisions from optimal decisions, which may require slow deliberate reasoning. To further investigate boundedly rational decisions under the impacts of different biases, the following sections explore a series of widely examined human biases in behavioral economics and explain their implications for information seeking, IR, and other closely related research areas, especially for enhancing user models and for developing and meta-evaluating bias-aware search evaluation from both user-centered and algorithmic perspectives.

3 Reference Dependence

The reference dependence effect is one of the most widely studied human biases in option evaluation and decision-making, and it connects to or causes several related biases, such as framing effects, loss aversion, and anchoring biases. According to Tversky and Kahneman’s (1992) research on reference dependence, when people make decisions and evaluate available options, they base their judgments on the gains or losses relative to varying reference points in mind, rather than on final absolute outcomes. Thus, given the same final outcomes, different people under different conditions may form largely different judgments and reactions, which may lead to divergent subsequent behaviors. In economic analysis, researchers found that, in contrast to the assumptions of standard decision-making models, initial entitlements, which often act as default reference points, do play an important role in determining people’s preferences and perception-based evaluations; also, the rate of exchange between products can differ largely depending on which is obtained and which is given up in a transaction (Kőszegi & Rabin, 2006; Tversky & Kahneman, 1991).

The impacts of reference dependence are ubiquitous in real-life decision-making tasks and do not require complex mechanisms or strict conditions. Figure 4.2 presents a simple example that illustrates the reference dependence effect. Suppose two people are each moving to a new apartment. The person in situation A moves from Apartment A to Apartment B, and their work-home distance decreases from 30 miles to 15 miles. Meanwhile, in situation B, the person moves from Apartment C to Apartment D, and their work-home distance increases from 2 miles to 10 miles. It would not be surprising to see that the person in situation A is happier (assuming that work-home distance is the only factor that matters here and that happiness and work-home distance are negatively correlated). However, if we merely compared the final outcomes of the two situations, a standard decision-making model would ignore the perceived gains and losses involved in the process and predict that the person in situation B is happier, given the shorter final work-home distance. As argued in the reference dependence model, it is the perceived gains and losses relative to the corresponding reference points that matter in decision-making activities and individuals’ judgments.

Fig. 4.2 Reference dependence: an example of moving
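
To make the contrast concrete, the following minimal sketch evaluates the moving example under both an outcome-only model and a reference-dependent model. The linear value functions are illustrative assumptions of our own choosing, not forms prescribed by the literature:

```python
# Outcome-only vs. reference-dependent evaluation of the moving example.

def outcome_only_value(new_distance):
    # A standard final-outcome model: shorter commute -> higher value.
    return -new_distance

def reference_dependent_value(new_distance, old_distance):
    # Value depends on the change relative to the reference point
    # (the old commute), not on the final distance alone.
    return old_distance - new_distance  # positive = perceived gain

# Situation A: 30 -> 15 miles; situation B: 2 -> 10 miles.
print(outcome_only_value(15), outcome_only_value(10))  # -15 -10: B looks better
print(reference_dependent_value(15, 30))               # +15: perceived gain for A
print(reference_dependent_value(10, 2))                # -8: perceived loss for B
```

Under the outcome-only model, situation B dominates; under the reference-dependent model, situation A registers a gain and situation B a loss, matching the intuition described above.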

Theories and research on reference dependence cast doubt on the long-standing normative theory of expected utility, in which individuals’ decisions are assumed to be determined by a utility function over the expected final outcomes and the probability associated with each possible situation under the same action. As shown in Formula (4.1), S represents the full set of possible situations or outcomes under action A. P and U refer to the measurable probabilities and utility scores associated with each possible situation. E(A) measures the expected utility of action A, which considers all theoretically possible situations and the related utility that an individual could obtain.

$$ E(A)=\sum \limits_{s\in S}{P}_A(s)U(s) $$
(4.1)
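
As a concrete illustration of Formula (4.1), the short sketch below computes the expected utility of two hypothetical actions; the probabilities and utility scores are made-up values for demonstration only:

```python
# Expected utility per Formula (4.1): E(A) = sum over s in S of P_A(s) * U(s).

def expected_utility(outcomes):
    """outcomes: a list of (probability, utility) pairs for one action."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in outcomes)

risky_action = [(0.5, 10.0), (0.5, 0.0)]   # two equally likely outcomes
certain_action = [(1.0, 4.0)]              # one guaranteed outcome

print(expected_utility(risky_action))    # 5.0
print(expected_utility(certain_action))  # 4.0 -> a pure EU maximizer takes the risk
```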

In contrast to the findings on reference dependence, expected utility theory does not involve individual-level factors that are subjective in nature, such as pre-search expectations, changing preferences, and in situ reference levels (Harrison, 1994; Kahneman, 2003). Expected utility and other forms of optimization goals built upon mathematical expectations are widely used in estimating utility and effort in decision-making processes, as this approach offers researchers a tangible way to simplify the analysis of individual choice and to quantitatively compute, compare, and evaluate choices, decisions, and possible outcomes (Tversky & Kahneman, 1991). When we apply the reference-dependence framework to analyzing and predicting human decisions, one obvious challenge is to estimate and accurately predict people’s reference points. In controlled lab settings, researchers can design reference-dependence scenarios by manipulating initial entitlements (e.g., pre-experiment gifts, a certain amount of cash) (Apesteguia & Ballester, 2009; Bateman et al., 1997; Tversky & Kahneman, 1991; Sprenger, 2015) in simulated simple decision-making tasks, with available options, external conditions, and situational restrictions fully explained and transparent. However, in real-life environments, estimating reference points can be difficult and may require deeper knowledge about individuals’ knowledge structures, in situ expectations, and the perceived gains and efforts from recent and previous similar decisions.

Furthermore, from the reference dependence perspective, researchers also found that the marginal value and impact of both gains and losses decrease with their size (Tversky & Kahneman, 1991). When changes in gains and losses are far away from the reference points, their impacts on people’s perceived utility and behavior are smaller than the effects of variations close to the reference points. For instance, under the impact of inflation, oil prices often increase over time. If the previous long-term stable price range is around $1.99 to $2.19, then people tend to be more sensitive to changes close to their initial reference points (e.g., a 50-cent price increase from $2.19 to $2.69) than to same-size marginal changes at a higher price level (e.g., an increase from $4.49 to $4.99). On the gain side, this phenomenon of diminishing sensitivity can be written as follows:

$$ G=d\left(o, ref\right)\ge 0 $$
(4.2)
$$ V(G)\ge 0 $$
(4.3)
$$ \frac{\partial V}{\partial G}>0 $$
(4.4)
$$ \frac{\partial^2V}{\partial {G}^2}<0 $$
(4.5)

As shown in the formulas above, G represents the gain that an individual collects or perceives relative to a reference point in mind; thus, G is determined by the final outcome o and the reference point ref. According to the findings on diminishing sensitivity, the first-order derivative is greater than zero, as perceived value or utility keeps increasing as the gain increases. The second-order derivative is negative, as the marginal value that each additional unit of gain brings keeps decreasing as the total amount of gain increases. Thus, the relationship between variations in perceived gains or losses and the corresponding changes in perceived value or utility is nonlinear.
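
One simple function satisfying Formulas (4.2) through (4.5) is the square root; this particular choice is purely illustrative, as the text does not prescribe a specific functional form:

```python
import math

def value_of_gain(gain):
    # A concave value function over gains: V(0) = 0, V' > 0, V'' < 0,
    # as required by Formulas (4.3)-(4.5).
    assert gain >= 0, "defined on the gain side only"
    return math.sqrt(gain)

# Equal-sized increments add less and less perceived value as the
# distance from the reference point grows:
print(value_of_gain(10) - value_of_gain(0))     # ~3.16
print(value_of_gain(110) - value_of_gain(100))  # ~0.49
```

The same 10-unit increment is worth far less in perceived value when it arrives on top of an already large gain, mirroring the oil price example above.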

Theories and empirical evidence on reference dependence encourage us to reflect on the simplified assumptions and associated formal models built in a variety of application areas, including information seeking and retrieval. The change from an outcome-based perspective to a reference-based perspective would motivate researchers to re-examine and revise a large set of existing user models and evaluation metrics. Also, it would be important to investigate the impacts of relative gains and losses (e.g., increases or decreases in dwell time on SERPs and content pages, in the quality of SERPs measured by nDCG, and in the difficulty of formulating meaningful queries) on users’ in situ search decisions (e.g., changes of search tactics) and levels of satisfaction. However, as discussed above, before we can revise user models and metrics from a reference-dependent perspective, researchers need, as a starting basis, to accumulate solid direct evidence on the roles, changes, and impacts of in situ reference points in search interactions through properly designed user studies. Searchers’ reference points could come from their pre-search beliefs and existing knowledge, in situ search expectations, prior search gains and efforts during the same session, as well as past search experience under similar motivating tasks and search scenarios (e.g., a recurring daily task in the workplace).

As presented in the examples above, a final outcome can be perceived as either a gain or a loss, depending on its change relative to the reference point. Based on observations of people’s responses to perceived gains and losses, researchers have identified another effect related to reference dependence, namely, loss aversion, which causes asymmetric sensitivity to gains and losses of the same size (if quantitatively measurable) and produces other related impacts on decision-making, such as the endowment effect and status quo bias. The following sections discuss loss aversion as well as these related effects in detail.

4 Loss Aversion, Endowment Effect, and Status Quo Bias

According to the empirical evidence (at both behavioral and neural levels) on loss aversion from behavioral science experiments (e.g., Alesina & Passarelli, 2019; Erev et al., 2008; Gächter et al., 2022; Tom et al., 2007; Tversky & Kahneman, 1991), when changes relative to certain reference points occur, people tend to be more sensitive to losses than to gains. In other words, it is better to not lose $5 than to gain $5. A major increase in product quality may not cause much improvement in customer satisfaction and adoption, whereas a slight decrease in product quality from the current status may lead to a quick, major drop in customers’ product ratings.

In addition, people are generally more sensitive to a difference on a dimension when that difference is perceived as a loss than to a same-sized difference on a dimension where it is perceived as a gain. Consider the example presented in Fig. 4.2. Assuming that work-home distance is the only dimension considered in the evaluation, the person in situation A is happy to see the distance decrease after moving to Apartment B. However, if the move also involves a perceived loss on another dimension, such as higher rent, increased distance from home to supermarkets, or a worse school district, then the option may be viewed differently. If the losses and gains are both quantifiable and comparable in the same unit (e.g., assuming both can be expressed in dollar amounts), then for the same amount of losses and gains on different dimensions, people are more sensitive to the dimensions on which losses relative to the status quo are perceived.

The effect of loss aversion can be written as follows:

$$ {f}^{\prime}\left(-\Delta \right)>{f}^{\prime}\left(\Delta \right)>0 $$
(4.6)
$$ \Delta ={o}_{gain}- ref $$
(4.7)
$$ -\Delta ={o}_{loss}- ref $$
(4.8)
$$ \omega \left(-{\Delta}_a\right)>\omega \left({\Delta}_b\right) $$
(4.9)

where Δ (Δ > 0) represents the perceived difference between the current outcome being evaluated and the corresponding reference point. We use a and b to denote two different dimensions over which perceived differences are evaluated by decision makers. Note that changes in perception can be caused both by variations in the outcome (e.g., waiting time in a dentist’s office increases from 15 min to 30 min) and by changes in the reference point alone (e.g., expected waiting time increases from 10 min to 35 min). Transitions between gains and losses (not necessarily changes in outcomes) often lead to significant changes in people’s decisions and in their sensitivity to variations relative to the reference point. In addition, as presented in Formula (4.9), the same difference between two options or statuses is often assigned a greater weight when it is perceived as a difference between two losses or disadvantages relative to a reference point. In naturalistic settings, however, it can be challenging to quantitatively measure and compare differences that occur on different dimensions of the possible outcome of an option.
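
To illustrate the asymmetry in Formulas (4.6) through (4.8), the sketch below uses a piecewise power value function with the median parameter estimates reported by Tversky and Kahneman (1992) (α = 0.88, λ = 2.25); these numbers are one estimated parameterization, not universal constants:

```python
def prospect_value(outcome, ref, alpha=0.88, lam=2.25):
    # Loss-averse, reference-dependent value: steeper on the loss side
    # than on the gain side, consistent with f'(-Δ) > f'(Δ) > 0.
    delta = outcome - ref
    if delta >= 0:                       # perceived gain
        return delta ** alpha
    return -lam * ((-delta) ** alpha)    # perceived loss, weighted more heavily

# With a reference point of 0, a $5 loss hurts more than a $5 gain pleases:
print(prospect_value(5, 0))    # ~4.12
print(prospect_value(-5, 0))   # ~-9.27
```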

To explore the neural basis of loss aversion, Tom et al. (2007) conducted a study of participants’ brain activity under simulated gambling tasks. Their results indicate that as potential gains in gambles increased, a broad range of areas, including midbrain dopaminergic regions and their targets, showed increased activity. Meanwhile, potential losses perceived by the participants were associated with decreasing activity in several of the gain-sensitive regions. Based on this finding, the researchers proposed that individual differences in behavioral-level loss aversion can be estimated and predicted from neural-level loss aversion signals, such as activity in the prefrontal cortex and ventral striatum. Similarly, Canessa et al. (2013) further investigated individual differences in loss aversion tendencies with functional magnetic resonance imaging (fMRI)-based measures and found that behavioral loss aversion is associated with several neural systems and regions, suggesting both structural and functional individual differences that can be related to the financial outcomes of decisions under uncertainty.

Apart from the loss aversion effect per se, empirical findings from a series of classic and recent behavioral experiments demonstrate that loss aversion has several immediate consequences in decision-making activities and is associated with several other types of human biases as well. For instance, Kahneman et al. (1986) conducted a series of experiments on loss aversion in a classroom setting. In one of the experiments, the researchers presented a decorated mug (market value of around $5) to one third of the students who participated in the study. The participants who received the mug were asked to give an acceptable price for selling the mug to others, ranging from $0.50 to $9.50. For the rest of the participants, the researchers offered another questionnaire and asked them to indicate their preferences between a mug and various amounts of money within the same price range. Under this setting, selling the mug would be considered a loss by the students who had already received it, whereas the students who did not receive one (i.e., the “choosers”) would consider a mug, if they chose it, a gain.

The experimental results echo the findings on loss aversion from other studies and confirm the effect of instant endowment. The median value of the mug reported by the sellers (students who received mugs at the beginning) was $7.12, whereas for the choosers the median value was $3.12. The experiment was repeated and yielded similar results ($7.00 vs. $3.50). This consequence of loss aversion is defined as the endowment effect: the loss of value or utility associated with giving up a valued item is greater than the perceived utility associated with obtaining it (Kahneman et al., 1986; Tversky & Kahneman, 1991). The significant gap between selling and buying prices demonstrates the effect of endowment and contradicts many formal economic models in which the perceived cost of a product or activity is considered consistent and fixed across individuals. The endowment effect has also been empirically confirmed in several recent studies conducted in various domains and settings (e.g., Hubbeling, 2020; Knetsch & Wong, 2009; Knutson et al., 2008; Morewedge & Giblin, 2015), which partially justifies the importance of examining this effect, and human biases in general, in broader decision-making contexts and more diverse disciplines, including information seeking, retrieval, and recommendation.

Another widely examined consequence associated with loss aversion is status quo bias (Tversky & Kahneman, 1991). In many decision-making scenarios, the retention of the status quo is one of the key options. For instance, traders in the stock market can choose to keep their current stocks. Employees who have been working in a company for years often tend to stay in their positions despite possible opportunities for promotions and pay raises elsewhere. When a medical or subscription plan is designated as the default, employees are likely to stick to it, year after year, despite the opportunity for annual plan review and modification. This may be because (1) when a status quo is fully established in mind as a long-term reference point, people tend to be highly sensitive to even slight changes near that reference point, especially ones perceived as losses (e.g., increased transportation time after a job change; a higher salary but also a higher health insurance premium at a new company), and (2) staying with the status quo option can help people resolve possible cognitive dissonance (Akerlof & Dickens, 1982; Bem, 1967; Cooper, 2019; McGrath, 2017) and keep their current behaviors consistent with established habits, beliefs, and perspectives.

In behavioral experiments, researchers found that when an option in a simulated decision-making scenario is designated as the default status quo, participants’ choices are systematically biased toward it, especially in situations where they are not familiar with the decision-making task (Fleming et al., 2010; Kim & Kankanhalli, 2009; Samuelson & Zeckhauser, 1988). Researchers also found that when changes to the status quo or default option become inevitable, people prefer options that bring slight changes to the current status quo over those that cause larger changes (Tversky & Kahneman, 1991). For instance, when choosing among medical and retirement plans, employees may be more likely to choose the plans closest to the current default offered to them (with minor revisions to a few items), rather than ones that come with major changes to a broad range of items and restructure the entire plan.

Fleming et al. (2010) explored status quo bias at the neural level by investigating the ways in which neural pathways connecting cognitive activities with actions modulate status quo acceptance (or rejection), especially in situations where the status quo is suboptimal and more errors could occur when the default option is selected. The researchers found a selective increase in subthalamic nucleus (STN) activity when the status quo option was rejected under heightened decision difficulty. They also found that the inferior frontal cortex exerts an increased modulatory effect on the STN when individuals switch away from the status quo and choose non-default options. These findings provide a neurophysiological basis for examining the role of the status quo, especially in difficult decision-making tasks.

Similarly, Yu et al. (2010) adopted a neuroscience approach to investigating status quo bias and demonstrated that an increased tendency to move away from the default option is closely related to reduced activity in the anterior insula, whereas decisions to select the default activated the ventral striatum, the same reward area seen in winning. Yu et al. (2010)’s work identifies the aversive processes in the insula as the underlying neural mechanism behind status quo bias and echoes relevant findings from classic behavioral experiments (e.g., Samuelson & Zeckhauser, 1988). More details regarding the experimental design and the specific changes in neural metrics under different conditions are reported in the original research papers.

Beyond the impacts of loss aversion, Samuelson and Zeckhauser (1988) noted that status quo bias can also be triggered by other contextual factors, such as switching and transaction costs, the mental demands of thinking and evaluation, and psychological commitment to prior decisions, even in cases where the loss aversion effect is absent. This finding suggests that a variety of motivations can push people toward existing strategies, options, and default plans and that researchers and designers need to consider a broad range of factors, instead of merely focusing on calculable losses and gains, when designing effective intervention techniques, nudging tools, and system recommendations to change people’s decisions (e.g., Bonnichsen & Ladenburg, 2015; He & Cunha, 2020). For instance, Kim and Kankanhalli (2009) indicate that status quo bias often triggers user resistance to the implementation of new information systems and leads to the failure of new systems. To manage and resolve this issue, the researchers investigated the formation of status quo bias and explored the internal and external factors that mediate and reinforce the impacts of status quo bias and user resistance. Based on survey data collected from the employees of an IT service company where a new enterprise office system was deployed, Kim and Kankanhalli (2009) developed a structural model of user resistance and demonstrated that learning and switching costs mediate the impacts of other factors (e.g., the opinions and choices of other employees, self-efficacy) on user resistance. The perceived value of the new system and organizational support can also help mitigate status quo bias and reduce user resistance to new systems.

To illustrate the connections among different concepts (e.g., gain, loss, reference point) and biases, Fig. 4.3 presents the structure of reference dependence as well as the associated effects, including loss aversion, the endowment effect, and status quo bias. Note that initial endowments and the status quo can also be considered reference points against which people compare and evaluate actual or estimated outcomes associated with available options. There are also other types of reference points, such as initial beliefs and knowledge, in situ expectations, and past experiences under the same or similar tasks. Our understanding is that reference dependence can serve as a fundamental framework that helps us better understand the nature of multiple associated human biases, such as framing effects, anchoring biases, and confirmation biases. We will discuss more diverse types of possible reference points and the corresponding consequences for decision-making in the following sections.

Fig. 4.3 Reference dependence and related human biases

Note that in addition to its cognitive impacts, the reference-dependence effect also operates at the perceptual level. To illustrate the perceptual effects of reference dependence, Fig. 4.4 presents filled circles with different levels of luminance. Although the two enclosed circles have the same level of luminance, they do not seem equally bright: the inner circle in Fig. 4.4a seems brighter than its counterpart in Fig. 4.4b. This phenomenon demonstrates that perceived brightness is not only controlled by the absolute luminance of an area but also affected by the background that serves as an implicit reference. Similarly, vision researchers have explored the role of reference in distance estimation and found that the orientation of the body and of the visual environment, both of which can alter current reference points, has significant impacts on perceived distances (Harris & Mander, 2014). A similar reference-dependence effect was also found in the variation in people’s perceived sizes of the same object under different reference distances (Barac-Cikoja & Turvey, 1995). This perceptual dimension of reference dependence may also affect other aspects of decision-making and is certainly relevant to graphical user interface (GUI)-based human-information interaction. For example, different color combinations and backgrounds may change the perceived saliency of certain items and thereby affect the distribution of attention and user actions across rank positions.

Fig. 4.4 Reference dependence effect in vision: two pairs of concentric circles whose inner circles have equal luminance but opposite backgrounds [adapted from the reference-dependence vision example offered in Kahneman (2003), p. 1455, Fig. 5]

In information seeking and retrieval, users’ interactions with information and information systems may also be affected by loss aversion, the endowment effect, and status quo bias. For instance, due to loss aversion, a decrease in the quality of the current SERP relative to the most recent one may cause local changes in a user’s search strategies (e.g., reformulating a completely new query, or abandoning the current SERP without clicking on or carefully examining the ranked results), despite the possibility that the current search path is correct and may eventually lead to a series of useful documents for completing the search task at hand. Also, because of status quo bias, users may stick to the default information sources they are most familiar with (e.g., a specific online forum, an expert’s blog site) and pay less attention to less familiar sites, even though the default options may not cover all necessary information under all tasks. In addition, when users have already acquired sufficient knowledge and rich experience with their current system, they are less likely to accept or try to learn a new search system, despite the fact that the new system could be more efficient for processing regular information-intensive work tasks and supporting information sharing and communication among workers, and might significantly improve the productivity of their information seeking and task performance. Our hope is that through the synthesis and analysis of empirically confirmed behavioral economics theories on biases and bounded rationality, readers can better understand the nature of decision-making under uncertainty and further explore the implicit connections between the factors associated with human biases and the components of formal user models.

5 Expectation (Dis)confirmation Theory

In addition to the effects discussed above, another bias related to reference dependence is expectation confirmation or disconfirmation, which has been widely examined in different application scenarios in the area of management information systems (MIS) (e.g., Lankton & McKnight, 2012; McKinney et al., 2002; Oliver, 1980; Venkatesh & Goyal, 2010). Among these applications and subdomains, one of the most frequently studied is information system adoption. According to the core arguments of expectation disconfirmation theory (EDT) (Oliver, 1980; Venkatesh & Goyal, 2010), users’ acceptance and adoption of a new information system are not only affected by the perceived performance of the system but also shaped by their pre-adoption expectations. Users’ post-adoption satisfaction is determined by the disconfirmation of beliefs, which is closely associated with the difference between the pre-adoption expectation (the user’s reference point for post-adoption judgments) and the perceived performance.

Users’ continuance intention toward information systems is a central topic for both information systems research and the service providers of online platforms. Many empirical experiments have found that users’ satisfaction with and continued use of a new information system depend on the status of expectation confirmation (Lankton & McKnight, 2012; Oliver, 1980). Bhattacherjee (2001) proposed an expectation-confirmation model to investigate users’ information system continuance behavior. The study found that users’ continuance intention is influenced by their satisfaction with information system use and the perceived usefulness of continued use, and that satisfaction is in turn affected by the confirmation of pre-adoption expectations formed from prior experience of information system use and expected usefulness. Similarly, Venkatesh and Goyal (2010) developed a polynomial model of expectation disconfirmation and incorporated multiple related factors, such as cognitive dissonance, job preview, and factors from the technology acceptance model (cf. Venkatesh & Davis, 2000) and prospect theory, which directly involves reference dependence (cf. Kahneman & Tversky, 2013; Tversky & Kahneman, 1992). Venkatesh and Goyal (2010) showed that expectation disconfirmation is in general detrimental to information system adoption and reduces users’ behavioral intention to continue using new systems in both positive and negative disconfirmation scenarios. Beyond the behavioral level, Fadel et al. (2022) conducted an fMRI study of expectation disconfirmation in the context of information filtering in electronic networks of practice. The results of their neuroimaging experiment show neural activation differences between expectation confirmation and disconfirmation states and also between unexpected gains and unexpected losses. Thus, to successfully implement new information systems, researchers and system designers need to systematically examine target users’ expectations regarding system layout, performance, and other related aspects, as well as the previous experiences and beliefs on which pre-adoption expectations are built.
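
As a minimal sketch of the EDT logic described above, the snippet below treats the pre-adoption expectation as a reference point and maps disconfirmation linearly to satisfaction. The linear form, baseline, and weight are illustrative assumptions of ours rather than part of the theory, and note that Venkatesh and Goyal (2010) argue that even positive disconfirmation can reduce continuance intention:

```python
# Expectation disconfirmation: the gap between perceived performance and
# the pre-adoption expectation (the reference point).

def disconfirmation(perceived_performance, expectation):
    return perceived_performance - expectation  # > 0: positive disconfirmation

def satisfaction(perceived_performance, expectation, baseline=0.5, weight=0.05):
    # A simple linear mapping from disconfirmation to satisfaction in [0, 1].
    score = baseline + weight * disconfirmation(perceived_performance, expectation)
    return max(0.0, min(1.0, score))

# The same perceived performance (7 on a 10-point scale) yields different
# satisfaction under different pre-adoption expectations:
print(satisfaction(7, 5))  # 0.6 -> expectations exceeded
print(satisfaction(7, 9))  # 0.4 -> expectations unmet
```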

Based on previous empirical research and theoretical developments in this area (e.g., Bhattacherjee, 2001; Fadel et al., 2022; Oliver, 1980; Venkatesh & Goyal, 2010), Fig. 4.5 illustrates the structure of EDT under the theoretical umbrella of reference dependence and includes the factors that may affect different aspects of expectation confirmation. As in Fig. 4.3, we emphasize the difference between the reference dependence model and the traditional non-reference model and hope that these highlighted differences help students and young researchers better identify the gaps between the widely used final-outcome-based models in IR and the EDT/reference dependence model.

Fig. 4.5 The structure of expectation confirmation theory

Note that other types of human biases and situational limits may also come into play in different parts of the model. For example, depending on users’ knowledge structures, immediate information needs, and existing beliefs and biases, their perceived system performance can systematically deviate from the actual performance (insofar as an objective, unbiased measurement of performance is possible) to varying degrees. Also, users’ perceptions of expectation disconfirmation may be affected by other factors, such as learning costs, emotional states, and the adoption behavior of peer users.

User expectation is an important form of reference point and can affect people’s interactions with information systems at both the single-iteration (e.g., a single query-SERP response) and whole-session levels. In multi-round user-system interactions, such as interactive information searching, it would be useful to investigate the effects of both pre-interaction expectations (e.g., expectations regarding overall dwell time and effort based on previous experience with similar tasks) and in situ expectations (the local, temporary expectations established and constantly revised based on the gains and efforts experienced in ongoing sessions). Although we have accumulated rich empirical evidence regarding pre-interaction expectations, such as expected task difficulty and task complexity measured through pre-search questionnaires (e.g., Arguello, 2014; Capra et al., 2018; Choi et al., 2019; C. Liu et al., 2014; O’Brien et al., 2020), the latter type of expectation, which may play an even more important role in shaping interaction effectiveness, in situ judgments, and whole-session experiences, remains largely understudied in information seeking, retrieval, and recommender systems research, with few exceptions (e.g., Liu & Shah, 2019).

6 Framing Effect, Confirmation Bias, and Anchoring Bias

The sections above discussed several types of perceived quantitative changes under the influence of biases and explained how they might lead to suboptimal, boundedly rational decisions, both within and outside information seeking and retrieval. Beyond measurable quantitative changes (e.g., price drops and increases, changes in daily commute time), people’s decisions can also be affected by biases and limits in a more qualitative manner that may not be quantitatively measurable in some decision-making scenarios. For instance, people’s perceptions may be affected by the descriptive narrative of the options, which can be framed in terms of either losses or gains. In addition, people are more likely to accept options, or to overestimate the usefulness of certain documents, when these options or documents are consistent with their existing beliefs and knowledge. We discuss these types of human biases and their impacts in this section.

In classic economic models, the invariance of individuals’ preferences is a fundamental assumption that facilitates the analysis of economic behaviors and enables researchers to represent individuals’ preferences with indifference curves (Mankiw, 2014; Schumm, 1987). Specifically, microeconomic models often assume that individuals’ preferences are not influenced by inconsequential variations in the description of outcomes (when the nature of the outcomes remains the same). This assumption, known as extensionality or invariance, helps researchers bypass the problem of in situ variations in individuals’ intents and preferences and reduces the computational complexity of predicting people’s preferences over varying combinations of goods (Tversky & Kahneman, 1986). The assumption of preference invariance is also considered a key aspect of rational decision-making.

While the assumption of extensionality and invariance underpins various formal models of individual choice and decision-making, it contradicts empirical evidence on real-life individuals’ perceptions of options and outcomes. Specifically, behavioral economics researchers found that individuals’ preferences are subject to framing effects, where extensionally equal descriptions of outcomes (altering only the narrative around certain salient aspects of the problem, without changing the substance of the problem or the outcomes) can lead to systematically different choices (Kahneman, 2003). It is therefore difficult to map expected utility (cf. Harrison, 1994) onto users’ preferences, as the perceived value of options and potential outcomes can be changed without touching the actual utility or manipulating the nature of the decision problem. Beyond traditional economic decisions, people’s preferences can also be easily altered by changing the layout and framing of accessible options (e.g., the presentation of ranked search results and social media information, different designs of advertisements, and clickbait information on SERPs and Web pages).

To better explain framing effects, Tversky and Kahneman (1985) offered a discussion of a hypothetical problem presented to their study respondents.

Problem: The Asian Disease

Imagine that the United States is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows.

To measure the possible impacts of framing, Tversky and Kahneman (1985) designed two versions of the program descriptions, altering which salient aspects of the programs were highlighted while keeping their actual content unchanged.

Program Description: Version 1

If Program A is adopted, 200 people will be saved; if Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no one will be saved.

Program Description: Version 2

If Program A’ is adopted, 400 people will die; if Program B’ is adopted, there is a one-third probability that nobody will die and a two-thirds probability that all 600 people will die.

Under the same problem statement, the two versions of program descriptions were presented to respondents. Note that under classic economic models, people’s preferences should not differ between Program A and Program B, as the two programs come with the same expected utility on the dimension over which they are compared. Interestingly, however, the conclusion of this rational mathematical analysis, which seems certain and straightforward in formal modeling, clearly contradicts real-life people’s preferences and decisions under both versions of the program descriptions.
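A simple expected-value calculation makes this equivalence explicit. Measured in expected lives saved:

EV(A) = 200
EV(B) = (1/3) × 600 + (2/3) × 0 = 200
EV(A’) = 600 − 400 = 200
EV(B’) = (1/3) × 600 + (2/3) × 0 = 200

All four programs are extensionally equivalent; only the framing (lives saved vs. deaths) differs.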

According to the results presented in Tversky and Kahneman (1985), under Version 1, Program A is favored by a significant majority of respondents, indicating a general tendency toward risk aversion. Another group of respondents then received Version 2 of the program description, which does not substantively differ from Version 1. In contrast to the results collected under Version 1, however, the researchers found that a substantial majority of respondents favored Program B’, suggesting risk-seeking behavior. According to Kahneman and Tversky (2013), part of the reason for this result is that certain outcomes are usually overweighted in people’s decision-making relative to outcomes that are merely highly probable. In Version 2, accepting the certain death of 400 people under Program A’ appears significantly less attractive, even unacceptable, compared to Program B’, where there is a chance that all people can be saved. Thus, influenced by the immediate emotional response to the overweighted certain outcome, people favored B’ over A’.

The study by Tversky and Kahneman (1985) demonstrates the pure impact of framing on people’s preferences under controlled conditions: (1) in both versions, the two programs are associated with the same expected utility, which allows researchers to reveal the gap between real-life individual preferences (and the associated changes) and fixed expected utility; (2) between the two versions, respondents’ preferences changed drastically only because the framing of the available options was modified (with different salient aspects being emphasized).

Related to status quo bias, researchers have also found that when options are framed as default choices, people are more likely to accept them directly in decision-making tasks, without deliberate thinking and judgment. For instance, Johnson and Goldstein (2003) examined enrollment rates in organ donation programs in seven countries and found a significant difference in enrollments depending on how the organ donation option was framed: in countries where organ donation was the default option, the (automatic) enrollment rate was 97.4%; in places where non-enrollment was the default choice, the enrollment rate dropped drastically to only 18%. Similarly, Goswami and Urminsky (2016) studied the effect of framing specific amounts as defaults in charitable donations and found that (1) setting a low amount as the default donation reduces average donation amounts across donors, and (2) the default option can act as a “distraction” that reduces the impact of other informational cues, including positive charity information. Both studies confirm the impact that framing certain options as defaults has on people’s perceptions and decisions.

Figure 4.6 offers a general form of the framing effect in the context of decision-making under uncertainty, inspired by a series of behavioral experiments conducted by Tversky and Kahneman. Regarding the default or status quo bias (e.g., Johnson & Goldstein, 2003), the option that is set as the default can also be considered a highlighted salient aspect or option, and changing the default option can cause significant changes in people’s perceptions of and emotional responses to the same option, thereby affecting the final decision.

Fig. 4.6
A diagram of the framing effect depicts the flow from option and predicted outcome to extensionally equivalent descriptions, perceptions, and emotional responses.

Framing effect

The studies discussed above demonstrate the basic principle of the framing effect: people often passively accept the formulation given and are easily influenced by the highlighted salient aspects of the problem, without carefully examining the presented extensionally equivalent descriptions (Kahneman, 2003). This framing effect, which can lead to suboptimal and boundedly rational decisions, is usually caused by limits on cognitive resources. With a finite mind, decision makers cannot afford to fully examine all possible options and differentiate between versions of extensionally equivalent descriptions. Thus, the assumption of invariance in individuals’ preferences is not tenable in most cases. Researchers need to recognize the limited room within which people’s cognitive systems operate and pursue good enough outcomes among accessible options (Kaufman, 1999).

Confirmation bias can be considered an extension of reference dependence, where the reference is existing knowledge and beliefs, expectations (e.g., regarding the content of retrieved information objects), or hypotheses in mind. According to Nickerson (1998), the term confirmation bias characterizes the tendency to seek or interpret evidence in ways that support or confirm beliefs and expectations established beforehand. Confirmation bias is one of the most widely known problematic aspects of human reasoning and judgment (Evans, 1989). Its behavioral impacts have been observed and investigated in a broad range of experimental settings and real-world contexts. For example, based on observations of US foreign policy, Tuchman (1984) argued that due to confirmation bias, once a policy has been established and implemented by a government, a series of subsequent activities from the same government will try to justify the policy. During the justification process, decision makers often focus on reinforcing the justifications and insist on a rooted notion, regardless of contrary evidence arising from multiple sources (Tuchman, 1984). Under the influence of confirmation bias, a policy or conclusion made by decision makers can be biased and become increasingly difficult to correct as justifications and supporting evidence pile up over time.

Based on the research findings discussed above, Fig. 4.7 summarizes the underlying mechanism of confirmation bias. Confirmation bias also frequently occurs in scientific fields. According to Nickerson (1998), although the falsifiability principle has been widely accepted as part of the foundation of the scientific community, “one would look long and hard to find an example of a well-established theory that was discarded when the first bit of disconfirming evidence came to light” (p. 206). Lehner et al. (2008) examined confirmation bias in experimental tasks where participants were asked to draw inferences from a small set of evidence. They found that even professional analysts were subject to confirmation bias in complex analysis tasks, such as law enforcement investigations, intelligence analysis, and financial decision analysis. Their findings also indicated that applying the analysis of competing hypotheses (ACH) can mitigate confirmation bias in judgments. Kappes et al. (2020) investigated the neural mechanism underlying confirmation bias by examining how the strength of others’ opinions is utilized in judgments. Their results show that existing judgments, as established references, can change the neural representation of information strength. Consequently, individuals become less likely to change their opinions when facing disagreements or disconfirming evidence.

Fig. 4.7
A flow diagram depicts the mechanism of confirmation bias. It flows from prior beliefs, expectations, and hypotheses to evidence and theories, followed by retaining existing theories (with reinforcement and further justification) versus exploring alternative theories.

Confirmation bias

The effect of confirmation bias can also be interpreted from a loss aversion perspective. Specifically, when encountering disconfirming evidence or disagreement from others, a decision maker needs to choose between further reinforcing or justifying their existing beliefs and abandoning those beliefs or hypotheses to explore alternatives. However, establishing existing beliefs often comes with certain costs or efforts (e.g., learning knowledge from reading books and papers, reaching a consensus or common hypothesis among a group of people). Thus, abandoning the beliefs can be perceived as a loss of “rewards” obtained through past efforts. Furthermore, it may also weaken or overturn other related beliefs, expectations, and hypotheses that individual decision makers have in mind, thereby further increasing the potential loss of accepting alternative opinions or conclusions. In the scientific community, the ubiquity of confirmation bias may also contribute to survivorship bias: statistically insignificant differences (or null hypotheses not being rejected) and results that disconfirm well-established theories may end up being rejected by researchers themselves at different stages of their studies (Liu, 2021). Consequently, the “successful confirmations” supported by empirical evidence may be overrepresented in publications and further reinforce related biases or even mistakes in scientific research.
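To make the reinforcement loop in Fig. 4.7 concrete, the sketch below models confirmation bias as a Bayesian belief update in which disconfirming evidence is only partially processed. This is purely an illustrative toy model, not one proposed in the studies cited above; the function name and the discount parameter are our own assumptions.

```python
import math

def biased_update(prior: float, likelihood_ratio: float, discount: float = 0.5) -> float:
    """Update the probability that a hypothesis is true with one piece of
    evidence, down-weighting disconfirming evidence.

    likelihood_ratio > 1 supports the hypothesis; < 1 disconfirms it.
    discount in (0, 1] scales the *log* impact of disconfirming evidence;
    discount = 1 recovers the unbiased Bayesian update.
    """
    log_lr = math.log(likelihood_ratio)
    if log_lr < 0:      # disconfirming evidence is processed less fully
        log_lr *= discount
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * math.exp(log_lr)
    return posterior_odds / (1 + posterior_odds)

belief = 0.7  # an established belief (e.g., a rooted policy judgment)
for lr in [0.5, 0.5, 2.0]:  # two disconfirming pieces, one confirming
    belief = biased_update(belief, lr)
print(round(belief, 3))  # 0.7: unchanged, while the unbiased posterior is about 0.538
```

Although the evidence is disconfirming on balance, the discounted update leaves the belief where it started, mirroring the “retain and justify” branch of Fig. 4.7.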

Anchoring bias can be considered a special type of reference dependence in which people’s opinions and decisions are heavily influenced by the first piece of information encountered (Tversky & Kahneman, 1974; Caputo, 2014). For instance, when exploring an unfamiliar topic, people’s opinions are often significantly affected by the information collected initially, which may not be relevant or useful for addressing the immediate information need. Similar to confirmation bias, anchoring bias can also stem from prior experience and existing beliefs and affects the way people process newly encountered information on related problems (Lau & Coiera, 2007). To explore the quantitative form of anchoring, Chapman and Johnson (1994) conducted a controlled experiment in which participants were asked to evaluate the value of monetary lotteries under the influence of several predesigned anchors. The results demonstrate that anchoring bias has its boundaries: (1) people are less likely to be affected by implausibly extreme anchors that go far beyond the fair value ranges of the items being evaluated, and (2) the anchor and the preference judgment need to be comparable and represented on the same scale. Going beyond textual and numerical anchors, Wesslen et al. (2019) investigated the effect of visual anchors by examining people’s interactions with a visual analytics system under different scenario videos serving as visual anchor conditions. They found that manipulating the initial visual anchors can affect users’ interaction activities, confidence, speed, and, in some cases, accuracy. Together, these studies explore anchoring across multiple aspects and demonstrate its ubiquitous and multidimensional effects on human judgment.
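One simple way to formalize this tendency (an illustrative linear form, not the specific model tested in the studies above) is to express the reported judgment as a weighted combination of the anchor and the decision maker’s unanchored estimate:

Ĵ = λ × A + (1 − λ) × V̂,  where 0 ≤ λ < 1

Here, A is the anchor, V̂ is the estimate the person would produce without the anchor, and λ captures the strength of anchoring (λ = 0 recovers unbiased judgment). The boundary conditions reported by Chapman and Johnson (1994) can then be read as λ shrinking toward zero when A is implausibly extreme or expressed on a different scale than the judgment.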

Related to the anchoring bias explained above, people’s judgments, especially in sequences of interactions and evaluations, are often affected by the priming effect. Priming refers to situations where an individual’s early exposure to a certain type of stimulus affects their reactions and judgments on subsequent stimuli (Tipper, 1985; Kahneman, 2003). The stimulus could be an initially encountered piece of information or opinion, a system’s early response to certain actions, or other externally provided signals that precede the judgment and decision-making tasks. When encountering the initial stimulus, people may activate certain mental concepts or memories that affect their subsequent perceptions and associated actions.

Similar to anchoring, according to empirical findings from a series of behavioral and psychological experiments, priming can also operate along multiple aspects and dimensions, such as behavioral, cognitive, and affective dimensions (e.g., Dennis et al., 2020; Kreuter et al., 2000; Kristjánsson & Ásgeirsson, 2019; Spruyt et al., 2002). Methodologically, it is worth noting that the priming effects observed in experiments and participants’ behaviors are also affected by experimenter belief and thus need to be examined under double-blind experimental designs (Gilder & Heerey, 2018). In the context of information seeking and retrieval, Scholer et al. (2013) investigated the effects of threshold priming in relevance evaluation sessions and found that the quality and relevance of initially encountered documents shape people’s internal relevance thresholds and thereby affect their evaluation of subsequently presented documents under varying topics. Beyond controlled experimental settings, threshold priming may also influence people’s in situ search expectations and judgments of retrieved information in real-world information seeking and searching episodes, across multiple evaluation dimensions such as in situ relevance, usefulness, credibility, and informativeness.
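The sketch below illustrates one way threshold priming could be simulated in a relevance evaluation session; it captures the direction of the effect reported by Scholer et al. (2013) but is not their model, and all names, parameters, and scores are hypothetical.

```python
from statistics import mean

def primed_judgments(true_scores, k=5, alpha=0.6, base_threshold=0.5):
    """Simulate threshold priming in a relevance evaluation session.

    The rater's internal threshold is pulled toward the quality of the
    first k documents: after seeing high-quality documents early, the
    threshold rises and later borderline documents are judged non-relevant.
    alpha controls how strongly early exposure shifts the threshold.
    """
    prime = mean(true_scores[:k])  # quality of the initially encountered documents
    threshold = (1 - alpha) * base_threshold + alpha * prime
    return [score >= threshold for score in true_scores[k:]]

borderline = [0.55] * 5  # identical borderline documents seen later
print(primed_judgments([0.9] * 5 + borderline))  # high-quality prime: all judged non-relevant
print(primed_judgments([0.2] * 5 + borderline))  # low-quality prime: all judged relevant
```

The same borderline documents receive opposite judgments depending solely on what the rater saw first, which is the pattern a priming-aware evaluation model would need to represent.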

In summary, based on the discussions above, we conclude that the framing effect, confirmation bias, anchoring bias, and the priming effect can affect people’s judgments and choices in different ways and that the underlying mechanisms behind these biases can be interpreted from a reference dependence perspective. Incorporating appropriate representations of these biases into formal user models would allow researchers to better characterize individual differences in interactions with information systems and to predict people’s interaction behaviors, perceived performance, and overall experience more accurately.

7 Decoy Effect

As shown in studies on reference dependence and framing effects, people’s preferences can be influenced or even manipulated without any change in the nature of the presented options or estimated outcomes (Kahneman, 2003; Tversky & Kahneman, 1985). In a wide range of decision-making scenarios, people’s decision-making activities can be systematically changed by revised or newly introduced reference points. These reference points can be explicit new options that directly change people’s perceptions of existing options and the associated outcomes. They can also be implicit changes in existing options, such as changes in the highlighted salient aspects of current choices. In many controlled experiments and simplified economic analyses, the change of reference is measurable and can be presented quantitatively to decision makers, for example, as changes in the reward amount associated with each choice and the related probability, even though this information might be difficult to obtain in real-life scenarios, especially for individuals. In many other cases, however, the change of reference points is qualitative in nature and difficult to compare quantitatively, such as the establishment of and changes in existing beliefs or the content of initially encountered information, which may trigger confirmation bias and anchoring bias in information evaluation.

The decoy effect in decision-making usually arises in situations where a new option is introduced as a reference point. Specifically, according to studies on the decoy effect (e.g., Highhouse, 1996; Kahneman, 2003; Wedell & Pettibone, 1996), individuals’ preferences between options A and B may shift from A to B if a third option C is included in the decision-making conditions, where option C is clearly inferior to option B and this significant difference is perceived by the decision maker. Notably, decoy effects can also occur when the presented options A and B are not directly comparable because they have their respective advantages and disadvantages on different dimensions. Under this circumstance, when an option C that is inferior to option B on the same dimension(s) is introduced to the decision-making problem, option B may appear more attractive than not only option C but also option A. By introducing a new reference point and creating an environment of asymmetric dominance, a decoy option can lead to significant changes in individuals’ perceptions and in situ preferences without introducing any change to the existing options under consideration.

Figure 4.8 illustrates the structure and basic conditions of the decoy effect. As shown in the figure, the original decision-making scenario involves option A and option B. The assumption is that the two options can be evaluated over two dimensions, α and β. Option A and option B each have their respective advantages and disadvantages when compared with each other, so it might be difficult for people to decide which choice to take, especially when the weight of each dimension is uncertain. However, the added decoy option C is clearly inferior to option B (αB > αC; βB > βC). This difference may make option B more attractive to individuals: if option B is taken, the decision maker will perceive a gain through the decision relative to the added reference point C.

Fig. 4.8
A diagram depicts the structure and conditions of the decoy effect, comparing options A and B with decoy option C over two dimensions: αA > αB and βA < βB; αB > αC and βB > βC.

Decoy effect

Within the basic structure presented in Fig. 4.8, more concrete examples can be introduced. For instance, suppose Tom is deciding between a hamburger and an apple for breakfast. This decision could be difficult to make, as the two options have their respective advantages: the apple is healthier and fresher, but the hamburger tastes better (at least to some people). However, if we include a decoy option, a rotten apple, alongside the original options, the fresh apple becomes more attractive to Tom. We can therefore probably nudge Tom toward the healthier option without changing the original options at all. When people make judgments under the operations of System 1 and with limited cognitive resources, they cannot fully examine the actual utility and risk associated with each option. Under this circumstance, evaluating options based on perceived gains relative to references serves as a mental shortcut that allows individuals to save time and cognitive resources in decision-making.
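A minimal sketch of the asymmetric dominance condition behind Fig. 4.8 is given below; the attribute names and the scores for Tom’s breakfast options are purely illustrative assumptions.

```python
from typing import Dict

Option = Dict[str, float]  # attribute name -> value (higher is better)

def dominates(x: Option, y: Option) -> bool:
    """True if option x is at least as good as y on every dimension
    and strictly better on at least one."""
    at_least = all(x[d] >= y[d] for d in x)
    strictly = any(x[d] > y[d] for d in x)
    return at_least and strictly

def is_asymmetric_decoy(c: Option, target: Option, competitor: Option) -> bool:
    """A decoy c is asymmetrically dominated when the target dominates it
    but the competitor does not: the configuration in Fig. 4.8 that makes
    the target appear more attractive."""
    return dominates(target, c) and not dominates(competitor, c)

# Tom's breakfast choice (illustrative values on two trade-off dimensions)
hamburger = {"taste": 0.9, "health": 0.2}
fresh_apple = {"taste": 0.5, "health": 0.9}
rotten_apple = {"taste": 0.1, "health": 0.3}

print(is_asymmetric_decoy(rotten_apple, target=fresh_apple, competitor=hamburger))  # True
```

The check mirrors the conditions in Fig. 4.8: the fresh apple beats the rotten apple on both dimensions, while the hamburger does not, so only the apple gains a clear perceived advantage from the decoy.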

Similar to the other triggering factors behind boundedly rational decisions discussed above, the behavioral impact of decoy options has been examined in a wide range of application scenarios, such as consumer purchase behavior (e.g., Gonzalez-Prieto et al., 2013; Wu & Cosguner, 2020; Zhang & Zhang, 2007), travel and tourism (e.g., Josiam & Hobson, 1995), medical decision-making (e.g., Stoffel et al., 2019), and food preferences (e.g., Wu et al., 2020). Hu and Yu (2014) went beyond the behavioral level and examined the neural correlates of the decoy effect in human decisions through fMRI analysis. Their experimental results indicate that perceptual salience associated with anterior insula activation triggers heuristic decision-making in the presence of a decoy option and that activity in the anterior cingulate cortex can reliably predict a decreased susceptibility to the decoy effect. This result suggests that actively rejecting the effect of decoy options and heuristics is cognitively taxing for decision makers.

Similar to the framing effect, empirical evidence on the behavioral impacts and neural correlates of decoy options clearly contradicts the “context invariance” axiom that underpins a wide range of individual decision-making models and evaluation metrics in economics studies and beyond. When examining users’ evaluations and comparisons of multiple options, such as different systems and ranking algorithms, recommendations, and retrieved results of varying types, it is critical to identify potential decoy options and incorporate the decoy effect into user representation and behavior prediction models.

8 Peak-End Rule, Recency Effect, and Remembered Utility

Apart from comparing individual, discrete options based on perceived gains and losses, people also need to evaluate extended episodes or sequences of actions and outcomes. During such evaluation, people’s perceived or remembered utility may be inconsistent with the actual utility experienced during the episode being evaluated. Certain key points in an episode may have a major impact on whole-session evaluation, such as peak value points (e.g., the action that brings in the highest amount of marginal gain), end value points (the in situ experience in the most recent round of interaction), and the initial experience at the beginning of the episode (Kahneman, 2003). Therefore, widely applied simple representations, such as the average value across the entire episode or the total sum over the episode, may not accurately predict people’s remembered utility and their subsequent actions or changes in decision-making strategies. As represented in the preliminary framework in Chap. 3, this variation in remembered utility across moments can be represented by a customized weight distribution function that assigns relatively higher weights to certain key points. These variations in remembered utility can largely be characterized by the theories on the peak-end rule and the recency effect in the evaluation of extended episodes.
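As a minimal sketch of such a weight distribution, the toy function below scores an episode by weighting only its peak and end moments; the equal weights and the pain-report values are illustrative assumptions rather than parameters estimated in the studies discussed below.

```python
def remembered_utility(episode, w_peak=0.5, w_end=0.5):
    """Peak-end approximation of retrospective evaluation: the remembered
    value of an extended episode is a weighted combination of its most
    extreme moment and its final moment; duration is ignored entirely
    (duration neglect)."""
    return w_peak * max(episode) + w_end * episode[-1]

# In situ pain reports (higher = more painful), mirroring the pattern of
# the colonoscopy study discussed below:
regular = [2, 4, 8, 6]         # procedure ends at a relatively painful moment
extended = [2, 4, 8, 6, 2, 1]  # extra low-pain waiting period appended

print(sum(regular), sum(extended))  # 20 vs. 23: more total pain in the extended episode
print(remembered_utility(regular), remembered_utility(extended))  # 7.0 vs. 4.5: less remembered pain
```

Under this weighting, the longer episode with more total pain is remembered as less painful, precisely because its final moments are mild.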

One of the classic examples that best illustrates the peak-end rule is the colonoscopy experiment conducted by Redelmeier et al. (2003). In the experiment, patients who participated in the study reported their perceived intensity of pain every 60 s during the colonoscopy procedure, so that researchers could track and calculate in situ perceived pain. All patients went through roughly the same colonoscopy procedure; thus, for the main part in which colonoscopy screenings were performed, patients experienced similar levels of total perceived pain. However, for half of the patients, physicians did not remove the colonoscopy instrument immediately after the clinical examination. Instead, they waited for a short period of time before removing the instrument. As a result, this half of the patients experienced an extra period of discomfort, for which the pain intensity was markedly lower than during the actual clinical examination.

After all procedures were completed, participants were asked to rate their overall experience with the colonoscopy procedure. The results indicate that although the participants who went through the extra waiting period accumulated a higher amount of total perceived pain (calculated from the in situ pain reports), they reported higher ratings for the overall experience than the patients who went through the regular colonoscopy process. This is because people’s remembered experiences or utility regarding an extended episode are heavily influenced, or even determined, by certain typical moments. Although the extended procedure included an extra waiting period, it also significantly reduced the in situ pain intensity at the end of the episode. Thus, with the peak pain intensity remaining roughly the same across all patients, the extended procedure offered a better ending for the overall experience, which led to higher scores in patients’ retrospective evaluations. This phenomenon can also be considered a demonstration of the recency effect, where people’s memory of an episode is largely shaped by the in situ experience of the most recent moments.

The peak-end rule and recency effects capture and characterize the implicit biases and intuitive processes behind extended episode evaluation (Alaybek et al., 2022), which contradict most simulated rational evaluation strategies but help explain the seemingly counterintuitive scenarios where more (in situ) perceived pain is preferred to less in retrospective evaluations (Kahneman, 2003; Kahneman et al., 1993). Findings on the peak-end rule also reveal the phenomenon of duration neglect: people’s global retrospective evaluation of an extended episode is not closely associated with the duration of the episode (Fredrickson & Kahneman, 1993; Hands & Avons, 2001; Redelmeier & Kahneman, 1996). These biases and associated effects have also been empirically confirmed in other experimental contexts, such as the perceived loudness and duration of unpleasant sounds (Schreiber & Kahneman, 2000), the evaluation of perceived water temperature in a prolonged session (Kahneman et al., 1993), and whole-session satisfaction evaluation in information retrieval (Liu & Han, 2020).

It is worth noting that the impacts of peak and end effects may vary across different dimensions of human perception and have their boundaries and conditions. For instance, Schneider et al. (2011) conducted a study on patients’ daily recalls of pain and fatigue, which are often used as part of the basis for physicians’ clinical decision-making. They found that the actual impacts of peak and end moments on retrospective judgments varied significantly across individual patients and that the peak-end rule did not have a significant impact on the recall of fatigue. Similarly, Langer et al. (2005) examined the quality of retrospective evaluations of payment sequences and found that participants’ evaluations did show a tendency to assign relatively higher weights to the peak and end moments of sequences. However, they also observed an empirical boundary of the peak-end effect: the biased evaluation occurred only when researchers linked the payments being evaluated to performance in strenuous tasks that were distracting enough for participants. For simple tasks without any distraction, participants’ evaluations tended to be normatively appropriate and were less affected by peak and end moments. Apart from enhancing the understanding of related human biases, studying the boundaries of peak-end effects may also help researchers identify the implicit boundaries of intuitive thinking and boundedly rational decision-making in general.

In information seeking and retrieval, evaluating the whole-session human-information interaction process has been one of the central topics in the research community (Belkin et al., 2012). To facilitate research on this topic, researchers can explore different representations and weight distributions that utilize the knowledge of the peak-end rule, duration neglect, and recency effects and develop more realistic, human-centered evaluation models for both online and offline experiments. Also, as indicated in several behavioral studies, peak-end effects and recency biases have their boundaries and conditions and may not have a significant impact in some scenarios. Therefore, it would also be useful to explore the limits and conditions associated with these effects in the evaluation of information seeking and retrieval and to investigate the ways in which they are connected to widely studied contextual factors, such as users’ domain knowledge and search skills, perceived task difficulty, and task complexity.

9 Other Biases and Heuristics in Decision-Making Under Uncertainty

In addition to the human biases and heuristics explained above, people’s decision-making processes under uncertainty are also affected by other types of biases that contradict the fundamentals of various oversimplified normative models.

For instance, in contrast to the widely employed assumption of seeking maximized or optimal utility, people may actually search through the accessible solution space and alternatives until an acceptable, “good enough” option is located. According to Simon (1955), people usually rely on this mental shortcut, especially in decision-making scenarios where the optimal solution is uncertain or would involve a series of complex, intellectually challenging computational tasks that one cannot afford in real-life settings. Thus, instead of finding or formulating an optimal solution in a simulated, oversimplified setting, people may satisfice by seeking and settling for satisfactory solutions that meet their aspiration levels in a more behaviorally realistic world. The theory of satisficing and the associated boundedly rational approach to decision-making analysis cast doubt on the assumption that people are perfectly rational and always seek optimized outcomes. Although satisficing decision-making events are ubiquitous across different problems and environments, it is worth noting that normative, maximizing behaviors also occur, and optimization and satisficing strategies tend to coexist in real-world decision-making practices (Simon, 1955; Schwartz et al., 2002).
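The contrast between maximizing and satisficing can be sketched as a stopping rule: scan options sequentially and stop at the first one that meets the aspiration level. The sketch below is a schematic illustration under assumed names and scores, not an implementation from the literature.

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def satisfice(options: Iterable[T], utility: Callable[[T], float],
              aspiration: float) -> Optional[T]:
    """Return the first option whose utility meets the aspiration level,
    examining options sequentially instead of searching for the maximum
    (Simon's 'good enough' stopping rule)."""
    for option in options:
        if utility(option) >= aspiration:
            return option  # stop searching: an acceptable option is found
    return None            # aspiration not met; it may be lowered and the search retried

# Illustrative: scanning ranked search results until one seems useful enough
scores = iter([0.3, 0.55, 0.72, 0.95, 0.81])
print(satisfice(scores, utility=lambda s: s, aspiration=0.7))  # 0.72, not the maximum 0.95
```

A maximizer would have to examine every option to certify 0.95 as optimal; the satisficer stops at 0.72, trading outcome quality for search and computation costs.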

Building on the idea of satisficing, it would be useful to explore the potential gaps between ideal or optimized outcomes and people’s aspiration levels in specific settings and to investigate how satisficing thresholds and aspiration levels relate to the nature of the presented options and the overall problematic situation within which decisions are made. In particular, researchers may need to examine how people’s implicit aspiration levels are connected to potential reference points (e.g., in situ expectations, prior experiences, existing beliefs and knowledge) and the operation of System 1 in ongoing decision-making processes.

Beyond individual cognitive factors and the characteristics of available options, people’s decisions are also affected by groupthink or herd behavior. Specifically, people may take an option or make certain decisions without deliberate thinking or a balanced evaluation of potential gains and losses. Instead, they may adopt a similar option or point of view simply because other people are doing so, even when their own opinion differs from that of the majority. This effect is also referred to as the bandwagon effect. In marketing studies, researchers have found that consumers’ behaviors are often heavily influenced by the actions taken by other consumers on the same or similar products (Kessous & Valette-Florence, 2019). The bandwagon effect has been examined in a wide range of application fields, such as healthcare (e.g., Kaissi & Begun, 2008), e-commerce (e.g., Mainolfi, 2020), travel and tourism (e.g., Abd Mutalib et al., 2017), and consumer purchases of luxury goods (e.g., Kastanakis & Balabanis, 2012; Kessous & Valette-Florence, 2019). A more detailed review of recent research progress on bandwagon effects can be found in Bindra et al. (2022).

Regarding human-information interaction, in addition to the quality of collected information (e.g., relevance, usefulness, offline-evaluation-metric-based scores on SERPs) and individual characteristics, researchers may also need to incorporate popularity and other social factors (where available) into modeling and examine their impacts on people’s opinions and preferences over different viewpoints, subtopics, and specific content. Including and representing social factors (e.g., other users’ comments and activities on the same systems, events, and activities) can help in designing effective social nudging tools that shape and improve users’ interactions with new systems and programs, such as developing healthy diets, encouraging household recycling, and improving privacy protection practices (Czajkowski et al., 2019; Gonçalves et al., 2021; Wisniewski et al., 2017).

10 Summary

Continuing our exploration of developing a novel boundedly rational approach to modeling users, this chapter focuses on research on human behavior and cognition and discusses research progress on the triggers and impacts of bounded rationality, especially the widely examined human biases and heuristics that facilitate people’s immediate, “automatic” decision-making and judgments. We first introduce Kahneman (2003)’s theory of two systems, which offers an overall theoretical umbrella under which we can investigate and explain two largely different logics and mechanisms of human decision-making processes.

Moreover, we further clarify the importance and potential of studying boundedly rational decisions, especially for enhancing formal user models and evaluation metrics (see Chap. 2). Specifically, this chapter expands our discussion of the individual human biases briefly introduced in Chap. 3, where we identify the gaps between formal models and theories on human biases. To fully explain the role and impacts of the biases discussed, we illustrate their structures and associated factors based on previous studies, present concrete examples, and discuss empirical findings at both the behavioral and neural levels. In addition, for some of the biases, drawing on relevant studies from multiple disciplines, we also explain the boundaries and conditions under which they generate behavioral impacts. To help readers better understand the relationships among different types of biases, this chapter also discusses the similarities and intrinsic connections among these biases, especially under the analytical framework of reference dependence. We hope that our discussion of the behavioral impacts of human biases and heuristics, as well as the connections among them, can help readers better understand and synthesize the knowledge regarding human biases and apply this integrated knowledge, both conceptually and empirically, in their research agendas on relevant topics.

The following chapters will build upon the discussion from the current and previous chapters and introduce recent progress and existing challenges in human-bias-related research in information seeking, retrieval, and recommendation. In these chapters, we will also explain the specific implications and possible applications of the empirical evidence on bounded rationality for building more accurate, behaviorally realistic user models.