1 Introduction

Scientific knowledge, engineering science, and their best practices are cumulative. Every new idea and improvement is necessarily the result of standing on the shoulders of others. New knowledge and novel, useful practice are all part of evolving and connected strands of understanding, expertise, and proficiency. The progression is cumulative, advancing toward more insightful understanding and more effective practice. The trajectory is not necessarily a smooth one; there are many false starts, and progress is punctuated by what Kuhn (2012) calls paradigm shifts. We think of our approach as opening a new window in a magnificent structure and as a modest punctuation: a new way to think about executive-management decisions. In this chapter, we show its multidisciplinary heritage rooted in mathematics, cognitive psychology, social science, and practice. We want to show its punctuated continuity with, and its debt to, the achievements of the past. Our debt notwithstanding, we also draw contrasts between our engineering decision-design methods and other traditional methods.

We sketch a survey of decision theory. The adumbration is necessarily highly selective; the body of work is vast. Scholars divide the domain into three schools of decision theory—the normative, descriptive, and prescriptive schools (e.g. Goldstein and Hogarth 1997). To these, we add what we name the declarative school, a hybrid of the three. In what follows, we select representative work from each school (Fig. 2.1). We begin with the normative school, follow with the descriptive and prescriptive schools, and end with the declarative school. We locate our work, in this book, in the prescriptive branch of the tree in Fig. 2.1.

Fig. 2.1 Four strands of decision theory and practice

In the prescriptive stream, we select four strands of research exemplars. These are shown under the prescriptive branch of our tree in Fig. 2.1. Following a survey of the normative, descriptive, and prescriptive schools, we sketch areas of tension among scholars in these schools. We will discuss what we call the declarative school, for it appears to be important to executives and consultants. We will also show that although each prescriptive method is unique, there is a meta-process that, in the abstract, reveals the essential structure of each method. This meta-process is known in the literature as the "canonical model of decision-making" (Bazerman 2002; Tversky and Kahneman 1974). Central to the canonical model is the analysis of decision alternatives. Unexpectedly, design, the subject of constructing alternatives, is not prominently represented in the decision literature. It is generally assumed that alternatives exist, are easily found, or are readily constructed. Simon (1997, 126) writes that: "The classical view of rationality provides no explanation where alternate courses of action originate; it simply presents them as a free gift to the decision makers." This void is surprising because the design of alternatives must necessarily precede analysis. But analysis is apparently the preferred area of research. Synthesis, which deals with the design and construction of decision alternatives, is presumed to be readily doable and, as a result, is barely visible in the literature. Analysis has crowded out synthesis. We will address this gap. And consistent with our engineering approach, we will use engineering design processes and procedures to systematically specify, design, and analyze alternatives that satisfy the requirements of executive management. But we will not neglect the social and organizational management dimensions of executive decisions.

2 Origins

Counting methods to impute the odds in gambles are not a recent or modern practice. There is a long tradition that goes back many centuries (e.g. Crepel et al. 2013). Probability calculations appear early, in the Roman poems of Ovid (43 BC–17 AD). Cardano (1501–1576), a physician, mathematician, and avid gambler, authored Liber de Ludo Aleae, a gambling guidebook with statistical calculations (Crepel et al. 2013). The genesis of modern decision theory is found in Bernoulli's (1738) observation that the subjective value, i.e. utility, of money diminishes as the total amount of money increases. He argued that a poor person perceives the value of a thousand ducats very differently from a rich one, though the quantity of money is identical. For this phenomenon of diminishing utility, he proposed a logarithmic function to represent value (e.g. Fishburn 1968; Kahneman and Tversky 2000).

However, utility remained largely a descriptive concept until the seminal axioms of von Neumann and Morgenstern (1944) and Savage's (1954) contributions on subjective statistical thinking. They generalized Bernoulli's qualitative concept of utility (which was limited to measures of wealth), developed the concept of lotteries to impute it, formulated normative axioms, and formalized the combination into a mathematical system—utility theory (Appendix 2.1). Since then, research in decision theory has exploded. Bell et al. (1988) have segmented the contributions in this field into three schools of thought "that identify different issues … and deem different methods as appropriate (Goldstein and Hogarth 1997, 3)." They are the normative, descriptive, and prescriptive schools of decision making, to which we add the declarative strand, a hybrid formed from elements of the other schools. We follow Keeney (1992a) and summarize their salient features in Table 2.1. The boundaries between the schools are blurry; for example, one needs normative principles to judge whether a prescription is meaningful or not.

Table 2.1 Summary of normative, descriptive, prescriptive, and declarative schools

3 Four Schools of Decision Theory

3.1 Normative Decision Theory

Unlike planetary motion, or charged particles attracting each other, decisions do not occur naturally; they are acts of human will. Therefore, we need norms, rules, and standards. This is the role of normative theory and its axioms, which enforce rigor and consistency. Normative theory is concerned with the nature of rationality, the logic of decision making, and the optimality of outcomes determined by their utility. Utility is a cardinal, ordinal, interval, or ratio scale measure of the desirability or degree of satisfaction of the consequences from courses of action selected by the decision maker (e.g. Baron 2000). Utility assumes the gambling metaphor, where only two variables are relevant: the strength of one's beliefs (probabilities) and the desirability of the outcomes (e.g. Eisenführ et al. 2010). The expected utility of a series of outcomes with assigned probabilities is the probability-weighted sum of the outcome utilities (e.g. Kahneman et al. 1993). For the outcome set X = {x1, x2, … , xn}, with associated utilities u(xi) and probabilities pi for the index set i = 1, 2, … , n, the expected utility for this risky situation is:

$$ u(X)=\sum_i p_i\, u(x_i) \quad \mathrm{where} \quad \sum_i p_i = 1. $$
(2.1)
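To make Eq. (2.1) concrete, the short sketch below computes the expected utility of a small lottery; the outcomes, probabilities, and square-root utility function are hypothetical illustrations, not values from the text.

```python
import math

# A toy lottery for Eq. (2.1); all numbers are illustrative assumptions.
outcomes = [100.0, 50.0, 0.0]          # the x_i
probs = [0.2, 0.5, 0.3]                # the p_i
u = math.sqrt                          # an assumed concave utility u(x)

assert abs(sum(probs) - 1.0) < 1e-12   # the constraint in Eq. (2.1)
expected_utility = sum(p * u(x) for p, x in zip(probs, outcomes))
print(f"u(X) = {expected_utility:.3f}")  # 0.2*10 + 0.5*7.071 + 0.3*0 = 5.536
```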

In order to construct a utility function over lotteries, assumptions must be made about preferences. A preference order must exist over the outcome set {xi}, and the axioms of completeness, transitivity, continuity, monotonicity, and independence must apply (Appendix 2.1). The outcomes and their utilities can be single attribute or multiattribute. For a multiattribute objective X = {X1, X2, …, XN} with N ≥ 3, under the assumptions of utility independence, the utility function takes the form:

$$ K\,U(\boldsymbol{X}) + 1 = \prod_i \left(K k_i U(X_i) + 1\right) \quad \text{for some constant } K \text{ and scaling constants } k_i. $$
(2.2)
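As an illustration of Eq. (2.2), the sketch below solves for the constant K from its standard normalization condition, 1 + K = Π(1 + K kᵢ), and then evaluates the multiattribute utility; the scaling constants and single-attribute utilities are hypothetical assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical scaling constants k_i and single-attribute utilities U_i.
k = np.array([0.4, 0.3, 0.2])   # here sum(k) < 1, so K > 0
U = np.array([0.9, 0.5, 0.7])   # each U_i scaled to [0, 1]

# K is the nontrivial root of 1 + K = prod(1 + K * k_i), with K > -1.
f = lambda K: np.prod(1.0 + K * k) - (1.0 + K)
K = brentq(f, 1e-9, 100.0) if k.sum() < 1 else brentq(f, -1 + 1e-9, -1e-9)

# Rearranging Eq. (2.2): U(X) = (prod(K * k_i * U_i + 1) - 1) / K.
U_X = (np.prod(K * k * U + 1.0) - 1.0) / K
print(f"K = {K:.4f}, U(X) = {U_X:.4f}")
```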

Where the attributes are independent, the utility function takes the form of a polynomial. A person's choices are rational when their choice behavior satisfies the von Neumann and Morgenstern axioms (Appendix 2.1). Subsequently, additional principles have been judged to be required as normative principles. For example, the sure-thing principle (e.g. Pearl 2016), which states that a choice should not be altered by independent events; its close cousin, the independence axiom (e.g. Samuelson 1952); and the "no money-pump principle" (Howard, Appendix 2.2), which says that a preference ranking cannot be circular. The axioms, principles, and desiderata collectively establish ideal standards for rational thinking and decision making. Savage (1954) asserted a principle of rationality, which is now widely accepted. The principle declares that the utility of a decision alternative is calculated by the product of two psychological scales—a subjective probability of the event and a numerical measure of the utilities of the outcomes. The principle also implies how rational choice is to be modified with new information. Bayes' rule of conditional probability is an example.

The completeness axiom asserts that given any two lotteries, one is always preferred to the other, or they are equally good. No exceptions. But Aumann, the 2005 Nobel economics laureate, proved that a utility theory parallel to the von Neumann-Morgenstern utility does not need the completeness axiom (Aumann 1962). Bewley (2002) formulates an alternative theory of choice that also dispenses with the completeness axiom. He introduces an inertia assumption, which says that a person never accepts a lottery unless acceptance is preferred to rejection, i.e. one stays with the status quo unless an alternative is preferred. These are novel and fundamental contributions to normative theory. But these results do not obviate the usefulness of the von Neumann and Morgenstern axioms, which are widely used with proven efficacy.

In spite of its mathematical elegance, utility theory is not without crises or critics. Among the early crises were the famous paradoxes of Allais and Ellsberg (Allais 1953; Ellsberg 1961; e.g. Resnick 1987). People prefer certainty to a risky gamble with higher expected utility. People also prefer certainty to an ambiguous gamble with higher expected utility. Worse yet, preferences can be reversed when choices are presented differently (Baron 2000). Howard (1992) retorts that the issue is one of education: enlighten those who make these "errors" and they too will become utility maximizers. Others claim that incentives will lower the cost of analysis and improve rationality, but violations of stochastic dominance are not influenced by incentives (Slovic and Lichtenstein 1983). These paradoxes were the beginning of an accumulation of empirical evidence that people are not consistent utility maximizers or rational in the von Neumann and Morgenstern axiomatic sense. People are frequently arational (Ariely 2008; Kahneman 2011). The so-called paradoxes are just normal human behavior.

A significant critique of normative theory is put forward in Simon's thesis of bounded rationality (Simon 1997). Simon's critique strikes normative decision theory at its most fundamental level. Perfect rationality far exceeds people's cognitive capabilities to calculate, to know the consequences of choice, or to adjudicate among competing alternatives. Therefore, people satisfice; they will be satisfied with a sufficiently good outcome. They do not maximize. Bounded rationality is rational choice that takes into consideration people's cognitive limitations. Similarly, March (1997), a bounded rationalist, observes that all decisions are really about making two guesses—a guess about the future consequences of current action and a guess about future attitudes with respect to those consequences. These guesses assume stable and consistent preferences, which may not always be true, e.g. regret is possible (e.g. Connolly and Zeelenberg 2002).

Kahneman's seminal experiments cast doubt on the assumptions of perfect rationality; for example, they show that decision utility and predicted utility are not the same (Kahneman et al. 1993). Keeney (1992b), a strong defender of classical normative theory, identifies fairness as an important missing factor in classical utility theory. In general, people are not egotistically single-minded about maximizing utility. For example, many employers do not cut wages during periods of unemployment even when it is in their interest to do so (Solow 1980). The absence of fairness also raises the question of the "impossibility of interpersonal utility comparisons (Hausman 1995)." The sense of fairness is not uniform. Nor does utility theory address the issue of regret (e.g. Connolly and Zeelenberg 2002), something that has become an important research agenda for legal scholars (Parisi and Smith 2005).

New experimental evidence is another major contributing factor to the paradigmatic crises of normative theory. Psychologists have shown that people consistently depart from the rational normative model of decision making, and not just in experimental situations using colored balls in urns. The research avalanche in this direction can be traced to Tversky and Kahneman's (1974) article in Science and their subsequent book (Kahneman et al. 1982), where they report that people have systematic biases. For example, Baron (2000) reports 53 distinct biases. In light of these research results, Fischhoff (1999) and Edwards and von Winterfeldt (1986) report on ways to debias judgments. Moreover, Redelmeier and Kahneman (1996) and Kahneman et al. (1993) report cases in which people preferred pain to a less painful alternative, which does not appear rational. The purely rational choice model is not completely supported by experiments or human behavior because it does not address many human cognitive "inconsistencies" or "paradoxes" reported by descriptive scholars. As a result, the contributions from psychologists to economic theory and decision-making have acquired a high level of legitimacy and acceptance. Simon and Kahneman have both become Nobel laureates. And research in behavioral economics is thriving (e.g. Camerer 2004).

The arguments and experiments that critique normative theory are fundamentally grounded on empirical observations and descriptions of how decision making actually takes place, which is not necessarily consistent with how decisions "should" be made according to the normative axioms. Therefore, we now turn our attention to descriptive theory and then consider prescriptive theory.

3.2 Descriptive Theories

3.2.1 Introduction

Whereas normative theory concentrates on how people should make decisions, descriptive theory concentrates on how and why people make the decisions they actually make. Fjellman (1976, 77) argued that "decision makers found in decision theory [normative] should not be confused with real people." He points out that people are not nearly as well informed, discriminating, or rational as generally presumed. Nobel laureate Simon (e.g. 1997) cogently argues that rational choice imposes impossible standards on people. He argues for satisficing in lieu of maximizing. The Allais and Ellsberg paradoxes illustrate how people violate the norms of expected utility theory (e.g. Allais 1953; Ellsberg 1961; Baron 2000; Resnick 1987).

Kahneman et al.'s (1982) publication, Judgment Under Uncertainty: Heuristics and Biases, reports three heuristics: representativeness, availability, and anchoring. These heuristics lead to systematic biases, e.g. insensitivity to prior outcomes, sample size, and regression to the mean; misevaluation of conjunctive and disjunctive events; anchoring; and others. Their work launched an explosive program of research concentrating on violations of the normative theory of decision making. Edwards and von Winterfeldt (1986) write that the subject of errors in inference and decision making is "large and complex, and the literature is unmanageable." Scholars in this area are known as the "pessimists" (Jungermann 1986; Doherty 2003). For our work, the bias of overconfidence is very important (Lichtenstein et al. 1999). They found that people who were 65–70% confident were correct only 50% of the time. Nevertheless, there are methods that can reduce overconfidence (e.g. Koriat et al. 1980; Griffin et al. 1990). In spite of, or possibly because of, the "pessimistic" critiques of the normative school, descriptive efforts have produced many models of psychological representations of decision making. Three prominent theories are: Prospect Theory (Kahneman and Tversky 2000), Social Judgment Theory (e.g. Hammond et al. 1986), and Ecological Rational Theory (e.g. Gigerenzer and Selten 2001; Klein 1999, 2001).

3.2.2 Prospect Theory

Prospect theory is similar to expected-utility theory in that it retains the basic construct that decisions result from the arithmetic product of "something like utility" and "something like subjective probability" (Baron 2000). The something like utility is a value function of gains and losses. The central idea of Prospect Theory (Kahneman and Tversky 2000) is that we think of value as changes in gains or losses relative to a reference point (Fig. 2.2).

Fig. 2.2 Hypothetical value function using prospect theoretic representation

The carriers of value are changes in wealth or welfare, rather than the final magnitudes from which cardinal utility is established. In prospect theory, the issue is not utility, but changes in value. The value function treats losses as more serious than equivalent gains. It is convex for losses and concave for gains. This is intuitively appealing; we all prefer gains to losses. But consider the invariance principle of normative decision theory: this principle is easily violated. Invariance requires that preferences remain unchanged regardless of the manner in which the alternatives are described. In prospect theory, gains and losses are relative to a reference point. A change in the reference point can change the magnitude of the change in gains or losses, which in turn produces different values and induces different judgments. Invariance, absolutely necessary in normative theory and intuitively appealing, is not always psychologically feasible. In business, the current asset base of the firm (the status quo) is usually taken as the reference point for strategic corporate investments. But the status quo can be posed as a loss if one considers opportunity costs, and therefore a decision maker may be led to consider favorably a modest investment for a modest result as a gain. Framing matters.

The second key idea of prospect theory is that we distort probabilities. Instead of multiplying value by its subjective probability, a decision weight (which is a function of that probability) is used. This is the so-called π function (Fig. 2.3). The π function underweights values of the subjective probability p near certainty (p = 1.0) and overweights values near impossibility (p = 0.0). In other words, people are most sensitive to changes in probability near the boundaries of impossibility and certainty. This helps explain why people buy insurance—the decision is weighed near the origin. And why people prefer the certainty of a lower utility to a gamble of higher expected utility—this decision is weighed near the upper right-hand corner. The latter is called the "certainty effect" (e.g. Baron 2000). This effect produces arational decisions (e.g. Baron 2000; de Neufville and Delquié 1998).

Fig. 2.3 A hypothetical weighting function under prospect theory

In summary, prospect theory is descriptive. It identifies discrepancies in the expected-utility approach and proposes a model that better predicts actual behavior. Prospect theory is a significant contribution from psychology to the classical domain of economics.
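A minimal sketch of these two ideas follows, using commonly cited parametric forms for the value function v and the weighting function π; the parameter values are the usual Tversky-Kahneman estimates and are assumptions for illustration, not the chapter's own.

```python
# Common parametric forms for prospect theory (assumed for illustration).
ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61

def v(x):
    """Value of a gain or loss x relative to the reference point x = 0;
    concave for gains, convex and steeper (loss aversion) for losses."""
    return x**ALPHA if x >= 0 else -LAMBDA * (-x)**BETA

def pi(p):
    """Decision weight: overweights small p, underweights moderate-to-high p."""
    return p**GAMMA / (p**GAMMA + (1 - p)**GAMMA) ** (1 / GAMMA)

# Certainty effect: a sure 450 is valued above a 50% chance of 1000,
# even though the gamble has the higher expected value (500 > 450).
print(v(450), pi(0.5) * v(1000))   # ~216 vs ~184
```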

3.2.3 Social Judgment Theory

Another contribution from psychology to decision theory is Social Judgment Theory (SJT) (e.g. Hammond et al. 1986). SJT derives from Brunswik's observation that the decision maker decodes the environment via the mediation of cues. It assumes that a person, aware of the presence of cues, aggregates them with processes that can be represented in the "same" way as the environmental side. Unlike utility theory or prospect theory, the future context does not play a central role in SJT. Why is this a social theory? Because different individuals, for example experts, faced with the same situation will pick different cues or integrate them differently (Yates et al. 2003). The SJT descriptive model (lens model) is shown in Fig. 2.4. The left-hand side (LHS) shows the environment; the right-hand side (RHS) shows the judgment side, where the decision maker is interpreting the cues, {Xi}, from the environment. The ability of the decision maker to predict the world is completely determined by how well the world can be predicted from the cues (Ye), how consistently the person uses the available data (Ys), and how well the person understands the world (G, C). These ideas can be modeled analytically.

Fig. 2.4 The lens model of social judgment theory

The system used to capture the aggregation process is typically multiple regression. We have a set of judgments, Ys, and ex post information on the true state, Ye. The statistic ra, the correlation between the person's responses and the ecological criterion values, reflects correspondence with the environment. Re ≤ 1.0 is the degree to which the environment is predictable from the cues, and Rs ≤ 1.0 is the degree to which the person's judgment is predictable using a linear additive model. The cue utilization coefficients ris ought to match the ecological validities rie through correlations. G is the correlation between the predicted values of the two linear models; it represents the validity of the person's knowledge of the environment. C is the corresponding correlation between the residuals of both models, and reflects the extent to which the unmodeled aspects of the person's knowledge match the unmodeled aspects of the environmental side. Achievement is represented by

$$ r_a = R_e R_s G + C{\left[\left(1-{R_e}^2\right)\left(1-{R_s}^2\right)\right]}^{1/2} $$
(2.3)
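The sketch below estimates the lens-model statistics of Eq. (2.3) from synthetic data, fitting one linear model on the environment side and one on the judgment side; all data and coefficients are illustrative assumptions.

```python
import numpy as np

def lens_model_stats(cues, criterion, judgments):
    """Estimate the statistics of Eq. (2.3) with two linear regressions."""
    X = np.column_stack([np.ones(len(cues)), cues])
    ye_hat = X @ np.linalg.lstsq(X, criterion, rcond=None)[0]  # environment model
    ys_hat = X @ np.linalg.lstsq(X, judgments, rcond=None)[0]  # judgment model
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    Re, Rs = r(criterion, ye_hat), r(judgments, ys_hat)  # predictabilities
    G = r(ye_hat, ys_hat)                                # modeled knowledge
    C = r(criterion - ye_hat, judgments - ys_hat)        # unmodeled knowledge
    ra = Re * Rs * G + C * np.sqrt((1 - Re**2) * (1 - Rs**2))  # Eq. (2.3)
    return ra, Re, Rs, G, C

# Synthetic illustration: a judge who weights three cues roughly like
# the environment does, plus noise on both sides.
rng = np.random.default_rng(0)
cues = rng.normal(size=(200, 3))
criterion = cues @ [0.6, 0.3, 0.1] + 0.5 * rng.normal(size=200)
judgments = cues @ [0.5, 0.4, 0.1] + 0.7 * rng.normal(size=200)
print(lens_model_stats(cues, criterion, judgments))
```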

The model is somewhat controversial (e.g. Hogarth 2001) regarding the processing of cues, but it is an approach to operationalize and measure judgments. At the cybernetic and systems level, this model resembles Ashby's (1957) Law of Requisite Variety from complex systems theory. The law states that the complexity of environmental outcomes must be matched by the complexity of the system so that it can respond effectively. For the system to be effective in its environment, it must be of sufficient and consistent complexity relative to the environment that is producing the outcomes. Were this not so, the responding system would be consistently overwhelmed by an environment sending signals the system cannot understand.

3.2.4 Ecological Rational Theory

We must bring up another strand in the descriptive school, the nascent Ecological Rational Theory. Scholars and practitioners of this strand do not accept the classical notions of utility maximization and economic rationality; they opt for descriptive realism (e.g. Gigerenzer 2008; Gigerenzer and Selten 2001; Klein 2001; Pliske and Klein 2003). Ecological Rational Theory asserts that people act quickly, without necessarily relying on logical or analytic models that use probabilities. Whimsically, Gigerenzer's (2014, 68) book shows a cartoon of a caveman being attacked by a ferocious lion. The thought bubble above the caveman defending himself shows a complicated mathematical equation with many trigonometric and nonlinear functions. The message is that there are many situations in which people must take action without delay or consideration of decision models. Deciding is not necessarily dominated by axiomatic logic alone, but also by efficiency. Decision making is "not just logical, but ecological; it is defined by correspondence [with the environment] rather than [analytic or model] coherence (Gigerenzer 2008)." Ecological rationality is an evolutionary perspective; the goal is the pursuit of objectives in the context of the decision maker's environment. Gigerenzer conceives the mind as a modular system composed of heuristic tools and capabilities. He offers an "adaptive toolbox," a set of "fast and frugal" heuristics comprised of search rules, stopping rules, and decision rules.

We note that the theory is both descriptive and prescriptive. In contrast to normative methods, Klein's (1999, 2008, 2011) Naturalistic Decision Making (NDM) can be said to be an exemplar of Ecological Rational Theory. He describes decision making in exceptional situations characterized by high time pressure, context-rich settings, and volatile conditions. Klein has extensively studied experienced professionals with domain expertise and strong cognitive skills, such as firefighters, front-line combat officers, economics professors, and the like. He finds that they are capable of "mental simulations," that is, "building a sequence of snapshots to play out and to [mentally] observe what occurs (Klein 1999)." They rely on just a few factors—"rarely more than three … a mental simulation [that] can be completed in approximately six steps (Klein 1999)." This is an important result; we will combine this finding with other similar research findings in our work.

3.3 Prescriptive Decision Theories

3.3.1 Introduction

Prescriptive decision theory is concerned with the practical application of normative and descriptive decision theory in real-world settings. The practice is called decision analysis: the body of knowledge, methods, and practices based on axioms, inferred principles, and effective practices of decision-making. The ethos is social: to help people and organizations make better decisions (Howard 2007) and to make them act more wisely in the presence of uncertainties (Edwards and von Winterfeldt 1986). Decision analysis is a science for the "formalization of common sense for decision problems, which are far too complex for informal use of common sense (Keeney 1982)." Decision analysis includes the design of alternative choices—the task of "… logical balancing of the factors that influence a decision … these factors might be technical, economic, environmental, or competitive; but they could be also legal or medical or any other kind of factor that affects whether the decision is a good one (Howard and Matheson 2004, 63) … There is no such thing as a final or complete analysis; there is only an economic [sic] analysis given the resources available (Howard and Matheson 2004, 10)." Decision analysis is, therefore, boundedly rational. "The overall aim of decision analysis is insight, not numbers (Howard and Matheson 2004, 184)."

A comprehensive survey of decision analysis and its applications can be found in Keefer et al. (2004) and Edwards et al. (2007). We will limit our coverage to four prescriptive methods, each representing a distinctive way to think about decisions (Table 2.2).

Table 2.2 Summary of four prescriptive methods

They are: AHP (Saaty 2009); Ron Howard's method, published by the Strategic Decisions Group (SDG) and representing the Stanford school of decision analysis (Howard 2007); Keeney's Value Focused Thinking (Keeney 1992b); real options (e.g. Adner and Levinthal 2004); and ecological rationality (e.g. Gigerenzer 2008; Klein 1999).

We begin with AHP. It is original and distinctive. It does not use utility theory; instead, it uses "importance" as the criterion for decisions. It is an exemplar of a prescriptive approach that departs from the conventional approaches of utility theory. In contrast, Howard's method adheres rigorously to the rules of normative expected utility theory. As such, it is an example of a normative prescriptive approach. Keeney's Value Focused Thinking (VFT) is also utility-theory based. Keeney has defined and specified comprehensive and pragmatic processes that strengthen what are usually considered the "soft" managerial approaches to the specification of objectives and the creation of alternatives. It is an archetype of a prescriptive method that is analytically rigorous and simultaneously managerially pragmatic. Real options are discussed because they are a relatively recent trend in decision analysis. Table 2.2 presents a summary of the four prescriptive methods. More detail is presented in the paragraphs that follow.

3.3.2 Analytic Hierarchy Process

The Analytic Hierarchy Process (AHP) is a distinctive prescriptive method that does not rely on classical utility theory (Saaty 2009). AHP is predicated on four principles for decision problem solving: decomposition, comparative judgments, synthesis of priorities, and a social consensual process. The decomposition principle calls for a hierarchical structure to specify all the elemental pieces of the problem. The comparative judgment principle uses pairwise comparisons on a ratio scale to determine the relative priorities within each level of the hierarchy. The principle of synthesis of priorities is applied as follows (Forman and Gass 2001):

(1) given i = 1, 2, …, m objectives, determine their respective weights wi;

(2) for each objective i, compare the j = 1, 2, …, n alternatives and determine their weights wij with respect to objective i; and

(3) determine the final alternative weights (priorities) Wj with respect to all the objectives by

$$ W_j = w_{1j}w_1 + w_{2j}w_2 + \dots + w_{mj}w_m; $$

(4) the alternatives are then ordered by the Wj.
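The following sketch illustrates steps (1)–(4) with hypothetical pairwise comparison matrices; priority vectors are derived, as in AHP, from the principal eigenvector of each ratio matrix.

```python
import numpy as np

def priorities(pairwise):
    """Priority vector = normalized principal eigenvector of a ratio matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

# Step (1): weights w_i of m = 2 objectives from pairwise comparisons
# (objective 1 judged 3 times as important as objective 2).
w = priorities(np.array([[1.0, 3.0],
                         [1/3, 1.0]]))

# Step (2): weights w_ij of n = 3 alternatives under each objective.
W1 = priorities(np.array([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]], float))
W2 = priorities(np.array([[1, 1/3, 1], [3, 1, 3], [1, 1/3, 1]], float))

# Steps (3)-(4): synthesize W_j = sum_i w_ij * w_i and rank.
W = w[0] * W1 + w[1] * W2
print("priorities:", W.round(3), "-> best alternative:", int(W.argmax()) + 1)
```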

The social principle is met by the enactment of a multidisciplinary, open, interactive voting process based on open discussions to arrive at relative priorities of importance (Saaty 2009).

AHP is now widely used as an alternative to expected utility theory for decision making (Forman and Gass 2001). Forman and Gass (2001) report that over 1000 articles and about 100 doctoral dissertations have been published on AHP. AHP has been extended using fuzzy set theory (Deng 1999) and is used in a wide variety of applications (Saaty and Peniwati 2013), such as national defense, mega projects, and the like.

3.3.3 Howard’s Decision Analysis

Howard is a renowned professor at Stanford University. We will use his approach to decisions as an exemplar of normative prescriptive decision-making. We will also call it Howard's Decision Analysis and, at times, the Stanford model. The term "decision analysis" was coined by Howard (2007). His approach to decision analysis is predicated on two premises: one is the inviolate set of normative axioms (Appendix 2.1), and the other is his prescriptive method of decision analysis. Collectively these form Howard's canons of the "old time religion" (Appendix 2.2). Non-adherents to the normative axioms and sloppy practitioners are positioned as "heathens, heretics, or cults" (Howard 1992). For example, AHP is explicitly dismissed as an invalid prescriptive decision process (Howard 2007), which we discuss in another section of this chapter. Howard's methodology takes the form of an iterative procedure he calls the Decision Analysis Cycle (Fig. 2.5), comprised of three phases, which either terminates the process or drives another iteration (Howard and Matheson 2004, 9). Numerous applications from various industries are reported in Howard and Matheson (2004).

Fig. 2.5 Howard's decision analysis cycle

The first phase (deterministic) is concerned with the structure of the problem. The decision variables are defined and their relationships characterized in formal models. Then values are assigned to possible outcomes, which Howard calls "prospects". The importance of each decision variable is measured using sensitivity analysis, at this stage without any consideration of uncertainty. Experience with the method suggests that "only a few of the many variables under initial consideration are crucial … (von Holstein 2004, 137)".

Uncertainty is explicitly incorporated in the second phase (probabilistic) by assigning probabilities to the important variables, which are represented in a decision tree. Since the tree is likely to be very bushy, "back of the envelope calculations" (von Holstein 2004, 139) are used to simplify it. The probabilities are elicited from the decision makers directly or from trusted associates to whom this judgment is delegated. Outcomes at each end of the tree are determined directly or through simulation. The cumulative probability distribution for the outcome is then obtained. The decision maker's attitude toward risk is taken into account; this can be determined through a lottery process. A utility function is then encoded. The value of the best alternative in the face of uncertainty is expressed as a certainty equivalent. Sensitivity analyses of the probabilities of the different variables are then performed.
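As an illustration of the probabilistic phase, the sketch below computes an expected utility and the corresponding certainty equivalent for one hypothetical alternative, assuming an exponential utility function with a stipulated risk tolerance; both are illustrative assumptions, not values from Howard's text.

```python
import numpy as np

R = 1000.0                      # hypothetical risk tolerance (e.g. in $K)

def u(x):                       # exponential utility: constant risk aversion
    return 1.0 - np.exp(-x / R)

def u_inv(v):                   # inverse utility, to recover a money value
    return -R * np.log(1.0 - v)

# One hypothetical alternative: outcomes (in $K) with elicited probabilities.
outcomes = np.array([2000.0, 500.0, -300.0])
probs = np.array([0.3, 0.5, 0.2])

eu = probs @ u(outcomes)        # expected utility of the alternative
ce = u_inv(eu)                  # its certainty equivalent
print(f"CE = {ce:.1f} vs. expected value = {probs @ outcomes:.1f}")
# CE (~488) < EV (790): the gap reflects the decision maker's risk aversion.
```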

The third (informational) phase follows review of the first two phases to determine whether more information is required. If so, the process is repeated. The cost of obtaining additional information is traded off against the potential gain in performance of the decision. Numerous application examples are presented in Edwards et al. (2007).

3.3.4 Value Focused Thinking

The prescriptive approach of Keeney's (1992b) Value Focused Thinking (VFT) shifts the emphasis of decision making from the analysis of alternatives to values. In VFT, values are defined as what decision makers "really care about" (Keeney 1994). The emphasis on values is motivated by the desire to avoid anchoring and framing errors (Kahneman and Tversky 2000), i.e. positioning a problem or opportunity so narrowly that it precludes creative thinking. Instead, anchor on values and frame the decision situation as an opportunity. The assumption is that value-based thinking leads to more meaningful alternatives to attain what decision makers really care about. The theoretical assumptions of VFT are found in expected utility theory, multi-attribute utility theory (Keeney and Raiffa 1999), and the axioms of normative decision theory (Keeney 1982, 1992b). Keeney is more liberal than Howard; he is prepared to consider a suboptimal decision if it is more fair (equitable) (Keeney 1992a). He writes that "the evaluation process and the selection of an alternative can then be explicitly based on an analysis relying on any established evaluation methodology (Keeney 1992b)." Adapting from Keeney (1992b), the operational highlights of the VFT method are illustrated below (Fig. 2.6), where the arrows mean "lead to."

Fig. 2.6 Operational architecture of the value focused thinking process

What is distinctive is that this method specifies an iterative phase at the front-end where the values of the decision maker are thoroughly specified prior to the analysis of alternatives. The goals of this phase are to avoid solving the wrong problem and to identify a creative set of alternatives. These steps tend to avoid many of the biases identified in descriptive decision theory, such as framing, availability, saliency, and the like. Keeney (1994) observes that the most effective way to define objectives and values is to work with the stakeholders. He offers ten techniques for identifying objectives and nine desirable properties for fundamental objectives. Having an initial set of objectives is a prerequisite to creating alternatives. Creativity is the most desirable characteristic for alternatives, and VFT presents 17 ways to generate them (Keeney 1996). Keeney's book on VFT (1992b) discusses 113 applications.

3.3.5 Real Options

Myers (1977) is credited with coining the term real options. An option is a right, but not an obligation, to take action, such as buying (call option) or selling (put option) a specified asset in the future at a designated price (e.g. Amram and Kulatilaka 1999). Options have value because the holder has the opportunity to profit from price volatility while simultaneously limiting downside risk. Options give their holder an asymmetric advantage. Real options deal with illiquid real assets, unlike financial instruments traded on exchanges (e.g. Barnett 2005) in very efficient markets.

Holders of an option have at their command a repertoire of six types of actions: to defer, abandon, switch, expand/contract, grow, or stage (Trigeorgis 1997). Unlike traditional techniques such as discounted cash flow (DCF), real options are a flexible method for making investments. A real option is not subject to a one-time evaluation, but to a sequence of evaluations over the course of the life-cycle of a project. This flexibility to postpone decisions, until some of the exogenous uncertainties are resolved, reduces risk. The Black-Scholes equation is a financial tour-de-force (e.g. Brealey and Myers 2002) and is inextricably linked with options. But its use in real options has limitations: returns in the Black-Scholes equation must be lognormal, and it is assumed that there is an efficient market for unlimited trading. For securities, the value of the asset is observable through pricing in an efficient market. For real options, the value of the asset, such as an airport, is still evolving (Brach 2003). Fortunately, there are many techniques for valuation (e.g. Neely and de Neufville 2001; Luehrman 1998a, b; Copeland and Tufano 2004). However, the managerial implications of real options remain nontrivial. The method requires substantially more management attention and domain skill to monitor and act on its flexibility (Adner and Levinthal 2004). The value of the real option lies in exploiting favorable opportunities when the right conditions present themselves. "This perspective contrasts with the traditional view of a project as set of decisions made once at the beginning and unchanged during the life of the project" (Neely and de Neufville 2001). Barnett (2005) finds that the discipline and decisiveness required to abandon a project are rare and demanding traits in executive management. Many applications using real options are reported in the literature (e.g. Luehrman 1998a; Fichman et al. 2005).

Real options scholars present a three-phase process for real options analysis in systems planning and design (e.g. Neely and de Neufville 2001; de Neufville 2002). It is comprised of the discovery, selection, and monitoring phases (Fig. 2.7). Discovery is a multidisciplinary activity; it entails setting objectives and identifying opportunities. The selection phase is analytically intensive: the value of each option is calculated in order to select the best one. Monitoring is the process of determining when the conditions are right to take action. Copeland and Tufano (2004) concentrate on the selection phase and present a procedure using binomial trees. Luehrman (1998a, b) presents an elegant and more sophisticated analytic procedure to create a partitioned options-landscape. The landscape identifies six courses of action: invest now, maybe now, probably later, maybe later, probably never, and never. These choices are based on financial metrics. Barnett (2005) describes a framework for managing real options; it is somewhat generic and not directly actionable. We adapt de Neufville's three-phase approach and combine it with Trigeorgis' (1997) repertoire of six actions to illustrate a prescriptive decision process for real options (Fig. 2.7).
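To illustrate the selection phase, the sketch below values a hypothetical option to defer an investment on a one-factor binomial lattice, in the spirit of the Copeland and Tufano (2004) procedure; all parameters are assumed for illustration.

```python
# One-factor binomial lattice for an option to defer an investment.
# All numbers are hypothetical.
V0, I = 100.0, 110.0        # current project value, investment cost
u, d = 1.3, 1 / 1.3         # up/down multipliers per period (assumed)
rf, T = 0.05, 3             # risk-free rate per period, periods to expiry
p = (1 + rf - d) / (u - d)  # risk-neutral up probability

# Terminal payoffs: invest only where the project value exceeds the cost.
values = [max(V0 * u**j * d**(T - j) - I, 0.0) for j in range(T + 1)]

# Roll back through the lattice; at each node compare waiting vs. investing.
for t in range(T - 1, -1, -1):
    values = [max((p * values[j + 1] + (1 - p) * values[j]) / (1 + rf),
                  V0 * u**j * d**(t - j) - I)
              for j in range(t + 1)]

print(f"Value of the option to defer: {values[0]:.2f}")
# A now-or-never DCF view would value the project at V0 - I = -10;
# the flexibility to wait makes the opportunity worth taking seriously.
```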

Fig. 2.7 Active management of real options

In summary, real options represent a newer direction in decision analysis. The method is distinctive; it avoids the limitations of the discounted cash flow (DCF) investment approach. It is more dynamic and is based on sequential, incremental decision making that makes the temporal resolution of uncertainty workable. This makes the decision-making process more flexible (e.g. de Neufville 2008).

3.3.6 Ecological Rational Theory: Adaptive Tool Box

Recall that according to Ecological Rational Theory the enactment mechanism is the mind, as a modular system, that triggers without conscious effort "fast and frugal heuristics" (Gigerenzer 2008; Gigerenzer and Selten 2001; Klein 2015). This mechanism, however, does not preclude mental deliberation. Darwinian evolution has made the mind capable of selecting a working heuristic given the decision situation. Evolution has also made the mind capable of learning through reinforcement and repeated usage (Rieskamp and Otto 2006). Thus heuristics are satisficing heuristics. We present two examples to illustrate the point.

The tit-for-tat heuristic (Axelrod 1997) is used in game-theoretic situations in which two parties have to cooperate, but one party cheats. The decision the aggrieved party has to make is whether to forgive and continue cooperating or to retaliate. If forgive, how many times? Modeling this game is not simple, for there are many contingencies. The heuristic suggests that imitating the other party's behavior is effective; in other words, immediately stop cooperating when the other party stops. Research from Axelrod (1997) shows the effectiveness of this simple approach over many substantially more complex statistical strategies.
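A minimal sketch of tit-for-tat in an iterated prisoner's-dilemma setting follows; the payoff matrix is the standard one from Axelrod's tournaments, and the unconditional-defector opponent is a hypothetical illustration.

```python
# Iterated prisoner's dilemma; "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first; afterwards imitate the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)  # each sees the other
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): one loss, then retaliation
```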

I had the opportunity to host Boston Chicken's CEO, who was then affiliated with the IBM Board of Directors. He had come to Beijing for a meeting I was leading for the IBM CEO Lou Gerstner with cabinet-level Chinese government officials. During a relaxed moment, I commented on Boston Chicken's remarkably successful market expansion and diversification initiatives in China and the US. He said his company's strategy was simple: find where McDonald's is building, follow suit, and also build there. This is the "imitate the successful" heuristic (Boyd and Richerson 2009). It is a widely used heuristic; it is effective and lowers the cost of learning. Overall, the case for "fast and frugal heuristics" (Gigerenzer 2008; Gigerenzer and Selten 2001) is made persuasive by the research based at the Max Planck Institute and by the evolutionary arguments that support the heuristics' effectiveness. Also consistent with bounded rationality, this approach is parsimonious. And it is lean in terms of information gathering and analysis.

3.4 Declarative School

In the previous sections we have concentrated on what scholars identify as the three main schools of decision theory. The fourth school—the declarative—is our own recognition of its existence. The other three schools—normative, descriptive, and prescriptive—are all research intensive, each directly grounded in the science and theory that locates the work. Many of their seminal thinkers are Nobel laureates, giants, and prominent scholars in their chosen fields of research, e.g. von Neumann, Savage, Simon, Selten, Kahneman, Aumann, Samuelson, Raiffa, Saaty, Nash, and so on. They shaped the foundations and influenced the directions of the research and the practice. Scholars follow and diligently discuss their work.

Our concentration is on executive decisions. We feel obligated to call attention to some of the ways executives learn how to improve their own skills and the quality of the decisions for which they are responsible and accountable. Many enterprises and large companies have management training programs to bring important and useful research findings and best practices to executives as they rise through the ranks. However, this kind of learning opportunity does not exist for many. Without meaning any disrespect, it is unlikely that a large majority of executives, or their direct reports or staffs, regularly read the research literature or ruminate about theory. Knowledge of sound theory, effective methods, and practices is propagated not so much by academic journals or scholars as by trade-press books, executive magazines, articles in prestigious newspapers, consultants, celebrity executives, self-proclaimed experts, and word of mouth. The mechanisms are exposition and declaration: summaries, repackaging, and personal and second-hand experiences. These are packaged so the material is more easily understood and delivered in dosages that do not stress readers' attention spans. After all, not everyone is a research scholar inclined to read journal papers. By definition, the corpus of work and products of this hybrid school of decision theories is wide-ranging and very diverse. We organize the declarative hybrid school into three categories.

Category 1. Much of this work is useful and solid. It does not sacrifice rigor. Academic concepts and research findings are explained in everyday language. The hurdles of academic and expert knowledge are lowered, making the material understandable to those who desire to learn from these writings. Examples include The Psychology of Judgment and Decision Making (Plous 1993), Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations (Simon 1997), A Primer on Decision Making (March 1994), Smart Choices (Hammond et al. 1999), Decision Traps (Russo et al. 1989), Risk Savvy (Gigerenzer 2014), Predictably Irrational (Ariely 2008), Thinking, Fast and Slow (Kahneman 2011), and so on. These are admirable exemplars of the declarative genre. Their attention to clarity, in non-technical terms, makes these works conspicuous.

Category 2. This is another body of work that is informative and educational, directed at more specialized practitioners. The scope is generally broader and more diverse than Category 1. The presentation style is less arcane, less intimidating, more general, very practical, and notably more declarative. The prolific contributions of Peter Drucker mark his work as a distinguished exemplar (e.g. Drucker 1995, 1993, 2016). His writings are smart and erudite and bring unusual clarity and insight to the practice of executive management and decisions. They inform and provoke thinking. Other examples are Courtney et al. (2013) on how to decide, limiting bias (Soll et al. 2015), avoiding cowardice (Charan and Melino 2013), and so on. They play an important role in propagating practical knowledge that goes beyond interesting and colorful narratives.

Category 3 has the admirable goal of popularizing the field. Contributors necessarily simplify and generalize, diluting to a high degree the technical rigor and the specialized domain knowledge. Frequently, subtle nuances and the fine texture of important ideas and theoretical concepts are omitted or lightly covered. This category is useful for popularizing decision theory and practice. For example, titles such as "… the 15 min …", "… dozen most …", "… seven of …", "… art of …", "… every time …", and so on belong to this genre. We are more cautious about this body of declarative work. We call this genre the putative strand of the declarative school.

The declarative school, a hybrid strand, is an under-investigated domain. It is a new and potentially fruitful area of study: to what extent, with what content, and through what channels declarative work is communicated, and how it impacts practice and scholarship.

4 Tensions Between the Three Schools

Rationality is only one of several factors affecting human behavior; no theory based on this one factor can be expected to yield reliable predictions. (Robert Aumann)

We have seen how the paradoxes (Allais 1953; Ellsberg 1961) and the landmark experiments of Tversky and Kahneman (1974) present evidence that people arrive at decisions in ways that are not consistent with normative theory. These paradoxes and experiments are descriptive. The Naturalistic strand of research describes how professionals make decisions under extreme pressure and volatile conditions; it presents a picture that is different from normative theory. These inconsistencies with the axioms of utility theory and the requirements of "perfect" rationality are a source of tension between normative and descriptive scholars.

Zeckhauser (1986) articulates the debate with three insightful axioms and three practical corollaries. They are paraphrased below; they capture the spirit of the research directions in decision theory and the opposing views.

Axiom 1

For any tenet of rational choice, the behavioralists can produce a counterexample in the laboratory.

Axiom 2

For any “violation” of rational behavior, the rationalists will reconstruct a rational explanation.

Axiom 3

Elegant formulations will be developed by both sides, frequently addressing the same points, but freedom in model building will result in different conclusions.

and …

Corollary 1

The behaviorists should focus their laboratory experiments on important real world problems.

Corollary 2

The rationalists should define the domains of economics where they can demonstrate evidence that supports their view.

Corollary 3

Choice of competing and/or conflicting formulations should be decided on predictive consistency with real world observations.

The Nobel Prizes awarded in the behavioral sciences, to scholars such as Simon, Kahneman, Selten, and Ostrom (Prizes in Economic Sciences 2015), are evidence of the importance of behavioral and social factors in decision making as an adjunct to normative theories. Frisch and Clemen (1994) assert that the normative status of utility theory does not justify its use by psychologists as a standard by which to evaluate decision quality.

Researchers are looking deeply at the fundamental ideas, e.g. what is utility? Is it in our interest to maximize utility? What are the deep mental and psychological processes for decision making and how do they work?

For example, utility is not a monolithic invariant. Kahneman and Thaler (2006) and Kahneman and Tversky (2000) distinguish between experience utility and decision utility. They show that utility preferences differ depending on how and when utility is measured, as experienced or as recalled. Experiments reveal that recall is imperfect and easily manipulated. This is another kind of bias. These findings go to the heart of the assumptions of normative theory: that individuals have accurate knowledge of their own preferences and that their utility is not affected by the anticipation of future events. Schooler et al. (2003) argue that people suffer from inherent inabilities to optimize their own level of utility. Deliberate efforts to maximize utility may lead individuals to engage in non-utility-maximizing behaviors. They suggest that "utility maximization is an imperfect representation of human behavior, regardless of one's definition of utility (Schooler et al. 2003)." Klein (2001) argues that "optimization is a fiction". The cognitive processes for decision making appear to be more sophisticated than merely optimizing utility. Bracha and Brown (2012) suggest a novel framework: individuals have two internal accounting processes, a rational account and a mental account, and a choice is the result of intrapersonal moves that produce a Nash equilibrium. This game-theoretic approach is also adopted by Bodner and Prelec (2003), who model utility maximization as a self-signaling game involving two kinds of utility: outcome utility and diagnostic utility.

Neuroeconomics is a new research strand. It seeks to understand decision processes at a physiological level (e.g. Camerer et al. 2005). It uses technology such as fMRI to understand which areas of the brain are used during decision making. McCabe et al. (2005) found that people who cooperate and those who do not cooperate have different patterns of brain activity. The evidence suggests that different mechanisms are at work for the same problem. Legal scholars are very active in the study of irrational behavior, seeking to understand the issues of reciprocity and retaliation and their implications for judicial punishment (Parisi and Smith 2005).

The tension between the normative and prescriptive schools is also visible. For example, normative scholars raise concerns about AHP (e.g. Belton and Gear 1982). Under certain conditions, intransitivity and rank reversal, two deviations from the normative axioms, can occur in AHP (e.g. Dyer 1990; Triantaphyllou 2001). Saaty (1990) and Forman and Gass (2001) retort that rank reversal in systems can be expected, and can even be desirable, when new information is introduced; learning effects can take place. The rank reversal problem and ways to address it are discussed in Saaty (2009), Saaty and Peniwati (2013), and Millet and Saaty (2000). Consistent with the pragmatics of a prescriptive approach to decision-making, they write: "There is no one basic rational decision model. The decision framework hinges on the rules and axioms the DM [decision maker] thinks are appropriate (Forman and Gass 2001)." Saaty (1990) quotes McCord and de Neufville (1983): "Many practicing decision analysts remember only dimly its axiomatic foundation … the axioms, though superficially attractive, are, in some way, insufficient … the conclusion is that the justification of the practical use of expected utility decision analysis as it is known today is weak."

5 The Canonical Normal Form

We assume that the decision maker's problem has been identified and viable action alternatives are prespecified. … with all due apologies, we assume that the pre analysis stage has been completed. (Keeney and Raiffa)

Although each prescriptive method is unique, we argue that they are all instantiations of the meta "canonical paradigm" of decision making (Bell et al. 1988, 18). This meta-model is widely adopted in the literature in various forms (e.g. Bazerman 2002; March 1997; Simon 1997; Keeney 1992b; Hammond et al. 1986). The canonical paradigm is a meta-process—a process for defining specific processes and procedures for the practice. For example, the Scientific Method is a meta-process. Biologists, chemists, and physicists routinely perform experiments that bear little resemblance to each other, but their methods align consistently with the Scientific Method. Even within a single domain there are many instantiations of the Method. A cosmologist and an elementary-particle physicist are both doing physics according to the Scientific Method; though the specific procedures and instruments of their practice vary widely in detail (one uses radio telescopes, the other accelerators), they are completely consistent with it. The Engineering Method (Seering 2003) is another meta-process. Electrical, mechanical, and aeronautical engineers build artifacts that are quite distinct from each other, but their methods are isomorphic to the Engineering Method. In this same way, each of the prescriptive decision methods we have described, although uniquely distinctive, aligns consistently with the canonical model for decisions. The canonical model is a meta-process for decision analysis (Table 2.3).

Table 2.3 Our instantiation of the canonical form: A systematic process

There are many ways to instantiate the meta-model with a specific model comprised of concrete and actionable procedures. Our decision complex of five spaces is such an instantiation. Our systematic approach to executive-management decisions maps coherently onto the canonical form (Table 2.3).

Our systematic process is very explicit in the solution space and operations space. Specifically: (i) debiasing procedures are called for; (ii) it focuses on distinct sets of decision variables, managerially controllable and managerially uncontrollable; (iii) it can systematically construct alternatives; (iv) it can systematically explore the entire solution space under the entire space of uncertainty; (v) it can pose and analyze any "what if" hypothetical question; (vi) it systematically predicts outputs and their standard deviations; and (vii) it constructs robust alternatives of choice and systematically predicts their outputs and standard deviations as well. The ability to systematically construct alternatives cannot be overemphasized. Simon (1997, 126) writes:

The classical view of rationality provides no explanation where alternate courses of action originate; it simply presents them as a free gift to the decision makers.

the lengthy and crucial processes of generating alternatives, which include all the processes that we ordinarily designate by the word ‘design,’ are left out of the SEU account of economic choice.

Research on this crucial design phase of decision making (step 4 of the canonical paradigm) does not appear to be emphasized in the decision-making literature. Its importance is recognized, e.g. "the identification of new options is even more important and necessary than anchoring firmly on analysis and evaluation as goals of the analysis (Thomas and Samson 1986)." Alexander (1979) presents case studies of the design of alternatives and, unfortunately, finds a tendency to prematurely truncate the building of the repertoire of alternatives in the overall process. He concludes that "alternatives design is a stage in the decision process whose neglect is unjustified … (Alexander 1979)." Arbel and Tong (1982) prescribe the use of AHP as a means to identify the most important variables that affect the objectives of a decision for creating alternatives, but they fall short of providing an actionable construction process for alternatives. Yilmaz (1997) argues for a constructive approach to creating alternatives and presents a way to do so using explicitly identified decision factors and their ranges of responses. His construction requires full-factorial information, which makes the construction process very complicated.

This thin presence of the design of alternatives is discernible across the prescriptive methods, with the exception of our prescriptive methodology (Tables 2.3 and 2.4).

Table 2.4 Summary comparison

Given a set of alternatives, AHP offers guidelines for creating a hierarchy of decision factors. AHP assumes that the alternatives are known; the weights of the factors that will enter into the selection of an alternative are initially unknown. By building a hierarchy of the objective and its decision factors, followed by group discussions and pairwise comparisons, the relative importance of the factors is revealed. Then, using the relative importance of the factors, the AHP calculations rank the alternatives. In Stanford’s method, sensitivity analysis finds the variables that have the highest impact on the output. Using those variables, we are directed to specify creative alternatives, but we are not presented with explicit means to construct them. With the alternatives at hand, utility theory is used to identify the best one.
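To make these mechanics concrete, the following Python sketch computes factor weights and ranks two alternatives. The matrix entries, factor names, and scores are hypothetical, and the row geometric mean is used as a common stand-in for the full principal-eigenvector calculation.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix,
    approximated by the row geometric mean."""
    gm = pairwise.prod(axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

# Hypothetical judgments on Saaty's 1-9 scale for three factors.
factors = np.array([
    [1.0, 3.0, 5.0],
    [1.0 / 3.0, 1.0, 2.0],
    [1.0 / 5.0, 1.0 / 2.0, 1.0],
])
w = ahp_weights(factors)

# Hypothetical scores of two alternatives against the three factors.
scores = np.array([
    [0.6, 0.3, 0.7],   # alternative 1
    [0.4, 0.7, 0.3],   # alternative 2
])
print("weights:", w)
print("ranking scores:", scores @ w)  # higher score ranks first
```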

Value Focused Thinking makes creating alternatives the centerpiece of the method, and it presents a comprehensive approach to the specification of objectives. Objectives are used to guide the creation of alternatives, and 17 useful guidelines for creating them are presented. We are reminded that “the mind is the sole source of alternatives” and therefore creativity is important. Although we are given a comprehensive set of guidelines and many examples of alternatives from a wide range of applications, Value Focused Thinking does not seem to offer a construction mechanism for the creation of alternatives.

At its core, real options is about two things: sequential incremental decisions, and the temporal resolution of uncertainties as time progresses, so that the valuation and selection of alternatives become more certain. Like other prescriptive methods, it assumes that alternatives can be analyzed rigorously following the procedures of the method. What is distinctive about the real-options method is that it has a predefined set of generic alternatives (e.g. Trigeorgis 1997). For example, see Fig. 2.7.
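For illustration, the following Python sketch values a hypothetical option to defer an investment on a two-period binomial lattice. The numbers, the lattice itself, and the risk-neutral probability are simplifying assumptions for exposition, not a rendering of any specific method surveyed here.

```python
import numpy as np

def defer_option_value(v0, inv, u, d, p, n, r):
    """Option to defer: at each node either invest now (payoff = project
    value - cost) or wait one more period. v0: current project value;
    inv: investment cost; u, d: up/down factors; p: risk-neutral up
    probability; n: periods; r: per-period discount rate."""
    disc = 1.0 / (1.0 + r)
    # Terminal values: invest only if the project is worth more than it costs.
    opt = np.array([max(v0 * u**k * d**(n - k) - inv, 0.0) for k in range(n + 1)])
    # Roll back through the lattice, keeping the freedom to wait.
    for t in range(n - 1, -1, -1):
        wait = disc * (p * opt[1:t + 2] + (1 - p) * opt[:t + 1])
        now = np.array([v0 * u**k * d**(t - k) - inv for k in range(t + 1)])
        opt = np.maximum(now, wait)
    return opt[0]

# Deferral keeps the upside while limiting the downside (all numbers hypothetical).
print(defer_option_value(v0=100, inv=105, u=1.3, d=0.8, p=0.5, n=2, r=0.05))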

This void in step 4 of the canonical model, the construction of alternatives, is unexpected. It would be like having Thanksgiving dinner and assuming the turkey is there for everyone. It is generally assumed that alternatives exist, are easily found, or are readily constructed. These assumptions are surprising because synthesis must necessarily precede analysis: the analysis that determines the decision maker’s preferences among the alternatives and culminates in the selection of the one choice to act upon. This is like the apocryphal basketball team that only shoots free throws at every practice, the assumption being that “the rest of the game is a straightforward extension of making free throws and can best be learned by experience in a game situation (Seering 2003).”

Our work does not assume that alternatives are present and ready for analysis. They must be constructed. We will specify prescriptive methods for the engineering of decision alternatives, using the engineering methods of Design of Experiments (DOE) (e.g. Montgomery 2008; Otto and Wood 2001). These are the subjects of this book, and we will show that our work is distinctive because (a sketch of the DOE-based construction appears after this list):

  • We provide an explicit construction mechanism for designing decision alternatives.

  • Alternatives are constructed using variables that are under managerial control.

  • Variables that are not managerially controllable are used to specify a set of uncertainty regimes that span the uncertainty space.

  • Alternatives span the entire solution space and uncertainty space. The outcome and standard deviation of every decision alternative can be predicted.

  • The analysis of alternatives does not require exhaustive analysis of every possible alternative. Using our methodology, any decision alternative can be designed for the type of outcome desired; for example, the maximum outcome regardless of standard deviation, or a robust outcome that satisfices and is insensitive to uncontrollable conditions.

  • The analysis does not require the subjective translation from natural units (e.g. profit, safety, … ) into subjective utility or ordinal judgments. All analyses are performed in their natural units. A mix of variables on categorical, ordinal, interval, and ratio scales is allowed.

  • We can predict the outcome and standard deviation of any hypothetical what-if alternative operating under any of the specified uncertainty regimes. This permits unconstrained exploration of the solution space under any uncertainty regime.
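The following Python sketch illustrates the construction idea under stated assumptions: the controllable variables and their levels, the uncertainty regimes, and the toy outcome model are all hypothetical stand-ins. In practice the response model would be fitted from designed experiments rather than written by hand.

```python
import itertools
import numpy as np

# Hypothetical managerially controllable variables and their levels.
controllable = {
    "price": [90, 100, 110],
    "capacity": [1000, 2000],
    "marketing": [0.5, 1.0],
}

# Hypothetical uncontrollable variables grouped into uncertainty regimes.
regimes = {
    "recession": {"demand": 0.7, "cost": 1.2},
    "baseline": {"demand": 1.0, "cost": 1.0},
    "boom": {"demand": 1.3, "cost": 0.9},
}

def outcome(x):
    """Toy response model for profit in natural units; a stand-in for a
    model fitted from designed experiments."""
    units = min(x["capacity"], 15 * x["demand"] * x["marketing"] * (120 - x["price"]))
    return units * (x["price"] - 60 * x["cost"]) - 5000 * x["marketing"]

# Full-factorial construction: every combination of controllable levels is a
# candidate alternative, evaluated under every uncertainty regime.
names, levels = zip(*controllable.items())
alternatives = []
for combo in itertools.product(*levels):
    alt = dict(zip(names, combo))
    outs = [outcome({**alt, **regime}) for regime in regimes.values()]
    alternatives.append((alt, np.mean(outs), np.std(outs)))

# One robust choice: a high mean outcome penalized by its spread across regimes.
best = max(alternatives, key=lambda a: a[1] - a[2])
print("robust alternative:", best[0])
print("predicted outcome:", round(best[1], 1), "std:", round(best[2], 1))
```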

The ultimate goal of any decision methodology is helping people make better decisions. The question we must ask is: What is a good decision? This is the subject of the next section. The more comprehensive questions of the pragmatics and rigor of our paradigm are deferred to Chaps. 4 and 10, respectively. Then we will have more data and conceptual machinery to address these two questions.

6 What Is a Good Decision?

We can never prove that someone who appeals to astrology is acting in any way inferior to what we are proposing. It is up to you to decide whose advice you would seek. (Howard)

6.1 Introduction

There is no general consensus among scholars on what constitutes a good decision. It is an area that has drawn much scholarly research and attention (Keren and de Bruin 2003), which is not to say that what makes a bad decision is a topic that has been avoided. Scholars’ differences, by and large, align along the schools of decision theory. For example, the debate centers on what Keren and de Bruin (2003) call “outcome versus process”. Good processes do not guarantee good outcomes, and bad processes can, at times, produce good outcomes (Hazelrigg 1996, Appendix 2.5). The normative school prefers good process over good outcomes; a strong argument is that good results from a bad process are unlikely to be repeatable or reproducible. This is also what Yates and Tschirhart (2006) call the “satisfying results” versus “coherence” perspective. Yates et al. (2003, 52) present data and argue that “a good decision process is one that tends to yield good decisions [outcomes]”. They note that “a striking feature of the results is that subjective notions of decision quality are overwhelmingly dominated by outcomes” (op. cit. 28). Each of the three schools has a distinct position on what is a good decision, and each has different criteria to evaluate decisions. For a detailed discussion, we defer to Keren and de Bruin (2003), who present a thorough review and comprehensive analysis of the process-versus-outcome debate and other findings about what scholars consider to be good decisions (Chaps. 4 and 10).

In this section, we adumbrate representative positions from the main schools of decision theory. We close with a discussion of our own position on the subject of “a good decision”.

6.2 Three Dogmas: Normative, Descriptive, and Prescriptive

Those who favor the normative school of decision making draw a sharp distinction between a good decision and a good outcome (e.g. Howard 1992, 2004, 2007). A good decision is a rational decision, in which every procedure adheres to the axioms and principles of normative theory. Examples of these principles include the sure-thing principle, independence, the non-materiality of sunk costs, and so on (Appendix 2.1). To these scholars, outcome is not a sufficiently valid determining factor because any decision can produce bad results given the stochastic nature of events (e.g. Hazelrigg 1996, Appendix 2.5). Aleatory factors exert their influence in unpredictable ways.
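A stock example, with hypothetical numbers, makes the distinction concrete. Offered a certain $100 or a gamble paying $1000 with probability 0.9 (and nothing otherwise), a risk-neutral decision maker should take the gamble, since its expected value of 0.9 × $1000 = $900 far exceeds $100. Yet one time in ten the gamble pays nothing: a good decision, on the normative account, followed by a bad outcome.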

Stated in layman’s language, the normative axioms are:

  • Completeness. Given any two alternatives a and b, either a is preferred to b, or b is preferred to a, or a and b are equally attractive. (See Appendix 2.1 for a brief note on this axiom; Aumann (1962) shows that a utility theory can be built without it.)

  • Transitivity. If a is preferred to b, and b is preferred to c, then a is preferred to c, i.e. preferences are transitive.

  • Continuity. If a is preferred to b, and b is preferred to c, then b can be represented as a weighted average of a and c.

  • Monotonicity. Given two alternatives with the same outcomes, the one more likely to yield the better outcome is preferred.

  • Substitution. If the decision-maker is indifferent between two alternatives, then either one can be substituted for the other.
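As an illustration only, the first two axioms can be checked mechanically on a finite set of elicited judgments. This Python sketch encodes a hypothetical “at least as good as” relation as a set of ordered pairs.

```python
from itertools import combinations, product

alts = ["a", "b", "c"]
# Hypothetical judgments: (x, y) means "x is at least as good as y".
weak = {("a", "a"), ("b", "b"), ("c", "c"),
        ("a", "b"), ("b", "c"), ("a", "c")}

def is_complete(rel, items):
    """Completeness: every pair is comparable in at least one direction."""
    return all((x, y) in rel or (y, x) in rel for x, y in combinations(items, 2))

def is_transitive(rel, items):
    """Transitivity: x >= y and y >= z together imply x >= z."""
    return all((x, z) in rel
               for x, y, z in product(items, repeat=3)
               if (x, y) in rel and (y, z) in rel)

print(is_complete(weak, alts), is_transitive(weak, alts))  # True True

# Cyclic judgments, a >= b >= c >= a, violate transitivity.
cyclic = {("a", "b"), ("b", "c"), ("c", "a")}
print(is_transitive(cyclic, alts))  # False
```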

Unconditional mathematical adherence to these axioms characterizes the practitioners of the “old time religion” of decision analysis (Howard 1992). Appendix 2.2 shows the additional canons of the old time religion. These axioms assume that decision alternatives can be represented by probabilities and potential outcomes. These assumptions have proved to be extremely useful and productive in research and the practice.

In contrast, scholars from the descriptive school report on experiments where people, in fact, do consider good results, missed opportunities, difficulty, and other factors as important determinants of decision quality (e.g. Yates et al. 2003). Research in behavioral decision making shows a more complicated picture of the mental processes of decision making than single-minded “utility” maximization (e.g. Kahneman et al. 1993; Kahneman and Thaler 2006; Schooler et al. 2003). Yates et al. (2003) surveyed how people think about their serious decisions, whether they were “good” or “bad”, and why. Overwhelmingly, 95.4% of the “good” decisions and 89.0% of the “bad” decisions were attributed to experienced outcomes; only 6.4% of “good” decisions and 20.2% of “bad” decisions were attributed to process. Many in the descriptive school argue that outcomes are a factor by which people judge decisions. These scholars would be reluctant to declare a surgical operation successful should the patient die during the procedure. “… there is no unequivocal answer to the question of how to judge decision goodness; in particular whether it should be based on process or outcome” (Keren and de Bruin 2003).

Those of the prescriptive school are more pragmatic and embrace bounded rationality. We adopt the view that the axioms of rationality cannot be ignored, but that practical criteria are also appropriate, for example, “practical analysis” and “maximize professional interest” (Appendix 2.4, Keeney 1992b). Edwards (1992) presents guidelines that he calls “assumptions and principles” (Appendix 2.4). Keeney (1992b) writes that the problem should guide the analysis and the choice of axioms, and he offers the guidelines in Table 2.5.

Table 2.5 Objectives of axiom selection for decision analysis

To maximize the quality of an analysis, he specifies objectives for the practice (Table 2.6).

Table 2.6 Objectives of decision analysis quality

To those of the normative school, a good decision is one that is coherent and invariant with the axioms of utility theory; given the unpredictability of future events, the quality of a decision is uncoupled from outcomes. To those who favor descriptive theories, outcomes and other behavioral variables are important factors in decision quality, an argument buttressed by empirical evidence. Those in the prescriptive camp are more boundedly rational: the specific problem guides the selection of axioms, and insights that are useful to the client are determinants of decision quality.

Edwards (1992) reports on an informal survey he took at a prestigious conference. His survey showed overwhelming agreement that expected utility theory is the appropriate normative standard for decision making under uncertainty. The same group also showed overwhelming agreement that experimental evidence shows expected utility theory does not fully describe the behavior of decision makers. Kahneman and Tversky (2000) summarize work showing that the dominance and invariance axioms are essential and that selective relaxation of other axioms is possible. This lends force to Keeney’s (1992b) pragmatic objectives for prescriptive decision analysis and axiom selection.

6.3 Howard’s Good Decision

Howard (2007) identifies six criteria to evaluate decision quality. They have strong influence and broad adoption among normative scholars (e.g. Edwards et al. 2007). Howard’s six criteria are as follows:

  • A committed decision-maker. By definition, a decision is a choice of what to do and what not to do, with a resolute commitment to action. A decision does not exist without an executive who is ready to take action and reallocate resources for more attractive outcomes.

  • A right frame. Framing is the process of specifying the boundaries of a decision situation. It shapes a decision maker’s conception of the acts, outcomes, and contingencies associated with a particular choice to be made (Kahneman and Tversky 2000). A meaningful decision is not possible without a clear view of what is considered relevant and what is irrelevant. Framing helps do this (Weick 2001).

  • Right alternatives. This is the most “creative part of the decision analysis procedure” (Howard and Matheson 2004; Simon 1997). A creative alternative is one that might resolve a decision situation, remedy defects of the present situation and improve future prospects.

  • Right information. Information is a body of facts and/or knowledge that prevents a chosen alternative from being inferior to what would have been chosen had more accurate, complete, and timely information been available.

  • Clear preferences. Every alternative has a measurable value that permits an ordering of “goodness”. For example, given two different alternatives x and y, a decision-maker can say x is better than y; in other words, x is preferred to y. The rules that determine preference must be defined. According to Howard, the four axioms of von Neumann and Morgenstern (1944) must apply, as well as his set of “decision desiderata” (Appendix 2.2).

  • Right decision procedures. Having the right decision procedure means having a process like the canonical paradigm, a process like Howard’s Decision Analysis process (Howard 2007), and a set of reciprocating processes between the DMU and implementing groups throughout the decision life cycle (Spetzler 2007). Our systematic paradigm is our approach to a “right decision procedure”.

Howard’s criteria concentrate on the tasks leading up to the event of decision-making. They also require the necessary condition of a committed decision-maker who will move forward and enact the decision specification; decisiveness is implied by his requirement of “ready to take action.” The nexus of Howard’s criteria is in the Problem Space, Solution Space, and Commitment Space (Fig. 1.2, Sect. 1.3.2.1) of the decision life-cycle.

6.4 Carroll and Johnson’s Good Process

In contrast to Howard’s ex ante evaluation, Carroll and Johnson’s (1990) six criteria evaluate a method’s process ex post. Their locus of evaluation is the Performance Space. Carroll and Johnson (1990) specify six criteria:

  • Discovery. “Having the power to uncover new phenomena, surprise the researcher, and lead to new creative insights.”

  • Understanding. “Providing a cause-and-effect analysis that uncovers the mechanisms or processes by which decisions are made.”

  • Prediction. “Having logical or mathematical rules that predict the judgement and decisions that will be made. The rules need not represent the actual decision processes.”

  • Prescriptive control. “Providing opportunities and techniques for changing the decision process, as in prescribing better decision rules or testing potential manipulations.”

  • Confound control. “Creating controlled situations so as to rule out other explanations of the results (Known as confounds).”

  • Ease of use. “Taking less time and resources for the same progress to the other goals.”

6.5 Our Four R’s: Robustness, Repeatability, Reproducibility, and Reflection

The first of our four R’s, robustness, is located in the Solution Space and concentrated in the Performance Space.

  • Robustness is the property of a decision such that its outcomes are highly insensitive to uncontrollable conditions, even when the uncontrollable factors have not been eliminated.

  • Design of robust decisions uses managerially controllable and uncontrollable variables. This is an ex ante activity for ex post desirable outcomes. In the next chapter we will show exactly how this is done.

The next two R’s, Repeatability and Reproducibility, are located in the Performance Space. These measurements determine the variation that arises during production of an artifact, whether from the production process, the measuring instrument, or the person making the measurements. The ability to isolate the causes and magnitudes of these variations provides actionable insight into quality improvements that can be made in the sociotechnical system.

  • Repeatability is the variation in measurements taken by one person or instrument on the same artefact under the same conditions; measurement results that differ by only a small amount indicate good repeatability. A distinctive feature of our methodology is that we consider decisions to be intellectual artefacts and use engineering and social methods for their design and implementation. When the same social system, using the same process and technical system, produces decision outcomes that differ by only small variations, the sociotechnical system is said to be repeatable.

  • Reproducibility is the property of a process, or an entire experiment, to be duplicated, either by someone else working independently or by the same person, and produce results that differ by only a small amount. Can another team, using the same processes and technical system, produce the same results? If so, the sociotechnical system is reproducible.
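These two measures can be estimated with a simple variance decomposition. The following Python sketch uses simulated, hypothetical data and rough moment estimates rather than the full ANOVA-based gauge R&R procedure.

```python
import numpy as np

# Simulated study: 3 teams ("operators") each evaluate the same
# 5 decision artefacts twice, in natural units (all numbers hypothetical).
rng = np.random.default_rng(0)
parts, operators, trials = 5, 3, 2
artefact_value = rng.normal(100.0, 10.0, size=parts)            # true values
team_bias = rng.normal(0.0, 2.0, size=operators)                # reproducibility source
noise = rng.normal(0.0, 1.0, size=(parts, operators, trials))   # repeatability source
data = artefact_value[:, None, None] + team_bias[None, :, None] + noise

# Repeatability: variation between repeat trials by the same team.
repeatability_var = data.var(axis=2, ddof=1).mean()
# Reproducibility: variation between team means on the same artefact
# (a rough estimate; it still carries some repeatability noise).
reproducibility_var = data.mean(axis=2).var(axis=1, ddof=1).mean()

print(f"repeatability variance  ~ {repeatability_var:.2f}")
print(f"reproducibility variance ~ {reproducibility_var:.2f}")
```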

The next R is Reflection, which is required to be practiced in all five spaces, but most intensely in the Performance Space.

  • Reflection is thinking about experiences either ex post or ex inter, both directed at learning for better ex ante decisions in the next experience (e.g. Mezirow 1990). To us, “experiences” are the DMU’s work leading to the outputs and ex post reviews, as well as discussions of the in-process and end-process outputs. Rodgers (2002, 855) writes with great pith that “reflection is not a casual affair”. It is by no means woolly or undisciplined thinking. Quite the contrary: “Reflection is a systematic, rigorous, way of thinking, with its roots in scientific inquiry” (Rodgers 2002, 845). The subject has its origins in Dewey’s (1933) work on thinking, learning, and reflecting.

    Why reflect at all? Dewey argues that reflecting is an inherent human quality: to learn from experiences in order to improve subsequent experiences. Knowledge must be experienced; survival drives this instinct. The possibilities of improved effectiveness are strong and natural drivers that motivate reflection and learning. Through reflection and thinking, we can “understand at a grander scale” (Dewey 1933). Dewey anticipated Arrow (1962, 155), who wrote that “learning is the product of experience”; work on learning-by-doing continues with von Hippel and Tyre (1996). Schön (1983) segments reflection into reflection-in-action and reflection-on-action. Reflection-in-action is learning by doing, ex inter learning; reflection-on-action is ex post. Reflection is not navel-gazing; it requires systematic, disciplined processes, close cousins of the scientific method. Dewey (1933), Rodgers (2002) and Moon (2004) discuss various strategies for systematic reflection. Reflection can be taught. While solitary reflection is useful, reflection carried out in a sociotechnical community environment is far more effective; it stimulates personal and organizational learning.

    Napoleon Bonaparte, one of history’s most decisive leaders, famously said:

    If I seem always equal to the occasion, ready to face what comes, it is because I have thought the matter over a long time before undertaking it. I have anticipated whatever might happen. It is no genius which suddenly reveals to me what I ought to do or say in any unlooked-for circumstances, but my own reflection, my own meditation. (Morgenthau 1970, 180).

6.6 Discussion

Translating the work of scholars into a single set of measures for a “good decision” will certainly be challenged from many quarters, each armed with unique, rigorous, and defensible mental models. The scope, details, problem/opportunity, domain disciplines, organizational structure and culture, and situational environment will vary greatly for every decision situation. This is particularly true of messy and wicked executive decisions.

Therefore, we must defer the judgement of goodness to the executives who are responsible and accountable for the decisions and their outcomes. This is realistic. In the final analysis, they are the ones who must defend their judgments and actions, and they are the ones who have their careers, bonuses, and promotions at risk. They, who have been given the power to command, must be able to explain their decisions to those to whom they must answer. Individually and collectively, their judgements of a “good decision” must have a high degree of compatibility. This is a necessary part of the sociotechnical component of reflection (Sect. 2.6.5). The judgement is unlikely to be based entirely on outcomes or exclusively on process. Personal experience and scholars’ research persuade us that having strong arguments to justify a decision and an outcome is an effective management practice (Keren and de Bruin 2003). Consequently, we find ourselves concurring with Keren and de Bruin’s (2003) assertion that “there is no unequivocal answer to the question of how to judge decision goodness”. We are, by no means, suggesting a “do nothing” approach to the question of a good decision. Research must continue, and the flow of meaningful descriptions and effective prescriptions must also continue. All this will add to the cumulative knowledge about good decisions.

We are convinced that measurements and systematic reflections are necessary procedures to have in place. We are not suggesting a monolithic process, but a set of meso-processes for use at different stages of the decision life-cycle.

Considering the time dimension of the life-cycle, we mark the time at which the decision is taken, when the executive commits to a decision specification and assigns scarce resources to its implementation. Borrowing a term from the military, we call this the zero-hour. Informed by the work of scholars, the following requirements must be satisfied ex ante (before zero-hour), ex inter (during zero-hour), and ex post (after zero-hour):

  • ex ante. Judgments must consider the actions taken before zero-hour; Howard’s criteria for a decisive executive (Sect. 2.6.3) and design for Robustness (Sects. 2.6.5 and 1.6.2) are examples of such actions.

  • ex inter. The sociotechnical system must have a decisive executive who can commit at zero-hour, the moment of decision (Sect. 2.6.3). At the moment of decision, the executive must decide. Executives must be resolute.

  • ex post. Every decision involves an outcome; it follows that it is necessary to evaluate the quality of the sociotechnical system that produced the outcome. Recall that the sociotechnical system is a production system for decisions as intellectual artefacts. Repeatability and Reproducibility (Sects. 2.6.5 and 1.6.2), for example, are quality measures of such a production system. Measurements are meaningless without learning from them, and learning is a key requirement of a high-performance organization. It follows that reflection is a must (Sect. 2.6.5).

7 Chapter Summary

  • There are four strands in the field of decision theory: the normative, descriptive, and prescriptive schools, and our discovery, the declarative school. Their goals are to understand, respectively: how people should decide with logical consistency; how and why people decide the way they do; how to help executives and managers prepare people to design good decisions; and how to evaluate decisions in a life-cycle framework.

  • We are the first to identify the existence and influence of the declarative strand. We are also the first to segment it into three categories of progressive rigor.

  • Our methodology for executive-management decisions is located in the prescriptive school of decision theory. It presents a new paradigm to help executives prepare and make robust decisions.

  • The traditional canonical paradigm of decision making is a meta-process: a structural model intended for instantiation with specific, actionable processes. The meta-model, implicitly and explicitly, is widely accepted and used in many forms of instantiation by researchers, practitioners, writers, and journalists.

  • Each school of decision theory stipulates different criteria for evaluating decisions.

The normative school insists on adherence to normative axioms and normative principles to evaluate logical consistency.

The descriptive theories concentrate on how people actually make decisions, with many imperfections and behavior that sometimes violates normative principles. Psychology is a key disciplinary domain that explains many of these phenomena, which is why this school is also frequently referred to as the behavioral school. Numerous Nobel awards have positioned it as a bona fide mainstream research discipline. Its evaluation criteria are more pragmatic and relaxed relative to those of the normative scholars.

The prescriptive school draws from the normative and behavioral schools to provide prescriptions that help people make decisions. It is practical, and its evaluations place a stronger emphasis than the other schools on empirical results from the practice. Prescriptions that cannot be buttressed with theory are suspect.

The declarative school is a hybrid of the previously identified schools. It is very diverse and varied; we identify three categories of work within it. In Category 1, concepts and research findings are explained in everyday language (without sacrificing rigor and accuracy) and are therefore understandable to those who wish to learn from the writings. Category 2 material is less arcane, less intimidating, more general, and notably more declarative; the work is practical. Category 3 has the admirable goal of popularizing the field, though it must be said that many of its works simplify and generalize, to a high degree, the technical rigor and specialized domain knowledge being communicated. We call Category 3 the putative strand.

  • On the question of what is a good decision, we are in Keren and de Bruin’s (2003) camp, which holds that “there is no unequivocal answer to the question of how to judge decision goodness”. To this we add the qualifier “at this time”. But we insist that consistent measures of decision quality be put in place at the key spaces of our decision life-cycle. We address this topic more fully in Chap. 10.

  • Consistent with our thesis that a decision sociotechnical system is also a production system, a factory that manufactures designed decisions, we propose, with conviction and confidence, our four R’s: Robustness, Repeatability, Reproducibility, and Reflection, as required measures of decision quality.