1 Introduction

This chapter completes the conceptual progression begun in Chaps. 1 and 2. With this chapter, we will have covered the conceptual foundations, key processes, and the unique operating conditions of executive-management decisions. In this chapter, we show how to operationalize executive-management decisions while adhering to the principles stipulated in Chap. 1 and relative to the other methods described in Chap. 2. We show why our methodology is distinctive. We pay particular attention to the neglected area of designing diverse decision alternatives; namely, we answer the questions "what other choices do I have?" and "what unconstrained, hypothetical 'what-if?' alternatives are possible?" We show how to design decisions that are robust under uncertainty, even when the uncertainty variables are not removed. We also prescribe how to specify uncertainty regimes that can span the entire uncertainty space. We demonstrate how to predict decisions’ outcomes and their standard deviations under any uncertainty regime. Finally, we show how to analyze the quality of the data that is used and how to analytically evaluate the quality of the sociotechnical system that implements a decision.

Using illustrative examples, we describe the key processes of each of the life-cycle’s spaces. In Chaps. 1 and 2, we introduced the theoretical foundations for executive-management decisions. We surveyed the decision literature and introduced the subject and the practice of executive-management decisions. We argued why our paradigm is distinct from other methods. We stated that our focus is on executive decisions—a distinctive class of generally ill-structured sociotechnical and managerial problems and opportunities that arise from messy and wicked situations. After considering these problems in a life-cycle context, we partitioned executive-management decisions into an end-to-end process of five spaces—the problem, solution, operations, performance, and commitment spaces (Fig. 3.1).

Fig. 3.1
figure 1

Five spaces of the executive decision life-cycle

In this chapter, we concentrate on how to operationalize our approach to executive-management decisions in each of these five spaces. The object is to operationalize problems’ resolutions in ways that will meet a DMU’s intentions and, ex post, inform the DMU and decision-maker of the quality of execution. Quality evaluation is an analytic process, not the usual qualitative evaluation using ordinal measures. We propose to approach the operational tasks deliberately and systematically, step by step.

First, to refresh our memory, we restate the first principles we derived in Chap. 1. Second, we identify the operational center of gravity for each of the spaces in the decision life-cycle. The operational centers of gravity highlight the central governing concepts for the working processes in each space. Third, we show that the working processes fulfill the requirements for rigor and systematicity of our executive decision methodology (Table 3.1).

Table 3.1 Our instantiation of the canonical form: a systematic process

Our detailed analyses in Chap. 1 on the dynamics of ill-structured, messy, and wicked executive-management decision-situations enabled us to distill the fundamental factors and key principles required by our methodology throughout the decision’s life-cycle. They are:

  • Abstraction. Reduce complexity, and the needless cognitive load it imposes, by abstracting.

  • Actionability. Make abstraction actionable by concentrating on essential variables.

  • Explorability. Enable unconstrained exploration of the Solution Space by providing the capability to design decision alternatives that can cover any region in the solution space under any uncertainty regime.

  • Robustness. Make results highly insensitive to uncontrollable conditions by robust engineering. Robustness is the property of a decision that will perform well even when the uncertainty conditions are not removed; the decision is highly immune to uncertainty (Klein 2001).

  • Repeatability, Reproducibility and Reflection. Ensure and sustain quality performance by repeatable and reproducible processes, and drive improvements through disciplined reflection.

The center of gravity of each space is identified in Fig. 3.1. In the Problem Space, it is sense making, that is, developing a meaningful interpretation of the decision-situation in order to appropriately frame the problem or opportunity. The key disciplinary science in this space is cognitive psychology applied to the problem at hand. In the Solution Space, the center of gravity is design, viz. engineering design of decision alternatives from which a preferred one can be chosen or additional, better ones constructed. The key discipline in this space is robust engineering design. In the Operations Space, the center of gravity is production, viz. developing phenomenological models of the sociotechnical system that enacts decisions. The strategy is to use gedanken experiments, i.e. thought experiments (e.g. Brown and Yiftach 2014; Sorensen 1992). The key discipline in this space is the Design of Experiments (DOE) engineering method, used to discover the phenomenological behavior of sociotechnical systems (e.g. Otto and Wood 2001; Montgomery 2001). Finally, the Outcomes Space’s center of gravity is measurement, viz. measuring and evaluating inputs and outcomes, analyzing operational quality, and improving performance of the sociotechnical systems that enact decisions. The strategy is to concentrate on robustness of outcomes, gage reproducibility and repeatability (Gage RR) of the operational sociotechnical systems, and making improvements and learning by reflecting on what has been measured.

The remainder of this chapter is devoted to the operationalization of each space of the executive-management decision life-cycle (Fig. 3.2). We concentrate on the know-why and the know-how of our systematic process (Table 3.1). We narrate the processes descriptively and prescriptively, and illustrate them with examples. We are intent that our processes rise to solid standards of rigor. Inevitably, some statistics creeps into the narratives, but we will use prose to explain the math and its meaning in the context of executive-management decisions.

Fig. 3.2
figure 2

Schematic of the problem space

2 Problem Space

A surprise has come to the attention of the executive-management DMU. In this section we discuss how to decode a surprise as a trigger that initiates an executive decision-situation life-cycle. A surprise signals the presence of an event that cannot be ignored. It is a call to action. A meaningful explanation of the decision situation and its causes is needed by the DMU to succinctly articulate the problem and to specify the goals and objectives that will drive the design of decision alternatives (Fig. 3.2).

The key questions the DMU must address in the problem-space are: First, what is going on? The answer to this question is provided by sense-making of the decision situation. Second, what is the problem? The answer to this question frames the situation as a problem or an opportunity. And the third question: What do we want? Clear goals and concrete objectives answer this question.

A DMU can be easily overwhelmed by the complexity and uncertainty of the decision situation. The operating principle, in this space, is abstraction to reduce the apparent complexity and the cognitive load on the DMU . The principle obliges the DMU to represent the situation in an uncomplicated way in order to facilitate the formulation of a response. We will show how to do this.

2.1 The Decision Making Unit (DMU )

… most decisions derive from thought and conversation rather than computation. (Ron Howard)Footnote 1

Executives very rarely work alone, single-handedly performing every analysis and task or enacting every process during the decision life-cycle. There is too much to do, not enough time, too much data to process, too many people to direct, and too much uncertainty. This is a typical and realistic description of the conditions that define bounded rationality. As a result, in practice, executives assign much of the analyses and key deliberations to staffs, direct reports, and experts. With the executive, this working group forms a team to make better decisions. We call this organizational ensemble a decision-making unit, a DMU. Executive-level decision situations frequently require special expertise; in those cases, experts are invited to participate as temporary adjunct members. DMU members, because they are also executives or senior managers, also have staffs, organizations, and experts they can assign to special work. This extended network effectively expands an executive’s and an organization’s cognitive aperture, implementation, and execution resources. The DMU and its adjuncts serve as sociotechnical mechanisms during the executive-management decision life-cycle. DMUs exist for “participants [to] develop a shared understanding of the issues, generate a sense of common purpose, and gain commitment to move forward (Phillips 2007, 375)”.

In the problem space, the DMU’s key responsibilities are sense-making and specifying the goals and objectives of the decision situation. This process is mediated by DMU members’ mental models, which must be harmonized. Harmonized does not mean made identical. Traditional thinking emphasizes “the creation of appropriately shared mental models of the system” (Fischhoff and Johnson 1997, 223) for a group to do its work efficiently and effectively (e.g. Jones and Hunter 1995). However, our experience and current research reveal a more comprehensive and complete view of the meaning of shared mental models (e.g. Banks and Millward 2000; Mohammed et al. 2010). Shared does not necessarily mean identical or the same, but consistent, aligned, and complementary. Each DMU member must understand the game plan. No one wants a basketball team of players who see the game as consisting entirely of free throws.

In Chap. 1, we discussed complexity, uncertainty, disciplinary and organizational factors, and their interactions as contributing to the difficulty of executive-management decision situations. Given the diversity of the disciplinary domains, the required expertise, and the varied organizations for which DMU members are responsible, their mental models must be consistent, complementary, compatible, and aligned. An executive from the manufacturing function is unlikely to have the same mental model as executives from the technology, r&d, or sales functions. Given clearly articulated goals and an understanding of their meaning, they must frame differentiated but complementary models that enable commitment to move forward while simultaneously preserving individual pathways to action and the attainment of organizational goals and objectives (Castro 2007). Yaniv (2011) reports that group heterogeneity attenuates bias and has positive framing effects. Banks and Millward (2000) cogently argue the case for aligned complementary mental models by describing the task of navigating a US warship as a distributed task. Notably, no single person completes this task single-handedly; the determination of the ship’s location is not bound to a single individual. The ship’s navigation must “move through the system of individuals and artefacts.” Consistent, complementary, compatible, and aligned mental models are required for distributed processing systems like a DMU. Scholars call this concept Team Mental Models or TMMs (Mohammed et al. 2010).

Phillips (2007) calls these DMU-like meetings “decision conferencing”. The membership, scripts, principles, and mechanics for facilitating these meetings, even the physical space and seating arrangements, are variously described by scholars (e.g. Rouwette et al. 2002; Andersen et al. 1997; Phillips 2007). We defer those topics to these scholars’ publications. In the discussions that follow, we concentrate on the content-intensive operations that are specific to our paradigm.

2.2 Sense Making and Framing

A problem is an obstacle, impediment, difficulty, challenge, or any situation that insists on a resolution. Problems are stubborn things. They cannot be left unattended. They do not go away; they must be resolved. They need a solution, which dissolves the difficulty and makes a meaningful contribution towards a known purpose or goal. A problem implies an undesirable situation, coupled with uncertainty and conjoined with deficiency, doubt, or inconsistency, that can prevent an intended outcome from taking place satisfactorily (Ackoff 1974). The first part views a problem as a difficulty to overcome. The second part considers a problem as an opportunity to exploit, a prospect to contribute to the achievement of a goal. Opportunities and problems are merely decision situations that demand executive attention. We do not distinguish between a problem and an opportunity. They are two sides of the same coin, a situation. If interpreted and addressed as an opportunity, it can have an upside; otherwise, it is a difficulty to be resolved, dissolved, or ameliorated. In either case, we want to be better off than before. Whether a decision situation is a problem or an opportunity depends on how it is posed and described to a concerned observer. Keeney (1994) argues for framing the opportunity side by concentrating on providing value. He defines value as what decision makers care about. He writes that the “idea is to create an alternative that gets you what you want and at the same time makes others better off”. Henceforth, we will use the term problem with the understanding that we mean a problem or an opportunity.

The need for executive attention is triggered by a surprise, which signals the need to uncover and decode the conditions that caused it (Fig. 3.2). The imperative is to answer: “what is going on?” and “what do I have to do? (Weick et al. 2005, 412).” Frequently, triggers are the result of an executive-initiated study or staff work that produces counterintuitive information. Other triggers are: stochastic surprises; reviews of new data that challenge the validity of mental or operational models (Bredehoeft 2005); unanticipated outcomes of known initiatives, by nature of their effects or their inconsistent content or timing (Allen et al. 2010); or even simply not knowing how to respond (Horwitz 2003). In other words, they are situations of cognitive dissonance, in which the “… world is perceived to be different from the expected state of the world, or when there is no obvious way to engage the world” (Weick et al. 2005, 409). People and organizations prefer an orderly and readily explainable world so they know what to do, can explain action, and can reestablish predictability, stability, and system homeostasis. These imperatives drive the need for sense making.

To these ends, “top managers need to provide cognitive leadership—i.e. create a common frame of reference for key employees—to assure the growth of the organization can be interpreted through this selection process (Murmann 2003, 229).” These are prerequisites for the organization to move forward with confidence (Phillips 2007). Regan-Cirincione (1994) shows that an able group facilitator and leader can make a group outperform its most skilled member and improve the accuracy of the group’s judgement. Moreover, scholars have shown a causal linkage between success and failure in business problem-solving and the frequency of diagnoses and the extent to which they precede action (e.g. Schulz-Hardt et al. 2006; Brodbeck et al. 2007; Lipshitz and Bar-Ilan 1996).

The case for appropriate situational analyses and sense making is very compelling. A major pitfall is to interpret a significant event too narrowly. This can lead to half measures or unsustainable ameliorations of important problems. This is particularly urgent for unexpected signals from pressing, messy, and wicked situations. The executive and the decision-making unit must “look past the surface details in a problem to focus on the underlying principles or big ideas embedded in the situation” (Etmer et al. 2008, 31). However, it is also dangerous to interpret the situation too broadly. This can result in vague, ambiguous, or conflated views of the situation and cause unimportant, irrelevant information and noise to be included in framing the problem. This has the pernicious effect of adding complexity and increasing the cognitive load for all concerned. Executives and the DMU need to focus on the essential causes of the event, their context, and their cause-and-effect relationships, ignoring gratuitous details, in order to develop a meaningful interpretation that makes sense without injecting noisy information. Interpretation requires a synthesis process that puts the key relevant causes together into a meaningful whole. This synthesis is a creative act that considers what is needed to satisfy the goals of the organization and how to put together what is observed and analyzed into a coherent concept that is actionable. The ethos is identical to the engineering design process. This kind of design thinking is critical to sense making.

We propose a procedure shown schematically in Fig. 3.3 and specified algorithmically in Fig. 3.4. This process generally takes more than one iteration. At each iteration, it is useful to group the most important and significant causes into thematic groups. The design of a decision should not just address the trigger situation but its thematic causes. In medicine, a thematic cause is recognized as a pathology. Like physicians, executives must not address symptoms at the exclusion of root causes. Two aspirins will not remedy a pathology. Our process concentrates on critical decision variables, the essential variables. Unnecessarily broadening exploration with excessive iterations is not useful either. But what is a practical stopping rule? A useful heuristic is to constrain the scope of the explanation at the jurisdictional boundaries of the executive’s superior or nearest peers. Why? This allows the decision makers to broaden their field of vision and enables fact-finding and negotiations with their peers. This process brackets the problem and specifies its boundaries.

Fig. 3.3
figure 3

Partition and synthesis of a trigger event

Fig. 3.4
figure 4

Procedure for problem space

This heuristic provides guidelines for deciding what is integral to the problem and what can be excluded from consideration. The stopping rule prevents “boil the ocean” formulations of unsolvable problems; goals such as “save the Amazon forest”, though noble, have a scope and scale that make them intractable and not actionable. Another pitfall is to be overwhelmed by the causes and associated information after some iterations. As we discussed, the analysis gets more complex at each iteration. Facts and information become more abundant and therefore more difficult to digest and put into a coherent picture. Recall our principles—synthesis follows partition, and the principle of uncomplicating complexity. Synthesis and abstraction are ways to integrate partitions and reduce complexity such that the result is cognitively uncomplicated. The synthesis must be thematically driven so that the new whole makes sense and is meaningful. The culmination of these efforts results in the aufgehoben moment in the process. This is the “ah-ha” moment of crystallization that reveals the new clarity of a coherent and meaningful story. This is the insight that can explain and interpret “what is going on” by pulling the right pieces into a coherent whole (Fig. 3.3). The iteration-exit condition (Fig. 3.4) delimits the scope and tightens the meaning of the problem.

Executive-management decisions have the complex-systems property that there is more than a single satisfactory resolution to a complex, difficult, and risky problem. Solutions are not necessarily unique, unlike the roots of a quadratic equation, which are completely determined. A synthesis developed from multiple causes is not the only one possible. There are other coherent and legitimate interpretations that can differ due to organizational and stakeholders’ differences. There is substantial plasticity in such syntheses; they are socially constructed, embedded in specific organizational situations and a particular mix of disciplinary domains.

As a last step, document the problem in prose. In our teaching and management experience, we find that prose documentation is one of the most effective ways to enforce clarity. These documents are also carriers of knowledge for those who have a need to know. Carlile (2002, 2004) calls these documents “boundary objects”. They travel across human and organizational boundaries to transmit information and knowledge. Gerstner, the IBM CEO, insisted on prose documentation as a prerequisite to all management meetings with him. His guidelines were simple and effective: a maximum of ten pages written in narrative prose, without complicated graphs, tables, numbers, or equations. These, and long difficult explanations, were to be attached as appendices, for which there were no page limitations. Any interpretation and conclusions inferred from graphs and complicated information required terse and clear summaries within the ten pages of prose. FOS was a format many executives found useful. FOS stood for facts, opinions, and so-whats. Present the facts, offer your opinion, and finally explain what all this means to the deciding executive by presenting an action plan. The FO part of FOS addresses the question of “what is going on”. FOS is, in effect, a dialed-down version of the scientific method.

2.3 Specifying Goals and Objectives

The next step is to address the goals and objectives, i.e. “what is it we want?” What we want must be driven by goals and objectives. A goal is a what. An objective is a how. The difference is between ends and means. A goal is thus superordinate. An objective is subordinate and should be measurable. “Top managers cannot possess all the knowledge that the various individuals in an organization have about their task environment. It is more effective to specify goals and selection criteria and allow lower-level employees to find the best solution to their particular task (Murmann 2003, 290).” A goal is what you want to achieve, in qualitative terms. Consider, for example, the personal goal “to become an educated person”. The goal expresses a need, a want. Implied is a commitment of time, money, and other costs to achieve the intended goal. Objectives could be: “earn a college degree, learn two foreign languages, and play a musical instrument by age 25”. Objectives should be measurable and used as indicators of progress or failure towards a goal.

In subsequent chapters, we will discuss many business examples of goals and objectives. In the ADI case study, the surprise was the plunge in stock price from $24 to $6 (Chaps. 5 and 6). This was serious, but the more ominous threat was the possibility that the company could be acquired on the cheap. It became a goal of ADI to avoid a hostile takeover. A key objective was to increase the market value of the firm so that it would not be affordable to potential buyers, thus enabling the achievement of the specified goal of maintaining ownership of the firm.

Goals and objectives are necessarily contextually positioned in an organizational structure. They are recursive. An objective at one level of the organization becomes a goal at the next level of the organization (Fig. 3.5). At the executive-management level, the goals are prescribed by the set {g1, g2}. The objectives to attain these goals have been specified as {o1, o2, o3, o4}. Applying the management principle of excluded reductionism (Ropohl 1999) to complex organizational structures, the objectives are partitioned among managers x, y, and z. The objectives {o1, o2} are delegated as goals to manager x, who then specifies objectives {o11, o12, o21, o22} to meet those goals. Manager y’s goal and objectives are {o3} and {o31, o32, o33}, respectively. Similar explanations apply for manager z. Manager y partitions and delegates its objectives as goals to manager y1 and manager y2. For example, manager y1’s goal and objectives are {o31} and {o311, o312, o313}. In turn, manager y2’s goals and objectives are {o32, o33} and {o321, o322, o331, o332}.

Fig. 3.5
figure 5

Relationship of goals and objectives—hierarchy and inheritance

To guide setting goals and specifying objectives, we find it useful to apply Keeney’s (1992, 1994) and Smith’s (1989) guidelines for conceptualizing business problems and specifying solution objectives (Appendices 3.1 and 3.2): focus on values, opportunity, and closing aspirational gaps. In Fig. 3.5, the distribution of goals and objectives illustrates the properties of recursive hierarchy and heredity. We can express the goal- and objective-setting process by the recursive expressions (3.1) and (3.2), which reflect the hierarchical and the hereditary property, respectively.

$$ \mathrm{goals}\big|_{(\mathrm{level}\; i+1)}\;\Rrightarrow\;{\bigcup}_{n}\,\mathrm{objectives}\big|_{(\mathrm{level}\; i+1)}\kern1.25em \left(\Rrightarrow \text{ indicates ``derive''}\right) $$
(3.1)
$$ {\bigcup}_{i}\,\mathrm{objectives}\big|_{(\mathrm{level}\; i)}\;\supseteq\;{\bigcup}_{m}\,\mathrm{goals}\big|_{(\mathrm{level}\; i+1)}\kern1.25em \left(\supseteq \text{ indicates ``span''}\right) $$
(3.2)
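To make the hierarchical property (3.1) and the hereditary property (3.2) concrete, the following minimal Python sketch encodes the goal/objective tree of Fig. 3.5 as nested dictionaries and checks that every goal delegated to the next level is one of the objectives specified at the level above. The data layout and the spans helper are illustrative assumptions, not part of our formal notation; manager z's objectives are hypothetical placeholders.

```python
# A minimal sketch of the recursive goal/objective hierarchy of Fig. 3.5.
# Keys are goals at a given level; values are the objectives derived from them (3.1).
# An objective delegated downward becomes a goal at the next level (3.2).

executive_level = {
    "g1": ["o1", "o2"],
    "g2": ["o3", "o4"],
}

# Delegation: objectives {o1, o2} become manager x's goals, {o3} manager y's, etc.
manager_level = {
    "o1": ["o11", "o12"],          # manager x
    "o2": ["o21", "o22"],          # manager x
    "o3": ["o31", "o32", "o33"],   # manager y
    "o4": ["o41", "o42"],          # manager z (objectives are placeholders)
}

def spans(upper: dict, lower: dict) -> bool:
    """Check the hereditary property (3.2): every goal at level i+1
    is one of the objectives specified at level i."""
    objectives_at_level_i = {o for objs in upper.values() for o in objs}
    goals_at_level_i_plus_1 = set(lower.keys())
    return goals_at_level_i_plus_1 <= objectives_at_level_i

if __name__ == "__main__":
    print(spans(executive_level, manager_level))  # True: the hierarchy is consistent
```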

3 Solution Space

In the solution space, the problem has been clearly defined, and goals and objectives have been specified. The next step is to develop a series of decision alternatives from which a choice alternative, one that satisfices the intended goals and objectives, can be designed. This process is shown schematically in Fig. 3.6 (Table 3.1).

Fig. 3.6
figure 6

Schematic of the solution space

Specifying alternatives is the “most creative part” of the executive-management decision life cycle (Howard and Matheson 2004, 27). The goal of developing alternatives is to determine whether different executive-management decisions and sociotechnical systems can outperform current outcomes. This necessarily requires the ability to predict the outcomes of decision alternatives. Predictions must depend on rational methods and repeatable practices; otherwise, predictions become guesses. Guesses are not very persuasive or convincing. Rational methods result in representations of problems and potential solutions, which can be more readily accepted as meaningful. Alternatives do not appear out of thin air; they must be defined and then constructed. This construction process is one of synthesis; alternatives must be designed.

The operating principle for this phase is the design of decision alternatives that can span the entire solution space anywhere in the uncertainty space. This imposes four design requirements. The DMU must design decision alternatives that:

  • can explore anywhere in the Solution Space within the entire space of uncontrollable conditions,

  • are robust under uncontrollable conditions even when these conditions are not removed,

  • are systematically designed artefacts of technical and social processes, and

  • have been subjected to debiasing procedures.

Designing decision alternatives that can meet these requirements requires knowledge of the problem domain and of the behavior of the sociotechnical systems that generate the intended outputs. Given the complexity of messy and wicked executive-management decisions, how do you represent the operational structure and behavior of these sociotechnical systems? Scholars and practitioners consider this one of the most challenging problems in management decision-making (von Winterfeldt and Edwards 2007). These are the topics we address next.

3.1 A New Strategy: Induction, Phenomenology

Inductive inference is the only process known to us by which essentially new knowledge comes into the world … Experimental observations are only experience carefully planned in advance, and designed to form a secure basis of new knowledge; that is they are systematically related to the body of knowledge already acquired. (R.A. Fisher)Footnote 2

… inductive reasoning is more strict than deductive reasoning since in the latter any item of data may be ignored, and valid inferences may be drawn from the rest; … whereas in inductive inference the whole of the data must be taken into account. (R.A. Fisher)Footnote 3

Scholars suggest two distinct strategies to develop engineering design alternatives (Otto and Wood 2001, 894). One is analytical model development based on ex ante analytical frameworks and models. The other is empirical development based on experiments that reveal an ex post model. This experimental method is how Watson and Crick determined the structure of DNA and won the Nobel Prize. The structure of the DNA revealed itself. This is the conceptual basis of our paradigm: the structure of the sociotechnical system we are dealing with will reveal itself by means of experiments. This strategy is an exemplar of our principle of uncomplicating complexity.

The most widely used conventional strategy is to model decisions using ex ante postulated analytical representations, and then to use these representations to make predictions and analyze how well they can generate the intended outputs. The challenge, for this kind of modeling, is for experts “to specify the [mathematical] relationship of the system variables (Howard and Matheson 2004, 28)”. Because they have the domain knowledge to identify the variables, executives and practitioners can develop models that have satisfactory fidelity to the problem and to the sociotechnical systems. These models represent an explicit representation of the behavior of the sociotechnical machinery that will enact a decision’s specifications. von Winterfeldt and Edwards (2007) specify eight mathematical structures to model the system behavior in order to predict and analyze the variables that influence the outputs (Appendix 3.3). This repertoire of structures exhibits mathematical rigor and precision, but the authors are largely silent about verisimilitude, i.e. the question of model fidelity. We overcome this conceptual lacuna with our executive-management decision paradigm.

We elect to use an entirely different strategy from that of an ex ante and a priori formulation of closed-form analytic representations. We exercise a strategy that does not presume to know a priori the explicit analytic representation of the sociotechnical system. Our strategy is an experimental one. The idea is to estimate predictions by gedanken experiments (e.g. Hopp 2014) rather than by mathematical equations and estimates of probabilities. Unlike the analytical approach, which presumes mathematical expressions of variables to make predictions, we determine ex post the sociotechnical system behavior from experimental outputs. We do not need to know the analytical representation of the machinery of the sociotechnical systems. Clearly, we still need to know the causal variables that influence the outputs, but we do not need “to specify the relationship of the system variables (Howard and Matheson 2004, 28)” using equations. We can estimate the performance of alternatives by gedanken experiments to determine a phenomenological model. Phenomenology is the scientific methodology that describes and helps explain observed experiences. Appearance reveals and explains reality (Smith 2013). This is the conceptual basis of our executive-management decision paradigm.

3.2 Essential Variables

We now turn our attention to the variables that influence the intended outcomes of executive-management decisions.

From an executive-management perspective, it is natural that the variables be partitioned into two classes—managerially controllable variables and managerially uncontrollable variables. (The literature also calls the variables inputs, factors, or parameters. We will use these terms interchangeably.) There are many possible variables that management can control, so a critical question is: how are these variables identified? Heinz von Foerster (1981) and von Foerster and Poerksen (2002) argue forcefully and convincingly that, for complex sociotechnical systems, this task is impossible without observers’ prior knowledge and experience. To manage and operate complex sociotechnical systems, the role of the observer is essential to identify the most sensitive variables that influence the behavior of such a system. Achterbergh and Vriens (2009) call these the essential variables. They coined the term in the context of controllable variables: variables that directly influence the intended outputs and, by definition, determine the variety of the operational sociotechnical system that will implement the decision specification. Variety is defined as the number of states a system can take (Ashby 1957). Clearly, the variety of a sociotechnical system must exceed that of the external environmental conditions to enable the system to cope with them (Ashby 1957). It follows that the uncontrollable variables that are relevant to the decision situation must also be addressed, for they too influence the behavior of the sociotechnical system and the quality of its outputs. For this reason, we include uncontrollable variables as essential variables. Naturally, prior knowledge from the observers is also mandatory to identify them.

3.2.1 Controllable Variables

Controllable variables are the variables that management can directly control and that have a direct impact on the outputs. Executives have the power and the resources to use these controllable factors to meet goals and objectives. Controllable variables can be continuous or categorical. For example, closing or not closing a manufacturing plant is a categorical variable. On the other hand, the number of new employees to be hired is generally a continuous variable. Discrete settings of a variable’s value are called levels. For example, the hiring level can be specified as 10% higher than the current employee population, 5% lower, or exactly the same. The desirability of higher or lower levels is very much dependent on context. If the firm is on a growth spurt and under favorable market conditions of booming demand, then 10% higher is better, and 5% lower is worse. But if the firm is unprofitable in a down market with uncompetitive products, then hiring 10% more is worse, and 5% fewer is better.

This points out a defining property of decision variables known as their characteristic. Variables can be characterized as one of three types: those for which more of their output is better, those for which less of their output is better, and those for which an exact value of the output is better. As illustrated in the previous examples, the desirability of “more” or “less” is determined by the solution context.
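A variable's characteristic can be captured explicitly when a decision specification is recorded. The minimal sketch below, with hypothetical names, encodes the three characteristic types and shows that the same variable can carry different characteristics in different solution contexts.

```python
from enum import Enum

class Characteristic(Enum):
    """The three characteristic types of a decision variable's output."""
    MORE_IS_BETTER = "more of the output is better"
    LESS_IS_BETTER = "less of the output is better"
    EXACT_IS_BETTER = "an exact value of the output is better"

# Context determines the characteristic: hiring level for a firm in a booming
# market versus the same variable for a firm that is unprofitable in a down market.
hiring_level_growth_market = Characteristic.MORE_IS_BETTER
hiring_level_down_market = Characteristic.LESS_IS_BETTER
scrap_and_rework = Characteristic.LESS_IS_BETTER  # ideally zero
```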

From a managerial perspective, the DMU needs to address these key questions:

  • What is the characteristic of the variable and its output?

  • How many levels for each variable? How to specify the levels?

  • How many controllable variables do we need?

  • How do I identify a meaningful set of the uncontrollable variables?

Identifying the controllable variables. Corporate problems, proposed solutions, and their consequences depend on the behavior of corporate business systems and processes. To find the requisite controllable variables, executives must focus on the essential variables they can directly control to affect the system behavior and the outputs that are important to them. Therefore, goals and objectives must be considered to determine which controllable variables are to be chosen. Once goals and objectives are specified, executives and the DMU must filter through the layers of a problem situation in order to determine the key essential controllable variables. And as a first step to conceptualize the ill-defined issues of a problem, they must draw on their previous knowledge and personal experiences (e.g. von Foerster 1981; von Foerster and Poerksen 2002; Etmer et al. 2008). Clearly, someone lacking domain knowledge will not be able to specify the controllable variables.

Prior knowledge on the part of the executive and DMU members means that, for a given decision situation, the controllable factors must be specified at a level of abstraction and scale consistent with their mental capacities and with the objectives that have been specified. Scale is a system descriptor that determines the level of abstraction and detail that is visible and consistent to the DMU. At a higher scale, less detail is visible and fewer descriptions of the system’s processes are necessary for the observer. At a lower scale, more detailed and textured descriptions of the system behavior are visible and required (Bar-Yam 2003). Paraphrasing Simon (2000, 9): looking from the top downwards, at a large scale, the behavior of the units at any given scale does not depend on the details of the structures at the scale below, but only on their steady-state behavior, where the detail can be replaced by a few aggregated variables. The decision situation, the goals and objectives, and prior knowledge establish the appropriate scale for the definition of decision alternatives (von Foerster 1981; von Foerster and Poerksen 2002).

Given a consistent level of abstraction for the decision maker and the decision-making unit, the variables must be specified to meet the objectives that are being studied. The variables must be actionable and consistent with the principle of excluded reductionism (Ropohl 1999) and Ashby’s (1957) principle of requisite variety, which we discussed previously. DMU members are experts, and the expertise they bring to the discussion is invaluable. Experts are able to perceive the “deep structure” of a problem or situation (Chi 2006, 23) and “scan the problem features for regularities, incorporate abstraction, integrate multiple cues, and accept natural variation in patterns to invoke aspects of the relevant concept” (Feltovich et al. 2006, 55). “Experts are good at picking out the right predictor variables and at coding them in a way that they exhibit a conditionally monotonic relationship with the criterion” (Dawes 1979, 574). It is entirely appropriate and necessary to have the DMU membership identify the essential controllable variables and specify their levels.

Setting the levels of the variables. In general, we recommend a three-point specification for the controllable variables. More than three levels may be necessary for complex and complicated problems. Two levels would work almost as well, at a cost in the detail of the outputs, e.g. in determining whether there is a curvature. Of the three levels, we require that one level be the point that marks the current operational condition, assuming no change. This is the “business-as-usual” (BAU) condition. It establishes a base line. The “maximum effort” level is that at which management is still in control, but at the edge of impossibility. Operating at the highest level should be a “stretch”, i.e. doable with a maximum effort, but not impossible. This could be the current operating BAU level, if the organization is already operating under those conditions. Determining the maximum requires domain expertise and deep operational knowledge of the firm’s business processes. The “minimum effort” level should be at a level of performance that is adjacent to not-acceptable. It could be the BAU level of performance or less. Note that the “maximum effort” level may, in fact, be a small number. For example, consider the controllable variable “scrap and rework” in manufacturing. Ideally that level should be zero, which requires a maximum and heroic effort. Why three points? This is a compromise between just two points and four, five, or more points. With two points we cannot get a picture of any potential curvature in the response, although there are many cases where two levels are appropriate. With more than three points, we risk making the cognitive load intractable.

As we shall see, the r&d budget is a controllable variable in one of our case studies. Top management of the ADI company can choose to invest in r&d at three levels: level 1, the lowest acceptable level of $747.8 M; level 2, the current (BAU) level of $753.3 M; and level 3, a more intense level of $760.4 M. These r&d levels are at the discretion of the senior management of the firm; therefore, the r&d budget is a controllable variable.
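As a concrete illustration of the three-level specification, the minimal sketch below encodes the ADI r&d budget at its minimum-effort, BAU, and maximum-effort levels. The dollar figures come from the case study; the dictionary layout and the level_setting helper are illustrative assumptions.

```python
# Three-level specification of a controllable variable (values in $M).
# Level 1 = minimum acceptable effort, level 2 = business-as-usual (BAU),
# level 3 = maximum ("stretch") effort.
rd_budget_levels = {1: 747.8, 2: 753.3, 3: 760.4}

def level_setting(levels: dict, level: int) -> float:
    """Return the value of a controllable variable at the chosen level."""
    return levels[level]

print(level_setting(rd_budget_levels, 2))  # 753.3, the do-nothing-different setting
```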

But how many variables are needed? For complex systems and complicated decision situations, decision makers should consider only as many variables as they can cognitively address (Bar-Yam 2003). This is what is meant by requisite variety. The chosen variables must be appropriate to the cognitive level of abstraction that the decision maker can handle. They must also be consistent with the decision objectives and with the maximum variety of the decision situation the sociotechnical system is able to handle (Ashby 1957). The requisite variety of the controller must be larger than that of the controlled system. Research shows that, in general, the number of variables is not large. Klein (1999) reports that in high-stress environments, such as fire-fighting or combat, line officers “rely on just a few factors—rarely more than three.” Isenberg (1988) writes that “… senior managers I studied were preoccupied with a very limited number Footnote 4 of quite general issues, each of which subsumed a large number of specific issues.” A study on the number of factors to predict heart failure identified five factors (Skånér et al. 1988; Hoffman et al. 1986). Another study of a $150 M investment in a pesticide product-development and manufacturing decision shows that seven variables were used for the decision (Carl-Axel and von Holstein 2004). Corner and Corner (1995), in a large survey of strategy decisions, report that in 73% of the cases they studied fewer than nine attributes (decision variables) are used, and that only six alternatives are considered. These studies support Miller’s (1956) “magical” 7 ± 2. In summary, we propose the following rules for identifying controllable variables:

  • domain knowledge and expertise are mandatory,

  • specify only as many controllable variables as the decision-maker needs and can address; a useful rule-of-thumb is Miller’s 7 ± 2,

  • ensure the variables are at a consistent scale and level of abstraction,

  • specify three levels for each of the variables,

    the highest level as one that requires maximum effort, but at the edge of impossibility,

    the lowest level that is the minimum level of acceptable performance, but at the edge of doability.

3.2.2 No Free Lunch

As in nature, in business and organizational management it is not possible to get something for nothing. This is colloquially articulated as the axiom “there is no free lunch.” It follows that the set of controllable and uncontrollable factors of a decision specification must reflect this reality. Otherwise, the ensemble of variables is meaningless; the DMU could simply specify that all controllable variables be exercised at the levels that achieve all the objectives without regard to cost or effort. That would be like trying to design a frictionless machine, a money pump, or a perpetual motion machine. In one of our case studies, we discuss the r&d budget as a controllable variable. This variable reflects the need to commit resources to meet an objective. At the same time, the customer’s budget flexibility to pay for project overruns was specified as an uncontrollable variable. This case reflects the need to identify resources as an important factor that influences intended outcomes. Every decision must reflect the no-free-lunch rule.

3.2.3 Uncontrollable Variables

Uncontrollable variables are secular variables that management cannot control, or that are so costly to control that they are, in effect, uncontrollable. Nevertheless, uncontrollable variables can have a direct and strong influence on the outcome objectives of the decision. Uncontrollable variables are the key sources of uncertainty and risk. As with controllable variables, the questions about the number of variables and their levels apply here as well. Domain experience and expertise are required to identify and use them, and their number must be cognitively manageable. Lempert (2002) reports that in a policy study of climate-abatement strategies, from a set of 60 possible uncontrollable variables, only 6 were found to produce meaningful scenario differences. The levels of the uncontrollable variables represent the extreme but realistic conditions of the secular variables that can influence the outcomes (e.g. Otto and Wood 2001).

Consider, for example, a case which we will discuss in later chapters. A consulting firm is performing a special, risky project, which will very likely require the client to make potentially costly repairs to the sociotechnical system in question. Whether the client has budget flexibility to handle potential cost overruns, and how much, is unknown to the consulting manager. Budget flexibility is thus an uncontrollable variable. The lowest cost level for the client is the condition that the client will accept no overruns, causing the consulting firm to bear all costs. The highest cost level for the client is the condition that the client will pay for all overruns to keep the project from failing. These unknowns represent uncertainties to the consulting firm, and two levels are sufficient.

3.2.4 Social Process for Identifying Essential Variables

The solution space is also a social space. The DMU is a social organization tasked with identifying the essential variables for decision situations in its jurisdiction. We now sketch a process to obtain an integrated and harmonious perspective from the participants (Rentsch et al. 2008). It is a procedure that improves individual judgements by providing complementary knowledge and information from each member of the DMU. Harmony here means the absence of mental dissonance or cognitive conflict; it does not mean that the DMU holds hands as a group and sings Kumbaya. Harmony is desirable because it promotes a complementary mental model that enriches the ones carried individually but which collectively forms an integrated whole. Complementary does not mean identical mental models, but it does mean consistent ones. The social goal is consensus, so that the diverse actions of executives with diverse responsibilities, each dealing with their distinct domain, will make sense as a whole. For example, the executive from finance is unlikely to have a mental model identical to that of a technology executive, but their mental models must cohere as a whole. The goal is to cultivate consensual sense-making to improve the alignment of goals and objectives and the coherence of action. Complementary mental models cannot be imposed; they are cultivated.

With a small group of 10–20 people, the social process is straightforward. Without great difficulty, an experienced group facilitator can readily obtain a set of 7 ± 2 controllable variables. In this case, the process we sketch next may not be necessary. However, if the group is large, we have found the procedures described next to be effective.

In the prescription that follows, we concentrate on controllable variables; it works equally well for uncontrollable variables. The process is an adaptation of the Language Processing (LP) Method of Shiba and Walden (2001). LP is a refinement of the well-known KJ method to gather and organize ideas from a group of experts (Kawakita 1991; Ulrich 2003). We have used this process in many different decision situations: for engineering design problems, strategy formulation, financial investment strategy, public-sector social creativity and innovation workshops, issue definition, and so on.

The social process has seven steps. All steps must be performed in silence. No talking is permitted to avoid discussions that inject bias and disrupt individual reflective thinking.

Step 1

Specify the goals and objectives. A goal is superordinate; it is thematic (Sect. 3.2). Goals are the “what”. Objectives are the “how and by how much”. Goals and objectives must make sense to all participants.


Step 2: Write the controllable variables. Each person in the group writes a controllable variable on a 3 × 5 Post-it® card. Each person can write as many as desired. Paste all cards on the conference table or a wall so that they can be easily read by all. The wall is now dense with cards. The variables on the cards will not be of equal importance, nor at the same level of abstraction. Some will be trivial, others inappropriate, and many will be subordinate elements of other variables. Down-selection will be necessary to organize and cull the proposed set of variables.


Step 3: Down-select variables. Begin by giving each person α dots to paste on cards, where 1 ≤ α ≤ k and k ≈ 15–25% of the group size. Each person is permitted to paste one or more dots on a card, as they wish, until their dots are exhausted. This forces choices and reveals the importance each person attaches to a variable. Cards that have no dots, or the fewest dots, can be discarded. Continue until approximately 40–60 cards are left.


Step 4: Group variables. All cards will be at random locations. Instruct the group that each person must move cards close to other cards they judge to have some kind of affinity. The affinity criteria are personal and must not be communicated; the silence rule applies. Any card can be moved an unlimited number of times. Proceed until all card movement stops and sets of grouped cards appear. There will be a few singletons that have no affinity to any group; that is permitted. The final grouping represents the group mental model that is most closely aligned with the individual mental models.


Step 5: Name each group. Each group is given a thematic name that characterizes and reflects the collective character of the cards in the group. As a result of the repeated movement of cards to others with which they have some affinity, each group now represents a decision factor. The next step is to show the relationships among the groups (factors) in a way that provides a whole-system view of the ensemble.


Step 6: Show relationships among groups. Develop a system interpretation of the ensemble of groups by making logical connections among the groups. This will require a disciplined discussion. The connections will appear as a network of groups. Follow this with an interpretation of the network as a system narrative that is consistent with the goals and the logic of the variables.

Step 7

Review and reflect on the process and its results.

3.3 Subspaces of Solution Space

The output of the solution space is the Cartesian product of two spaces, Eq. (3.3). In the next two sections, we show how to construct the controllable space and the uncontrollable space to obtain the output space.

$$ \left( controllable\kern0.17em space\right)\times \left( uncontrollable\kern0.17em space\right)=\left[ output\kern0.17em space\right]. $$
(3.3)

3.3.1 Controllable Space: Alternatives Space

Decision alternatives must be managerially actionable. The elemental building blocks of the alternatives are the set of the n controllable variables {Cij} for i = 1,2, … ,n, at each of their three levels j = 1,2,3.

Consider a simple example. Assume we have four controllable variables, C1, C2, C3, and C4 (C for controllable). We denote variable C1 at level 1 as C11; i.e. C1 (level 1) = C11, C1 (level 2) = C12, and C1 (level 3) = C13. The same naming convention applies to C2, C3, and C4. We can array this as in Table 3.2, with 4 × 3 elemental blocks from which alternatives are built.

Table 3.2 Controllable variables C1, C2, C3, and C4 at three levels each

Table 3.3 shows the complete set of actionable decision alternatives without uncertainty. It is the full factorial set obtained from Table 3.2, i.e. all possible combinations of the elements in Table 3.2. Each row of Table 3.3 represents a decision alternative as a 4-tuple configuration of the controllable variables; e.g. alternative 3 is represented as (C11, C21, C31, C41) with output y3.

Table 3.3 All alternatives in the controllable space, and their outputs. Under NO uncertainty

Notably, Table 3.3 shows how the complexity of the variety of actionable alternatives has been discretized into a finite set. The set is still large, and it grows very rapidly as the number of controllable variables increases, but we will show how it can be reduced to a manageable set. In this case, for 4 variables at 3 levels, the full factorial comprises 3^4 = 81 alternatives, as shown in Table 3.3. The number of alternatives increases exponentially with the number of factors, Eq. (3.4),

$$ number\kern0.17em of\kern0.17em alternatives={n}^f $$
(3.4)

where n is the number of levels of the variables and f is the number of factors.
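The full factorial of Table 3.3 and the count of Eq. (3.4) can be generated mechanically. The minimal Python sketch below enumerates all 3^4 = 81 alternatives for four controllable variables at three levels; the row ordering is whatever itertools.product yields and need not match the numbering in Table 3.3.

```python
from itertools import product

# Controllable variables C1..C4 (Table 3.2), each at levels 1, 2, 3.
factors = ["C1", "C2", "C3", "C4"]
levels = [1, 2, 3]

# Full factorial set of alternatives (Table 3.3): one level chosen per factor.
alternatives = [dict(zip(factors, combo)) for combo in product(levels, repeat=len(factors))]

print(len(alternatives))   # 81 = 3**4, consistent with Eq. (3.4): n**f
print(alternatives[0])     # {'C1': 1, 'C2': 1, 'C3': 1, 'C4': 1}, i.e. (C11, C21, C31, C41)
```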

Consider another example to illustrate how this combinatorial complexity rises. For 6 variables with 3 levels each, we have 3^6 = 729 alternatives. For 11 variables with 2 levels each, we have 2^11 = 2048 alternatives. This volume of alternatives is too large to analyze. This is a serious challenge: we can represent the entire space of alternatives, but it is too large to be practical. In the Operations Space (Sect. 4), we show how to uncomplicate this complexity.

The purpose of decision alternatives is to estimate how they will perform in order to select one that will satisfice the stated goals and objectives. The output of each alternative is shown in the column identified as output, yi = f(alternative i), in Table 3.3. But this output is under ideal conditions without any uncertainty, which is not realistic. How to address this is the topic of the next section, Sect. 3.3.2.

3.3.2 Uncontrollable Space: Uncertainty Space

All the alternatives will operate under some uncertainties. As with the controllable variables, we need to discretize the uncertainty space to make it manageable, and we use the uncontrollable variables to represent the space of uncertainty. As a simple example, say we have three uncontrollable variables, U1, U2, and U3 (U for uncontrollable). The subscript notation is identical to that of the controllable variables. The space of uncertainty is thus determined, as shown in Table 3.4.

Table 3.4 Uncontrollable variables U1, U2, and U3 at three levels each

Table 3.5 shows the complete set of uncertainty conditions. This is the full factorial set obtained from Table 3.4, the entire set of uncertainty conditions. The complexity of the uncertainties has been discretized into a small set. For 3 variables at 3 levels, we have 3^3 = 27 uncertainty conditions (Table 3.5).

Table 3.5 Entire set of uncertainties (uncontrollable space)

3.4 Output Space = All Alternatives Under All Uncertainties

Recall that the output space is the Cartesian product of two mutually exclusive sets:

$$ \left( controllable\ space\right)\times \left( uncontrollable\ space\right)=\left[ output\ space\right]\kern1.25em \mathrm{e}.\mathrm{g}.\kern0.4em \mathrm{Table}\;3.7 $$
(3.5)
$$ \left( controllable\kern0.17em space\right)=\left({alternative}_1,\dots, {alternative}_a\right)\kern1.25em \mathrm{e}.\mathrm{g}.\mathrm{Table}\;3.3 $$
(3.6)
$$ \left( uncontrollable\kern0.17em space\right)=\left({uncertainty}_1,\dots, {uncertainty}_u\right)\kern1em \mathrm{e}.\mathrm{g}.\mathrm{Table}\;3.5 $$
(3.7)
$$ \mathrm{thus}\;\left({alternative}_1,\dots, {alternative}_a\right)\times \left({uncertainty}_1,\dots, {uncertainty}_u\right)=\left[{outputs}_{au}\right]= matrix\kern0.17em of\; all\; outputs\kern0.17em under\; all\; uncertainties $$
(3.8)

Schematically, the output matrix looks like Table 3.6:

Table 3.6 Schematic of the output space

Each matrix entry, such as output32, represents the DOE-predicted output from alternative 3 under uncertainty condition 2. The schematic is filled in as in Table 3.7. The remainder of this section shows how to derive the outputs.

Table 3.7 Complete set of alternatives set under entire set of uncertainty conditions

The universe of alternatives under certainty is the set {alternative α}, where 1 ≤ α ≤ 81. The universe of alternatives under uncertainty is the set of these 81 alternatives under each of the 27 uncertainty conditions. Thus the number of alternatives under uncertainty is 3^4 × 3^3 = 81 × 27 = 2187. This set is shown in shorthand in Table 3.7. We have discretized the complexity of the entire set of alternatives under the entire set of uncertainties by the Cartesian product of two discrete sets.
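The construction of Table 3.7 follows directly from Eq. (3.8): every alternative is paired with every uncertainty condition. The minimal sketch below enumerates the 2187 pairs; the predict function is a placeholder for the DOE-based phenomenological model developed in the Operations Space and is purely an illustrative assumption.

```python
from itertools import product

# Alternatives: 4 controllable variables at 3 levels each (Table 3.3).
alternatives = list(product([1, 2, 3], repeat=4))    # 81 alternatives
# Uncertainty conditions: 3 uncontrollable variables at 3 levels each (Table 3.5).
uncertainties = list(product([1, 2, 3], repeat=3))   # 27 uncertainty conditions

def predict(alternative, uncertainty):
    """Placeholder for the DOE-derived prediction of the output under one pairing."""
    return None  # the phenomenological model of the Operations Space goes here

# Output space (Table 3.7): every alternative under every uncertainty condition.
output_space = {
    (a, u): predict(alt, unc)
    for a, alt in enumerate(alternatives, start=1)
    for u, unc in enumerate(uncertainties, start=1)
}

print(len(output_space))  # 2187 = 81 * 27, the matrix of Eq. (3.8)
```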

We have discussed three important points.

  • How to represent the entire set of decision alternatives under certainty (Eq. 3.6, Table 3.3).

  • How to represent the entire set of uncertainty conditions (Table 3.5).

  • How the Cartesian product of the alternatives and the uncertainty space produces the set of alternatives within every uncertainty condition (Table 3.7).

Clearly the complexity of the output set {yua} is sizeable.

In the next section we will show how to reduce the size of this set, how to estimate the outcomes for this reduced set, and how to construct the optimally robust decision alternative (Klein 2001).

3.5 Base Line = Do-Nothing-Different Case = Business As Usual (BAU )

We need alternatives to find better prospects for the organization. Any improvement requires a reference point to evaluate results. It is natural and convenient to establish the reference point as the current state of the controllable variables and the uncontrollable variables. This is reasonable and practical. There is data on organizational performance and information on the uncontrollable variables. This is the current state of the decision specification; it is also the condition should the decision-makers choose to do nothing different. Taking no new action leaves the organization to run “business as usual”. This is the origin for using the expression business-as-usual (BAU ). It can be said that the idea of executive decisions is to improve on BAU or to confirm BAU as suitable.

However, going forward, we cannot assume the uncontrollable environment will remain unchanged while we do nothing. Therefore, in addition to the base-line’s specification, we need to complete four additional tasks, viz. specifying: (i) the current state of the controllable variables, (ii) the current state of the uncontrollable variables, (iii) one or more specifications of more favorable states of the uncontrollable variables, and (iv) one or more specifications of less favorable states of the uncontrollable variables. More or less favorable conditions are defined relative to the actual state of uncontrollable conditions. In the paragraphs that follow, we will show how to do these tasks. We will use Tables 3.2, 3.3, 3.4, and 3.5 for a hypothetical example.

Assume the four controllable variables—C1, C2, C3, and C4—are used to characterize the actual state (Table 3.3), and that the current state is specified by C1 at level 3, C2 at level 2, C3 at level 1, and C4 at level 3, i.e. (C13, C22, C31, C43) specifies the actual state of the executive-management decision. This is alternative 66 in Table 3.3.

Specify the configuration of the actual state of the uncontrollable variables. For example, consider the hypothetical set of uncontrollable variables described in Table 3.4. The uncertainty-state of the actual decision situation is an element in Table 3.5. Suppose the actual uncertainty state is represented by uncontrollable variables U1 at level 2, U2 at level 1, and U3 at level 2, i.e. (U12, U21, U32). This is uncertainty condition #11 in Table 3.5. Specification of this actual condition poses no difficulty; it is self-evident by observation from the DMU. The next two tasks are:

  • first is the specification of one or more favorable uncontrollable conditions, and

  • second is the specification of one or more less favorable uncontrollable conditions.

More or less favorable conditions are relative to the actual state of uncontrollable conditions. Suppose the less favorable uncontrollable condition is (U11, U22, U31) and the more favorable uncontrollable condition is (U13, U22, U32). We can line up the set of uncontrollable conditions facing the BAU decision alternative from least favorable, to BAU, to most favorable as: {(U11, U22, U31), (U12, U21, U32), (U13, U22, U32)}. In summary, putting it all together, the BAU uncontrollable situations are represented by Table 3.8.

Table 3.8 BAU baseline

The number of uncontrollable environments is not limited to the three shown in Table 3.8. Many more can be described. For example, in addition to those in Table 3.8, one could specify a much more favorable state and a very unfavorable state. As a practical matter, we do not recommend more than five states because doing so increases the cognitive load and makes the set of uncontrollable states excessively complicated. In this example, the DMU must record three results to complete the table. “Actual performance” data can be easily obtained from actual business records. The other performance data are provided by each member of the DMU working independently. It is not advisable to exert peer pressure or foment herd instincts, which produce a false convergence. The task of each DMU member is to fill each of the cells marked “enter your forecast” with a value that represents their best professional judgment. Each DMU member is permitted and encouraged to consult their staff, non-DMU colleagues, or subject matter experts to arrive at their forecasts. But DMU members are prohibited from communicating with each other. Research findings show that knowing others’ preferences degrades the quality of group decisions (e.g. Mojzisch and Schulz-Hardt 2010). The rule of non-disclosure of individual forecasts is supported by research.

There is no such thing as a “right” or “wrong” forecast. It is a forecast, a professionally informed judgement; it is what the literature calls “judgmental forecasting” (e.g. Fildes et al. 2009; Wright and Rowe 2011). DMU members must not automatically make symmetric intervals centered on “actual” for the “more favorable configuration” and the “less favorable configuration” of the uncontrollable environment. There is no logical reason to suppose they should be; but they can be for special and explainable situations.

The DMU facilitator averages the inputs for each uncontrollable environment to produce the “current,” “worst,” and “best” forecasts. This average represents the forecast of the DMU as a group. Averaging is a specific method of combining forecasts (e.g. Armstrong 2001; Makridakis 1989; Makridakis and Winkler 1983). Averaging of independent forecasts is recognized in the literature as a valid method of group-based judgmental forecasting. The requirement is that the forecasts must be arrived at independently and using a systematically developed and documented procedure that can be replicated consistently (Armstrong 2001). Hibon and Evgeniou (2005) are less sanguine and report that special experiments reveal that combining is inferior to the best alternative, but acknowledge that “[A] limitation of this study is on how to choose among methods or combinations in an optimal way (Hibon and Evgeniou 2005, 23)”. This is a severe limitation. As such, their findings are not actionable and thus of limited practical use. Therefore, we concentrate on the reasons that make combining effective. There are practical reasons why combining forecasts is useful. Combining reduces errors from bias and flawed assumptions. “Combining forecasts improves accuracy to the extent that the component forecasts contain useful and independent information (Armstrong 2001).” The key questions of debiasing and of useful and independent information are addressed in the next section, Sect. 3.6.
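A minimal sketch of this combining step follows; the member forecasts and condition labels are invented for illustration and are not data from the text.

forecasts = {
    "less favorable": [820.0, 790.0, 860.0, 805.0],   # one entry per DMU member
    "actual (BAU)":   [900.0, 910.0, 880.0, 905.0],
    "more favorable": [980.0, 1010.0, 955.0, 990.0],
}

# The facilitator's combined (group) forecast is the simple average per condition.
group_forecast = {cond: sum(vals) / len(vals) for cond, vals in forecasts.items()}
print(group_forecast)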

3.6 Debiasing Social Process

Granger, the 2003 Nobel laureate in economics, observed that “aggregating forecasts is not the same as aggregating information … (Wallis 2011, 15).” Kerr and Tindale (2011) confirm the view that information exchange among group members strengthens benefits beyond averaging. Scholars’ findings reveal a series of crucial points that must be considered in the forecasting social-process. The importance of non-disclosure has already been pointed out (Mojzisch and Schulz-Hardt 2010). We have also addressed the pivotal role of an able facilitator during the social process (Regan-Cirincione 1994). The work of Schulz-Hardt et al. (2006) finds that group discussions that go beyond anonymous Delphi-type meetings and also encourage face-to-face discussion improve the quality of the group members’ judgements. Significantly, Wright and Rowe (2011) and Russo and Schoemaker (1992) find that dissenting discussions are very effective in forecasting group meetings, especially when openly exercised by a heterogeneous group (Yaniv 2011). Constructive dissent, with meaningful new information and informed judgments, is useful. In fact, counterfactual thinking foments creative thinking in problem solving (e.g. Markman et al. 2007).

In this section, we put these findings to work in a forecasting social-process geared to debiasing mental models and improving group and individual performance. We discuss what kinds of information are needed to debias mental models, what the debiasing social process is, and what the requirements are on the composition of the DMU membership. We then present our facilitated social process, which combines this information to improve the accuracy of the group’s judgements. Regan-Cirincione’s (1994) work shows this kind of integrated process is effective in improving forecasting quality.

We begin with counter-argumentation (Russo and Schoemaker 1992) as the central debiasing social process. Counter-argumentation is designed to mitigate the danger of groupthink (Janis 1992; Carroll et al. 1998), narrow framing (Tversky and Kahneman 1974; Russo and Schoemaker 1989), and false anchoring (Baron 2000). Counter-argumentation procedures reduce systematic biases by insisting on explicit, but anonymous, articulation of the reasons why a forecast derived from mental models might be correct and why it might not be correct (Fischhoff 1999; Russo and Schoemaker 1992; Arkes 2001; Koriat et al. 1980). The strategy is to search for disconfirmatory information in group decisions to debias and improve accuracy (Kray and Galinsky 2003). Our debiasing approach insists on counter-argumentation without disclosure or discussion of the forecast figures so that “concentering” (Roth 1995) takes place without peer pressure, which can drive a false convergence (Mest and Plummer 2003; Hanson 1998; Boje and Murnighan 1982). Counter-argumentation also improves the DMU’s effectiveness in problem solving by enriching and complementing team members’ individual mental models (Mohammed et al. 2010; Mohammed and Dumville 2001; Kray and Galinsky 2003; Lerner and Tetlock 2003). Winquist and Larson (1998) show that pooling of fresh information, which is shared, improves decision quality and the ability to conceptualize alternatives.

Emphasizing the reasons both for strong confidence in a forecast and for its weaknesses, combined with anonymity and with discussions of rationale and logic (rather than numbers), puts a premium on information, knowledge, and meaning. All of these are important because they can positively influence accuracy (e.g. Ashton 1985; Dawes and Mulford 1996; Winquist and Larson 1998). Another vital aspect of counter-argumentation cannot be overlooked. The diversity of the group doing the decision analysis (Yaniv 2011; Cummings 2004; Cummings and Cross 2003) is important so that rich, subtle, and nuanced arguments are brought to the table. To these ends, our debiasing processes are an adaptation of Lerner and Tetlock’s (2003) framework, which considers all these factors (Appendix 3.4). Leading management consultants use a similar approach to debias and enrich information exchange (Sorrell et al. 2010). Debiasing is designed to “activate integratively-complex thought that reduces biases” (Appendix 3.4). The framework also “predicts that integratively-complex and open-minded thought is most likely to be activated when decision makers learn prior to forming any opinions that they will be accountable to an audience (a) whose views are unknown, (b) who is interested in accuracy, (c) who is reasonably well informed, and (d) who has legitimate reason for inquiring into the reasons behind participants’ judgments/choices (Lerner and Tetlock 2003).”

We introduce accountability into the “integratively-complex and open-minded” thought process by means of counter-argumentation and learning through feedback. For the BAU case, we have two forecasting rounds. Counter-argumentation is done at the end of the first round before moving to the next one. The first round includes a discussion session where the counter-arguments are disclosed (without attribution) and openly discussed. We then proceed to the second round of BAU forecasting. We ask the participants to record their individual confidence level at the end of each round because we would like to know the effect on confidence resulting from the new information disclosed during counter-argumentation. This is why complementary, heterogeneous knowledge and DMU membership is important (Mohammed et al. 2010; Banks and Millward 2000). At no time are the actual forecast figures permitted to be disclosed to or discussed with anyone in the group.

Documented rationales, in support or in doubt of individual forecasts, are anonymously disclosed to the DMU. This is followed by a discussion of each documented rationale. Because the rationales are anonymous, there is less posturing and defensiveness than is expected from these types of discussions. The goal is for the DMU members to learn from each other and from each other’s reasons, not from the actual forecast numbers. Using the documented rationales as a whole, we ask that each person review, reflect, and adjust their individual forecasts in light of the new information. The adjustments are made individually; the no-discussion rule still applies.

Everyone is reminded that we are not seeking consensus numbers, but improved judgment in light of new and complementary information and improved tacit knowledge (e.g. Polanyi 2009; Erden et al. 2008). There is no evaluative appraisal implied or penalty imposed just because the forecasts differ from person to person. The differences reflect the distinct domain expertise and tacit knowledge each individual brings to their forecasts. The revised forecasts are used to calculate the averages, as discussed in the previous section, Sect. 3.5. Figure 3.7 shows typical results of this debiasing procedure.

The figure shows the forecasts from round 1 (without debiasing) and round 2 (after debiasing) for the actual (or current), worst, and best uncontrollable situations. In the current situation (the left-hand panel), the mean has not changed but the variation is smaller. In the worst uncontrollable situation (the middle panel), the mean is not as bad as initially judged and the variation has diminished substantially. In the best uncontrollable situation (the right-hand panel of Fig. 3.7), the mean is not as good as initially estimated, but the variation has declined substantially. This suggests that debiasing has introduced new information and knowledge to each DMU member and that the judgments have improved.

Fig. 3.7
figure 7

Improved forecasts before and after debiasing

We have operationalized Nobel laureate Granger’s requirement for aggregating information (Wallis 2011) and have prescribed a debiased social process as well.

4 Operations Space

4.1 New Strategy: Gedanken Experiments to Uncover System Behavior

We now focus our attention on the operations space. Figure 3.8 is a schematic of this space.

Fig. 3.8
figure 8

Schematic of the operations space

The key question in the operations space is: “How do we determine the behavior of the sociotechnical organization that will implement a decision’s specifications?” The complex, messy, and wicked nature of the situation, and of the sociotechnical systems responsible for implementing solutions, surfaces the next crucial questions: “How do we determine the system behavior of the decision specification? What is the quality of implementation? What is the quality of the DMU’s estimates?” Decision processes are not like technical systems, which can be characterized ex ante by means of the physical sciences and their equations, and modeled with accuracy and precision. Through very detailed and comprehensive surveys supported by interviews, a complex enterprise can be modeled. But this is a labor-intensive and protracted process. The model for the ADI company, which we will use as a case study to illustrate our paradigm, has over 200 loops of interactions and over 600 variables. It took months for MIT faculty members to model, calibrate, and run simulations for analyses and to gain meaningful and useful insights. The question is then: “Is there a way to obtain the same result in a substantially more efficient way?” Our answer is in the affirmative. This is what this book is about. But how?

First, by eschewing the traditional thinking of developing ex ante analytic models. Second, by insisting on a fresh strategy, one that does not presume to know the explicit analytic equations that represent the sociotechnical system’s machinery. Unlike the conventional approach, which presumes knowledge of mathematical expressions among variables to represent sociotechnical systems’ behavior, we use experiments. Using gedanken experiments, we observe and measure the behavior and output of the sociotechnical system. Ex post, we infer system-behavior patterns from the outcomes of the gedanken experiments’ alternatives; these patterns reveal a phenomenological representation of the system. Phenomenology is a scientific methodology to describe and explain observations. Appearance reveals and explains reality (Smith 2013; Otto and Wood 2001).

Using what kind of experiments? Gedanken (thought) experiments (e.g. Sorensen 1992; Brown and Yiftach 2014). Gedanken experiments are structured tests designed to answer or raise questions about a hypothesis without the need for physical equipment, but whose results can be observed and measured.

An experiment is a test (e.g. Montgomery 2001). An experiment is a well-structured and purposeful procedure to investigate a principle, hypothesis, or phenomenon. The principles, hypotheses, or phenomena can be about nature, systems, processes, philosophy, and so on. Our experiments concentrate on the behavior of corporate systems and processes resulting from potential decisions specifications. The goals of our experiments are to understand and to determine the behavior and performance of the sociotechnical systems that operationalize a decision specification.

The vast majority of experiments are performed with physical apparatus, e.g. Michelson and Morley’s celebrated inquiry about the speed of light (Michelson and Morley 1887) and CERN’s experiments to find the Higgs boson. But many equally insightful experiments can be performed without any physical artifacts, like Galileo’s gedanken experiment on the speed of falling objects, Maxwell’s demon, Schrödinger’s cat, Einstein’s falling elevator, and so on. These are famous examples of gedanken experiments.

Galileo’s experiment, on the question of whether heavier objects fall faster than lighter ones, is an exemplar of gedanken experiments. Contrary to apocryphal accounts, he did not drop objects from the Tower of Pisa. He supposedly arrived at his legendary scientific conclusion by reasoning. He imagined dropping a heavy and a light object that are “bundled” together. If a heavier object falls faster than a lighter one, then the bundle would fall faster than the heavy object alone. But since the bundle contains a lighter object, the lighter object should slow down the fall of the bundle. The bundle cannot fall both faster and slower; this contradiction implies that they fall at the same speed. With no physical equipment, he showed that heavy and light objects fall at the same speed.

The experiments about such questions are framed and manipulated by “varying and tracking the relations among variables” (Hopp 2014, 250). Gedanken experiments are performed using mental models that require experts’ domain expertise and tacit knowledge (e.g. Polanyi 2009; Erden et al. 2008). Tacit knowledge is layered on detailed cumulative understanding of the particulars, experience, failures, and effective practice. Tacit knowledge is not something that can be acquired from books, manuals, and the like. Driving is tacit knowledge, heart surgery is tacit knowledge, and so is piloting an F-35 fighter jet. This kind of knowledge is acquired by doing, which is why the use of gedanken experiments is so useful to seasoned scientists and engineers. Our executive-management decision-paradigm uses gedanken experiments, as well as real tests, for confirmatory and disconfirmatory data to analyze outcomes and execution quality.

4.2 Operations Space: Conceptual Framework

Regardless of the type of experimental assets that are used, physical or intellectual, and given the objectives of our investigation, the fundamental questions that need to be addressed are:

  • What kind of experiments do I need?

  • What is a sufficient and comprehensive number of experiments?

  • How will the findings improve my understanding of the problem?

  • Is there a science to address this?

The science to address these questions is called Design of Experiments (DOE ). DOE answers these questions by first positing that a system or a process can be represented by a simple, abstract, and uncomplicated construct shown in Fig. 3.9.

Fig. 3.9
figure 9

Schematic of gedanken experiments for executive-management decision

The input variables of the system or process, {C1, … , Cp}, are managerially controllable by the experimenter, and {U1, … , Uq} are managerially uncontrollable. The response (output) is given by y = f(C1, … , Cp, U1, … , Uq). In our use of DOE, the experiments are gedanken experiments about decision alternatives. The gedanken experiments are about the sociotechnical systems, processes, and organizational units that implement the decision specifications committed by the decision-maker. The arrows pointing up represent the methods and mechanisms used by the organizational units to execute and implement. The key methods are DOE, Measurement System Analysis (MSA) (AIAG 2002), debiasing procedures, and evaluation methods for decision outcomes and implementation quality.
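To make this construct concrete, here is a minimal sketch in which the transfer function is represented only by elicited input-output pairs; the settings and values are hypothetical, and the names are ours.

# y = f(C1, ..., Cp, U1, ..., Uq): f is never written down ex ante; it is observed
# ex post, one gedanken treatment at a time.
elicited_outputs = {
    # (controllable levels), (uncontrollable levels) -> judged output of one treatment
    ((3, 2, 1, 3), (2, 1, 2)): 812.0,
    ((1, 3, 3, 3), (1, 1, 2)): 655.0,
}

def response(C, U):
    # Look up the observed/judged output of a single gedanken experiment.
    return elicited_outputs[(C, U)]

print(response((3, 2, 1, 3), (2, 1, 2)))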

DOE is an experimental strategy to determine the kind of experiments and the sufficient number required to systematically make inferences and predictions about the behavior of a system. DOE allows the experimenter to determine the phenomenological behavior of the system/process. The idea is to use the set of responses from the experiments to fit a relationship over the design space of controllable and uncontrollable variables (also called factors) (Otto and Wood 2001).

“A well-organized experiment, followed by thorough data analysis … can provide answers to the following important questions:

  • Which are the most important factors affecting the performance characteristics?

  • How do the performance characteristics change when the factors are varied?

  • What is the joint influence of factors on the performance characteristics?

  • Which is the optimal combination of factor values?” (Vucjkov and Boyadjieva 2001).

DOE presents methods to answer the questions of the kind of experiments that can be constructed, the ways to analyze them, and how to make reasoned and informed predictions about the output y. A key feature of the DOE methodology is that it provides methods to determine the smallest number of experiments that will satisfy the criteria of sufficiency, comprehensiveness, and ability to predict outcomes and variances over the entire space of alternatives under uncertainty.

4.3 Design of Experiments (DOE )

The genesis of modern experimental methods is recent. We follow Montgomery (2001) and Wu and Hamada (2000) in punctuating the historical development of DOE into stages of progressively more sophisticated methodologies and increasing applications in different domains of inquiry. (Appendix 3.1 shows a sample of typical engineering problems studied using DOE.) We extend this progression by including our executive-management decision paradigm as the most recent advance in DOE. We will discuss the reasons why this is a frame-breaking and challenging undertaking.

Stage 1 is the agricultural era. Fisher, the inventor of the DOE methodology (Fisher 1966; Box 1978), was interested in producing high-yield crops under different controllable and uncontrollable variables of water, fertilizer, rain, sunshine, and other factors. He systematically formulated experiments, which specified crop treatments with different combinations of variables at different values. Some plots were fertilized, others not; some were irrigated more intensely than others; and so on. Of course, they were all subjected to many different uncontrollable conditions, such as rain, sunshine, and so forth. As one would expect, the set of possible treatments became very large, and the variety of uncontrollable conditions were many. The combinatorial explosion of controllable and uncontrollable factors grew very large. (His term, “treatments”, is still used today as a synonym for “experiments”.) To address this complexity, Fisher devised methods to reduce the number of experiments to merely a fraction of the total possible experiments. And to analyze experimental results, he created the Analysis of Variance (ANOVA) as a new statistical method to study the joint effect of many factors. Fisher also articulated the renowned experimental principles of randomization, replication, and blocking. His work stands as a landmark in original and practical thinking.

Stage 2 is the industrial era, ushered in by Box and Wilson (1951), a statistician and a chemist, respectively. They recognized that, unlike protracted agricultural experiments, chemical and process types of experiments can produce results with much greater immediacy. Learning from immediate results, they were able to rapidly plan an improved next experiment. Armed with this insight and more sophisticated statistical methods, they developed the Response Surface Methodology (RSM). RSM is a sequential procedure. The objective is to move incrementally from the current operating region to an optimum. The investigator begins with simple models, and as knowledge about the solution space improves, more advanced models are used to explore the regions of interest and to determine the extremum. Box and Wilson’s (1951) innovation was to demonstrate the efficacy of Fisher’s method in another domain of inquiry.

Stage 3 is the product and manufacturing quality era. Taguchi (1987, 1991) introduced the DOE and the concept of robustness for use in product design and manufacturing. A product or a process is robust when:

  • the performance, its response or output, is highly insensitive to uncontrollable or difficult-to-control environmental factors even when they are not removed,

  • the performance is insensitive to variations transmitted from uncontrollable variables of the exterior environment.

Using Taguchi’s innovations in DOE methods, robustness is achieved through robust product design (e.g. Phadke 1989; Taguchi and Clausing 1990; Fowlkes and Creveling 1995; Vucjkov and Boyadjieva 2001; Otto and Wood 2001). The design engineer specifies settings of controllable variables that drive the mean response to a desired value, while simultaneously reducing variability around this value. It is rare that both of these objectives can be met simultaneously; the designer must make an artful compromise. Taguchi defined signal-to-noise ratio heuristics that simplify this task. He further simplified the task of designing treatments by providing another innovation, the specification of comprehensive sets of pre-defined treatments in the form of orthogonal arrays, also called Taguchi arrays. These arrays are sample subsets of the entire set of experiments from which one can predict the outcome of any experiment. These arrays vastly reduce the number of alternatives that need to be analyzed and considered. It is a breakthrough complexity-reduction mechanism.

We saw in Sect. 3.1 that the number of alternatives rises as the exponent of the number of factors, and by including uncertainty, the complexity escalates further. Table 3.9 shows the growth of combinatorial complexity and the dramatic efficiency of Taguchi-array sampling. Using these arrays, one can predict the results of any other alternative in the entire space. For example, with 10 variables, one of which is specified at 2 levels and 9 of which are specified at 4 levels, a sample set of 32, as defined by a Taguchi array, suffices to predict outcomes over the entire space of 262,146. Sampling efficiency is, therefore, 1−[(32)/(262,146)] = 99.998+%. We will use this approach to address the combinatorial complexity of alternatives in Chap. 9. Table 3.9 presents more detail.

Table 3.9 Sampling efficiency of Taguchi arrays

Consider the first entry in Table 3.9. In this case, we have 4 controllable variables at 3 levels each. The full factorial set consists of 3⁴ = 81 experiments. However, with a sample of 9 experiments, we can predict the outcomes for the entire set of 81 experimental constructs (Table 3.10). The sampling efficiency is [1−(9/81)] = 88.889%. This sample of 9 experiments is known as the L9(3⁴) array: L stands for Latin square, or orthogonal array, 3 is the number of levels of the variables, and the superscript 4 stands for the number of variables. A small sketch of this array appears after Table 3.10.

Table 3.10 Nine experiments suffice to predict the full factorial of 81 experiments
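The sketch below lists the standard L9(3⁴) array and checks its balance property (every pair of columns contains each factor-level combination equally often); it is illustrative code of ours, not a reproduction of the book’s tables.

from itertools import combinations
from collections import Counter

L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

for i, j in combinations(range(4), 2):
    counts = Counter((row[i], row[j]) for row in L9)
    assert set(counts.values()) == {1}   # balanced: each pair of levels appears exactly once
print("9 runs stand in for the 3**4 = 81 full-factorial experiments")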

These ideas have been successfully applied in a wide variety of engineering applications, e.g. Wu and Wu (2000), Clausing (1994), Phadke (1989) and Taguchi et al. (2000). Specific examples of applications are in Appendix 3.3.

Stage 4 is what this book is about. Leveraging the aforementioned achievements, we apply DOE to the practice of executive-management decisions. But we must do more than that. We must do so in a complementary, meaningful and insightful way. This is a challenging undertaking. For example, in each of the previous stages, scientists and engineers have taken the lead in the innovations of DOE . To inform them, they had the singular advantage of well-established science, its laws, and theorems for sense making, framing, and problem solving.

The practice and the discipline of executive-management does not have these advantages to nearly the same extent. The problems and opportunities facing executive-management tend to be ill-structured, messy, and wicked. The field of executive-management decisions is a sociotechnical discipline. The interplay between social and technical variables creates a set of unique dynamics. We must integrate and synthesize findings from thinkers in complex systems, cognitive psychology, organizational theory, economics, and managerial practice. To understand and appreciate the significance of DOE in decision theory and practice, we must first develop some strong intuition about the methodology. This is the subject of the remainder of this chapter.

4.3.1 DOE Foundations

DOE has three pillars: Analysis of Variance (ANOVA), regression analysis, and the principles of DOE (e.g. Vucjkov and Boyadjieva 2001; Wu and Hamada 2000).

ANOVA is a statistical method to quantitatively derive, from multivariate experimental data, the relative contribution that each controllable variable, interaction, and error makes to the overall measured response. Common practice is to present the results of an experiment using an ANOVA table, as shown below for two factors A and B (Table 3.11) (Montgomery 2001).

Table 3.11 Analysis of Variance table for two-factor fixed effects model

The second pillar is regression analysis. Regression analysis is a powerful method for model building because experimental data can often be modeled by a general linear model (also called the regression model). The response (output) y is related to p variables x1, … , xp as y = Σβixi + ε. If we have N observations y1, … , yN, then the model takes the linear polynomial form yi = β0 + β1xi1 + ⋯ + βpxip + εi, with i = 1, … , N. These N equations are written y = Xβ + ε in matrix form, where X is the N×(p + 1) model matrix. Since the experiment gives us only a sample, we estimate \( \widehat{\boldsymbol{y}}=\boldsymbol{X}\widehat{\boldsymbol{\beta}} \), and from least squares we obtain \( \widehat{\boldsymbol{\beta}}={\left({\boldsymbol{X}}^{\prime }\boldsymbol{X}\right)}^{-1}{\boldsymbol{X}}^{\prime }\boldsymbol{y} \). Using the R2 statistic we can determine the proportion of total variation explained by the fitted regression model \( \boldsymbol{X}\widehat{\boldsymbol{\beta}} \), and using the F statistic we can obtain the p-values for the explanatory variables x1, … , xp (e.g. Vucjkov and Boyadjieva 2001; Wu and Hamada 2000). (We will discuss this topic in more detail in the Results Space, Sect. 5, and show how it is used in case study applications in later chapters.)
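A minimal numeric sketch of this least-squares step follows, with invented data; it simply reproduces the estimate (X′X)⁻¹X′y and the R2 statistic defined above.

import numpy as np

X = np.array([[1, 1.0, 2.0],
              [1, 2.0, 1.0],
              [1, 3.0, 4.0],
              [1, 4.0, 3.0],
              [1, 5.0, 5.0]])                 # N x (p+1) model matrix; first column is for beta_0
y = np.array([3.1, 3.9, 7.2, 7.8, 10.1])      # invented responses

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # least-squares estimate of the coefficients
y_hat = X @ beta_hat
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(beta_hat, r_squared)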

The third pillar is the set of principles first formulated by Fisher (1966): randomization, replication, and blocking. Randomization is a fundamental principle of any statistical analysis. It refers both to the allocation of experimental assets and to the time and sequence in which treatments are performed. Randomization minimizes the impact of any systematic bias that may exist. Replication is a distinct concept about repeated measurements of a single experiment. It refers to performing the same experiment several times and taking measurements for each run. Replication permits us to determine the repeatability and reproducibility of experiments. Blocking is a way to control for factors that are not considered critical to the response of the experiment, e.g. the time or day when the experiment is performed or the supplier of materials.

4.3.2 Advantages of DOE

There are many practical attributes that make DOE useful. The salient ones are discussed next.

Demonstrably Effective

DOE methods are widely researched, reviewed in refereed journals, and documented in the literature. Wu and Hamada (2000) present 80 examples in their book. Frey et al. (2003) identify a very wide variety of applications in engineering and science. Antonsson and Otto (1995) use Taguchi methods of DOE in product design. Clausing (1994), based on his experience at Xerox, presents examples of how Taguchi methods were used at different phases of the product development life cycle. Fowlkes and Creveling (1995) do the same based on their experiences at Kodak. Taguchi et al. (2000) and Wu and Wu (2000) present data, models, and analysis from numerous successful industry experiments. Appendix 3.1 contains a sample of DOE applications in engineering with pointers to references.

Addresses Key Difficulties

DOE’s statistical methods overcome many difficulties facing an experimenter. The key difficulties are noise, complexity, interactions, and causation versus correlation (Box et al. 2005). Noise is a major source of uncertainty. DOE clearly separates controllable variables from uncontrollable variables (noise variables) to analyze the effect of the interactions among the controllable and aleatory variables on the output. The ANOVA table reports the factor interactions (Sect. 5.2). To address complexity, accumulated empirical evidence has distilled three very practical principles for the analysis of factorial effects: the hierarchy, sparsity, and heredity principles (Wu and Hamada 2000). Hierarchy means that n-factor effects dominate (n + 1)-factor interactions, n ≥ 1. Sparsity asserts that the number of important variables in a factorial experiment is small. This is important because it naturally reduces complexity. Heredity is the observation that for an interaction to be significant, at least one of its parent factors should be significant. For causation and correlation, the “interplay between theory and practice” must come together (Box et al. 1978). Experimenters must rely on domain knowledge and working principles to construct relationships among variables.

“Black-box” Approach

A distinctive DOE advantage is its phenomenological approach to the analysis of systems and processes. The system under investigation is considered a “black box”; provided the inputs and variables are known, we can characterize the behavior of the system, ex post, by analyzing its output. The ability to view systems phenomenologically as a black box, combined with the ability to consider the effect of uncontrollable variables, gives us the ability to make predictions about the performance of complex systems. And very significantly, we can design and build systems or processes to be robust against noise, i.e. against the effect of uncontrollable conditions. These “black-box” benefits are particularly useful when the experimenter may not know, or be able to express ex ante, the behavior of the product or system with equations. Using DOE methods the experimenter can, ex post, empirically derive a transfer function that represents the behavior of the system over the solution space. All these are significant and practical advantages in studying and solving challenging executive-management decision situations.

4.3.3 New Idea: DOE for Executive-Management Decisions

Applications of DOE for management decisions made at senior corporate-executive levels are barely visible in the literature. The role of experiments in business appears narrowly limited to product-concept screening and product testing during the early phases of product development. This is useful and traditional. To our knowledge, these methods do not explore the entire solution space under all uncertainty conditions; the large majority of “what-if” questions remain a mystery. Work on a problem of optimal scheduling of earth-moving equipment using simulations with a queueing model is reported by Smith et al. (1995). There the objective is to find the optimal setting of variables to optimize the output. They do not appear to exploit uncontrollable variables, so the system effects under uncertainty remain largely unexplored. Marketing scholars test a variety of mixes of product, price, promotion, and place (Kotler and Keller 2009) for consumer products (Almquist and Wyner 2001). But the use of uncontrollable variables is not discussed and therefore the effect of uncertainty is indeterminate. Thomke (2001, 2003a, b) argues that experiments using prototypes, computer simulations, and field tests of service offerings should be integrated into a company’s business process and management system. We impose the following challenging requirements to extend and complement, to a new level, the usefulness of these ideas. The designed decision alternatives must be:

  • the result of explorations of the entire solution space anywhere in the entire space of uncertainty, i.e. uncontrollable conditions,

  • robust under uncontrollable conditions even when these conditions are not removed,

  • artefacts of technical and social processes, which have been systematically designed, and

  • have been subjected to debiasing procedures.

These requirements are novel to executive-management decisions and address a significant void in research and in practice.

A goal of this research is to address this void. There is an abundance of research literature on DOE applications in engineering, manufacturing, and the sciences, but their absence in managerial applications is conspicuous. This can be explained by the fact that the traditional applications are in disciplines rooted in the sciences, engineering, or operations research. Experimenters in these disciplines have the benefit of the laws of physics and their analytic equations to guide them in identifying variables and framing their experiments. Students of corporate decisions do not have these advantages to nearly the same extent, which is why the science of DOE is so new and practical for executive-management.

4.3.4 Our Use of DOE

Our strategy is to approach executive-management decisions as engineers of complex sociotechnical systems. We use product development as an analogy for engineering decisions: the former is about physical products, the latter about intellectual artifacts. Like engineered products, executive-management decisions must be systematically planned, designed, and operated to perform to specifications. These considerations, and the advantages of DOE, motivate us to use DOE to frame our executive-management decisions and design decision alternatives. To address uncertainty, the DOE methodology unambiguously distinguishes controllable and uncontrollable variables. It also provides us with methods to analyze their interactions and effects on the system that generates the output.

5 Performance Space

5.1 New Strategy: Robustness, Repeatability, Reproducibility, Reflection

There are four key questions in the performance space (Fig. 3.10). One, what is the structure of the decision specifications? Two, what is the production quality of the sociotechnical processes that enact the decision specifications, i.e. the consistency and trustworthiness of the production system? Three, what is the quality of the input data and of the DMU’s forecasts? And four, can we learn from good and bad outcomes by reflecting on the experiences throughout the life cycle? The machinery we will use to answer these questions is the ANOVA statistics, the DOE main-effects and standard-deviation response tables, and the Gage R&R statistics of Measurement System Analysis (MSA).

Fig. 3.10
figure 10

Schematic of the performance space

5.2 Analyzing Data: Analysis of Variance (ANOVA)

Our gedanken experiments use a number of controllable input variables that interact with a number of uncontrollable variables. DMU members, adjunct experts, and their organizations identify these variables. The questions of interest are:

  • which controllable variables are most important? What are the intensities of their contributions to the outcomes?

  • do the controllable variables explain sufficiently the observed outputs? That is, is there additional important information that was missed?

  • are we learning from what we are doing and from the results we are getting?

ANOVA is a statistical method to quantitatively estimate the % contribution that each controllable variable, interaction, and error makes to the outputs (responses) that are being measured (e.g. Box et al. 1978; Montgomery 2001; Levine et al. 2001). The intensity of each contribution is determined by the relative contribution of the variation of each controllable variable, interaction, and error to the total variation observed from the measurements. The variations are obtained from the sum-of-squares analysis and are reported in an ANOVA table (e.g. Table 3.12).
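A minimal sketch of this % contribution calculation follows; the sources and sum-of-squares figures are invented for illustration and are not the Table 3.12 values.

sums_of_squares = {"A": 5200.0, "B": 1250.0, "A*B": 75.0, "error": 475.0}  # invented SS values

total_ss = sum(sums_of_squares.values())
for source, ss in sums_of_squares.items():
    # % contribution of each source to the total observed variation
    print(f"{source:>6s}: {100 * ss / total_ss:5.1f}%")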

Knowing the % contribution each controllable variable or interaction makes to the output is important because it gives the decision-maker insight into the relative importance of each controllable factor to the output. In our field experiments (Part III of this book), a senior executive noted, “[I] always thought this factor was important, but for the first time, I am told how important. And with numbers no less.” In our Japanese experiment, we found that one variable, which the decision-maker agonized intensely about, turned out to have a negligible effect on the final result. Knowing the % contribution of the interactions is significant because, if the interactions are small, it informs the decision-makers that they can think about the controllable variables additively. The allocation of company resources to the implementation of corporate decisions can now be made in a way that is consistent with the contribution that a controllable factor makes to the outcome.

A typical ANOVA table is shown in Table 3.12, taken from our case study in Chap. 6.

Table 3.12 Example of ANOVA table

“Source” is the column that identifies a controllable variable, interaction, or error. The controllable variables are all present here: r&d, yield, cogs, and price. The term yield*cogs is the interaction between these two variables.

DF (or DOF) means degrees of freedom. One can think of DF as the number of equations needed to solve for unknowns. The number of equations represents their capacity to solve for the variables. Similarly, DF represents the capacity of a variable (or an experimental design) to produce additional information. Statisticians estimate a statistic by using different pieces of information, and the number of independent pieces of information they use to calculate a statistic is called the degrees of freedom. For controllable and uncontrollable variables with n levels, the DF is n-1.

Sum of Squares Total (SST) represents the total variation among all the observations around the grand mean. The sum of squares, SS, due to a factor A (SSA) represents the differences between the various levels of factor A and the grand mean. Seq SS measures the SS when each variable is considered in the sequence in which they are listed under the column with the heading “Source”.

Adjusted SS (Adj SS) measures the amount of additional variation in the response that is explained by the specific variable, given that all other variables have already been considered. Hence, the value of the Adj SS does not depend on the order in which the variables are presented in the Source column. The Seq SS and Adj SS are identical when the model is balanced. Balance is a combinatorial property of the model: for any pair of columns in the array that is formed for all the experiments, all factor-level combinations occur an equal number of times (e.g. Wu and Hamada 2000). By definition, orthogonal arrays are balanced, so the Seq SS and the Adj SS columns display identical values. This will be the case for all our experiments because we will be using orthogonal arrays exclusively. We note that the SS for yield*cogs is small relative to the other variables. Its contribution to the outcomes is very small, 227/102,107 = 0.0022.

(Adj MS) = (Adj SS)/(DF) for a particular variable. For example for r&d, Adj MS = ½ × 712. It is the variance of the measurements for a particular variable. We obtain its % contribution through a simple division of its Adj MS by the sum of the individual elements.

F(A) is: F(controllable variable A) = (Adj MS of variable A)/(Adj MS of error). This is also called the variance ratio and is used to test the statistical significance of variable A. (A small numeric sketch follows the list below.)

  • For F < 1, the experimental error is bigger than the effect of the variable. The effects of the variable cannot be discriminated from the error contribution. The particular variable is therefore statistically insignificant as a predictor of the output. In Table 3.12 all variables are statistically significant with p < 0.05; they are good predictors of the output. If p << 0.05, then the variables are strong predictors of the output being studied.

  • For F ~ 2, the controllable variable has a modest effect on the output.

  • For F > 4, the controllable variable effect is much stronger than the effect of error and is therefore statistically significant and a good predictor of the output.

  • For F > 5, the controllable variable effect dominates the effect of error and the variable is therefore a statistically very strong predictor of the output.
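A small numeric sketch of the variance ratio follows, using invented mean squares and degrees of freedom; the p-value comes from the F distribution.

from scipy.stats import f

adj_ms_factor, df_factor = 2600.0, 2     # invented factor mean square and DF
adj_ms_error, df_error = 240.0, 18       # invented error mean square and DF

F = adj_ms_factor / adj_ms_error
p_value = f.sf(F, df_factor, df_error)   # probability of exceeding F under the null
print(F, p_value)                        # F >> 4 and p < 0.05 -> strong predictor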

R-Sq (R2 or R-squared) is the percent of variance explained by the model. It is the fraction by which the variance of the errors is less than the variance of the dependent variable. Equivalently, it indicates how well the data points fit the regression line, or how well the controllable variables explain the outputs.

R-Sq(adj) (R2 adjusted) adjusts R-Sq for the number of predictors in the model; it is based on the standard error of the regression rather than the standard deviation of the errors. R-Sq(adj) is useful for comparing the descriptive power of regression models that include many predictors. Every predictor added to a model increases R-Sq and never decreases it. Adding useless variables to a model decreases R-Sq(adj), whereas adding useful variables increases it. In our example, it does not appear that we have useless variables.

It is good practice to examine the residuals from the ANOVA model to convince ourselves that they are random normal with a mean of zero. A residual is simply expressed by the equation: residual = (observed value) − (predicted value). Ideally the residuals are always zero, but there are always aleatory factors that cause observed values to diverge from predictions. Although their presence is inevitable, we would like them to be randomly distributed. This indicates that they are not carriers of input information from factors that have not been considered. Random normal residuals increase our confidence in the validity of the choice of controllable and uncontrollable variables, as well as in how they represent the sociotechnical system behavior.

Showing the residuals graphically is a simple and effective way to analyze their distribution. We like the half-normal plot, as in the plot of residuals in Fig. 3.11. The x-axis shows the range of the values of the residuals. The sloping diagonal line is a logarithmic plot of a cumulative normal distribution with mean zero and the standard deviation (SD) of the residuals. The dots are the residuals. If all the residuals lie on the line, or close to it, they are normally distributed. Closeness can be determined by the “fat pencil” test: if the residuals are covered within the diameter of a “fat pencil,” the distribution can be judged to be normal. This is MIT’s Roy Welsch’s “fat-pencil” test. The box in the chart shows some statistics that can tell us more definitively whether the residuals are normal with mean zero. In this case, for a sample of 27 numbers, the mean is 3.789561 × 10⁻¹⁴, which is very close to zero. The standard deviation is acceptable, and given the Anderson-Darling (AD) statistic with p > 0.05, we cannot reject the hypothesis that the residuals are normally distributed. We conclude that the residuals are normal. This may appear confusing, but it becomes clearer when one considers that the null hypothesis H0 is that the residuals are normal; p > 0.05 means we do not reject H0.

Fig. 3.11
figure 11

A statistically significant residual plot
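A minimal sketch of this residual check follows; the residuals are simulated stand-ins, not the Fig. 3.11 case-study residuals, and the Anderson-Darling statistic is compared against its critical values.

import numpy as np
from scipy.stats import anderson

rng = np.random.default_rng(0)
residuals = rng.normal(loc=0.0, scale=5.0, size=27)  # stand-in for observed minus predicted

result = anderson(residuals, dist="norm")
print(residuals.mean())          # should be close to zero
print(result.statistic)          # Anderson-Darling statistic
print(result.critical_values)    # statistic below the critical value -> do not reject normality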

5.3 Analyzing Performance: Response Tables

From the ANOVA data, we can determine whether we have chosen the appropriate variables and the extent to which they contribute to the intended outcomes. In addition, we know whether, at the scale at which the variables are chosen, we have omitted any other key variables. The next questions are:

  • Can we predict the outcomes of designed alternatives, for any “what-if?” question, for any alternative, under any uncertainty condition? If so, how?

The answers are affirmative, and we will show how this is accomplished. In addition to the ANOVA table, using our orthogonal arrays, we are able to obtain the response tables for the means and for the standard deviations. An example is Table 3.13, which we will discuss in detail in Chap. 6. Here we sketch how the response tables are used to design an alternative.

Table 3.13 Response tables for variables’ means and standard deviations

We focus on the LHS of Table 3.13, the response table for the output Market Value-of-the-Firm, MVF. Under the “Level” column, the levels “1”, “2”, and “3” for the controllable variables are listed. For example, cogs at level 2 is determined to have the value 753.4. In this example, level 2 is the existing operating condition, the BAU level. The controllable variables are expenditures for r&d, manufacturing yield, cost to the company of the goods sold (cogs), and the price at which they are sold.

Delta is the maximum distance between any two levels; for example, for the variable yield, 70.6 = (782.6 − 712.1). Rank is simply an ordering of Delta from high to low. Rank tells us which variable has the greatest influence on the output. In this case, it is price, with a Delta = 113.3.
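As an illustration of how such a response table is assembled, the sketch below computes level means, Delta, and Rank from a hypothetical L9 run; the runs and outputs are invented, not the Table 3.13 values.

from statistics import mean

runs = [  # (r&d, yield, cogs, price) levels -> invented output
    ((1, 1, 1, 1), 640.0), ((1, 2, 2, 2), 705.0), ((1, 3, 3, 3), 790.0),
    ((2, 1, 2, 3), 760.0), ((2, 2, 3, 1), 655.0), ((2, 3, 1, 2), 720.0),
    ((3, 1, 3, 2), 700.0), ((3, 2, 1, 3), 775.0), ((3, 3, 2, 1), 665.0),
]
factors = ["r&d", "yield", "cogs", "price"]

response_table, deltas = {}, {}
for i, name in enumerate(factors):
    level_means = {lvl: mean(y for levels, y in runs if levels[i] == lvl) for lvl in (1, 2, 3)}
    response_table[name] = level_means
    deltas[name] = max(level_means.values()) - min(level_means.values())

rank = sorted(deltas, key=deltas.get, reverse=True)  # largest Delta = most influential variable
print(response_table)
print(deltas, rank)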

The standard deviations are calculated from the output data, for each variable at each level, to obtain the response table for the standard deviations. This gives the RHS plots in Fig. 3.12.

Fig. 3.12
figure 12

Graphs of response tables for variables’ means and standard deviations

We can design alternatives to meet any specification by inspection of the Table 3.13 data on the RHS and LHS. For example, suppose that r&d is not a problem, and the objective is to maximize yield, have the lowest cogs, and raise price; the alternative can be specified as:

  • C(r&d(level-1), yield(level-3), cogs(level-3), price(level-3)), or more simply as

  • (r&d-1, yield-3, cogs-3, price-3), or even simpler as C(1,3,3,3).

    In this case price(level-3) produces a salutary effect on profit—the higher price combined with the lower cost, i.e. hitting both the “top line” and the cost line. This is an aggressive strategy. But we must ask: what is the risk? The C(r&d-1, yield-3, cogs-3, price-3) strategy implies the standard deviations SD(r&d-1, yield-3, cogs-3, price-3) for those controllable variables. In other words, the decision specification C(1,3,3,3) will result in the highest standard deviations for the levels of r&d, yield, and price (right-hand panel of Fig. 3.12). Higher standard deviations mean a larger spread in the outcomes. This means more risk.

Suppose we design a decision specification that is less aggressive, i.e. C(2,3,3,2). We elect r&d-2 because, from the upper left-hand panel of Fig. 3.12, we observe that the impact of r&d on the outputs is low. This is shown by Delta in the r&d column of Table 3.13. We keep price at level-2 because we choose to let only the lower cogs exert its effect on profit, and cogs has the lowest SD. Compared to the C(1,3,3,3) alternative, the standard deviations for r&d and price are lower in alternative C(2,3,3,2). Alternative C(2,3,3,2) is less risky than C(1,3,3,3). It is robust: a less risky decision that still optimizes yield and cogs. It is the better decision.

Using orthogonal arrays, we can predict the outcomes of any specific alternative under any uncertainty condition, e.g. for the Business-As-Usual (BAU), do-nothing-different, behavior of the firm under nine different uncontrollable (uncertainty) conditions. We show the DOE-predicted output of market-value-of-the-firm (MVF) for BAU under these nine uncontrollable uncertainty conditions (Fig. 3.13). As expected, the BAU in the current environment is bracketed by the best, SD(2,2,1), and worst, SD(1,1,2), environments. (This example comes from Part II.)

Fig. 3.13
figure 13

Predicted MVF for BAU (2,2,2,2) under nine uncertainty conditions

Using orthogonal arrays, we can also predict the outcomes for a range of alternatives under a specific condition. For example, we now illustrate the case of nine alternatives (including BAU) under a single environment—the worst. The DOE-predicted output of nine alternatives for raising the market-value-of-the-firm (MVF) under the worst environment, SD(1,1,2), is shown in Fig. 3.14. The best alternative is specified as C(1,3,1,3) because it maximizes MVF. The worst alternative is specified as C(3,1,3,1); it produces the lowest MVF. As expected, the BAU is bracketed by the best and worst alternatives. (This example comes from the case studies in Part II.)

Fig. 3.14
figure 14

Predicted market-value-of-the-firm using nine alternative decision alternatives, including BAU , under single worst uncertainty condition

We find the transfer function and can also plot the graphical relationship between two interacting variables, e.g. Fig. 3.15.

Fig. 3.15
figure 15

Surface plot of predicted market-value-of-the-firm as a function of price and cogs (cost of goods sold)

5.4 Analyzing Sociotechnical System Quality: Gage R&R

The discussions in Sect. 5.1 have centered on the use of DOE methods to construct decision alternatives (DOE experiments/treatments) and predict performance. Experiments depend on data. How do we know the data are “good enough”? What is good enough? Why or why not? What can we learn from this additional knowledge? Why is it important? These are the questions we explore and discuss in the context of executive-management decisions. The discussions are grounded in the science and practice of Measurement Systems Analysis (AIAG 2002).

Good enough means that the data are accurate and precise. “Accurate” means that the data are located where they are supposed to be relative to a reference value. The reference point is the intended output. “Precise” means that repeated readings under different conditions produce data that are close to each other. Accuracy and precision can be determined by the statistical property of variation. Given that variations will be present, we need to know the sources of these variations. Knowing the origins of the variations, we can think about how to take corrective action, if necessary. Who and what are contributing to these variations? The variation can be inherent in the data itself, in the people who are taking the measurements, or in the quality of the measuring instruments. Gage R&R (Gage Repeatability and Reproducibility) methods (e.g. Breyfogle 2003; Montgomery 2001) from Measurement Systems Analysis (e.g. AIAG 2002; Creveling et al. 2003) give us the machinery to perform this analysis.

The genesis of Measurement Systems Analysis (MSA) is in manufacturing, for the production of physical objects. MSA is a statistical method to assess the performance of a measurement system. The concept reflects its roots in manufacturing. A measurement system is defined as:

  • “the equipment, fixtures, procedures, gages, instruments, software, environment, and personnel that together make it possible to assign a number to measured characteristic or response (Creveling et al. 2003)”.

Gage R&R is an MSA method to study the variability of the components of a measurement system (e.g. AIAG 2002; Montgomery 2001). Gage R&R is widely used in engineering and production management (Wang 2004; Foster et al. 2011). We will show that the concept of a measurement system, conceptually remapped to the engineering of executive-management decisions, is very meaningful and useful. All this is somewhat abstract, so we will sketch the key ideas using Fig. 3.16, explain the statistics, and discuss the mapping to decision engineering. To make these ideas more intuitive, we begin by discussing the Gage R&R idea in a hypothetical manufacturing production environment.

Consider a manufacturing line of bolts (Fig. 3.16) as a direct analogy of a DMU forecasting the outputs of decision alternatives, with the orthogonal arrays as the DMU's production runs. The measurement in question is the diameter of the bolts. Variation in the diameter is an indicator of the quality of the production system, the people, and the measurement instruments. These variations need to be understood and interpreted to determine the quality of the production system. There are three sources of variation:

  • Part-part is the variability in measurements across different parts from the same batch. In this example, this is the variation in the diameter measurements of the bolts. Ideally, we want this variation to dominate all the remaining variations. In other words, the variations introduced by people and the measuring instruments are to be small.

  • Reproducibility is the variability in measurements obtained when parts are measured by different operators. That is to say, for a given part, are different people making the measurements able to reproduce a measurement?

  • Repeatability is the variability in measurements obtained when parts from the same batch are measured by the same person, i.e. is an operator able to repeat the measurement value for a given bolt?

  • Gage R&R is the sum of reproducibility and repeatability. This sum is the overall measurement variation.

    Fig. 3.16
    figure 16

    Sources of variability for measurements

The sources of variations in measurements are mathematically related as shown in Fig. 3.17.

Fig. 3.17
figure 17

Graphical illustration of the various variations’ relationship

A simple sum expresses this relationship.

$$ \sigma^2_{\mathrm{total}} = \sigma^2_{\mathrm{part}} + \sigma^2_{\mathrm{meas.\,sys.}} = \sigma^2_{\mathrm{part}} + \sigma^2_{\mathrm{rpt}} + \sigma^2_{\mathrm{rpd}}. $$
(3.9)
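For readers who want to see how the components of Eq. 3.9 are estimated in practice, the following is a minimal sketch of the standard ANOVA method for a crossed Gage R&R study, written in Python with simulated data. The “parts” are decision alternatives, the “operators” are DMU members, and every number is synthetic; it illustrates the variance decomposition, not the book's case data.

```python
import numpy as np

# Crossed Gage R&R study: p "parts" (decision alternatives), o "operators"
# (DMU members), r repeated forecasts. data[i, j, k] = k-th forecast of
# alternative i by member j. The data are simulated for illustration only.
rng = np.random.default_rng(0)
p, o, r = 5, 3, 2
part_effect = rng.normal(0.0, 10.0, size=p)           # alternatives differ by design
data = (100.0
        + part_effect[:, None, None]
        + rng.normal(0.0, 1.0, size=(p, o, 1))         # operator-to-operator noise
        + rng.normal(0.0, 1.0, size=(p, o, r)))        # repeat-to-repeat noise

grand = data.mean()
part_mean = data.mean(axis=(1, 2))
oper_mean = data.mean(axis=(0, 2))
cell_mean = data.mean(axis=2)

# Sums of squares and mean squares for the two-way crossed ANOVA
ss_part = o * r * ((part_mean - grand) ** 2).sum()
ss_oper = p * r * ((oper_mean - grand) ** 2).sum()
ss_cell = r * ((cell_mean - grand) ** 2).sum()
ss_inter = ss_cell - ss_part - ss_oper
ss_rpt = ((data - cell_mean[:, :, None]) ** 2).sum()

ms_part = ss_part / (p - 1)
ms_oper = ss_oper / (o - 1)
ms_inter = ss_inter / ((p - 1) * (o - 1))
ms_rpt = ss_rpt / (p * o * (r - 1))

# Variance components (negative estimates truncated to zero)
var_rpt = ms_rpt                                       # repeatability
var_oper = max((ms_oper - ms_inter) / (p * r), 0.0)
var_inter = max((ms_inter - ms_rpt) / r, 0.0)
var_rpd = var_oper + var_inter                         # reproducibility
var_part = max((ms_part - ms_inter) / (o * r), 0.0)    # part-part
var_total = var_part + var_rpt + var_rpd               # Eq. 3.9

for name, v in [("part-part", var_part),
                ("repeatability", var_rpt),
                ("reproducibility", var_rpd)]:
    print(f"{name:15s} {100.0 * v / var_total:5.1f}% of total variance")
```

A profile close to the 90-5-5 distribution discussed in the next section would indicate a capable forecasting (measurement) system.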

5.5 MSA and Executive-Management Decisions

What does MSA have to do, specifically and in detail, with executive-management decisions? A lot. Recall that we consider an organization's sociotechnical systems as a decision factory, a manufacturing production system. A decision specifies how an organization and its associated sociotechnical systems must behave so that they generate the intended outcomes in the outcomes space. This is why we consider organizations as production or manufacturing systems. They are factories of sociotechnical systems and machinery designed to recognize meaningful opportunities and problems, to analyze, engineer solutions, execute, and gain additional knowledge from their outputs. We measure the quality of this decision-specification execution system using Gage R&R.

The analogy is an isomorphic mapping (Table 3.14), and its fidelity is remarkably high. Decision specifications, the intellectual, non-physical artefacts, are mapped to parts, the physical artefacts. Measurements become forecasts of decision outcomes. Operators who do the measuring with instruments are mapped to the DMU members who make the forecasts of outcomes.

Table 3.14 Adaptation of gage R&R to DOE -based decision analysis

ASQC and AIAG provide guidelines for measurement system statistics (AIAG 2002). A useful and accepted AIAG guideline is that the Gage R&R variation should be <10% and the part-part variation should be >90%. This makes sense for a mass-production environment, where the ideal is to have identical parts, i.e. without any variations, so that all variations are isolated in the measurement system. Specifically, the AIAG and ASQ guidelines stipulate that $\sigma^2_{\mathrm{part}}$ = 90% and that the rest be equally divided between $\sigma^2_{\mathrm{rpt}}$ and $\sigma^2_{\mathrm{rpd}}$, i.e. 5% each. This 90-5-5 distribution is an indicator of a quality manufacturing line (Fig. 3.18). It is important to note that these guidelines are based on decades of manufacturing experience in American industry, which, without exaggeration, has produced billions of parts. The heuristic has a strong and long history of empirical evidence. Therefore, we can, with confidence, adopt this quality heuristic for measuring executive-management sociotechnical systems.

Fig. 3.18
figure 18

AIAG recommended distribution for measurement system quality

However, this 90-5-5 GR&R distribution sets a very high bar for executive-management decisions. This is discussed next.

The AIAG and ASQ standard is defined on the assumption that the parts are identical, but because people, technical systems, and processes are imperfect, there are variations in the final measurements. MSA seeks to determine the sources of these variations to enable management to take corrective action (Eq. 3.9). Our approach reverses the logic. We start with “parts”, i.e. gedanken experimental results. Unlike in manufacturing, these “parts” are specifically designed to be different. We then ask: does our measurement system detect this intentional variation? In other words, is $\sigma^2_{\mathrm{part}}$ very large, as it must be by design? And are $\sigma^2_{\mathrm{rpt}}$ and $\sigma^2_{\mathrm{rpd}}$ small, i.e. are the DMU members and their sociotechnical systems capable? But we are not satisfied with this alone.

As another test of the DMU's and their sociotechnical systems' capability to produce quality forecasts, we designed special verification experiments: does our measurement system detect the variations of a single DMU member across different forecasts? And does it pick up the variations among different DMU members for the same forecast? In other words, do the $\sigma^2_{\mathrm{rpt}}$ and $\sigma^2_{\mathrm{rpd}}$ data reveal these facts? If the data answer all these questions in support of repeatability and reproducibility, then there is support for the quality of the sociotechnical system.

5.6 Reflection and Learning

Reflection is thinking about experiences ex ante, ex post or ex inter, directed at learning to make better decisions in the next decision situation and experience. To us, “experiences” are the DMU's work leading to the outputs and the ex post reviews, as well as discussions of the in-process and end-process outputs. “Reflection is not a casual affair” (Rogers 2002, 855). It is not wooly or undisciplined rumination. “Reflection is a systematic, rigorous, way of thinking, with its roots in scientific inquiry” (Rogers 2002, 845).

Why reflect at all? Reflecting is an inherent human quality: we learn from experiences in order to improve subsequent experiences. Knowledge must be experienced. Survival drives this instinct. The possibilities of improved effectiveness are strong and natural drivers that motivate reflection and learning. Schön (1983) segments reflection into reflection-in-action and reflection-on-action. Reflection-in-action is learning by doing, ex inter learning. Reflection-on-action is ex post learning. Reflection is not navel-gazing; it requires systematic, disciplined processes, close cousins of the scientific method. Dewey (1933), Rogers (2002) and Moon (2013) discuss various strategies for systematic reflection. Reflection can be taught. While solitary reflection is useful, reflection carried out in a sociotechnical community environment is far more effective. It stimulates personal and organizational learning (e.g. Kolb 1984; McLeod 2013).

Without reflection, two malevolent laws of organizations begin to take root. Phil Kotler (1980) names them the Law of Slow Learning and the Law of Fast Forgetting. Without reflection, people increasingly do things by rote, numb to the changes and uncertainties in the exterior environment. As a result, they add less and less to the organization's body of effective knowledge. This is organizational slow-learning. Lack of reflection also makes people think less and less about business processes and forget that those processes were predicated on epistemological and ontological assumptions about their effectiveness. This is organizational fast-forgetting. Kotler, like Kolb (1984), argues that these two laws are perniciously mutually reinforcing in the absence of reflection.

6 Commitment Space

The hallmark of decisive executives is their ability to cross the Rubicon. Executives must commit themselves to a course of action (Fig. 3.19). In Sect. 1.62 we discussed decisiveness as a principle of executive decision-making, and in Appendix 1.5 we summarized the 15 types of indecisiveness and indecisions. Our paradigm facilitates commitment by making the outcome more immune to uncontrollable conditions. Besides decisiveness, commitment requires the following:

  • Concurrence from the executive to whom the deciding executive must answer.

  • A plan with milestones and the work products that mark their achievement.

  • Consistent with robustness, it must contain risk analyses that include uncontrollable variables.

  • Resources, funds, physical assets (plant, equipment), and people to execute.

  • Interlock with organizations for which there are upstream and downstream dependencies.

  • Required financial information.

Fig. 3.19
figure 19

Schematic of the commitment space

7 Chapter Summary

  • The processes of executive-management decisions take place in five sociotechnical operational spaces: the Problem Space, the Solution Space, the Operational Space, the Performance Space, and the Commitment Space. Collectively, they form the five phases of the executive-management decision life-cycle, all directed at the singular goal of designing robust decisions and the sociotechnical systems that support their implementation and execution.

  • The Problem Space deals with sense-making of the decision situation triggered by a problem or opportunity. The trigger, frequently a surprise, signals the presence of a new decision situation. The situation needs to be interpreted for significance and meaning. The operating principle is abstraction, to reduce the cognitive load of the DMU. The principal processes are to understand the decision situation using an uncomplicated, but accurate, representation of the problem/opportunity. The goal is to cultivate complementary and consistent mental models within the DMU. This enables the appropriate framing of the decision situation and the specification of goals and objectives.

  • The Solution Space concentrates on the design of decision alternatives. This is a very challenging and creative part of the decision life-cycle. The operating principle is to make the goals and intended outcomes actionable by the sociotechnical organizational systems. The goals are to design actionable solutions that are robust. The design processes must address the entire set of possibilities in the solution space, under any uncertainty conditions.

  • To systematically design alternatives requires identifying the essential variables. Essential variables are either managerially controllable variables or managerially uncontrollable variables. Managerially controllable variables are those which directly influence the intended outcomes and which can be manipulated by executives. Uncontrollable variables are those that managers cannot control. The uncontrollable variables shape the uncertainty conditions under which the decision will operate. Uncontrollable variables must not be omitted because they interact with the controllable variables to affect intended outcomes.

  • The normative axiom of “There Is No Free Lunch” applies to the controllable variables. The spirit of this axiom must be reflected directly in the set of controllable variables. Otherwise, exercising all controllable variables to the fullest benefit would entail no cost and there would be no need to make decisions.

  • A fundamental sociotechnical process is debiasing forecasts and aligning mental models. We prescribed a social process that includes counter-argumentation that encourages dissent based on information, not actual numerical quantities. The process results in DMU members’ mental models that are complementary, more complete, and aligned. The goal is not for the DMU to have identical mental models. The process helps improve the decision deliberations throughout the decision life cycle.

  • The Operations Space deals with the execution of decision-specifications. The operating principle is to enable unconstrained explorability, i.e. the ability to explore the entire solution set under any uncertainty condition. We can estimate the performance of alternatives by experiments to reveal a phenomenological model of the sociotechnical system. This strategy is grounded on the DOE methodology. The experiments are gedanken experiments . In our paradigm, the fundamental objective of decisions is robustness.

  • The Performance Space concentrates on the quality of the implementation and execution of the decision specification. Recall that we consider the sociotechnical system, which implements and executes decisions, as a production system, the manufacturing arm. The key measurements are robustness, repeatability, and reproducibility. The measurement science is Gage repeatability and reproducibility (Gage R&R).

  • The Commitment Space addresses the need for a decisive executive who will commit scarce resources at the time decision-making is required.

  • The innovation of our operational approach is to depart from conventional strategies. We eschew the traditional way of thinking, which insists on ex ante analytic models. We do not presume to know a priori the explicit mathematical equations that represent the decision’s sociotechnical machinery. We adopt a fresh strategy. We use gedanken experiments rather than equations and probabilities to infer a phenomenological representation of the system. The inference is drawn from the results and data of the gedanken experiments . The system behavior is revealed ex post , not specified ex ante using equations about presumed system behavior. This is a phenomenological approach. Phenomenology is a scientific methodology to describe and explain observations. Appearance reveals and explains reality.