
1 Introduction

Actions designed to have an impact on a challenging task environment require readiness to deal with the problems at hand, whatever they might be. Readiness, including readiness to make decisions within complex task settings, involves several aspects of cognitive functioning, among them (1) motivation, (2) specific content knowledge of the task components, generally generated by training and/or experience, and (3) the capacity to deal effectively with multiple components of the task, their interrelationships, and their interplay over time. For the purposes of this chapter, we focus specifically on the third of these components. Let us assume, for present purposes, that adequate motivation is present and that content knowledge or technical skill for the task is at hand. For most highly professional individuals who function over considerable time in repeatedly challenging (especially managerial and decision-making) task settings, these aspects are, at least in good part, given. Still, responses to a difficult and/or challenging task often fail or fall short, so that a particular problem at hand cannot be resolved. Yet often the lack of success is due not to the structure of the task itself, but to inadequate cognitive readiness for dealing with the multiple interrelationships among task components and with their changes over time.

Tasks differ widely in their demands (what kind of task it is and what kind of knowledge, experience, and/or technical and cognitive competency may be required). Let us distinguish among at least three “types” of tasks (Streufert & Swezey, 1986): those that are simple, those that are complicated, and those that are complex. For both simple and complicated tasks, procedural (content) knowledge and related skills are adequate to achieve the desired outcome. Of course, many simple and complicated tasks exist. Yet various other tasks are “complex.” Complex tasks generate challenges when the task environment is volatile, when uncertainty about the setting or the intended outcome prevails, when the interplay of task components reflects complexity, when task requirements are ambiguous, and when feedback about the consequences of actions taken to deal with the task is slow (delayed beyond the need to take subsequent actions). Task settings of that kind have been described as VUCAD (volatility, uncertainty, complexity, ambiguity, and delayed feedback) (Satish & Streufert, 2004; Streufert, 1993). The first four of these characteristics were coined by the US military (US Army War College) to describe decision making in military campaigns; the last was added by complexity theorists.

When we deal with VUCAD, we can no longer expect to discover the “perfect,” and certainly not the “correct,” solution to the problems at hand. Readiness now means the capacity to find a good (at least more than adequate) solution via continuous active (re)orientation and continuous adaptation that monitors and adjusts activities to generate and maintain sufficiently effective outcomes over time. Dealing successfully with VUCAD requires continued awareness and utilization of multiple task components (challenges); it requires an understanding of possibly changing events and event interrelationships (impacts of events upon one another). It requires monitoring those components, as well as the (potentially interactive) effect of each component upon the intended outcome. Whenever we are concerned with managerial effectiveness in today’s world, we need to ask ourselves some pertinent questions: Does an individual who has to make important decisions possess the capacity to function effectively under VUCAD? How certain can we be that this individual will attain an excellent outcome? And, if the person of interest is not yet cognitively ready when confronted with VUCAD, is training toward greater effectiveness possible? How can it be achieved?

2 Model-Based vs. Scenario-Based Approaches

To answer such questions, theorists and researchers have employed different approaches to human readiness under VUCAD task demands. Some theorists and researchers concerned with complex problem solving and decision making have created computer-based models and have exposed decision makers to such settings. Often the model components were derived from theory or from the “insights” attained by the theorist. Such approaches are in good part based on systemic models and, to some extent, on anticipated time-change effects. Relevant variables are defined in a computer program that represents the theoretically specified dynamics of the environment. The features of the model are at least in part hidden from participants, who, over time, may discover some of those variable characteristics (including feedback loops and time delays) once feedback is received in response to a decision. In other words, once decisions are made, they have an impact on the task setting (as calculated by the computational model). Subsequent decisions by the participant have to deal with the modified task setting, and so forth (Fig. 13.1).

Fig. 13.1 The model-based approach to diagnostic complex simulations

From a measurement (diagnostic) standpoint, the model status in such designs is partially confounded with the participants’ decision-making process (Breuer, Molkenthin, & Tennyson, 2006; Breuer & Streufert, 1995a, 1995b). An example of such an approach is the micro-world methodology developed by Dörner (e.g., 1996; Dörner & Wearing, 1995). In Dörner’s designs, the initial characteristics of a task environment, as well as resources (decision options), are presented to participants at the beginning of their task. Participants then make a sequence of decisions. Each set of decisions generates specific (model-calculated) outcomes that modify the task requirements prior to the next set of decisions. Outcomes are based both on the interrelations between the modeled variables and on changes in the variables’ status over time. The micro-world approach has been frequently utilized to diagnose managerial effectiveness (Funke, 1993; Hussy, 1998; Tennyson & Breuer, 2002). Since the actions of the participants as well as the model characteristics affect outcomes, a micro-world is able to demonstrate that specific action patterns lead to failure of even expert participants (e.g., Dörner, Kreuzig, Reither, & Stäudel, 1983). Nonetheless, this methodology has its limits if we wish to measure (diagnose) the capabilities and readiness of an individual who must deal with VUCAD (Molkenthin, Breuer, & Tennyson, 2008).
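To make the sequential, model-driven interaction concrete, the following sketch shows how a micro-world-style loop might couple a participant’s decisions to an internally modeled, partially hidden state with a feedback loop and delayed consequences. The state variables and dynamics are invented for illustration and do not reproduce Dörner’s actual models.

```python
# A minimal, invented micro-world loop: the participant's allocation decisions
# act on a partially hidden model with a feedback loop and delayed consequences.

from dataclasses import dataclass


@dataclass
class WorldState:
    resources: float = 100.0   # visible to the participant
    demand: float = 50.0       # hidden internal variable
    backlog: float = 0.0       # delayed consequence of earlier rounds


def step(state: WorldState, allocation: float) -> tuple[WorldState, str]:
    """Apply one decision, update the hidden model, and return feedback text."""
    satisfied = min(allocation, state.demand)
    new_backlog = state.backlog + (state.demand - satisfied)   # delayed feedback
    new_demand = state.demand * 1.05 + 0.2 * new_backlog       # feedback loop
    new_resources = state.resources - allocation + 0.5 * satisfied
    feedback = f"Resources left: {new_resources:.1f}; complaints: {new_backlog:.0f}"
    return WorldState(new_resources, new_demand, new_backlog), feedback


# Sequential interaction: decision -> changed setting -> next decision, and so on.
state = WorldState()
for round_no, allocation in enumerate([40.0, 30.0, 60.0], start=1):
    state, feedback = step(state, allocation)
    print(f"Round {round_no}: {feedback}")
```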

3 Free Simulation Technologies

While micro-worlds differ from most simulations and in-basket techniques by typically requiring sequential interactions between participant and computer (decisions, a changed setting in response, further decisions, further responses, and so on), micro-worlds nonetheless have much in common with most other simulation technologies. Fromkin and Streufert (1976) defined such methods as “free simulations,” in which the actions of the participants have a direct impact on changes in the task environment over time. Free simulations allow the introduction of highly complex scenarios that can challenge decision makers with continuous VUCAD environments. Which decisions are made and whether and how resources are utilized are entirely under the control of the decision makers. Participants typically enjoy the experience, in part because they encounter the consequences of their own actions over time; they can readjust their actions and modify their approaches to attain desired goals.

Many free simulations have been employed in the military, business, medicine, and other fields where VUCAD is encountered by decision makers. Yet these methods still suffer from the same measurement problem that we encountered with micro-worlds (cf. Streufert & Swezey, 1985). Because of the “free” nature of the participant–task interaction, precise measurement of performance is severely restricted. Evaluation of performance within free simulations must be left to a (potentially biased or even unreliable—yet mostly well-trained) observer who estimates effectiveness based on specific activities and decisions made. Measurement precision is necessarily restricted because different decision makers generate diverse subsequent environments to which they (again) respond in their own unique ways. Comparisons among decision makers are therefore restricted. In other words, each participant ends up with a different flow of events. As a consequence, the reliability of performance measurement remains in some question.

4 Quasi-Experimental Simulations

An attempt to resolve such problems was made by Streufert and his associates (Streufert & Streufert, 1981) and subsequently by Breuer and associates (Breuer & Satish, 2003; Breuer & Streufert, 1995a, 1995b) and by Satish and associates, who developed or utilized “quasi-experimental simulations.” This approach avoids exposing participants to the internal dynamics that, for example, characterize the model aspect of micro-worlds. As in other simulation technologies, the quasi-experimental approach exposes the participant(s) to a VUCAD setting. In this method, however, all important (measurement-relevant) events that occur during the simulation are preprogrammed in content and time, i.e., all participants are exposed to the same sets of inputs at identical time points. Some structurally unimportant (to task and to measurement) events can be influenced by the actions of the participant decision maker(s) to provide the impression of a responsive task environment (Fig. 13.2).

Fig. 13.2 The quasi-experimental approach to diagnostic complex simulations

Because information inputs (task events) are preprogrammed over time, precise performance measurement becomes possible. Using this method, at least two kinds of independent variables can be introduced into research and training methodologies: (1) individual differences (e.g., experience, training, specific competency levels) and (2) environmental challenge characteristics (e.g., task load, stress). Furthermore, environmental (VUCAD-relevant) task characteristics can be changed or specifically varied across time when useful. Moreover, (3) multiple performance characteristics (among them response frequency, strategic capacity, response to stress, and many others) can be assessed. Finally, (4) because the participants are exposed to identical experiences, comparisons of performance across individuals, across manipulated environmental conditions, and across other introduced variables of interest can be obtained and, most of all, can be validated.
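As a minimal illustration of the preprogramming principle (not the actual SMS software), the sketch below schedules measurement-relevant events at fixed times so that every participant receives identical inputs; the event texts and times are hypothetical.

```python
# Illustrative event schedule (not the SMS software): measurement-relevant events
# are fixed in content and timing, so every participant receives identical inputs.

PREPROGRAMMED_EVENTS = [
    # (minutes into the simulation, event text shown to every participant)
    (5,  "Budget report arrives: spending is 12% over plan."),
    (22, "Key supplier announces a possible delivery delay."),
    (48, "Local media request a statement on the delay."),
]


def events_due(elapsed_minutes, delivered):
    """Return every preprogrammed event whose time has come, exactly once."""
    due = []
    for idx, (at_minute, text) in enumerate(PREPROGRAMMED_EVENTS):
        if at_minute <= elapsed_minutes and idx not in delivered:
            delivered.add(idx)
            due.append(text)
    return due


# Identical inputs at identical time points make decision records directly comparable.
delivered = set()
for now in (10, 30, 60):
    for event in events_due(now, delivered):
        print(f"[t = {now} min] {event}")
```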

5 Complexity and Meta-complexity

Quasi-experimental technology is scenario-based, not model-based. It provides the participant with a VUCAD environmental setting, with resources to deal with events in that setting over time, and with information (as stated earlier, in good part preprogrammed in both content and timing) about events that occur in that setting. Participants are able to make decisions at any time and can make decisions of any kind, as long as the resources required for a particular decision are (or remain) available. Typically, participants become deeply involved in the task (high motivation), and their responses mirror (established validity) their behavior in normal real-world tasks. Since the scenarios appear “familiar” to participants (for example, from newscasts or other media), but mirror neither the participants’ own job characteristics nor their prior experience, the resulting measured behavior indicates the individual’s underlying capacity to function (cognitive readiness) in response to VUCAD settings.

Early research using quasi-experimental simulations tended to focus on determining whether “cognitive complexity” was evident in participants’ actions (e.g., Streufert, 1970). That approach, however, did not take into account that task characteristics can differ widely: Some tasks or task components merely require simple procedural action; others are best handled by a breadth of approach that considers choices among two or more alternative solutions; yet others (where considerable VUCAD is present) necessitate multifaceted functioning that has been described both by cognitive (Streufert, 1997) and by science-wide (e.g., Kauffman, 1992, 1995, 2002) complexity theory. Cognitive readiness to deal effectively with the task environment must, in part, depend on the specific task at hand, whether it represents a simple procedural task or a multifaceted task involving VUCAD, and so forth. An approach that takes account of diverse task requirements, encompassing simple, intermediate, and highly complex functioning, and of the appropriate handling of each is described by meta-complexity (e.g., Streufert, 2005). Quality of performance is based on effective functioning that takes account of the individual’s optimal handling of the specific task and its characteristics. Contemporary research with quasi-experimental simulation technology takes account of meta-complexity.

6 Measuring Decision-Making Effectiveness

The Strategic Management Simulations (SMS) are quasi-experimental technologies that were developed to assess and train multiple aspects of decision-making competence that are discussed in the next paragraph (Streufert, 1970; Streufert & Swezey, 1986). A number of matched (in task demands and measurement outcome) scenarios have been developed and widely used to measure cognitive readiness across professional specialties (e.g., Shamba, Woodline County, Astaban). Other scenarios have been specifically developed for clients with particular interests.

If we intend to generate an assessment of a person’s actual decision-making competence, we need to provide a setting that generates indicators of that competence. For that purpose, two requirements must be met: complexity of the task and time to utilize those competencies. The SMS, which provide the basis for this chapter, satisfy both. First, participants are exposed to a highly complex (multifaceted) simulation that changes dynamically and meaningfully over time. Second, while the point at which the simulation ends is not stated to the participant beforehand, it continues over six half-hour periods of real time, while simulated time may reflect days, weeks, or even months (depending on the internal logic of a specific scenario).

The SMS and their predecessors were initially developed by Streufert and associates (e.g., Streufert, 1970; Streufert & Streufert, 1978). To generate an inclusive list of decision-making abilities, these authors collected more than 90 measurement-based indicators of decision making. Data were obtained from several hundred participants across continents and were finally subjected to statistical techniques (such as multidimensional scaling, varimax factor analysis, and so forth) that identified the degree of overlap or independence among the measurement technologies. The resulting data indicated (again across nations) a set of between 9 and 12 independent measures of decision making that go beyond knowledge/experience and motivation, i.e., measures that assess decision-making competence independently of one another. The most common nine measures are listed in Table 13.1.

Table 13.1 Nine basic measures of decision-making competence/cognitive readiness
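The kind of dimensionality analysis described above can be illustrated, under placeholder assumptions, with a short factor-analysis sketch using varimax rotation; the data here are random stand-ins and do not reproduce the original analyses.

```python
# Placeholder illustration of reducing many decision-making indicators to a few
# roughly independent factors; the data are random and carry no real meaning.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_participants, n_indicators = 300, 90        # "several hundred" participants, >90 indicators
scores = rng.normal(size=(n_participants, n_indicators))

fa = FactorAnalysis(n_components=10, rotation="varimax")   # ~9-12 factors were reported
fa.fit(scores)

loadings = fa.components_.T                   # shape: (n_indicators, n_factors)
# Loadings show which raw indicators cluster on which underlying competence.
print("Strongest indicator per factor:", np.abs(loadings).argmax(axis=0))
```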

Based on subsequent validity data (see below), some of the measures were subdivided into components that rest on a common overall competence, yet on which people who are highly successful frequently differ from people who are only moderately successful. For example, in reference to basic measure 9, the measures of strategy include:

1. Contextual strategy: strategy is used in a specific context.

2. Basic strategy: an assessment of the frequency with which strategy is used overall.

3. Encompassing strategy: strategy is utilized across multiple aspects of the task.

4. Advanced strategy: strategic actions interconnect multiple aspects of the task toward common goals.

5. Strategic complexity: multiple sequential strategic couplings of actions across task aspects and over time toward multiple, often interrelated, goals.

The SMS assess multiple specific decision-making competencies at a general level, focusing on the underlying competence that decision makers would bring to a wide variety of divergent situations. As such, these simulations are useful across professional specializations and across cultures and languages. They have been validated in various business contexts, in pharmacology (e.g., effects of drugs on decision-making competence; cf., Streufert & Gengo, 1993), medicine (Streufert & Satish, 2003), crisis management (Streufert—emergency decision making; Breuer & Satish, 2003), and more. The simulations provide validated measurement of competence (cognitive readiness to handle various levels of tasks) in terms of a set of quantitative scores and in terms of visually effective “qualitative” graphic representations of functioning.

7 Quantitative Measurement

As already suggested above, scenario-based (non-model-based) simulations in which all measurement-relevant events are preprogrammed generate information on the performance of individuals that is subject to direct quantitative measurement. We can determine (meta-complexity, meta-readiness) whether the response to a specific task component is relevant, e.g., whether the participant is sensitive to the particular level of task demands (handling a simple procedural task in the same fashion one handles a VUCAD task would not be useful). We can determine whether a participant does or does not engage in specific behaviors (actions) that are of interest, how often—and relevant to what kinds of information—those actions occur, whether those actions are related to other actions as part of overall or specific planning and/or strategy, and whether behavior changes in kind (either effectively or ineffectively) in response to stress, to emergencies, to failure experiences, and more. All of these (and more) performance characteristics are numerically scored by a computer program, eliminating the problem of observer error or bias. Validity, reliability, and independence (factor structure) of the obtained measures have been repeatedly demonstrated across cultures and languages over several decades (e.g., Streufert, Pogash, & Piasecki, 1988).
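A minimal sketch of what computer-based scoring of a decision log might look like follows; the log format, field names, and the three toy measures are assumptions for illustration rather than the actual SMS scoring program.

```python
# Toy scoring sketch (field names and measures are invented, not the SMS program):
# simple counts computed from a log of decisions made during the simulation.

decisions = [
    # time (min), decision category, indices of earlier decisions it builds on,
    # and indices of information items it responds to
    {"t": 4,  "category": "logistics", "builds_on": [],     "uses_info": [1]},
    {"t": 15, "category": "personnel", "builds_on": [0],    "uses_info": []},
    {"t": 31, "category": "logistics", "builds_on": [0, 1], "uses_info": [2]},
]

n = len(decisions)
activity = n                                                     # response frequency
info_responsiveness = sum(bool(d["uses_info"]) for d in decisions) / n
strategic_links = sum(len(d["builds_on"]) for d in decisions)    # planning/strategy ties

print(f"Decisions: {activity}, information-based: {info_responsiveness:.0%}, "
      f"strategic links: {strategic_links}")
```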

8 Qualitative (Graphic) Measurement Representation

In addition to quantitative measurement, a qualitative graphic representation of the multiple components of performance can be generated (cf. Breuer & Satish, 2003; Streufert & Satish, 1997). While these graphs are “qualitative” in terms of their visual communication, they are nonetheless based on the same hard “quantitative” data considered in the section above. Described as “Time/Event Matrices,” these graphs plot time (typically several hours of simulation participation) on the horizontal axis and decision categories on the vertical axis. Each decision made is identified as a point located (vertically) above the time at which the decision occurs and (horizontally) at the level of the decision category to which it refers.

If a participant in the simulation makes a decision that is an intended antecedent of a future decision (involving planning and/or strategy), the first decision can be connected with the second (once the second decision is carried out) via a diagonal line with an arrowhead pointing toward the second decision (reflecting use of strategy). If the later decision is not carried out (either because other actions took care of the problem or because the decision maker forgot or neglected the future action), the arrow becomes a vertical line pointing to the decision category that was planned (reflecting planning that was not followed up). If a decision maker finds a previous action useful in generating a new decision (but the later decision had not been preplanned), the later decision is connected to the earlier decision via a diagonal line with the arrowhead pointing toward the earlier decision (reflecting utilization of opportunity). When information received during simulation participation is utilized to generate a particular decision, the point of information receipt is marked with a star placed (horizontally) ahead of the relevant decision type and (vertically) above the time point of information receipt. A decision that utilizes received information as (at least part of) the reason for making the decision is circled.
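As a rough, hypothetical rendering of these plotting conventions (not software belonging to the SMS), the following sketch places invented decisions on a time-by-category grid, draws a strategy arrow between two linked decisions, and marks an information receipt with a star and the information-based decision with a circle.

```python
# Illustrative Time/Event Matrix sketch; the decisions, categories, and times
# are invented, and the layout only approximates the conventions described above.

import matplotlib.pyplot as plt

categories = ["Budget", "Personnel", "Logistics", "Public relations"]
decisions = [(10, 0), (25, 2), (40, 2), (55, 1)]   # (minutes, category index)

fig, ax = plt.subplots(figsize=(7, 3))
for t, c in decisions:
    ax.plot(t, c, "ko")                            # each decision is a point

# Strategy: the first decision was an intended antecedent of the second one,
# so a diagonal arrow points from the earlier to the later decision.
ax.annotate("", xy=decisions[1], xytext=decisions[0],
            arrowprops={"arrowstyle": "->"})

# Information received at t = 35 min feeds the third decision: a star marks the
# receipt, and a circle marks the decision that made use of the information.
ax.plot(35, 2, "k*", markersize=12)
ax.plot(decisions[2][0], decisions[2][1], "o", markersize=14,
        markerfacecolor="none", markeredgecolor="k")

ax.set_yticks(range(len(categories)))
ax.set_yticklabels(categories)
ax.set_xlabel("Simulation time (minutes)")
ax.set_title("Time/Event Matrix (illustrative)")
plt.tight_layout()
plt.show()
```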

9 Reliability and Validity

Test-retest reliability of the SMS is excellent (e.g., Streufert et al., 1988): data obtained across the different SMS scenarios show high reliability (0.80–0.94 across different measures). Test-retest results were obtained on two subsequent days, one week apart, and, for about 30 participants, one year apart. Meaningful test-retest data can be obtained as long as participants do not know (unless told) what the simulation measures. The different information content of the SMS scenarios, despite equivalent task demands, prevents participants from learning how to perform better—unless, of course, they become trained (for information on training, see below).
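For readers who want to see such a check in miniature, the sketch below computes a test-retest correlation on simulated scores; the values are placeholders and bear no relation to the reported SMS data.

```python
# Simulated test-retest check: correlate the same measure from two sessions.
# The scores below are random placeholders, not SMS data.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
session_1 = rng.normal(50, 10, size=30)                     # e.g., ~30 participants
session_2 = 0.9 * session_1 + rng.normal(5, 4, size=30)     # retest a year later

r, p = pearsonr(session_1, session_2)
print(f"Test-retest correlation: r = {r:.2f} (p = {p:.3f})")
```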

No matter how reliable a measurement technique may be, if it is not valid it is not useful. Although there are a number of validity studies supporting the prediction of success from performance on the various simulation measures (e.g., Breuer & Streufert, 1995a, 1995b; Funke, 1993), one striking example may be sufficient to make the point. It is well known, and demonstrated as well as widely accepted in the behavioral sciences, that more than a minimal amount of alcohol consumption has negative effects on human functioning, including cognitive capacity (readiness). This frame of reference has been used as an anchor for a series of studies on drug effects on cognitive readiness. In a double-blind, placebo-controlled effort, meaning that neither the participants nor the administrators of the simulation runs knew the treatment condition in effect in any one simulation run, decision makers were exposed to a placebo (disguised as alcohol) or to alcohol at the 0.05 or the 0.10 blood alcohol level. Maintenance of blood alcohol was monitored by breathalyzer, with the data collected by a researcher who was not administering the simulation. The individuals participated in three different SMS scenario runs in randomized order. Cognitive functioning was assessed across the established profile of measures (compare Table 13.1 and Fig. 13.3). A plot of the respective results is presented in Fig. 13.4 (Streufert & Pogash, 1998). As predicted, performance was worse at the 0.05 blood alcohol level than under placebo, and much worse under the treatment that generated the 0.10 level. Performance under the alcohol conditions was worse than under the placebo condition for almost all measures. Performance at the 0.10 level was worse than at the 0.05 level on 21 of 24 measures. Similar results were obtained for treatment with a tranquilizer (Streufert et al., 1996) and with certain other (psychoactive) drugs that are able to cross the blood–brain barrier. Together with findings from additional research, these results substantiate the validity of the simulation measures of cognitive functioning.

Fig. 13.3 A time/event matrix representing the decision-making process of a participant

Fig. 13.4 Validation results based on double-blind “treatment” with alcohol
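As a back-of-the-envelope illustration (not an analysis reported by the authors), a simple sign test conveys how unlikely it would be for 21 of 24 measures to move in the same direction by chance alone, under the simplifying assumption that the measures are independent.

```python
# Hypothetical sign-test illustration: the chance that 21 or more of 24 measures
# would worsen if each had a 50/50 chance of moving either way. The measures are
# not fully independent, so this is only an intuition aid, not the authors' analysis.

from scipy.stats import binomtest

result = binomtest(21, n=24, p=0.5, alternative="greater")
print(f"Sign-test p-value: {result.pvalue:.5f}")   # roughly 0.00014
```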

10 Training

Of course we could train most motivated individuals to be effective in dealing with a procedural task where “right” and “wrong” responses would be identified in advance. But can we train all managers to become qualified decision makers when VUCAD strikes? What would be the procedure to generate cognitive readiness to function effectively enough under such challenging conditions?

Past research appears to suggest that not everyone (interestingly enough, irrespective of measured intelligence) is trainable to handle VUCAD (Streufert & Streufert, 1978; Streufert & Swezey, 1985). Some basic capacity to deal effectively with VUCAD has to be present. In many cases an individual may be able to function under VUCAD—something that science-wide complexity theorists might call the “edge of chaos” (Kauffman, 1995)—in some (or a few) specific task settings. Where that capacity is present in one realm or another, it can be expanded to other task challenges. One should note, however, that training and learning to deal with very specific ambiguous, complex, and delayed-feedback settings may merely reflect the acquisition of a highly complicated procedure that becomes useless when uncertainty and especially volatility create major changes in task demands.

Effective training toward improved functioning on specific SMS measures has been reported. Interviews with supervisors on the job have also shown that training after simulation participation generates improved functioning. Interestingly enough, training of individuals with prior mild to moderate head injury, utilizing practice with training vignettes developed by Streufert, has had some strikingly favorable results. Still, the underlying capacity to deal with VUCAD must be present before training efforts can be generally effective.

11 Future Efforts

Without question, cognitive readiness to perform real-world tasks is of importance if we wish to obtain successful functioning and meaningful productivity. In this chapter we have focused on tasks that involve VUCAD, yet we have recognized that tasks differ. We must consider readiness in terms of the task demands, the existing level of relevant competence of the individual involved, and the degree to which differences (or changes) in task demands over time translate into effective performance. General use of the SMS (see above) can certainly generate the data we need to select, place, and evaluate individuals whose relevant competence is matched to the task environment. We could select managers who “match” task-specific demands. And finally, we can train individuals to deal better with various levels of demands, including task challenges that involve VUCAD (e.g., Haritz & Breuer, 1995).

Beyond the potential application of SMS as it has been used in the past, specific versions of the quasi-experimental simulation approach, related techniques, and their associated measurement technologies might and probably should be developed to match specific challenges that occur in specific work environment settings (e.g., Breuer et al., 2006; Molkenthin et al., 2008). Such efforts have already been effectively utilized in specific environments in the air force, medicine, business administration, and in some other fields.