1 Introduction

Uncertainty and Risk lie along a spectrum, which includes, of course, Certainty.

This chapter consists of three main components:

  • Scoping the Risk Spectrum

  • Profiling Uncertainty

  • Methods, Tools, and Techniques (MTTs)

1.1 Scoping the Risk Spectrum: Positioning Uncertainty

In this first section, Uncertainty is broken down into a number of contributing elements. Figure 2.1 illustrates how Uncertainty is positioned across a broader risk spectrum.

Fig. 2.1 The risk/uncertainty spectrum (uncertainty, risk, and certainty arranged along a single spectrum, with fuzzy occlusions at the uncertainty end and quantitative analytics towards the certainty end)

A brief examination of the semantics involved shows the following:

1.1.1 Certainty

Certainty occurs when it is assumed that perfect information exists and that all information relevant to a problem is known. In reality, the veracity of perfect information can be challenged on interpretative grounds, and its relevance can only be assumed.

1.1.2 Risk

Risk, on the other hand, indicates that partial information (often involving metrics) is available and is generally probabilistic, so that when future events or activities occur they do so with some measure of probability. Alternatively, risk can be defined as the probability or threat of damage, injury, liability, loss, or negative occurrence, caused by external or internal vulnerabilities, which may be neutralised through premeditated action (risk management). A risk is not an uncertainty, a peril (cause of loss), or a hazard.

In essence, risk generally refers to the likelihood that some future unplanned event might occur, to which a numeric probability can be assigned.
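To make the distinction concrete, here is a minimal sketch (illustrative only; the outcomes and probabilities are hypothetical): under risk, each outcome carries a credible probability, so an expected loss can be computed, whereas under uncertainty no such distribution is available to feed the calculation.

```python
# A minimal sketch of risk as probability-weighted outcomes (hypothetical figures).
# Under *risk*, each outcome carries a credible probability, so an expected
# loss can be computed; under *uncertainty*, no such distribution is known.

# Hypothetical project outcomes: (probability, loss in GBP)
outcomes = [
    (0.70, 0),        # delivered on time, no loss
    (0.25, 50_000),   # moderate delay
    (0.05, 400_000),  # severe failure
]

# Probabilities must sum to 1 for the event space to be fully specified.
assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9

expected_loss = sum(p * loss for p, loss in outcomes)
print(f"Expected loss: £{expected_loss:,.0f}")  # Expected loss: £32,500
```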

1.1.3 Uncertainty

Uncertainty implies incomplete information where much or all of the relevant information to a problem is unavailable. Uncertainty can also be explained as being a situation where the current state of knowledge is such that:

  • The order or nature of things is unknown.

  • The consequences, extent or magnitude of circumstances, conditions, or events are unpredictable.

  • Credible probabilities to possible outcomes cannot be assigned.

  • Neither the probability distribution of a variable nor its mode of occurrence is known.

Whilst Risk can be quantified (via probabilities), Uncertainty cannot, as it is not measurable. Other structural components creating difficulties for practitioners reside at the system level and include complexity and interconnectivity. Indeed, understanding the characteristics and scope of the conditions of this key component is a crucial first stage in shaping broader analytical templates.

However, many people still confuse Risk and Uncertainty, which has led to the premature use of quantitative methods in situations where a more qualitative evaluation would be of greater use. This distinction is crucial, since the appearance of precision through quantification can convey a validity that cannot always be justified.

Uncertainty in all its imprecision needs to assert itself as a powerful condition (and yes—alongside risk), the understanding and acceptance of which can increase our foresight and preparedness in the face of the unexpected. The diversity of outcomes that might occur has to be understood in order to mitigate the impact of future events, whether they have emanated as unintended consequences of past actions or from situations which we have no means of controlling.

As far back as 1921, Frank Knight, in his seminal work “Risk, Uncertainty, and Profit”, established the distinction between risk and uncertainty—a distinction which remains the most concise:

... Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. ... The essential fact is that “risk” means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating. ... It will appear that a measurable uncertainty, or “risk” proper, as we shall use the term, is so far different from an un-measurable one that it is not in effect an uncertainty at all.

The states of uncertainty and risk are not discrete—represented, as it were, by a sliding scale from Genuine Uncertainty through to Risk, based on varying levels of probability, and on to (near) Certainty. This is illustrated in Fig. 2.1. Quantification and measurement, in turn, should not be treated as existing (or not) in discrete domains.

As one moves from Certainty towards the Uncertain end of the spectrum, probable outcomes are reduced to being only possible outcomes, and information, especially in its quantitative (metric) form, becomes increasingly unavailable and/or irrelevant. Although too much uncertainty might be seen as undesirable, manageable uncertainty can provide the freedom to make creative decisions.

1.1.3.1 Uncertainty and Risk: A Confusion of Terms When It Comes to Measurement

Nonetheless, the confusion about the difference between Uncertainty and Risk still exists, largely due to the issue of measurement. Measurement can be defined as a set of observations that reduces uncertainty, where the result is expressed as a quantity. The scientific community is generally satisfied with a reduction, rather than an elimination, of uncertainty. Hubbard (2007) states: “The fact that some amount of error is unavoidable but can still be an improvement on prior knowledge is central to how experiments, surveys, and other scientific measurements are performed”.

Hubbard goes on to say that a measurement does not have to eliminate uncertainty but rather that:

a mere reduction in uncertainty counts as a measurement and possibly can be worth much more than the cost of measurement.

He adds that a measurement does not have to be about a quantity: the uncertainty does not have to be quantified, and the subject of observation might itself be qualitative rather than quantitative. He refers to the work of the psychologist Stanley Smith Stevens, who describes different scales of measurement, including “nominal” and “ordinal”: nominal measurements are set membership statements—a thing is simply in one of the possible sets. Ordinal scales allow us to say one value is “more or less” than another, but not by how much. These scales are relevant when we talk about risk mitigation rather than risk elimination—the latter being a finite state of affairs which, in the complex world of business, economics, and politics, is nigh on impossible to guarantee.
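A minimal sketch may help fix the two scales (the categories and severity labels below are hypothetical, not drawn from Hubbard):

```python
# A minimal sketch of nominal versus ordinal scales (hypothetical labels).

# Nominal: pure set membership -- a thing is simply in one of the possible sets.
incident_type = "operational"
assert incident_type in {"operational", "financial", "reputational"}

# Ordinal: values can be ranked ("more or less" than one another), but the
# gaps between ranks are not meaningful quantities: we cannot subtract them.
SEVERITY = ["low", "medium", "high", "critical"]

def more_severe(a: str, b: str) -> bool:
    """True if rating a outranks rating b on the ordinal scale."""
    return SEVERITY.index(a) > SEVERITY.index(b)

print(more_severe("high", "medium"))  # True
```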

We can but mitigate—and develop methodologies which, through empirical observation and experience, allow us to offer up workable templates (rather than causal models) against which we can compare current sets of conditions and information.

Hubbard concludes by stating that:

The commonplace notion that presumes measurements are exact quantities ignores the usefulness of simply reducing uncertainty, if eliminating uncertainty is not possible or economical. In business, decision makers make decisions under uncertainty. When that uncertainty is about big, risky decisions, then uncertainty reduction has a lot of value.

How Uncertainty is treated in terms of methods has been a moot point over the last century or so and has developed into arguments as to the relative value of qualitative as opposed to quantitative methods.

Ernest Rutherford, the Nobel Prize-winning physicist, claimed that “qualitative is nothing but poor quantitative”. This 100-year-old dictum unfortunately still casts a long shadow over the qualitative/quantitative divide in the analytical process and subsequent decision-making. Recent dramatic increases in computing power, including big data analytics, have supported the view that quantitative is best. Within the financial sector, much financial analysis has concentrated on risk, whereby probabilistic methods allow decision makers to make decisions in the belief that quantitative, and therefore measurable, indicators validate such decisions. As highlighted earlier, the appearance of precision through quantification, mathematics, and spreadsheets often conveys a validity that is not justified.

But how do we treat uncertainty—a situation where there is little or no measurable data and where the decision environment may not only be rapidly changing but rapidly and randomly evolving in terms of its structure? Rules of thumb no longer work; correlations no longer hold, or worse, sometimes they hold and sometimes they do not. Mathematicians such as René Thom (1972), the founder of “catastrophe theory”, think differently, being convinced that the qualitative is a great deal more than just a mediocre form of the quantitative. When qualitative data is issued side by side with quantitative analysis, decision makers have access to more valid and more powerful information on current and potential future performance. A framework which applies iterative monitoring of earlier judgements has the virtue of being both more flexible and more dynamic, helping practitioners and decision makers to mitigate uncertainty as well as risk.

1.1.3.2 Different Interpretations of Uncertainty: Confused Dot Com(plexity)!

In spite of all this—and it may appear to the reader that Uncertainty is highly complicated, which it is—“you ain’t seen nothing yet”. Uncertainty comes in different shapes and sizes, and a number of academics and practitioners have attempted to differentiate its forms—although it has to be said they all seem to be uncertain as to how many types there are! This publication will give guidance on uncertainty for decision support purposes.

1.1.3.3 Version 1

For example, Michael Goldstein (2011) at Durham University identified some nine different sources of Uncertainty, albeit qualifying such uncertainties within the domain of computer modelling; to quote:

  1. parametric uncertainty (each model requires a, typically high dimensional, parametric specification),

  2. conditional uncertainty (uncertainty as to boundary conditions, initial conditions, and forcing functions),

  3. functional uncertainty (model evaluations take a long time, so the function is unknown almost everywhere),

  4. stochastic uncertainty (either the model is stochastic, or it should be),

  5. solution uncertainty (as the system equations can only be solved to some necessary level of approximation),

  6. structural uncertainty (the model only approximates the physical system),

  7. measurement uncertainty (as the model is calibrated against system data, all of which is measured with error),

  8. multi-model uncertainty (usually we have not one but many models related to the physical system),

  9. decision uncertainty (to use the model to influence real-world outcomes, we need to relate things in the world that we can influence to inputs to the simulator and through outputs to actual impacts; these links are uncertain).

Wow—and you thought Uncertainty was simple?

1.1.3.4 Version 2

A simpler, more practitioner-based classification of uncertainty has been put forward by Courtney et al. (2000) of the consulting company McKinsey & Company, in a memo entitled “Four levels of uncertainty (Strategy under Uncertainty)”, namely:

  • Level one: A clear enough future (where a single forecast provides a sufficiently precise basis for defining a strategy).

  • Level two: Alternative futures (one of a few discrete scenarios, usually with probabilities).

  • Level three: A range of futures (a limited number of key variables define the range, but the actual outcome may lie anywhere within it; there are no natural discrete scenarios).

  • Level four: True ambiguity (a number of dimensions of uncertainty interact to create an environment that is virtually impossible to predict at any level—it is impossible to identify a range of potential outcomes, let alone scenarios within a range; it might not even be possible to identify, much less predict, all the relevant variables that will define the future).

What Courtney, Kirkland, and Viguerie have described is less four types of uncertainty than a range of conditions across the Uncertainty/Risk spectrum, the only true uncertainty described being level four—true ambiguity. Its value, though, is that it encapsulates scalable uncertainty, albeit some of the conditions are closer to risk.

1.1.3.5 Version 3

The Uncertainty Toolkit for Analysts in Government (2016) identifies three types of uncertainty:

  1. Aleatory uncertainty—the things we know that we know (aka “known knowns”), relating to the inherent uncertainty that is always present in “underlying probabilistic variability”. In reality it acknowledges that in practice there is no such thing as certainty!

  2. Epistemic uncertainty—things that we know we do not know (aka “known unknowns”), arising from a lack of knowledge about the complexity of the system; assumptions are used to address gaps in the knowledge base.

  3. Ontological uncertainty—things that we do not know we do not know (aka “unknown unknowns”), based on no experience or knowledge whatsoever of an occurrence.

This interpretation is succinct and nearly fully comprehensive in terms of cognitive variants of uncertainty. It is, though, somewhat academic in its use of language: aleatory, epistemic, and ontological are not everyday terms used by practitioners when communicating with real-world decision makers. It is also essentially a representation of the Rumsfeld interpretation. However, as with the Rumsfeld version, it avoids identification of the fourth element of the Known/Unknown axis—“Unknown-knowns”.

1.1.3.6 Version 4

In 2011, Swedish methodologist Tom Ritchey (2011) specified another four types of uncertainty:

  • Risk, which he defines as a type of uncertainty based on quantitative probability. He reinforces the Knightian position by stating that if risk has a well-grounded probability, then there is no uncertainty at all.

  • Genuine uncertainty, conversely, embodies outcomes which cannot be ascribed probabilities.

  • Unspecified uncertainty, which Ritchey positions in relation to long-term future developments and which, as such, is “inherently ineradicable—you cannot get rid of it by trying to obtain more information about it, because the information needed to reduce it simply isn’t there”.

  • Agonistic uncertainty which “refers to a network of conscious agents (e.g. individuals, organisations, institutions or nations) acting concurrently and reacting to each other”, so that its development is unpredictable.

1.1.3.7 Version 5

Another interpretation of Uncertainty was put forward in 2020 by the former Governor of the Bank of England, Mervyn King, and the economist and FT journalist John Kay, who introduced the term “Radical Uncertainty”. They define “Radical Uncertainty” as the kind of uncertainty that statistical analysis cannot deal with (non-quantifiable risk). This interpretation is very much a hybrid: it is akin to Ritchey’s definition of “Genuine Uncertainty”, to Epistemic and Ontological uncertainty, and to an amalgam of Goldstein’s conditional, functional, solution, and decision uncertainties—but certainly Knightian!

1.1.3.8 Version 6

Yet another example of populating different types of uncertainty appears in “Decision Support Tools for Complex Decisions Under Uncertainty” (French 2018), where five types of uncertainty are put forward. These are stated as follows in the publication, with examples provided for each type:

  • Stochastic uncertainties (physical randomness and variations), e.g.:

    • Will the next card be an ace?

    • What will be the height of a randomly selected child in Year 7 in Surrey?

    • What proportion of car batteries will fail in the first year of use?

  • Epistemological uncertainties (lack of knowledge), e.g.:

    • What is happening?

    • What can we learn from the data?

    • What might our competitors do?

    • How good is our understanding of the causes of this phenomenon?

  • Analytical uncertainties (model fit and accuracy), e.g.:

    • How well do we know the model parameters?

    • How accurate are the calculations, given approximations made for tractability?

    • How well does that model fit the world?

  • Ambiguities (ill-defined meaning), e.g.:

    • What do we mean by “normal working conditions” for a machine?

    • What do we mean by “human error”?

  • Value uncertainties (ill-defined objectives), e.g.:

    • What do we mean by the patient being in “good health”?

    • What weight should we put on this objective relative to others?

    • What is the right—ethical—thing to do?

Simon French (the editor) goes on to say that the stochastic, epistemological, and analytical uncertainties relate largely to questions about the external environment whilst ambiguities and value uncertainties reflect uncertainty about ourselves.

1.1.3.9 Version 7

Another term used in relation to uncertainty is “Deep Uncertainty”—a form adopted by a mixed academic-practitioner group called “The Society for Decision Making Under Deep Uncertainty” (DMDU). The Society and publisher Springer in 2019 produced a book dedicated to the topic (Marchau et al., 2019).

The editors state that “Decision makers feel decreasing confidence in their ability to anticipate correctly future technological, economic, and social developments, future changes in the system they are trying to improve, or the multiplicity and time-varying preferences of stakeholders regarding the system’s outcomes”.

They define “deep uncertainty” situations as arising from actions taken over time in response to unpredictable evolving situations—in effect such situations are non-linear and asymmetric, and where the different stakeholders are often in disagreement.

Marchau et al. see decision-making in the context of deep uncertainty as requiring a paradigm that is not based on predictions of the future (known as the “predict-then-act” paradigm). Rather, the aim is to prepare and adapt, by tracking how the future evolves and allowing adaptations over time as more information or knowledge becomes available, so as to implement long-term strategies. The “track and adapt” approach explicitly acknowledges the deep uncertainty surrounding decision-making for uncertain events. In other words, the deep uncertainty environment requires continual, iterative, and objective monitoring of future events. As we shall see later, in Chap. 7, this interpretation is very much aligned with what can be termed Exploratory in relation to future scenarios.

Finally, we can add Rumsfeld’s own classification (2002) as Version 8 (included as a simplified alternative that is broadly understood):

  • Known-knowns

  • Known-unknowns

  • Unknown-unknowns

So, there we have eight interpretations of Uncertainty—as I said earlier, “confusing” isn’t it? Remember, such interpretations are not exclusive—I have attempted to present a mix of the academic and the practitioner to demonstrate the range of opinion on the subject—no doubt you will find others.

How do these different versions line up against one another? And how can we portray the different types of uncertainty within a workable and meaningful structure without oversimplifying its various sub-components? In Table 2.1 we present the eight versions as described, to identify the similarities and differences of interpretation. This combination of the different versions and interpretations of uncertainty will be synthesised into a template that will act as a core reference tool as we move through the programme.

Table 2.1 Interpretations of uncertainty

This exercise has, I believe, been worthwhile in that it has helped in:

  • Identifying the various types of uncertainty being bandied about by both practitioners and academics—(so at least exposing the reader to such versions).

  • Filtering the core components and versions of uncertainty that we shall be using in the book.

  • Creating the basis for a practical representation of the various forms of uncertainty that can be deployed when addressing various types of problem (see Chap. 3), and which can be used as a useful template for identifying major categories of uncertainty across the risk spectrum.

There are, unfortunately, two further conditions which muddy the waters of uncertainty—complexity and interconnectivity—which we will address before moving on to the main reference template cited above.

1.1.4 Complexity and Interconnectivity

Understanding uncertainty is unfortunately not just a matter of classifying its various components as predictable or identifiable. In an already confusing arena, the problem is compounded by matters of complexity. Before answering the question “what is complexity?”, it is important to distinguish between Complex and Complicated, and indeed “simple”. Kuosa (2012) notes that many things—such as a leaf—appear simple but on closer examination are highly complex. Conversely, if a system made up of a large number of parts can be described in terms of its individual components, that system is best described as complicated rather than complex—a modern jet aircraft, for example. Kuosa goes on to say that:

In complexity, the interaction between the system and its environment are of such a nature that the system as a whole cannot be fully understood simply by analysing its components. Moreover, these relationships are not fixed but shift and change, often as a result of self-organization.

Thus, in summary we can state that:

  • A system may be complicated, but have very low complexity.

  • A large number of parts does not generally imply high complexity. It does, in general, imply a complicated system (for example, a mechanical watch or clock).

  • Complexity implies capacity to surprise, to suddenly deliver unexpected behaviour.

  • In order to assess the amount of complexity it is necessary to take uncertainty into account, not just the number of parts.

  • The combination of complexity within a system with interconnectivity reflects the non-linearity and multidimensional interactions within the system.

What is complexity? The answer to this question is contentious. Theorists such as Mitchell (2009) state that “no single ‘science of complexity’ nor a single complexity theory exists yet”. She does, however, identify some common properties of complex systems, as having:

  • Complex collective behaviour—it being the collective actions of vast numbers of components that give rise to hard-to-predict and changing patterns of behaviour.

  • Signalling and information processing: all systems produce and use information and signals from both their internal and external environments.

  • Adaptation: many complex systems adapt—i.e. change behaviour to improve their chances of survival or success—through learning or evolutionary processes.

Mitchell goes on to propose a definition of the term “complex system”:

a system in which large networks of components with no central control and simple rules of operation give rise to complex collective behaviour, sophisticated information processing, and adaptation via learning or evolution.

It is to be noted that Mitchell does not specifically identify uncertainty per se but rather concentrates on the nature of a system as a network.

On the other hand, Jacek Marczyk (2009), a practitioner and theorist in the area of uncertainty and complexity management, adopts a more operationally pragmatic approach and states that complexity is a fundamental property of all systems, just like energy. He identifies complexity specifically (as opposed to Mitchell’s complex system) as being a function of structure and uncertainty (a toy sketch follows the list below), arising where there are:

  • Multiple information sources

  • And which are linked (inter-dependent)

  • And which are often uncertain

  • With an increasing number of links, and in the presence of high uncertainty, it becomes impossible to comprehend a system and to manage it. This corresponds to critical complexity.
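As a toy illustration only (this is not Marczyk’s actual metric; the link distributions below are hypothetical), one can sketch how a complexity score might grow with both the number of inter-dependencies and the uncertainty carried on each:

```python
# A toy complexity score: structure (number of links) combined with
# uncertainty (Shannon entropy of each link's possible states).
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical system: each inter-dependency carries a distribution over its
# possible states; the more spread out the distribution, the more uncertain.
links = [
    [0.9, 0.1],        # nearly deterministic link: low uncertainty
    [0.5, 0.5],        # maximally uncertain binary link
    [0.4, 0.3, 0.3],   # uncertain three-state link
]

complexity = sum(entropy(p) for p in links)
print(f"{len(links)} links, complexity score = {complexity:.2f} bits")
```

On this toy view, adding links or making existing links more uncertain both push the score up, echoing Marczyk’s point that comprehension fails when high link counts meet high uncertainty.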

Marczyk appears to go further than Mitchell by indicating that at certain levels of uncertainty “it becomes impossible to comprehend a system and to manage it”. His interpretation of high levels of uncertainty brings it much closer to a number of key characteristics of “wicked problems” (see Chap. 3), namely that:

  • There is no definitive formulation of a wicked problem

  • Wicked problems have no stopping rule

  • There is no immediate and no ultimate test of a solution to a wicked problem.

There is one area in which Marczyk can be challenged: he states that one can quantify the amount of structured information within a system and its “functional potential”. It can be argued that under conditions of high volatility and dynamism, such quantification is all but irrelevant.

1.1.4.1 Properties of Complexity

Marczyk further describes key properties of complexity as being where:

  • Rapidly rising complexity is observed prior to a crisis, an extreme event, or collapse.

  • Collapse is Nature’s most efficient mechanism of simplification (very prominent in social systems).

  • High complexity corresponds to high risk of contagion and fast stress propagation—i.e. matters can get easily out of hand by applying the wrong solutions to a perceived problem or indeed the perception of the problem itself may be erroneous.

  • Interconnectedness between system parts is dynamic and volatile, aggravated by incomplete identification of end points (cf. the number of software bugs continually being discovered in high-profile commercial software—and indeed in most software where complex coding protocols are used).

1.1.4.2 Relevance of Complexity

Whether or not Mitchell is correct that there is no agreed theory of complexity, the practitioner world understands well the concept of complexity and how it impacts their respective domains.

In the area of strategy development, problems of uncertainty and complexity are readily apparent. Senior executives, and by implication high-level practitioners, struggle to conceive or adapt strategies and to implement them whilst remaining relevant—something that is becoming increasingly difficult to achieve (Camillus 2008).

The real challenge is for a decision-making team to maximise the length of time it has to consider its situation before applying solutions in order to exploit opportunities, or to avoid threats, or unintended consequences.

A 2011 report published by the Economist Intelligence Unit (EIU) entitled “The Complexity Challenge” identified the growing debate about such issues and highlighted strategic management concerns and awareness. The report asked some 300 global senior executives how severely increasing complexity is affecting business. The main findings were:

  • Doing business has become more complex since the 2008 global financial crisis (and now we have the 2011 financial crisis as well)

  • Firms are finding it increasingly hard to cope with the rise in complexity

  • The single biggest cause of business complexity is greater expectation on the part of the customer

  • Complexity is exposing firms to new and more dangerous risks

  • Businesses are focusing on technological solutions to tackle complexity

  • A majority of firms have an organisational structure that may be adding to complexity.

Although the report is slanted mainly towards senior business executives, its findings can be ported over to apply to decision makers in general.

1.2 The Uncertainty Profile: From “Known-knowns” to “Unknown-unknowns”

1.2.1 Background: The Existential Poetry of Donald H. Rumsfeld

In 2002, Donald Rumsfeld, Defence Secretary in George W. Bush’s administration (the latter being a key evangelist of the Third Gulf WarFootnote 1), famously said the following at a press conference, to a largely cynical world, in response to a question about the lack of evidence linking the government of Iraq with the supply of weapons of mass destruction to terrorist groups:

As we know,

There are known knowns.

There are things we know we know.

We also know

There are known unknowns.

That is to say

We know there are some things

We do not know.

But there are also unknown unknowns,

The ones we don’t know we don’t know.Footnote 2

There have been many references to this statement—most notably related to the use of the term “unknown-unknowns”—and indeed the UK’s recent Uncertainty Toolkit for Analysts in Government, referred to in the previous section, uses the Rumsfeld comment almost verbatim as the basis for its three classifications of uncertainty. Yet the combination of the two variables used in the argument (known and unknown) lends itself to a 2×2 matrix, of which Rumsfeld’s statement (and the Uncertainty Toolkit) addresses only three quadrants. There is a fourth variant which is unaddressed—the Unknown-knowns.

Let us first look at some of the key vocabulary, terms, and concepts that can be inserted into the “Uncertainty Profile” and which, I believe, address a key element that has tended to be, if not overlooked, then under-represented in these other frameworks—the “unknown-knowns”.

Where are we in building a better understanding of Uncertainty and Risk? We have seen that there are a number of similar approaches to visualising the various contexts and conditions in which uncertainty and risk are present.

So, let us then explain the four quadrants or permutations of this uncertainty matrix further, namely:

  • Known-knowns

  • Known-unknowns

  • Unknown-knowns

  • Unknown-unknowns

As seen with the presentation of the risk spectrum in Fig. 2.1, and the discussion above, there are various forms of uncertainty and risk, often with occluded boundaries. Such outcomes or events, whether they reside at the risk or uncertain end of the spectrum, can be inserted into a matrix governed by Predictability and Visibility. How can these main axes be defined?

1.2.2 Event Predictability

Events can be either Predictable or Unpredictable. By “predictable” is meant “to be made known beforehand” or, simply, “capable of being foretold”. Thus, the range of options from an event being predictable to being unpredictable can run from something that is (almost) certain—such as “my alarm always goes off at 6.30 in the morning”—to the far less certain claim that “I always know what the weather will be like tomorrow without reading the forecast”. The predictable end of this range would qualify as an Aleatory uncertainty or Ritchey’s Risk.

Thus, the ability to identify how well an outcome can be predicted depends, first, on how much control we have in making an event happen and, secondly, on whether historical data (or experience, if available) allows a very high probability to be attached to the confirmation that an outcome can happen. For each event that may have an impact on our lives, the best we can do is attach varying levels of probability.

1.2.3 Event Visibility

Can “what type of event may occur” be identified? How does event “Visibility” differ from “Predictability”? Visibility implies being able to determine what type of event may impact us, as opposed to the likelihood of that event occurring—can we visualise it? This requires identifying in greater detail the kind of event or events which can have an impact.

Before an event can be predicted it is important to identify (make visible) those events which are likely to have the greatest impact from a subjective standpoint—tempered by the probability of such an event happening.

Of course, both elements are interrelated—some events are predictable and identifiable, whilst at the other extreme there exist future events that are neither identifiable nor predictable. In the next section we present a template as a workable tool that readers can use on projects and which forms part of the evolutionary use of MTTs when working with the various Uncertainty components.

1.2.4 The Uncertainty Profile Template

The relationship between event visibility and event predictability can be visualised in the following matrix:
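As a complement to the figure, a minimal sketch of the same mapping in code (the quadrant numbering follows Fig. 2.2 and the sections below):

```python
# A minimal sketch of the Uncertainty Profile as a 2x2 mapping
# (quadrant numbering as in Fig. 2.2).

def quadrant(predictable: bool, identifiable: bool) -> str:
    """Map the two axes of the Uncertainty Profile onto its four quadrants."""
    if predictable and identifiable:
        return "Q1: Known-knowns"
    if predictable and not identifiable:
        return "Q2: Known-unknowns"
    if not predictable and identifiable:
        return "Q3: Unknown-knowns"
    return "Q4: Unknown-unknowns"

print(quadrant(True, True))    # Q1: Known-knowns
print(quadrant(False, True))   # Q3: Unknown-knowns
```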

Following the array of interpretations of uncertainty presented in Table 2.1, the schema/template in Fig. 2.2 is a synthesis that acts as a core reference tool (or leitmotif) as we move through the guidebook.

Fig. 2.2 Profiling uncertainty template (a 2×2 matrix of event identifiability against event predictability, giving the four known/unknown quadrants)

Those events which any decision maker must take into account can be positioned as follows:

1.2.5 Quadrant 1 (Q1): Predictable and Identifiable (Known-knowns)

These events are likely to be extrapolations of events and trends that have already occurred, such as the likelihood of further regulatory and compliance measures. In this quadrant, information required to help make a decision is likely to be in formal policy documents, methods and calculations, and quantitative data previously recorded and validated.

1.2.6 Quadrant 2 (Q2): Predictable but Not Yet Identifiable (Known-unknowns)

A typical example would be the 7 July 2005 Underground and bus bombings in London. The public had been warned a number of times by the police and security forces that such an event would occur (i.e. it was predictable); it was just a matter of when. These events have been called “Inevitable Surprises” by Peter Schwartz (2003) (other examples being a major earthquake along the San Andreas fault in California or the next, overdue, major eruption of Vesuvius). Such events, being predictable but not yet visible, can be addressed by advanced, foresight-based contingency planning or emergency response, so that when they do happen the consequences can be mitigated to some degree. (Is Covid-19 an inevitable surprise?) It is worth stating that identifying an event as an inevitable surprise and putting contingency plans in place offers little certainty that substantial mitigation of outcomes will occur; responses to such events can still go very wrong. These plans have to be robust enough to address a wide range of event “surprises”, as they challenge our ability to visualise the unthinkable.

Two recent tragic events indicate that operational foresight weaknesses still occur even when such events might have been seen as “an inevitable surprise”. The first is the Grenfell Tower fire, with its prior warnings about the flammability of the cladding, combined with the lack of a fire ladder able to reach fire sources in high-rise buildings; the second is the Manchester Arena bombing, where paramedic and emergency response teams were delayed in gaining access to the injured inside the Arena due to procedural safety measures, imposed because of the possibility of secondary explosive devices.

We now jump to quadrant 4—the third section of Rumsfeld’s statement.

1.2.7 Quadrant 4 (Q4): Unpredictable and Not Identifiable (Unknown-unknowns)

Events here move into the realm of unknown territory—“Terra Incognita”. At the most extreme these are Rumsfeld’s “unknown unknowns”—the ones we do not know we do not know. Such events have also been called “Black Swans”, and as defined by Taleb (2007) the term denotes a true unknown unknown. However, the reference to an event being “a Black Swan”Footnote 3 has been hi-jacked and used to justify what is, in essence, a straightforward lack of foresight and proper due diligence. After the 2008 financial crisis, commentators, bankers, and financiers alike were using the term “Black Swan event” as a reason why the crisis could not have been foreseen. This is a misrepresentation of the term, as the event was foreseen by numerous but unfashionable commentators, such as Nouriel Roubini (2010) and Raghuram Rajan (2005), who were simply ignored. 2008 was not a true “black swan event”, nor is the Covid-19 pandemic. (We will see later in this section how Animal Metaphors have been frequently used to explain problematic situations.)

Such events can, though, be described as “pseudo black swans”. It can be argued that if we can think it, it is possible; if we cannot, then it is an “unknown unknown” or “true black swan” event, and time should not be spent worrying about the latter. On the other hand, one reason for not thinking about “IT” may be just a lack of vision and imagination, so that its rightful place is in quadrant 3! Anything else, no matter how improbable, does qualify for consideration, and methods should be adopted which might in some way allow us to recognise such an eventuality and to develop (robust) contingency plans to mitigate its impact.

The cataclysmic nature of such events makes for uncomfortable reading.Footnote 4 In addition to the maintenance of entrenched paradigms by various vested interests, decision makers are required to enter the “zone of uncomfortable debate”, the ZOUD (Bowman, 1995), which many organisations, policy makers, designers, and practitioners find difficult to confront—again, see the section at the end of this chapter on Animal Metaphors.

1.2.8 Quadrant 3 (Q3): Unpredictable and Identifiable (Unknown-knowns)

In many cases this cell is a flipped version of its Quadrant 2 (predictable but not identifiable) partner, except that the level of probability is far less certain. The level of certainty is reduced not only by how far in the future an event might occur but is further eroded by the numerous permutations influencing the outcome of an event in the intervening period. An example in the sphere of international relations might be that confronting analysts as to what will happen in the Middle East over the coming 12 months in a post-Trump world. We can identify the areas of concern—Iran/Saudi tensions, Yemen, Libya, Syria, and the impact of the Taliban taking charge in Afghanistan—but the outcomes are highly unpredictable due to the variety of stakeholders with interests in the region, each with their own agenda (the countries themselves, the USA, Russia, the UK, France, China, Turkey, the UN). Now, in early 2022, it is Ukraine on which the attention of a variety of actors is heavily focussed.

These Unknown-knowns can be interpreted as meaning “I don’t know what I know”, which leads analysts to miss weak signals, and/or “I think I know, but it turns out that I don’t”—a form of hubris. Either way, weak signals can be manifest but are often ignored due to a variety of behavioural factors and biases. Behavioural factors are to the fore in this quadrant, with actors suffering from amnesia, denial, blind spots, silo thinking, and hubris—sometimes a combination of such behaviours—any of which can distort judgement. Event indicators in this quadrant are also asymmetric and non-linear, which makes it difficult to ascertain the relative importance of both individual and clustered signals, and challenges the inclination to weight the variables too early. The dictum that policy-driven evidence should be challenged by evidence-based policy is pertinent to this quadrant.

As a result, this is the most insidious, nay pernicious, of the quadrants, largely due to the behavioural factors identified above—we think we know what or where the problem is or might occur, but can we handle the inherent complexities, and do we even have the tools, let alone the desire to deploy them, to address such complexity-based uncertainty?

Groupthink and dogma-based policy are the real enemies here, and the issue is whether the stakeholders themselves are prepared to explore outcomes which may produce uncomfortable truths—a theme expounded by Nik Gowing and Chris Langdon (2017). In other words, weak signal identification is as much challenged by individual and group willingness to accept such signals, through bias intervention, as by the strength of the signal itself.

Quadrants 3 and 2, and to some extent even Quadrant 4, require an open-mindedness not always present in organisations. In addition to the maintenance of entrenched paradigms by various vested interests, decision makers are required to enter the “zone of uncomfortable debate”, the ZOUD, which many organisations, policy makers, designers, and practitioners find difficult to confront (cf. the animal metaphors below). The disruptive and often cataclysmic nature of such events makes for uncomfortable reading—thinking about the unthinkable—which is why, when they occur, they are too readily deemed to belong to Quadrant 4: in the vast majority of cases they do not, and merely demonstrate a lack of foresight.

Similarly, it should be pointed out that events such as the “Kodak moment”Footnote 5 are not black swan events—they could be foreseen and demonstrate poor judgement by management.

Weak signals, which by definition may have already become manifest in some way, tend to be allocated to the third quadrant of Unknown-knowns (as per the major area bordered by a thin red line in Fig. 2.2). By the same definition, it can be argued that they cannot exist in the unknown-unknown quadrant, because they are already manifest as weak signals. In turn, there is a strong justification that weak signals actually reside in Q2, but they require vision if their impacts are to be acknowledged as possible.

In the end, management has to accept reality and realise that, in order to avoid inevitable outcomes, it has to be more readily amenable to acknowledging that precision and the future are incompatible terms. Thus, those events which, as a minimum, any decision maker must take into account are represented in the top left-hand quadrant.

Mapping “wicked problems” and messes (see next chapter) against the above schematic is not precise, as the boundaries of a problem can be fuzzy. It can be argued that all problems identified as having characteristics of Q4—“Terra Incognita”—can automatically be classified as “wicked” type problems. Yet in situations where some elements of the problem data (both hard and soft) are lost or misplaced, degrees of “wickedness” can manifest themselves in Q3 and, to a lesser extent, in Q2.

It will be this schema that will act as a major heuristic device as we develop the knowledge base when looking at the deconstructed components of Uncertainty. The importance of the schema presented in Fig. 2.2 will become apparent as we explore the other components in Part A and beyond.

1.3 Methods, Tools, and Techniques (MTTs)

1.3.1 Other Templates

In addition to the main “Uncertainty Profile” template presented in Fig. 2.2, there are of course numerous other such templates which have been devised over the years. Two such examples are presented below, “Cynefin” and “VUCA”, which the user may wish to work with alongside the Uncertainty Profile introduced above. The text has already referred to a number of animal metaphors used to explain various states of knowledge, or lack thereof, in an organisational setting—such as Black and Grey Swans. To end this chapter, a number of such animal metaphors are highlighted, with explanations as to how they have been deployed.

1.3.2 Cynefin

In 1999, David Snowden, when working at IBM, produced a different conceptual framework for categorising uncertainty called Cynefin, a Welsh word for habitat, used here to describe five different decision-making contexts; it is represented in Fig. 2.3 below.

The central cross-over of the axes represents the fifth context: “disorder” and confusion.

These contexts or domains have been changed by Snowden and colleagues over the years, and the main titles used here largely reflect the latest nomenclature. In the Known Space, also called Obvious, relationships between cause and effect are well understood, so we will know what will happen if we take a specific action. All systems and behaviours can be fully modelled, so that the consequences of any course of action can be predicted with near certainty. In such contexts, decision-making tends to take the form of recognising patterns and responding to them with well-rehearsed actions, often in the form of established policy documents and laws derived from familiarity. This implies that we have some certainty, or at least high probability, about what will happen as a result of any action. There will be little ambiguity or value uncertainty in such contexts. This could also be described as a “known-known”?

In the Knowable Space, earlier called Complicated, cause and effect relationships are generally understood, but for any specific decision further data is needed before the consequences of any action can be predicted with certainty. The decision makers will also face epistemological (knowledge), stochastic (probabilistic), and analytical uncertainties, as per the types of uncertainty outlined previously. Decision analysis and support will include the fitting and use of models to forecast the potential outcomes of actions with appropriate levels of uncertainty. As decision makers will have experienced such situations before, they can mitigate the uncertainties by detailed contingency planning. One may use the term “known-unknowns” for this element, or “inevitable surprise”.

In the Complex Space, decision-making faces many poorly understood, interacting causes and effects. Knowledge is at best qualitative: there are simply too many potential interactions to disentangle particular causes and effects. There are no precise quantitative models to predict system behaviours such as in the Known and Knowable spaces. Decision analysis is still possible, but its style will be broader, with less emphasis on details, more focus on exploring judgement and issues, and on developing broad strategies that are flexible enough to accommodate changes as the situation evolves. Analysis may begin and, perhaps, end with much more informal qualitative models. Issues here are not just complex problems but have elements of “wickedness” as well (see the next chapter for more details on this).

Contexts in the Chaotic Space involve events and behaviours beyond our current experience, with no obvious candidates for cause and effect—in effect, “unknown-unknowns”. Decision-making cannot be based upon analysis because there are no concepts of how to separate entities and predict their interactions. These are fully “wicked problems”. Decision makers will need to take probing actions and see what happens, until they can make some sort of sense of the situation, gradually drawing the context back into one of the other spaces (if this is possible).

The central cross-over at the intersection of the axes in Fig. 2.3 below is sometimes called the Disordered Space. It simply refers to those contexts that we have not had time to categorise. The Disordered Space and the Chaotic Space are far from the same: contexts in the former may well lie in the Known, Knowable, or Complex Spaces; we just need to recognise that they do. Those in the latter will be completely novel. Using this interpretation of “disorder”, one could state that they are “unknown-knowns”.

Fig. 2.3 The Cynefin model (four spaces—Known, Knowable, Complex, and Chaotic—distinguished by how far cause and effect can be determined: understood and predictable in the Known space, determinable with sufficient data in the Knowable space, determinable only after the event in the Complex space, and not discernible in the Chaotic space)

1.3.3 VUCA

Another common approach to visualising various types of uncertainty is VUCA. This acronym, dating from 1987 and developed from Bennis and Nanus’s original 1985 publication on leadership, stands for Volatility, Uncertainty, Complexity, and Ambiguity. Following the collapse of the Soviet Union in the early 1990s, the world became more unpredictable and a new way of identifying and reacting to potential threats was required; the adoption of the VUCA template appeared to fit well with the new political landscape. As a set, the four elements characterise potential organisational uncertainty in terms of both systemic and behavioural failures. Thus:

  • V = Volatility signifies both the nature and dynamics of change, and the nature and speed of change forces and catalysts.

  • U = Uncertainty reflects the lack of predictability and the prospects for surprise.

  • C = Complexity indicates the multiplicity of interconnectivities present within a particular system.

  • A = Ambiguity highlights the fuzziness of reality, the potential for misreads, and the mixed meanings of conditions.

These elements illustrate the context in which organisations envisage their current and future state in an asymmetric world and encourage management to challenge assumptions that the (immediate) future will resemble the recent past. For most organisations—business, government, education, health, military, and others—VUCA is a simple acronym to prompt organisational preparedness, anticipation, evolution, and intervention.

Bennett and Lemoine (2014) in the Harvard Business Review (HBR) produced a useful guide where each of the VUCA elements is explained in terms of its characteristic and approach. In addition, an example is provided for each element as well (Fig. 2.4).

Fig. 2.4 The VUCA matrix (each of the four elements—Volatility, Uncertainty, Complexity, and Ambiguity—is described in terms of its characteristics, an example, and an approach)

Of course, the four elements within the VUCA acronym can be used in a variety of combinations. Table 2.2 shows some 13 different combinatory options in addition to the full VUCA acronym.

Table 2.2 Different options emanating from the VUCA matrix
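For readers who wish to enumerate the combinations themselves, a minimal sketch (simple unordered combinations of the four letters; the exact set listed in Table 2.2 may differ):

```python
# Enumerate all non-empty combinations of the four VUCA elements.
from itertools import combinations

elements = ["V", "U", "C", "A"]
subsets = [
    "".join(combo)
    for size in range(1, len(elements) + 1)
    for combo in combinations(elements, size)
]

print(len(subsets))  # 15, including the full "VUCA"
print(subsets)       # ['V', 'U', 'C', 'A', 'VU', 'VC', ..., 'VUCA']
```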

1.3.4 Animal Metaphors

Although not strictly an MTT, metaphor is regularly used as an aid to introduce or simplify a concept. A number of metaphors can be used in the explanation of uncertainty—here are a few that are in regular use. Animal metaphors generally highlight human and organisational frailties and limitations.

1.3.4.1 Which Metaphor to Use?

In recent times there has been a proliferation of animal-based metaphors to describe both situations and organisational behaviour in the broader domain of uncertainty. Two animals have been especially prominent—Elephants and Swans. In this note I have outlined the most common types and their variants. Whilst metaphors can be a useful shorthand to describe a situation, the danger is not only overuse but misinterpretation and misuse.

The first “beast” is the elephant and common metaphorical expressions include:

  • The Elephant in the room: This idiom is used to describe a major issue, especially a controversial one, that is obvious (elephantine in size) but that no one wants to discuss, as it arouses behavioural responses which make people feel uncomfortable, or is embarrassing, even inflammatory or dangerous. A typical example could be where management is discussing staff diversity but there are no women, people from an ethnic minority background, or staff from a different social background in the management group itself.

  • Black Elephants: A more recent addition to the business metaphor library (after Thomas Friedman, 2014; Peter Ho, 2017), this is a cross between a black swan and “the elephant in the room”: a problem that is visible to everyone, yet one that no one wants to deal with, so they pretend it is not there. When such a problem blows up, a common reaction is one of shock and surprise, and it is treated as though it were a black swan. In reality, it should not have come as a surprise.

  • Grey Rhinos: Another big beast metaphor, similar in size and habitat to the elephant, is the rhino. Michele Wucker (2016) introduced the term “Grey Rhino”, defined as a highly probable, high-impact yet neglected threat. Although similar to elephants in the room and black swans, grey rhinos are not random surprises; they occur after a series of warnings and visible evidence. Interestingly enough, Wucker states that the 2008 financial crash was not a black swan event but a grey rhino: it was evident in advance but was selectively ignored in spite of the growing evidence. This is almost identical to an Unknown-known as described in the Quadrant 3 profile of Fig. 2.2 earlier.

The second high-profile animal used in metaphors is the swan largely thanks to Nassim Nicholas Taleb’s 2007 book “The Black Swan”.

  • Black Swan: Taleb’s metaphor has firmly entered the management lexicon but has been frequently misused. A black swan is an extremely rare event with severe consequences. It is strictly an Unknown-unknown, but it is increasingly used to describe low-probability/high-impact events which are highly unpredictable. Unfortunately, the term has been misused to describe events which could or should have been predicted beforehand, such as the 2008 financial crash (which of course was predicted but ignored). Is this also just a grey rhino?

  • Grey Swans: This variant of the swan metaphor was introduced around 2012 by a team at management consultants PwC (2012). The paper states that although black swan events should occur only at unpredictable (and rare) intervals, “recent experience suggests events that fit the definition of black swans are happening more and more frequently. So, are black swans actually turning grey? Rather than being infrequent ‘outlier’ events, are they now just part of a faster-changing and more uncertain world”? PwC observe that organisations can have blind spots from which high-impact risks emerge.

Whilst there has been a profusion of metaphors—particularly since Taleb’s 2007 book—in essence elephants, swans, and rhinos are all quite similar in trying to use metaphor when describing the occurrence of uncertain events, or more specifically our (behavioural) responses to them.

I see the elephant and rhino metaphors as reflecting largely behavioural responses, individually and organisationally, in dealing with unpleasant yet highly visible circumstances—if the parties are prepared to look harder and overcome prejudice and bias.

On the other hand, the swan metaphors seem much more allied to the identification, or lack of identification, of risk and uncertainty. The main area of misrepresentation appears to be a lack of understanding of the real nature of uncertainty in the term “black swan”—which should be defined as an “unknown-unknown”. In numerous instances the term has been hi-jacked to justify a failure to act on an event even when evidence, if only a weak signal, was present. My preferred term, rather than grey rhino or grey swan, is to call such event profiles “pseudo-black swans”—a more pejorative description for a paucity of foresight, especially when accompanied by behavioural failures such as bias, groupthink, silo mentality, etc.

So, in conclusion, I recommend we use elephant metaphors for behaviourally driven responses in decision-making and the two swan metaphors when faced with different levels of uncertainty. You choose!

2 Summary

This chapter has set out to introduce a better understanding of the difference between risk and uncertainty. In spite of the numerous forms of uncertainty, we have synthesised these into four main variants presented as the “Uncertainty” matrix. This can be used as a template for a variety of scenarios, offering the analyst or decision maker a starting point for the further exploration of a problem with varying levels of uncertainty. However, understanding the nature of the problem itself is also critical to narrowing down the variables to be used when developing such scenarios. This is the topic of the following chapter.