
1 Introduction

Having identified and deconstructed the major analytical components of uncertainty in Chaps. 2–6, we need a suitable vehicle that enables us to explore uncertain environments. Scenarios provide such a vehicle.

The term “scenario”, now in common use, typically denotes stories or narratives of alternative possible futures. Kahn and Wiener, in their 1967 book The Year 2000, provided one of the earliest formal definitions of scenarios: “…a hypothetical sequence of events constructed for the purpose of focusing attention on causal events and decision points.” In the corporate world, scenarios have been used by the petroleum giant Shell (2005) since the mid-1960s under the guidance of pioneers such as Pierre Wack and Kees van der Heijden (1996). This led to a broader definition whereby scenarios were seen as being “…descriptions of possible futures that reflect different perspectives on the past, present, and future” (Van Notten et al., 2003).

Scenarios are descriptions of alternative development paths of an issue. They are not predictions of the future per se but help to explore what could happen and how to prepare for various contingencies (Kuosa & Stucki, 2020). Stakeholder participation and collaboration are essential to the scenario activity. Ringland et al. (2012) make a distinction between scenarios and forecasting in that “scenarios explore the space of uncertainties in defining possible futures”, whilst forecasts tend to be used more for anticipating timing in relation to specific stimuli such as technology. Ringland does point out, though, that there is no reason not to integrate more specific forecasts within a broader scenario-based horizon.

As was flagged in the previous chapter, scenarios need to be seen within the context of an ongoing, long-term, “closed-loop” organisational process and provide a useful tool for generating shared forward views, helping to align strategic action across an organisation on its journey into the future. The main purpose of a scenario is to guide exploration of possible future states with the best scenarios describing alternative future outcomes that diverge significantly from the present (Curry & Schultz, 2009) and thus avoid falling into the trap that the future will generally resemble the past. Scenarios can help us look out for surprises!

Like Ringland, Curry and Schultz also emphasise the collaborative, cross-functional nature of carrying out scenario activities, stating that it “means creating participatory processes: scenarios create new behaviour only insofar as they create new patterns of thinking across a significant population within an organisation”. As will be indicated in Chaps. 9 and 10, behavioural considerations are a major component when confronted with decision-making governed by an uncertain landscape.

One critique of certain scenario practices and their output (List, 2004) is that too much consists of “snapshot” scenarios, “which merely describe the future conditions without explaining how they evolved”. In other words, such scenarios lack sufficiently comprehensive, yet all-important, accompanying narratives. We saw in Chap. 6 how methods such as causal layered analysis, and an awareness of history and its litanies, are crucial in establishing the deep roots of a problem. This theme will be picked up in the next section when introducing reactive and exploratory scenario approaches.

Voros states that the creation of scenarios requires an in-depth process of information gathering and careful analysis. He goes on to add that “scenarios based solely on trends and forecasts will generate a very narrow range of alternative potential futures” and that where decision-makers place too much credibility in hard/quantitative data, such organisations fall into the trap of the dictum identified in Chap. 3 where “the appearance of precision through quantification can convey a validity that cannot always be justified”. In this guidebook, the primary resource for such a strategic foresight exercise has been presented in Chaps. 2–6, “Uncertainty Deconstructed”, which is why it is recommended that elements within the template are used to prepare the groundwork for scenario development.

2 Lenses

In this chapter, I shall be using scenarios as seen through two distinct lenses—the reactive and the exploratory (plus a hybrid).

2.1 Reactive

A reactive scenario is defined as being how the future may roll out based on a current problem or issue (one that has manifested itself) as the main starting point or driver. For example, the Global Risks Report 2021, recently published by the World Economic Forum, looks at the future impact of the COVID-19 outbreak based on three horizons: short term up to 2 years, medium term 3–5 years, and long term 5–10 years. Note that the problem here is that such a scenario relates to a single discrete event rather than potential asymmetric exponential effects based on interconnecting trends and events. To re-iterate, the WEF report offers scenarios in reaction to a specific event that has already manifested itself. The additional danger of such an approach is the tendency to marginalise tangential, second-order, third-order, or cumulative events and effects in the scenario chain.

Reactive scenarios are problem-oriented as they seek to explore how society and organisations may respond to, usually, shorter-term challenges. Voros (2001) claims it is where most futures work takes place and “often touches upon the ‘big-picture’ problems”. The event has happened and the main response is to ask “how do we deal with it?” (e.g. the COVID-19 pandemic).

2.2 Exploratory

An exploratory scenario, on the other hand, is much broader in scope as it seeks to identify both observable and latent drivers or trends over various future time horizons—in effect multiple futures with a much larger range of possible outcomes, impacted by weak signals and outliers. Peter Schwartz (2003) said it simply: “What has not been imagined will not be foreseen…in time.” An exploratory approach comes with far fewer preconceptions about what the main drivers are when exploring the future, or rather future uncertainties. Indeed, some of these drivers or indicators may not yet have been sighted, or may not even have emerged. It offers the analyst the freedom to investigate a more expansive array of future (non-discrete) outcomes, using an array of methods, techniques, and tools to help in the investigation.

Exploratory analysis is engaged much more with issues of foresight than with responding to one specific main driver (the reactive case). It involves imagining multiple futures—whether that future is tomorrow or months, years, or decades from now. The main challenge here is how to get decision-makers and others to listen out for and filter an array of signals which may never materialise—the classic low occurrence/high impact scenario. Exploratory analysis is also more open-ended than the reactive approach—it is fuzzier—with varying time horizons, inputs and outputs, and resource commitments, and second and subsequent order occurrences can be non-linear. This makes it more difficult for decision-makers to grasp the essentials, let alone identify them, when formulating policy—formulation that is subject to asymmetrically evolving challenges which greatly reduce the efficacy of the traditional planning cycles and methods upon which management still relies too heavily.

The exploratory approach requires that an organisation be more prepared to formalise the foresight process as a continuing strategic AND operational activity in its own right rather than in reactive mode. It should be highlighted nonetheless that such a mind-set should not be siphoned off just to one department or division (or sub-contracted out to consultants) but be integral to all functions, strategic and operational, within the organisation.

This more challenging form of analysis with its range of possible futures (as represented in the futures cone) is based on major levels of interconnectivity. It is here that the programme seeks to offer enhanced structured insight processes for decision- and policy-makers as well as decision support analysts—and where current signals are weak, overlooked, or indeed ignored, inadvertently or deliberately. We are concerned less with the major event itself than with the secondary (second-order), tertiary, and further layers which may be derived from any singular event and which in turn may generate their own causal and non-causal effects. Moreover, these derivative triggers are often asymmetric and non-linear in impact, adding to the difficulties of carrying out foresight exercises. Linear forecasting approaches are not realistic in such circumstances, and therefore futures analysis must be a continuous activity unconstrained by formal planning cycles—after all, a pandemic doesn’t recognise planning cycles, nor does climate change. Thus, the exploration of what can be termed derivative scenarios is crucial to the process, as they can manifest themselves not just in exploratory mode but in reactive mode as well.

It is strange that in an era where our access to knowledge and information and its litany, as well as the volume of data itself, is greater than ever before, we still struggle to identify the future, or rather optional or possible futures. Perhaps there is too much information—too much noise and too few easily identifiable signals? In addition, when, in an attempt to bring order, we seek out new information technologies such as data analytics and AI methods, we need to be aware that even these “neutral” algorithmic approaches are subject to originator bias—as has been discussed by a number of commentators (O’Neil, 2016; Wachter-Boettcher, 2017; Mau, 2019).

There are three additional challenges which are related and need to be considered. First, in some cases the evidence is in front of our eyes, but we do not see it, or do not recognise the significance of what we are seeing. We are surprised by the result. Alternatively, there are occasions when the evidence is not a reliable guide to sudden shifts. In both cases, surprise manifests itself all too often, and we need to ask not only whether our foresight approaches are robust enough but whether our own thought processes are robust enough. Finally, there is the danger of relying solely on reactive scenarios, so that decision- and policy-makers spend too much time in response mode and reduce their chances of exploring “potential” future events—whether such events be low occurrence/high impact or not.

I’d like to clarify a further point in relation to the reactive and exploratory variants. As mentioned, a reactive scenario uses an actual event occurrence as its starting point. It will then use foresight methods when exploring subsequent potential future developments in specific response to such an event—e.g. COVID-19—irrespective of whether we should have seen it coming. It has already been stated that the exploratory approach is more open-ended, and by definition exploratory, and is less constrained in its vision of the future both in terms of ideas generated and the length of the time horizon. On the other hand, there is no reason why the exploratory approach should not be used to examine an initial single identifiable issue as its starting point, as long as that issue has been identified at some point before the horizon is reached—possibly in the form of a weak signal or outlier—or as a recognised ongoing problem which has not been fully addressed or resolved. Topics such as social mobility and social inequality come to mind as examples and, of course, climate change.

The main advantage of offering a structured and holistic approach to the topic is that future projections, having passed through a particular time horizon, can then be audited against actuals. Variances can be identified and rationales for such variances articulated, so that the inputs into the structure can be improved over time and future errors mitigated—offering the insurance of iteration as a learning and feedback process.

Within both the reactive and exploratory scenarios the role of the social and behavioural sciences must not be overlooked, as many of the issues being addressed which make up the dynamics of a scenario have socially based inputs and outputs. Part A3 will highlight such behavioural factors as being a critical indicator of how problems are created and how, or how not, they may be identified and resolved.

An alternative approach for conducting exploratory scenarios, and worth reviewing, was developed in the 1970s by Jim Dator of the Hawai’i Research Centre for Futures Studies (2002), and uses a workshop-based forecasting technique called “incasting” (Curry & Schultz, 2009). Participants are presented with roughly defined scenarios to explore and are then required to add details to the scenarios, using their creative imaginations. This is almost akin to a form of script writing that a science fiction author might employ, and in Chap. 6 we saw what S-F could offer in scenario formulation. “With organisations, participants may be asked to consider how they would redefine, reinvent, or otherwise transform their mission, activities, services, or products to succeed in the conditions of each scenario” (Curry & Schultz, 2009). This tool aims to increase flexibility when planning futures, boost creativity, and explore what strengths and weaknesses might emerge during the changes taking place in the scenario.

2.3 Scenarios as a Design Process

Scenario development can be defined as a design activity which at inception is unstructured and faced with an array of uncertainties. If not addressed early enough, these uncertainties can gestate into undesirable outcomes which the design team will find difficult to redress at later stages in the project (Garvey & Childs, 2016)—especially where the project is constrained by resources (time, money, people). To guard against such circumstances occurring, the design team has to understand both the nature of the problem facing it and the nature of the uncertainties contained within the problem space.

2.4 So, What Is the Design Process?

Nelson and Stolterman (2012) state that “Design is the ability to imagine that-which-does-not-yet exist, to make it appear in concrete form as a new, purposeful addition to the real world”. In other words, it can be argued that early-stage design, whether for products, services, or concepts, contains numerous uncertainties.

What can be said is that, at the beginning of a new design project (and this can also be interpreted as identifying a scenario), a problem exists for the designer or the design team caused by uncertainty of outcome (and for designer we can substitute analyst). Indeed, the further away a design concept is from realisation as a finished item, the more it is prone to varying conditions of uncertainty and asymmetric inflexions.

Garvey and Childs (2016) identify further areas of specificity by stating, “Physical design methods and the behavioural responses to such design (many of which are not quantifiable), are highly complex, exacerbated by high levels of interconnectivity. This is not just due to the variety of components that have to be considered in the design process (physical complexity), but to intangible factors inherent within the nature of individual and group behaviour in response to designed objects”.

Sounds familiar, doesn’t it?

2.5 Strategic Options Analysis or “What If” Scenarios: An Exploratory Approach

We shall see in Chaps. 9 and 10 how behavioural biases at all levels (individual, group, organisational) can constrain the generation, and hinder the manifestation, of ideas under conditions of uncertainty. The exploratory mind-set which encourages us to ask the question “what if?” is paramount if scenarios are truly to provide a variety of “stretched” alternatives which might happen. In Chap. 3, we were introduced to the importance of being able to structure a problem prior to actually attempting to solve it (that is, if the problem concerned is “solvable”—and unfortunately not all problems are).

One of the key questions I’ve always tried to answer is “who decides what the inputs are when constructing a scenario?”—a question perhaps more crucial than picking the scenario in the first place, whether viewed through a reactive or an exploratory lens. There are a number of methods the reader may have heard of, such as “brainstorming” (and its variants) and mind maps. The common weakness of such methods is that groups engaged in the process can be overly influenced by a dominant member or suffer from groupthink, hubris, and/or adherence to rigid ideological positions, often put forward by the dominant participant themselves.

I often recall the old story of the American tourist who gets lost trying to look for the correct route to Limerick in Ireland, and on finding the local Garda (police) officer asks how to get there—to which the policeman replied, “well if I were you Sir, I wouldn’t be starting from here”. Whilst seen as an example of Irish quirky humour, the response is actually correct—too often people ask the wrong question from where they are standing.

A method expanded upon in the MTT section of this chapter is eminently suitable in helping decision-makers and analysts to address the issue of defining the boundaries of any one scenario. Unfortunately, it has a rather scary title of morphological analysis (MA) but may be better understood using the term “strategic options analysis” (see Sect. 7.5.1 below).

3 Types of Outcome: A Work Through of a Tentative Options Analysis Process

Setting the Scene

So, what types of outcome should we anticipate as potential future scenarios? Various categories of outcome can be identified. Whilst it is generally assumed that the further one looks into the future the greater the uncertainty, we should not take this as a given. There are black swan events but not as many as we are often told there are—and I refer here to those characteristics of quadrant 3 in the uncertainty profile introduced in Chap. 2—or “pseudo-black swans”.

Outcomes which the scenario process allows us to explore come in a variety of forms, albeit that like uncertainty, numerous different interpretations have been put forward. For the purposes of this document, a number of different viewpoints by academics and researchers are reviewed. From this review, a hybrid schema of types of outcome has been developed. Following a further editing process (as will be explained), the key components will be positioned into their respective quadrants in the uncertainty profile model.

3.1 Towards a Synthesis of Scenario Options

We have introduced a number of different but overlapping interpretations as to how we can represent scenario-based outcomes. We shall now bring together the various strands so as to provide the user with a template for classifying different types of scenarios with varying levels of certainty/uncertainty and over a range of time periods. This is followed by allocating the various types of scenario outcome options to the four quadrants within the risk/uncertainty profile matrix. The objective here will be to provide guidance to practitioners as to the range of pathways expressed in the form of narratives that they may follow when confronted by uncertainty and in relation to reactive and/or exploratory responses.

3.2 Strategic Options Analysis for Different Scenarios

Synthesising the various scenario options put forward by the researchers referred to earlier, such as Voros, Marchau, and Kuosa, I shall now present the different but often overlapping interpretations in the form of a problem space or options landscape.

The four main variables or parameters selected are:

  • Contextual: where is a particular event or issue positioned along the risk/uncertainty spectrum?

  • Conditional: defined as being how we might visualise and feel about a future event or issue.

  • Occurrence impact: a range of “stretched” possible outcomes from the expected to outliers, wild cards, and fringe possibilities. This variable refines each contextual condition in terms of some form of expectation.

  • Time horizon: Readers may have noted that the futures cone presented in Chap. 7 is usually shown with a “time” axis. As was discussed in that chapter, time periods are flexible. This main variable is thus flexible as to how the scenario analyst wishes to define the future and will change from scenario objective to scenario objective. In one scenario, a time horizon of, say, 10 years may be broken down into three periods: short term covering up to the first 2 years, medium term up to 5 years, and so on. On the other hand, scenarios may define short term as the next 10 years and medium term as up to 20 years.

  • Note: It is a common error to believe that the (usually) more undesirable impacts will occur further into the future than the more preferable ones. As humans, we have a tendency to procrastinate and defer unpleasantness until it is clearly visible and close at hand (ref: the Kim Stanley Robinson quote in the section on science fiction in the previous chapter). Uncertainty has no time-determined horizon—the COVID-19 outbreak manifested itself and spread rapidly and globally within a relatively short time period, which caught most countries unaware. An earthquake can also occur with very little, if any, warning.

Each of the four main variables is then populated with various conditions extracted from those identified in Chap. 6; thus:

Contextual future conditions are:

Predicted future—this condition reflects both Marchau’s level 1 uncertainty and Voros’s interpretation. There is a clear enough future for short-term decisions, and historical data (which of course may not always be accurate) can be used as predictors of the future, usually for a singular event with a very high probability of occurring. Forecasting methods, rather than foresight ones, are deployed here.

Probable—something with few alternative futures which is likely to happen—it is probable. Quantitative data and stochastic methods are generally used here to support a prediction.

Possible—Voros defines this condition as something which might happen. Possible events may be reasonable or unreasonable—they could happen, even if undesirable.

Plausible—on the other hand, “plausible” refers to possibilities that are reasonable and it excludes possibilities that are unreasonable. Marchau calls this category a level 3 uncertainty which relates to situations with a few plausible futures but where probabilities cannot be assigned. He expands this interpretation by stating “the future can be predicted well enough to identify policies in a few specific plausible future worlds”.

In semantic terms, “the main difference between ‘plausible’ and ‘possible’ is that ‘plausible’ means you could make a reasonably valid case for something, while ‘possible’ means something is capable of becoming true, though it’s not always reasonable”.

Highly unlikely—here we are starting to stretch the boundaries of both plausibility and possibility. This condition is also called radical or deep uncertainty and conforms to Marchau’s level 4 uncertainty. He makes a distinction between situations which contain many plausible futures and situations that we are not sure about. The terms “wild card” or “outlier” are often used here—there is a hint of the possibility of something occurring; we are just unsure as to when and how it might manifest itself. However, such is the weakness of the signal for a wild card event that it can occur even under possible and plausible conditions.

Unthinkable—this condition is included as it lies on the outer fringes of both possibility and imagination. Realisation, or even visibility, of such an event is heavily constrained by behavioural factors and boundaries, as per Gowing and Langdon’s (2017) interpretations.

Conditional states are:

  • Preferable (or desirable)

  • Undesirable (not preferable)

  • Not sure (agnostic)

All three conditions tend to be subjective with values attached. Both the preferable and undesirable conditions offer the decision-maker the choice of determining which future event is more preferable than the other so that policies to encourage or mitigate such occurrences can be developed in advance. Of course, the greater the uncertainty of something, the more an agnostic or non-action standpoint is likely to be adopted. For example, the argument surrounding the future of artificial intelligence as being a force for good or evil—many arguments have been put forward for both visions but the jury is still out.

Occurrence/impact (or impact probability) states are:

  • High occurrence/low impact—in the predicted zone, e.g. seasonal flu

  • High occurrence/high impact—as above e.g. hurricanes, tornadoes, annual monsoon

  • Low occurrence/low impact—“mast” years for acorns (every few/7? years)

  • Low occurrence/high impact—UK hurricanes, pandemics, earthquakes, climate change

  • Possible in science fiction—as per Kuosa and Ota and Maki-Teeri—AI, robotics, genetic engineering

  • Possible in S-F but not according to current knowledge (e.g. warp drive)—as above

  • Possible in imagination and therefore in theory—dystopias and utopias.

For this category, it should be noted that uncertainties, by definition, ought to preclude those two states with high occurrence or probability. However, as we have shown earlier in Chaps. 2 and 6 the apparently obvious—from known-knowns to known-unknowns—are often ignored and end up in quadrant 3—an unknown-known or even an unknown-unknown. For this reason, these two states are initially included within the options outcome model.

Time horizon—Time frames are open to variation (as mentioned in Chap. 4) as these are dependent on the boundaries defined by the scenario authors. For the model used in this chapter I’ve selected seven different time futures.

  • Less than 1 year

  • Less than 3 years

  • Less than 5 years

  • 5–10 years

  • 11–20 years

  • 21–30 years

  • 31 years +

The scenario options analysis problem space is presented in Fig. 7.1:

Fig. 7.1 A problem space (columns: contextual, conditional, occurrence/impact, and time horizon)

This schedule indicates that there are 882 different configurations of the 4 variables and their states (6 contextual × 3 conditional × 7 occurrence/impact × 7 time horizon). Within these configurations, we can assume that there will be a number of inconsistent pairs. Pair-wise analysis (as illustrated in Fig. 7.2—partial) allows us to strip out those configurations within the problem space which contain inconsistent pairs. There are 752 configurations with inconsistent pairs, leaving 130 consistent configurations which can work. It should be pointed out that the pair-wise evaluation can be subject to subjective judgement, but by identifying inconsistent pairs within any of the configurations generated by the problem space, the process mitigates the worst of such inconsistencies. Readers should also be aware that this representation is mainly for demonstration purposes and that different pairing outcomes may occur depending on the specific problem being addressed. In Fig. 7.2, inconsistent pairs are identified by an “X” in a red cell.
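The morphological field and pair-wise filtering described above can be sketched in a few lines of code. This is a minimal illustration only: the two inconsistent pairs below are hypothetical, whereas the real cross-consistency matrix is problem-specific and is filled in by the analysts, as in Fig. 7.2.

```python
from itertools import combinations, product

# The four variables of the problem space, with states as named in the text
contextual = ["Predicted", "Probable", "Possible", "Plausible",
              "Highly unlikely", "Unthinkable"]
conditional = ["Preferable", "Undesirable", "Not sure"]
occurrence = ["High occurrence/low impact", "High occurrence/high impact",
              "Low occurrence/low impact", "Low occurrence/high impact",
              "Possible in science fiction",
              "Possible in S-F but not current knowledge",
              "Possible in imagination and theory"]
time_horizon = ["<1 yr", "<3 yrs", "<5 yrs", "5-10 yrs",
                "11-20 yrs", "21-30 yrs", "31+ yrs"]

# Full morphological field: 6 x 3 x 7 x 7 = 882 configurations
configs = list(product(contextual, conditional, occurrence, time_horizon))
assert len(configs) == 882

# HYPOTHETICAL inconsistent pairs for demonstration; the real set comes
# from the analysts' pair-wise evaluation
inconsistent = {
    frozenset({"Predicted", "Low occurrence/high impact"}),
    frozenset({"Unthinkable", "High occurrence/low impact"}),
}

def is_consistent(cfg):
    # A configuration survives only if none of its unordered pairs of
    # states appears in the inconsistency set
    return all(frozenset(pair) not in inconsistent
               for pair in combinations(cfg, 2))

viable = [c for c in configs if is_consistent(c)]
```

With an analyst-supplied matrix of genuinely inconsistent pairs, the same filter would reduce the 882 configurations to the 130 consistent ones reported in the text.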

Fig. 7.2 Pair-wise analysis table (partial view)

In Fig. 7.3, within the set of solutions and using the six states of the contextual variable as the main driver, we can see that for predicted scenarios there are four options:

Fig. 7.3 Input variable (predicted)—882 total solutions, 130 viable, 4 selected

Note: In Figs. 7.3, 7.4, 7.5, and 7.6, the images are in black and white for visualisation purposes only. In the matrix itself (below the variable headings), the input variable is shown in bold text (e.g. predicted) and the output options are the shaded cells.

Further viable mini-scenarios (i.e. solutions) are detailed in Appendix 3.

Note for Reader: Due to the amount of detail provided in Appendix 3 and Appendices 4 through 7 referred to below, the information is best referenced by a link to the author’s website at https://www.strategyforesight.co.uk/general-5

Alternatively, once on the landing page called “Insights”, scroll down the page to the section “On-line resources” and follow the instructions to access the appendices.

Using a decision support method called morphological analysis (MA), which is explained in more detail in the MTT section below, any state within any of the main variables can serve as an input or an output. In the example below, if we wish to identify potential scenario characteristics using “low occurrence/high impact” as the main input driver, we see that there are 54 options:

Fig. 7.4 Low occurrence/high impact as input—882 total solutions, 130 viable, 54 selected
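The selection step just illustrated amounts to a simple filter over the viable configurations: fix one state of one variable as the input driver, and read off the matching configurations. A sketch with hypothetical sample data (the tuple layout and function name are illustrative, not taken from the source):

```python
# Hypothetical viable configurations (in practice, the 130 survivors of the
# pair-wise analysis); each tuple is (contextual, conditional, occurrence, horizon)
viable = [
    ("Plausible", "Undesirable", "Low occurrence/high impact", "5-10 yrs"),
    ("Possible",  "Preferable",  "Low occurrence/high impact", "31+ yrs"),
    ("Predicted", "Preferable",  "High occurrence/low impact", "<1 yr"),
]

VARIABLES = ("contextual", "conditional", "occurrence", "horizon")

def select(solutions, variable, state):
    """Fix one state of one variable as the input driver; the remaining
    states of the matching configurations are the outputs."""
    idx = VARIABLES.index(variable)
    return [s for s in solutions if s[idx] == state]

# Example: use "low occurrence/high impact" as the main input driver
options = select(viable, "occurrence", "Low occurrence/high impact")
```

Any of the four variables can play the driver role, which is what allows figures such as 7.4 through 7.6 to be generated from the same solution set.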

Fig. 7.5 Less than 5 years as main input driver—882 total solutions, 130 viable, 9 selected

Using the time horizon as the main input driver, we see that within 5 years there are nine viable options (Fig. 7.5).

Fig. 7.6 5–10 years as input option—882 total solutions, 130 viable, 16 selected

In the 5–10 year horizon there are 16 options to explore (Fig. 7.6).

As indicated previously the original solution set following pair-wise analysis was reduced from 882 to 130 different scenario profiles. All the configurations which make up this set of 130 are presented in Appendix 4 and as mentioned above can be accessed at https://www.strategyforesight.co.uk/general-5

However, the prime objective of this exercise is to identify not just the usual suspects which appear in the list of workable scenarios (the known-knowns), but the scenarios beyond these usual suspects. We can therefore afford to ignore those scenario configurations which are less challenging or do not yield any great insight beyond what we already know. Thus, for this exercise we have chosen to discard any scenario configuration which:

  1. (a)

    Has a high level of occurrence and a low impact (yellow)

  2. (b)

    Has a low level of occurrence and a low impact (red)

We are looking for those scenario configurations which are the most impactful.

These configurations are identified in Appendix 5 (yellow and red rows) also at https://www.strategyforesight.co.uk/general-5

From our original 130 scenarios, there are 34 configurations identified as being in these categories. This reduces the key scenarios to 96 out of the original 882 or just 11% of the original problem space set.

From a subjective policy point of view, there are two distinct drivers present in the model and they abide within the conditional variable—

  • The preferable (or desirable)

  • The undesirable

If a future scenario is to be preferred or desired, specific pro-active policies need to be pursued or actioned in order to bring about such an outcome. On the other hand, if potential scenarios are deemed undesirable, avoidance and/or mitigation strategies need to be developed and deployed.

Under the same conditional variable there is a third state—not sure (or agnostic). To some extent, such a condition is the worst of the three states as the level of uncertainty relating to any action is highest.

Of the remaining 96 scenario solutions, 31 are deemed preferable or desirable, whereas 33 are undesirable. The balance leaves 32 agnostic scenarios. If we exclude the agnostic scenarios, there remain 64 preferable or undesirable scenarios, which are shown in Appendix 6—https://www.strategyforesight.co.uk/general-5
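As a quick sanity check, the reduction arithmetic described above can be reproduced in a few lines (all counts are taken directly from the text):

```python
# Sanity check of the scenario reduction arithmetic described in the text.
total_problem_space = 882   # original configurations
total_viable = 130          # after pair-wise analysis
discarded = 34              # yellow (high/low) and red (low/low) configurations

key_scenarios = total_viable - discarded
print(key_scenarios)                                     # 96
print(round(key_scenarios / total_problem_space * 100))  # 11 (% of 882)

preferable, undesirable, agnostic = 31, 33, 32
assert preferable + undesirable + agnostic == key_scenarios
print(preferable + undesirable)                          # 64 actionable scenarios
```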

Incidentally, Voros (2017) pointed out that the predicted (expected) is often left out, as the probability of such an event not happening is deemed to be very small. In our solution set there are two scenarios which appear in the predicted category. I wouldn’t totally assume that what we expect to happen will actually happen—there are always surprises to catch us off guard—indeed, have we not been rigorous enough in seeking outliers and wild cards? My inclination in this instance is to keep this group as a form of insurance—it can be audited in the future.

4 Allocation of Viable Scenarios to the Uncertainty Profile Template

Preamble

The final phase in the process is to allocate each of the 64 preferable and undesirable scenario configurations to one of four quadrants in the uncertainty profile template (Chap. 2).

Table 7.1 Allocation of preferable and undesirable scenarios to the uncertainty profile

The reader will notice that some of the scenarios appear in more than one quadrant. For example, whilst all the scenarios in the possible category can be allocated to quadrant 2—the known-unknown, it is not unrealistic that some of these scenarios manifest themselves in quadrant 3—the unknown-known—namely scenarios 8 and 10—preferable but with low probability and high impact and scenarios 17 and 18—undesirable with low probability and high impact (within 1 and 3 years).

Examples of a scenario 17 or 18 situation include the Grenfell high-rise fire and the Manchester Arena bombing. In both cases, such events would normally reside in Q2—the known-unknown (or Schwartz’s inevitable surprise)—with appropriate contingency plans put in place for such an eventuality. Poor fire evacuation guidelines and inadequate testing of the building materials made the Grenfell fire many times worse than it would have been had such foresight been up to the mark. In the case of the Manchester Arena bombing, it has been argued that a number of deaths could have been avoided if emergency response crews had been allowed in the first instance to treat the injured rather than being held back for fear of secondary devices. In both cases, mitigation and emergency response planning fell short of the type of vision required for them to have been addressed by quadrant 2 foresight.

Similarly, scenarios 45, 46, 54, and 55 in the highly unlikely category appear in quadrant 4 but also in quadrant 3 (scenario 45 is highly unlikely but preferable with low probability/high impact, whereas scenario 54 is also highly unlikely but undesirable with low probability/high impact). Being unpredictable, these events could appear at any time—even within 3 years. Such is the level of uncertainty pertaining to this category of scenario that, depending on the rate of change of awareness of an outlier event, an occurrence may morph from a Q4 black swan into a Q3 grey swan without the analyst or decision-maker being aware of what is happening. A detailed breakdown is shown in Appendix 7.

https://www.strategyforesight.co.uk/general-5

4.1 So, How Does This All Work and What Does It All Mean: A Process Summary?

The final part of this section describes a summarised version of the above process and allows the readers to view the whole path and rationale for deploying such a methodology.

  1. 1.

    Phase 1: Analysis of various types of scenario outcomes (sources include Voros, Marchau et al. (DMDU), Kuosa, Curry & Schultz, Gowing & Langdon).

  2. 2.

    Phase 2: Establishment of a problem space (PS) for scenario options based on four core variables:

    1. (a)

      Contextual (with six different states)

    2. (b)

      Conditional (with three states)

    3. (c)

      Occurrence/impact (seven states)

    4. (d)

      Time horizon (seven states)

The PS matrix generates 882 different scenario configurations as follows (Fig. 7.7):

Fig. 7.7
A four-column table of the problem space, with columns labelled contextual, conditional, occurrence/impact, and time horizon. Time horizons are expressed using less-than and greater-than symbols.

Generation of configurations from the problem space
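To make the mechanics concrete, the 882-configuration problem space is simply the Cartesian product of the four variables’ states. A minimal Python sketch (the state labels below are placeholders, not the chapter’s actual labels; only the counts of 6, 3, 7, and 7 follow the text):

```python
from itertools import product

# Placeholder state labels; only the counts (6, 3, 7, 7) follow the text.
contextual = [f"contextual_{i}" for i in range(6)]
conditional = ["preferable", "undesirable", "not sure"]
occurrence_impact = [f"occurrence_{i}" for i in range(7)]
time_horizon = [f"horizon_{i}" for i in range(7)]

# The problem space is the Cartesian product of all variable states.
problem_space = list(product(contextual, conditional,
                             occurrence_impact, time_horizon))

print(len(problem_space))  # 6 * 3 * 7 * 7 = 882
```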

  • Pair-wise analysis reduces the PS to a solution space of 130 viable/consistent solutions.

  • A second reductive iteration discards an additional 34 configurations (the high occurrence/low impact and low occurrence/low impact options) as being scenarios which we already know about and are thus of little exploratory interest.

  • A third iteration reduces the set to 64 scenarios based on those configurations which are either preferable or undesirable.

  1. 3.

    Phase 3: Allocate the 64 solution options to the four quadrants within the uncertainty/risk profile matrix (from Chap. 2) as in Table 7.1.

  2. 4.

    Phase 4: Allocate scenario to options list and location within uncertainty profile matrix detailed in Appendix 7—https://www.strategyforesight.co.uk/general-5

What this process does is help decision-makers and their analysts to narrow down the options faced by scenario strategists and planners. Importantly, it helps identify the challenges they face in order to elicit responses which will lead to action. Quadrants 2 and especially 3 present the greatest challenges as the boundaries between the various scenario categories become increasingly blurred—a situation which leads decision-makers, constrained by key resources such as time and money, to concentrate on shorter time frames. This, of course, yields ground to a position of decreased preparedness for low probability/high impact events and their attendant consequences.

5 Methods, Tools, and Techniques

In this extended final section, I’ll be introducing two decision support methods (DSMs) along with a case example. One of the DSMs has already been used earlier in this chapter to help identify viable scenario categories.

5.1 Morphological Analysis (MA)

Unfortunately, this first DSM has the rather scary title of morphological analysis (MA), though it may be better understood using the term “strategic options analysis” (SOA), which is how I have described it in earlier sections. To date its exposure within the uncertainty and futures community has been limited: the method has been overlooked partly because it is seen as rather time-consuming and, more importantly, because of the “combinatorial explosion” it generates, which can make it appear unmanageable—unless computer assisted. Such computer software has been around for the last couple of decades but has not been easy to access. Supported by a neutral facilitator, and preferably with cross-functional and cross-disciplinary teams, MA can prove invaluable as a decision support tool under conditions of high uncertainty where hard data is in short supply.

What really is MA? Morphological analysis (MA) belongs to a broader set of methods in the decision support area known as problem structuring methods (PSMs)—methods which were highlighted in Chap. 3. As we know PSMs are a set of methodologies to support groups confronted with problems involving multiple actors, conflicting perspectives, and key uncertainties. PSMs enable the structuring and analysis of complex problems which:

  • Are inherently non-quantifiable

  • Are stakeholder orientated with strong socio-political, cultural, and technical positions

  • Contain non-resolvable uncertainties

  • Cannot be modelled easily or simulated

  • Require a judgemental approach to be placed on a sound methodological basis

Morphological analysis (MA) itself is an extension of the morphological method, and was developed in its more generalised form in the period 1940–70 by the astrophysicist Fritz Zwicky (1947, 1948, 1962, 1967, 1969). Heuer and Pherson (2011) offer a broad definition of MA which encapsulates its generic status as a PSM:

…is a method for systematically structuring and examining all the possible relationships in a multidimensional, highly complex, usually non-quantifiable problem space. The basic idea is to identify a set of variables and then look at all the possible combinations of these variables……and reduces the chance that events will play out in a way that the analyst has not previously imagined and considered.

Zwicky and later Jantsch (1967) and Ayres (1969) initially applied morphological analysis to explore potential technological breakthroughs for engineering design purposes. In the 1980s, Majaro (1988) advocated its value as a creativity and ideation method. In the 1990s, MA was applied more to the futures and socio-economic fields and in the 2000s into a more general methodology (Ritchey, 2006, 2011) targeting the broader aspect of “wicked problems”, with inherent high levels of “systems uncertainties” (Funtowicz & Ravetz, 1994). As a generic method, MA can be used in any one or combination of the following core streams:

  1. 1.

    Ideation and technology forecasting

  2. 2.

    Futures and scenario planning

  3. 3.

    Systems uncertainties (aka “wicked” problems)

In a morphological model, there is no pre-defined driver or independent variable (or parameter). Any variable—or set of variables or discrete conditions within the main variable—can be designated as a driver. It is this ability to define any combination of conditions as an input or output that gives morphological models such flexibility. Thus, given a certain set of conditions, what is inferred with respect to other conditions in the model? This “what-if” functionality makes MA an extremely powerful tool and, when combined with software, allows researchers to explore viable alternatives in real time from very large configurations of variables and conditions (also known as the problem space). In essence MA can be introduced to help shape and identify possible paths for analysts of all types (be they designers, forecasters, creatives, or policy framework initiators). This flexibility in determining the main variables and parameters of a problem makes MA a particularly useful tool for developing exploratory scenarios whilst encouraging high levels of objectivity. For reactive scenarios, an MA model can be used to interrogate second and third order scenario options as well.

MA fits our criteria for modelling uncertainty, especially when dealing with large amounts of intangible data, and can be updated and modified in real time, especially where it incorporates strong facilitation with “stretched” teams of multi-disciplinary experts.

This form of morphological analysis straddles the fence between “hard” and “soft” scientific modelling. It is built upon the basic scientific method of going through cycles of analysis and synthesis and parameterising a problem space. It defines structured variables, and thus creates a real, dynamic model, i.e. a linked variable space in which inputs can be given, outputs obtained, and hypotheses (“what-if” assertions) made.

MA can help us discover new relationships or configurations, which may not be so evident, or that might have been overlooked by other, less structured, methods. It can also be used to highlight potential weak signals and outliers using the concept of “morphological distance” (Ayres, 1969). Importantly, it encourages the identification and investigation of boundary conditions, i.e. the limits and extremes of different contexts and problem variables. It provides a structured environment within which to handle uncertainty (and even deep or radical uncertainty) and is an exploratory method par excellence.

In its most basic form, MA can be broken down into three core processes:

  • Generation of the problem space

  • Pair-wise analysis

  • Compilation of the solution space

Recently developed software and processes (Garvey, 2016) have now overcome a traditional criticism of MA—that it creates so many potential configurations or outcomes in the problem space as to be unmanageable. The combinatorial explosion of options created by MA can now be greatly reduced (typically by over 95%), leaving the analyst to review a much smaller set of viable, internally consistent solutions. A summary of the overall MA process is illustrated in three distinct phases below (Table 7.2).

Table 7.2 MA three phase process

As highlighted earlier in a morphological model, there is no automatically designated driver or independent variable. Any variable (parameter)—or set of conditions within a variable—can be designated as such. Thus, anything can be an input and anything an output. For instance, instead of simply letting a scenario stakeholder define a relevant strategy, one can reverse the process and let chosen states within a proposed strategy configuration designate relevant scenarios. This is the basis of an inference model: given a certain set of conditions, what is inferred with respect to other conditions in the model?

The “what-if” functionality makes the model an extremely powerful tool, for not only looking at a wide array of possible outcomes, but through computerisation enables management and researchers to examine alternatives in real time.
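That inference step can be sketched as a simple filter over the compiled solution space: fixing any subset of variables restricts the space, and the states left in the free positions are what the model “infers”. The function name and toy solution space below are illustrative assumptions, not the chapter’s model:

```python
def infer(solutions, fixed):
    """Filter a solution space; fixed maps a variable index to a required state."""
    subset = [c for c in solutions if all(c[i] == v for i, v in fixed.items())]
    # States still possible for each free (non-fixed) variable position.
    inferred = {i: sorted({c[i] for c in subset})
                for i in range(len(solutions[0])) if i not in fixed}
    return subset, inferred

# Toy solution space; variable order: (contextual, conditional).
solutions = [("possible", "preferable"),
             ("possible", "undesirable"),
             ("plausible", "preferable")]

# "What-if" query: given contextual = possible, which conditional states remain?
subset, inferred = infer(solutions, {0: "possible"})
print(inferred[1])  # ['preferable', 'undesirable']
```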

Operating at the fuzzier end of the uncertainty/risk spectrum, a central feature of morphological analysis is the flexibility it provides to parameterise a problem complex, acting as scene setter for other decision support methods. In this case, the results of a morphological model can provide input for the development of other (possibly more complex) models such as Bayesian belief networks (BBNs) and multi-criteria decision analysis (MCDA) methods such as AHP—the analytic hierarchy process. Here, possible outcomes derived via a MA exercise can be compared according to a hierarchy of goals and goal criteria, providing validated inputs for scenario planning exercises.

When supported by pair-wise analysis, in which pairs of sub-variables across all the variables are assessed for viability or consistency (i.e. can the pairs live with each other?), MA is a method for rigorously structuring and investigating the internal properties of inherently non-quantifiable problem complexes, empowering practitioners to explore a wide variety of contrasting configurations and policy solutions. As a method for identifying and investigating the total set of possible relationships or “configurations” contained in a given problem complex, MA’s primary task is to generate ideas, with the aim of surfacing as many opportunities as possible. Such functionality makes it ideal not only for generating new scenarios but for qualifying them as well.Footnote 1

Yes, but…!

Whilst MA is an excellent concept for generating (thousands or even millions of) ideas or scenario configurations derived from multiple variables, it does create the practical problem of how to analyse all the configurations generated by the model’s initial problem space.

The solution is to reduce this vast number by examining “… the internal relationships between the field parameters and to reduce the field by identifying, and weeding out, all mutually contradictory conditions” (Ritchey, 2011). This is carried out for each matrix by an exercise called pair-wise analysis or cross-consistency assessment (CCA), where all of the parameter values or main variables in the matrix field are compared with one another on a pair-wise basis—similar to a cross-impact matrix. As each pair of conditions is explored, a judgement is made as to whether the pair can co-exist. Note: it is important to understand there is no reference to causality—only to mutual consistency. Via this process, a typical morphological field can be reduced by well over 95%, leaving only internally consistent configurations. Actual exercises carried out by the author show that the larger the problem space in terms of configurations generated, the greater the tendency for the reduction to exceed the 95% highlighted above. Conversely, where a problem space has just a few thousand configurations, or even a few hundred, lower percentage reductions are to be expected.
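A minimal sketch of this pair-wise reduction, assuming the team’s CCA judgements are recorded as a set of inconsistent state pairs (all variables, labels, and judgements here are hypothetical):

```python
from itertools import combinations, product

# Hypothetical variables; state labels are prefixed so they remain unique
# across variables.
variables = {
    "occurrence": ["occ-high", "occ-low"],
    "impact": ["imp-high", "imp-low"],
    "horizon": ["<5y", ">30y"],
}
# Hypothetical CCA judgement: high occurrence cannot co-exist with a
# short time horizon.
inconsistent = {("occ-high", "<5y")}

def consistent(config):
    # A configuration survives only if none of its state pairs is inconsistent.
    return all((a, b) not in inconsistent and (b, a) not in inconsistent
               for a, b in combinations(config, 2))

problem_space = list(product(*variables.values()))
solution_space = [c for c in problem_space if consistent(c)]
print(len(problem_space), len(solution_space))  # 8 6
```

In a real exercise, the `inconsistent` set is built by the team judging every pair of states across every pair of variables, exactly as the cross-impact-style matrix in Fig. 7.8 records.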

The graphic below shows an example of a completed pair-wise matrix, created after the problem space but prior to compilation of the viable solutions. The original problem space, consisting of seven main variables, yielded 80,640 configurations. After the pair-wise analysis (as shown in the matrix), a solution space of 5671 potentially viable solutions was generated—a reduction of 93% (Fig. 7.8).

Fig. 7.8
A pair-wise analysis table with six columns: time scale, system status, problem type, future environment, route to future, and interpretation. Cells are marked with either a cross or a hyphen.

Pair-wise analysis table

Compilation, whereby inconsistent pairs as presented in the matrix are eliminated, allows the original problem space to be converted into a solution space—the latter operating as an interactive inference model where any parameter or state can be selected as an input and any other as an output.

6 A New Approach to Identifying Weak Signals for Scenario Development Using MA: Distance Analysis

6.1 Using MA as a Tool to Draw Out Weak Signals

Whilst the combinatorial explosion of options created by MA can now be greatly reduced (typically by over 95%), leaving the analyst to review a much smaller set of viable, internally consistent solutions, the question still remains: what can we do with the viable options identified?

Those concerns that manifest themselves in quadrant 3 of the uncertainty profile matrix (unknown-knowns) can be mitigated by the deployment of two mutually supporting methods—morphological analysis (MA) and morphological distance analysis (MD).

Level 1 Reduction—MA Using Pair-Wise Analysis

Suffice it to say that, as we saw in the previous section on MA, the first phase of the process reduces a large problem space to a much smaller set of viable solutions using pair-wise analysis (cross-consistency assessment—CCA). As we have seen, this part of the process can eliminate very large volumes of configurations present in the original problem space. Even so, how does one analyse the often large number of solutions produced by the model in order to classify them in some way?

Level 2 Reduction—Morphological Distance

When used as a follow-on from the MA process once viable solutions have been generated, MD can be deployed to classify the remaining configurations (Garvey et al., 2013) via a triage process (Ayres, 1969).

By resurrecting Robert Ayres’ concept of morphological distance (MD), and using it as a follow-on process, once the pair-wise analysis has been conducted, the remaining configurations can be more meaningfully classified via a triage process. Ayres identifies three forms of classification (or triage): occupied territory (state of the art), the perimeter zone, and “terra incognita”, the latter two criteria consisting of those configurations differing from state of the art whilst still remaining viable solutions. Given the distance from identified viable solutions in the occupied territory, terra incognita solutions are likely to be truly creative and “off radar”. Such configurations can thus be assumed to be very similar in nature to a weak signal as they will be at the periphery of the analyst’s vision. Ayres’ triage approach can be said to introduce a form of disintermediation into the distance determining process.

Ayres defines morphological distance (MD) as being the distance between two points in the (problem) space and is:

the number of parameters wherein the two configurations differ from one another. Two configurations differing in only a single parameter are morphologically close together, while two configurations differing in many parameters are morphologically far apart.

Note: It is important to clarify firstly that Ayres’ use of the term parameter really refers to a discrete state within the selected parameter, and that a configuration consists of one selected state from each of the parameters which make up the overall problem space. Secondly, it should be noted that the areas and boundaries of each sector will be subjective according to the particular technology or design being evaluated, and to the consensual subjectivity of the team of experts introduced to assist in determining the problem parameters and states of the morphological space.

The three sectors Ayres specifies are:

  1. 1.

    Known or occupied territory (OT)—composed of those configurations identified as representing “existing art” or (state of the art—SoA). This is the area where minimal innovation is likely to occur because it is already known.

  2. 2.

    The perimeter zone (PZ)—those configurations which differ from state of the art (SoA) by, say, 2 to 3 parameter/states. Configurations with a distance of just 2 parameter/states are closest to the SoA or occupied territory sector and thus have limited innovative potential; in essence they represent some form of basic product development: the low-risk option. On the other hand, those configurations with a parameter/state distance of 3 show a heightened level of innovation, being further from OT at the outer fringes of the perimeter zone. Event indicators in this zone are asymmetric, which makes it difficult to ascertain the relative importance of signals.

  3. 3.

    “Terra incognita” or unknown territory (TI)—composed of those configurations characterised by a distance of 4 or more parameter/states from SoA. According to Ayres, these configurations are so different from SoA that they are likely to embrace something which has not previously been considered, thus increasing the probability of some form of technological breakthrough. Configurations appearing in this sector are likely to be truly creative as well as innovative, or, in the case of scenarios, to offer wild card options as true outliers. Within scenarios they may reflect unintended consequences—good or bad, but nonetheless possible and worth identifying and examining. Conversely, where refinements or improvements are similar to an “existing art” configuration (differing by at most 1 parameter/state), there is little chance of a breakthrough.
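Ayres’ distance measure and the three-zone triage can be sketched directly: the distance between two configurations is the number of variable positions in which they differ, and the zone thresholds (≤1, 2–3, ≥4) follow the list above. The anchor and candidate configurations shown are hypothetical:

```python
def distance(a, b):
    # Number of variable positions in which two configurations differ.
    return sum(x != y for x, y in zip(a, b))

def triage(config, soa):
    d = distance(config, soa)
    if d <= 1:
        return "occupied territory"  # existing art: minimal innovation
    if d <= 3:
        return "perimeter zone"      # incremental to heightened innovation
    return "terra incognita"         # potential outliers / weak signals

# Hypothetical state-of-the-art anchor and candidate configurations.
soa = ("possible", "preferable", "high-occ", "<5y", "local")
candidate = ("unthinkable", "undesirable", "low-occ", ">30y", "global")

print(distance(candidate, soa), triage(candidate, soa))  # 5 terra incognita
```

In practice the anchor would be the parameter profile of the occupied-territory configurations agreed by the team, and every surviving solution would be triaged against it.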

Specifically, Ayres mentions that:

The probability of a breakthrough in a technological area, per unit of time, is a decreasing function of its morphological distance from existing art, other things being equal.

Such configurations, allocated to the terra incognita (TI) zone and being significantly different from SoA options, are more likely to offer analysts the probability of a technological or alternative scenario breakthrough. Given their distance from identified viable solutions in the OT zone, outlier solutions are likely to be truly creative and “off radar”. Such configurations can thus be assumed to be very similar in nature to weak signals, as they sit at the periphery of the analyst’s vision.

The combined MA and MD process can be represented as follows (Fig. 7.9):

Fig. 7.9
A schematic representation of isolating weak signals, with key variables and their states. The solution space is divided into three categories: current state, perimeter zone, and terra incognita (weak signals). A downward arrow on the left is labelled “approx. 95 percent reduction”.

Process for isolating weak signals

6.2 Issue of Determining What Is Current Knowledge (State of the Art)

It is apparent that the determination of distance begs the question: distance from what? Within the viable solution sets, those configurations deemed to reflect SoA (in occupied territory) act as the base set of configurations from which the more distant sets in the PZ or TI zones can be determined. It is thus crucial that the parameter profile of OT solutions be identified as easily and objectively as possible. As a manual exercise this may take some time, bearing in mind that OT selection should be carried out by the team who generated the original problem space and CCA exercise. In the future, it may be of value to see how data mining might short-circuit the process. By way of comparison, the Delphi method is somewhat cumbersome and iterative. The above form of process would undoubtedly reduce the risk of introducing high levels of subjectivity through using weighting factors too early.

In the graphic (Fig. 7.10) one of the outlier scenarios is shown (the configuration with red cells), which is at the maximum distance from the anchor scenario (purple cells)—i.e. the selected scene has passed muster as a solution but is very much an outlier in relation to current knowledge or policy—across all six of the variables.

Fig. 7.10
A schematic diagram of outliers or weak signals. Current knowledge sits at the centre, enclosed by an inner boundary; small red circles in the outer layer represent the outliers or weak signals.

Identifying outliers

For ease of presentation, it is assumed that following a CCA exercise the original problem space of 5184 configurations has been reduced to some 38 internally consistent scenes.

More recent researchers have attempted to introduce distance concepts in relation to morphological spaces. Gallasch et al. (2017) talk about the degree of overlap and degrees of divergence when comparing configurations in a morphological space. Overlap and divergence are terms very similar to Ayres’ concept of difference mentioned earlier and repeated here for context:

the number of parameters wherein the two configurations differ from one another. Two configurations differing in only a single parameter are morphologically close together, while two configurations differing in many parameters are morphologically far apart.

However, Gallasch et al. then go on to weight factors within a sector as well as weighting the sectors themselves (within the morphological space). The issue of weighting has been raised before by observers of the morphological method. The author’s view is that introducing weighting too early in the process can introduce a quantitative bias into the space, which in turn can degrade the inherent objectivity of the morphological process. MA itself is a method highly suited to representing uncertain and “fuzzy” situations, allowing for the integration of both qualitative and quantitative factors. Introducing weighting metrics too early may introduce subjective bias (as to the weighting criteria themselves) and/or attempt to quantify the unquantifiable in terms of the qualitative inputs to the morphological problem space and the cross-consistency assessment phases.

The author recognises nonetheless that the status of configurations within the occupied territory zone is crucial to the parametric distance process as the OT configurations themselves provide the base from which the parameter distances are determined. However, the Gallasch weighting approach may be a useful way of placing into a hierarchy those configurations identified as being in the TI zone.

7 Catastrophic and Existential Risks

“Doomed—we’re all doomed” Private Frazer—a stalwart of “Dad’s Army”

This section looks at those risks which are truly external—and by external, I mean those events which occur beyond the control of an individual, group, organisation, or even a nation or nations—and in the case of catastrophic risks they are generally global in scale and impact.

These risks are typically low probability/high impact (LP/HI) events and fall into the category of the highly unlikely or unthinkable—in most cases they are not pleasant. The challenge such events pose for decision- and policy-makers is how many resources, and how much scaled-up contingency planning, should be committed to events which may not occur in the foreseeable future—which prompts the question: what is the foreseeable future?

In reaction to a LP/HI event such as a global pandemic (e.g. COVID-19), reactive scenarios are based on an event which has already happened—although it can be argued that the pandemic was not such a low probability event. Climate change, on the other hand, was originally seen as a LP/HI event but has morphed in character so that it can no longer be seen as a low probability event. There is too much evidence that climate change is happening, albeit the ongoing changes are spread over lengthy time periods which may reduce the sense of urgency by humans (and policy-makers). Although it is happening and indeed accelerating, no one can forecast with any accuracy what the outcomes will be over short-, medium-, and long-term time horizons—multiple variables which are also dynamic in nature create a highly uncertain set of future circumstances.

Two terms are used for this category of risks (or levels of uncertainty)—catastrophic and existential. But what is a catastrophic risk compared to an existential one?

They are different, but like the terms risk and uncertainty they are often used interchangeably, so that global catastrophic risk and existential risk are misinterpreted as being the same.

Much of the reference source material in this chapter has been extracted from two eminent experts in catastrophic and existential risk—Nick Bostrom and Toby Ord (2020), both at Oxford. Bostrom is the founding director of the Future of Humanity Institute at Oxford University; Ord is a senior research fellow at the same institute. Other work in a similar vein is carried out at Cambridge University’s Centre for the Study of Existential Risk (CSER), which studies possible extinction-level threats posed by present or future technology.

Back in 2008 Bostrom and Cirkovic (2008) defined the term “global catastrophic risks” as

a risk that might have a potential to inflict serious damage to human well-being on a global scale.

They acknowledged that such a definition was very broad, embracing events ranging from

volcanic eruptions to pandemic infections, nuclear accidents to worldwide tyrannies, out-of-control scientific experiments to climate changes and cosmic hazards to economic collapse.

Bostrom and Cirkovic go on to suggest that a sub-set of global catastrophic risk is existential risks and where:

An existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or to reduce its quality of life permanently and drastically.

They identify that a defining feature of an existential risk is that:

as it is not possible to recover from existential risks we cannot allow even one existential disaster to happen; there would be no opportunity to learn from the experience.

Interestingly, they qualify this interpretation by stating that the management of such risks must be proactive; there is no opportunity to be reactive. This would seem to place scenario development primarily in the exploratory domain, automatically ruling out a reactive response.

This interpretation holds that whilst a global catastrophic risk may kill the vast majority of life on earth, humanity could still potentially recover. An existential risk, by contrast, either destroys humanity entirely or prevents any chance of civilisation’s recovery. It is strange, therefore, that observers, pundits, and politicians alike tend to reach for the term “existential” rather than “catastrophic”. Numerous commentators described the COVID-19 pandemic as an “existential” threat. It isn’t, because although severe at the global level it is unlikely to wipe out all human life; it is, however, catastrophic! Remember that in Medieval Europe the Black Death wiped out roughly one-third of the population, yet Europe survived without anything close to a collapse of civilisation (although, by creating a shortage of manual labour, the high death rate did help bring about the demise of feudalism in much of Europe).

In brief then, “a global catastrophic risk may kill the vast majority of life on earth, humanity could still potentially recover. An existential risk, on the other hand, is one that either destroys humanity entirely or prevents any chance of civilization’s recovery”Footnote 2—the latter is definitely nastier!

7.1 Welcome to the Anthropocene

The Anthropocene era has been postulated to reflect the influence and impact of human behaviour and action (and indeed inaction) on the Earth’s geology, ecosystems, and atmosphere in recent times, an influence seen as being so significant as to constitute a new geological epoch.

The term was popularised in 2000 by the chemist Paul Crutzen, and there are differing opinions as to when the epoch began. One view is that it started as soon as humans became established on the planet; another dates it to the start of the industrial revolution; a more recent argument points to the Trinity explosion, the testing of the first atomic bomb in July 1945. The latter position rests on the ease with which mankind could now destroy itself through its own technological inventiveness. Whatever the specific date of origin, anthropogenic risks are those caused by humans or their activities.

Bostrom, Cirkovic, and Ord’s interpretation of global catastrophic and existential risk, along with examples, can be summarised as follows (Table 7.3):

Table 7.3 Different interpretations summarised

As we can see there are a number of ways to cut the catastrophic/existential risk “cake”.

Both Ord and Bostrom agree on natural risks (risks from nature). Ord explicitly includes anthropogenic risks, whereas the closest Bostrom comes to anthropogenic influence is in identifying risks from hostile acts (i.e. by people). Ord is a little more expansive, classifying items such as pandemics, AI, and alien invasion as future risks, whereas Bostrom sees pandemics and climate change as unintended consequences (which of course also take place in the future).

Can Global Catastrophic Risks Turn into Existential Risks?

The other consideration is whether global catastrophic risks can morph into existential ones. Of course they can. For example, a pandemic virus could move from being classified as a catastrophic/anthropogenic risk to an existential/bad actor risk through deliberate genetic manipulation by a bad actor, with the virus running out of control through virulence that cannot be treated in time.

7.2 MTT: Support Tool

In order to identify both catastrophic and existential risks as separate categories, the following schema allocates different types of such risks to three event groups.

  • Natural risks

  • Anthropogenic risks

  • Hostile/bad actor risks

Each cell is populated with risks according to the catastrophic/existential and the natural/anthropogenic/bad actor axes. Note that the separate events are not exclusive to their cells but simply indicate how different events can be categorised (Table 7.4).

Table 7.4 Risk type categorisation
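The categorisation above lends itself to a simple lookup structure. The sketch below is a hypothetical Python illustration of the Table 7.4 schema; the example risks placed in each cell are assumptions drawn loosely from the surrounding discussion, not the table’s actual contents.

```python
# Hypothetical illustration of a risk categorisation grid: the axes are
# severity (catastrophic vs. existential) and origin (natural,
# anthropogenic, hostile/bad actor). Cell contents are illustrative only.
risk_grid = {
    ("catastrophic", "natural"): ["super-volcanic eruption", "asteroid strike"],
    ("catastrophic", "anthropogenic"): ["pandemic", "climate change"],
    ("catastrophic", "bad actor"): ["nuclear conflict"],
    ("existential", "natural"): ["nearby gamma-ray burst"],
    ("existential", "anthropogenic"): ["runaway AI", "uncontrolled nanotechnology"],
    ("existential", "bad actor"): ["engineered doomsday pathogen"],
}

def risks_in(severity: str, origin: str) -> list[str]:
    """Return the risks allocated to one cell of the grid."""
    return risk_grid.get((severity, origin), [])

print(risks_in("existential", "anthropogenic"))
```

The point of such a structure is simply that every identified risk is forced into exactly one severity/origin cell, which is the discipline the schema above asks of planners.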

7.3 Allocating the Risks to the Uncertainty Profile

Each of the cells contains identifiable risks. Uncertainty resides in when, if, and how such events will take place. Recognition of these events, no matter how unthinkable and unpleasant, is a basic step towards developing contingency scenarios. Planners and analysts should therefore attempt to allocate the different scenario options identified in the previous chapter to each of the items listed in the catastrophe/risk matrix above. If they can move beyond treating such events as unthinkable, then at least some provision will have been made to avoid a future situation whereby “we didn’t see it coming”. The main judgement call, however, will relate to the time horizons on which analysts and decision-makers base the selected scenarios.

The key take-away, though, is to treat this category of uncertainty as worthy of an exploratory approach. Scenarios here need to support the development of strategies and actions aimed less at mitigation and more at avoiding such events, or at least avoiding their worst effects should they occur in one form or another. Such is the severity of the impacts, whether catastrophic or existential, that the likelihood of learning from the experience is greatly reduced, i.e. let’s scare people.

As highlighted earlier, we can initially treat exploratory scenarios as primary events, as long as analysts remain aware of second, third, and higher order events. On this assumption, of the six cells identified above, planners should concentrate on those events which a technologically enhanced political society “could” influence in some way. The two prime areas are thus anthropogenic risks and bad actor risks along the global catastrophic risk axis, the bad actor group itself being anthropogenic in the way such risks manifest themselves. Across the existential axis, how mankind reacts to the development of technologies such as AI and nanotechnology, both potentially capable of becoming runaway and uncontrollable, places them firmly in the anthropogenic camp: humans generated these technologies. Similarly, the deliberate release of doomsday weapons by bad actors also requires “manual” intervention, and thus the inception of such devices needs to be closely observed and monitored.

From this analysis it does appear that the major threats will be generated by our own species; as a result, they should be thought of not as Q4 unknown-unknowns but as Q2 known-unknowns. Unfortunately, mitigation responses are likely to be limited by mankind continuing to manifest Q3 behaviour, treating the risk as if it were a Q4 and failing to think about the unthinkable, unable or unwilling to stop treating the risk as an unknown-known. This is the true challenge for analysts and decision-makers: bringing uncertainty in from the cold!

8 Case Examples

In this section, a summary is given of a case study which validates the MA/MD approach in generating identifiable outliers and weak signals.

The case study (Garvey et al., 2013) addressed the focus question: “What possible configurations can the design of an apartment block take which ensure cross-ventilation and sufficient daylight?” Admittedly this is not a futures exercise per se, but it does highlight the concepts of morphological distance and configuration reduction.

The question was chosen so that it could be easily translated into parametric geometry terms and to show that additional focus questions (e.g. energy usage or glare analysis) could be addressed using the same methodological approach. The methodology helped to design new options for apartment typology. A ten-parameter problem space, comprising 155,520 configurations, was first reduced by deploying morphological analysis (and CCA) and then reduced further using morphological distance triage principles. The two-stage process achieved a 99.9% reduction of the initial problem space to a mere 213 internally consistent options classified as lying in the terra incognita zone. Such was their remoteness from the current state of the art that these 213 options qualify as weak signal configurations.

The final 213 solutions, post morphological distance, were found to lie 4–5 parameters away from existing state-of-the-art solutions (out of a ten-parameter configuration set). Finally, these terra incognita solutions were processed by a visual algorithmic editor and output as three-dimensional CAD models, which in turn could be easily evaluated and analysed by the designer. A more detailed description is provided in Appendices 8 and 9.
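The triage principle can be illustrated in miniature. The Python sketch below uses a toy four-parameter design space with invented parameters and an invented threshold, not those of the Garvey et al. (2013) study, and treats morphological distance as the minimum number of parameters by which a candidate configuration differs from any known state-of-the-art configuration.

```python
from itertools import product

# Toy design space: each parameter takes one of a few discrete values.
# These parameters and values are illustrative assumptions only.
parameters = {
    "orientation": ["N-S", "E-W"],
    "core_position": ["central", "lateral"],
    "window_ratio": ["low", "medium", "high"],
    "depth": ["shallow", "deep"],
}

def hamming(a: dict, b: dict) -> int:
    """Number of parameters on which two configurations differ."""
    return sum(1 for k in a if a[k] != b[k])

# Assumed "state of the art": one known configuration.
state_of_the_art = [
    {"orientation": "N-S", "core_position": "central",
     "window_ratio": "medium", "depth": "shallow"},
]

# Stage 1: enumerate the full problem space (cross product of values).
space = [dict(zip(parameters, combo)) for combo in product(*parameters.values())]

# Stage 2: morphological distance triage - keep only configurations far
# from anything already known ("terra incognita" candidates).
threshold = 3
terra_incognita = [
    c for c in space
    if min(hamming(c, s) for s in state_of_the_art) >= threshold
]

print(len(space), len(terra_incognita))  # 24 configurations, 9 far from the known
```

In the real study the same two-stage logic, combined with a consistency check (CCA) to remove internally contradictory configurations, is what collapses 155,520 configurations to 213 remote but viable ones.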

8.1 A Scenario Based Example

The chapter ends with a simplified and theoretical case example as to how the process plays out at a geo-economic level.

This basic example looks at the impact of a major volcanic eruption in Northern Europe. As such an event has not occurred in the last 5 years, the scenario can be termed exploratory rather than reactive, albeit based on one major anticipated event.

The most recent high-impact volcanic eruption in Northern Europe was that of Eyjafjallajökull in Iceland in 2010. We are fortunate in that past eruptions can provide major insights into the impact of such events, so it is worth reviewing what actually happened in that eruption and what its impacts were. The following summary, taken mainly from Wikipedia, outlines some of the historical data and features of the event.

Although comparatively small as volcanic eruptions go, the Eyjafjallajökull eruption caused enormous disruption to air travel across western and northern Europe in April of that year, as ash from the eruption covered large areas of Northern Europe. About 20 countries closed their airspace to commercial jet traffic, affecting around 10 million travellers: the highest level of air travel disruption since WW2. The volcanic activity impacted air travel due to a number of factors:

  • The volcano is directly under the jet stream.

  • The direction of the jet stream was unusually stable at the time of the eruption’s second phase, blowing continuously towards the southeast.

  • The second eruptive phase happened under 200 m (660 ft) of glacial ice. The resulting meltwater flowed back into the erupting volcano, which created two specific phenomena:

    • The rapidly vaporising water significantly increased the eruption’s explosive power.

    • The erupting lava cooled very fast, which created a cloud of highly abrasive, glass-rich ash.

  • The volcano’s explosive power was enough to inject ash directly into the jet stream.

  • Volcanic ash is a major hazard to aircraft. Smoke and ash from eruptions reduce visibility for visual navigation, and microscopic debris in the ash can sandblast windscreens and melt in the heat of aircraft turbine engines, damaging engines and making them shut down.

The eruption was not large enough to affect global temperatures in the way Mount Pinatubo did in 1991, which resulted in worldwide abnormal weather and a decrease in global temperature over the following few years.

Its short-term economic effects were sharp and profound, but it was the combination of factors, each with its own level of probability, that exacerbated the problem. So again the dictum applies: “if it can happen, it will happen”.

To begin with, let us try to identify how such scenarios can be categorised, i.e. how they are configured. From the risk/uncertainty profile matrix, such an event will be located in quadrant 2 as a known-unknown. We know such an eruption will happen; we just don’t know when and where precisely.

Within Q2 there are two scenario categories, “probable” and “possible”. Based on historical records, the event can be stated to be “probable”, a higher order of probability than “possible”. Any new eruption on a scale similar to or greater than Eyjafjallajökull can be characterised by the configuration “probable—undesirable—high occurrence/high impact within 1 or 3 years”. Beyond 3 years we might add scenarios characterised as “possible—undesirable—low occurrence/high impact within 5 or 10 years”.

One may ask why add the extra “possible” scenario. The argument here is that one needs to challenge or stretch the policy-makers’ thought processes across an array of viable alternatives. The danger of not using the longer-term “possible” scenarios is that mitigating actions might be delayed, endangering the efficacy of contingency planning should the event occur sooner rather than later, while accepting that, in the longer term, some scenarios will require a longer lead time for the development of such contingency plans. On this argument, it would be logical for decision-makers to request the detailed development of multiple scenarios.

However, there is the additional danger that, by using the Icelandic eruption as the main historical model, analysts might erroneously assume that any future eruption in Northern Europe would be of the same intensity as Eyjafjallajökull, identified as Category 4 on the VEI (Volcanic Explosivity Index). What if the eruption were a VEI 5 (similar to Mount St Helens in 1980) or a VEI 6 (similar to Mount Pinatubo in 1991)? There are eight VEI categories in total. In Italy, Vesuvius is the only volcano on the European mainland to have erupted within the last 100 years (Etna is on an island). It is regarded as one of the most dangerous volcanoes in the world because of the 3,000,000 people living nearby, as well as its tendency towards violent, explosive eruptions of the Plinian type (as was Mount St Helens). And Vesuvius is overdue a major eruption!

So, whilst a model based on the original Icelandic eruption might yield a variety of scenario development options, it would be remiss of policy-makers and planners not to include additional scenarios in quadrant 3 (the dangerous “unknown-known” sector), such as “possible—undesirable—low occurrence/high impact within 1 or 3 years” and “plausible—undesirable—low occurrence/high impact within 1 or 3 years”. By doing so we elevate the event from a quadrant 3 type event (where post-event excuses can be made) to a quadrant 2 type event, for which contingency plans can be prepared.

By including these four additional narratives, decision-makers faced with lower probability occurrences within shorter time frames would be acting with major foresight, seeking out these outliers as a form of insurance. In other words, altering just one variable in the description of an event shows how important it is to adjust, if not increase, the number of scenarios under review in a controlled and responsible manner.
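The way a single altered descriptor multiplies the scenario set can be sketched as a cross product. The grouping of descriptors into axes below is my own illustrative framing; the values are taken from the narrative above.

```python
from itertools import product

# Descriptor axes for a scenario configuration (illustrative framing).
likelihood = ["probable", "possible", "plausible"]
desirability = ["undesirable"]
occurrence_impact = ["high occurrence/high impact", "low occurrence/high impact"]
horizon = ["1 year", "3 years", "5 years", "10 years"]

# Every combination of descriptor values is one candidate scenario narrative.
scenarios = ["-".join(parts) for parts in
             product(likelihood, desirability, occurrence_impact, horizon)]

print(len(scenarios))   # 3 * 1 * 2 * 4 = 24 candidate configurations
print(scenarios[0])     # probable-undesirable-high occurrence/high impact-1 year
```

Of course no planning team would develop all 24 narratives; the point is that each added value on any axis grows the candidate set multiplicatively, which is why the selection of which cells to develop must be made deliberately.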

The example being addressed has concentrated on exploring the impact of a major first order event. It is also necessary to look at second order events emanating from this first event, apart from the immediate damage to the surrounding area and risk to life (described as derivative scenarios). For example, we saw that the particular height and nature of the Icelandic eruption created severe disruption to air traffic as the plume was spread by the jet stream. Depending on the severity of the eruption (such as that of Mount Pinatubo), other conditions can be anticipated, such as the ejection of very large quantities of particulates into the atmosphere causing climate change effects. Derivative effects, some causal and some asymmetric, could include severe economic and social disruption and health service breakdown, especially amongst large urban populations close to the eruption (as in the case of Vesuvius), in addition to wider geographic dispersion of the physical effects. And with climate change already a critical concern, additional stimuli could accelerate such change further at a very climatically sensitive time. Each of these second and third order events needs to be examined according to a separate set of profile configurations; hence the usefulness of the scenario outcome options index (see Sect. 8.3), albeit that different configurations might be selected from the index.

9 A Concluding Set of Questions

At a more general level of enquiry, one can compile a short list of questions for the scenario writer to ask as a starting point in relation to an event identified via this exploratory process, such as:

  • What time horizon should we be looking at for another major eruption?

  • What type and severity of eruption should be considered?

  • What locations are the most active and need to be considered?

  • What primary impacts should be expected?

  • What second and third order impacts might we expect?

  • What climate impacts should we expect in the short, medium, and long term?

  • How long will such impacts affect us?

  • What uncertainties have not been analysed or considered? (Maybe the unthinkable?)

  • What scenario options do we need to identify and prioritise?

  • How quickly should we carry out the scenarios and implement their findings?

  • What resources do we have? Do we need to access more within the time frame?

This is by no means an exhaustive list and I’m sure you can add to it; think of these questions as part of a starter pack.

Summary

The presence of high levels of uncertainty makes the task of identifying weak signals and outliers, which can mutate rapidly or conversely very slowly, problematical for analysts and decision-makers. Corporate and individual behaviours can act as additional barriers to weak signal identification. We have shown how, by combining two problem structuring methods, viable and internally consistent outlier scenarios can be identified at the periphery of the analyst’s vision: scenarios which might have been overlooked or ignored using more traditional forecasting approaches. Such is the distance of these scenarios from current knowledge that they offer real insights into options that are difficult to identify in multi-variable and highly complex problem spaces.

The next chapter continues exploring different examples of scenarios in both reactive and exploratory forms. In addition, we shall be looking at second and third order scenarios, where the first order or prime scenario allows spin-off or derivative options to manifest themselves.