8.1 Introduction

The Integration phase in ForSTI involves examining the models underpinning the alternative futures and their appraisals. This follows the construction of alternative scenarios in the Imagination phase, mainly with the use of future scenarios (Chap. 7). Models used at the Integration phase depict how things are related—or, more accurately, how we think they are related. Thus, models illustrate how components of future systems are seen to be interlinked and interdependent, and help us examine what kinds of synergies they may create through their interaction. The relationships between components may be derived from logical or theoretical analysis, or by using statistical estimation techniquesFootnote 1; the results of interactions, however, may be quite unexpected, especially in complex systems.

Whatever the focal topic of a ForSTI exercise, models of some sort will be used—in our thinking and discussions about this topic, in our development of appraisals of future prospects, and in our construction of strategies. Models may be spelled out in some detail, or remain largely implicit. The question is not whether to use models: it is whether we are aware that we are doing so, that we are using some models and not others, and that we therefore need to assess the validity and limitations of the models we develop or choose (or refuse). How fit for purpose are our models? This chapter examines how the modelling process can be made explicit and subject to analysis—whether and how we spell models out, including whether we put them into mathematical language. It will thus begin with a fairly extensive discussion of qualitative and quantitative approaches and tools.

Following the explication of the models, several steps may be undertaken. The consequences of different actions and events can be explored—as can the implications of experimenting with different initial assumptions as to the values of variables or the strength and direction of relationships.Footnote 2 This can be one way of establishing a range of possible alternative futures. But it can also contribute to more normative analysis. Different actions and strategies can be considered, and their effects in isolation and together can be estimated. The chapter will thus go on to examine some modelling approaches that contribute to activities that bring us closer to prioritisation, planning and policy analysis, which are the topics of subsequent chapters.

8.2 Becoming Aware of Modelling in ForSTI

In some ForSTI exercises models are expressed in highly formal terms, and documented with illustrations or equations. (Often, parts of the exercise draw on the results of models that have been developed and applied elsewhere—for example, to outline economic growth possibilities, the employment implications of new technologies, and so on. Sometimes an existing model will be modified to take the focal object into account.) In many ForSTI exercises models are less explicit, being most evident in the arguments that participants and rapporteurs are making. (They may well be making use of some broad frameworks that have been developed elsewhere, and often these will be ones that are in wide circulation as commonplace economic, management or sociological thinking.) When several models are drawn upon, their mutual coherence cannot be taken for granted. For example, economic models that assume no structural change may be employed alongside forecasts of sweeping social or technological change: the projections of the former are liable to be undermined by the latter. This is just one reason why it is important to reflect upon the adequacy of the models that are used.

Let us put to one side the use of the word “model” to describe the people whose job it is to pose in the clothes that fashion houses launch on the world with the hope that the rest of us will aspire to look like (or at least dress like) these people. Otherwise, in everyday speech a “model” most commonly refers to a scaled-down three-dimensional depiction of some three-dimensional object. Common examples of such models include miniature versions of buildings or townscapes, of ships and aircraft, and model railway systems. Some of these are actually four-dimensional—the model trains may move around the tracks, for example. Some are toys; some have more practical applications in education, design, and research. The aim is often just to create a sense of how the various components of the system or structure that is depicted relate together; miniaturisation, like distance, can provide perspective. We may be given a sense of how things work, as when the model aircraft’s propeller enables the craft to lift off, and its ailerons are used to control its trajectory. We can experiment with different circumstances, testing things to breaking point or finding more efficient ways of arranging things.

A general point about these—and many other—models is that very often they do not have to be composed of the same materials as the structure that they are intended to represent, nor do their power systems and other operations have to follow those of the original. (Electric power will often be used in models of steam railways and aircraft that use aviation fuel, for example.) Different sorts of models will be used when we want to demonstrate the properties of building materials or the operation of various types of engine—and such models are built for such relatively specialised purposes.

The word ‘model’ is commonly used in scientific discussions, reflecting the impact of twentieth-century developments in the philosophy of science. These can be summarised (with huge simplification) by the contrasting positions of Karl Popper (we cannot prove a theory and thus know the truth, we can only test and sometimes disprove one or other theory) and Thomas Kuhn (we see the world in the frame of different paradigms, which not only present different explanations of events, but also tell us what problems are interesting and what evidence is relevant). The result has been that researchers in natural sciences as well as social studies will often talk of their task as basically being that of formulating and testing models (this probably underplays the role of research in discovering unexpected things via serendipity). Sometimes ‘models’ are held to be very highly specified versions of ‘theories’—a given theoretical statement can be represented by numerous models which differ only minimally from each other, and models may take different forms, as we have seen.Footnote 3

Models are efforts to represent structures and systems. Various forms of representation can be devised—and are more or less tailored to particular purposes. A street map is a 2D (scaled-down) model of a city, and is much more useful to carry around for navigational purposes than a 3D construction of balsawood and Plaster of Paris would be. A conventional map is not very dynamic, but it can be used to plan or record journeys over time. This can be accomplished manually—and now online maps can suggest optimum routes and predict the expected duration of a journey. The online map goes beyond the traditional paper-based map: geographical positions have been translated into coordinates, and we are dealing with something closer to a mathematical model from which visual representations (the map) can be derived as well as texts describing routes.

Richardson (1984) discusses some 12 different approaches to modelling. Modelling by changing scale is just one approach; others range through categories like analogy, simplification, sampling and (mainly mathematical) symbolism. The term “simulation” is generally used where we are talking of models of processes: models that seek to represent the way in which a system operates over time. “Simulation models” are generally understood to be computer-based models, however; “simulation games”, in contrast, usually refers to games that human players engage in, for example taking the roles of different stakeholders in the system being modelled.

Physical models are still often used for hobbyist and educational purposes, and for testing physical structures (e.g. wind tunnels for testing aircraft design and pylon stability). But models may be expressed in words, as mathematical statements, in the form of graphical illustrations (e.g. flow diagrams and other visual depictions of phenomena) and so on. This chapter contains two long sections, discussing respectively qualitative models—expressed mainly through words and diagrams—and quantitative models—using the specialised language and logic of mathematics (and usually computer simulation).

Whether qualitative or quantitative, the critical feature of a model is its underlying conceptual framework—what elements, and what relations between elements, are invoked to represent the system or structure of concern? The particular language and terminology, tools and techniques that are used to elaborate this, and to derive analyses and even forecasts from it, can be highly significant (not least because they may conceal these conceptual underpinnings!). But it is the underlying framework that constitutes the model that we need to be able to (de)construct. Just what variables, and what relationships between variables, are being taken into account? Are these borrowed from previous work (and if so, what, and how has it been selected?), or developed by an expert group and/or a team working at their desks? Is there opportunity for different stakeholders to examine and comment on the key assumptions? Are we relying upon some standard theories, or are relationships evidence-based (and on what evidence)?

Human beings are always using implicit or explicit models in viewing the world and deciding what actions to take. The focus of the discussion below is on modelling as explicit practice in ForSTI work. Here we may be dealing with innovation systems, with particular STI organisations, with specific technologies or application areas, with topics like skills or risks, and so on. A system involves various elements (often called “nodes”) and the relations between them. These elements may be rather abstract phenomena (e.g. demand, employment levels), or actual actors or actor groups. The system also has boundaries, though it may be affected by, and have its own impacts upon, its external environment. Much conventional modelling deals with fairly abstract variables, but it is possible to have actor-based approaches (e.g. a qualitative approach is simulation gaming; a quantitative one is agent-based modelling).

What may be harder to deal with are the emergent properties of systems—the ways in which new actors, behaviours, and of course innovations may arise. This reflects a critical limitation of many modelling efforts. Our focal object (or some related topic) is being modelled. The model treats it as a system, a set of relationships among elements. One issue then concerns how well the description of elements and relationships serves as a description of key features of the focal object or topic. Over time the relationships, and the elements themselves, are liable to change (for example, new social actors, economic sectors or behaviours may emerge; quantitative changes may turn into qualitative ones—like ice turning to water turning to steam as the temperature rises; relationships may become unstable as they mature). Often the whole point of a ForSTI exercise is to get a better grasp on such potential changes, and the threats and opportunities they may present. Before placing too much reliance on models, it is important to consider whether their fundamentals may be disturbed by new phenomena that we have not been able to identify and build into the models.

Before discussing the main qualitative and quantitative types of modelling, we should briefly consider just what modelling is being used for.

  • Examination of the behaviour and future prospects of the focal object, and of the consequences of action.

    The model may be taken to be the most realistic view of the forces acting on and within the focal object, and thus as providing a guide to how it will evolve over time (and in response to various policy interventions or actor strategies). Models can be used in “exploratory” or “normative” modes, the former asking where current developments may take us, the latter asking how we might get to particular future states. In policy and strategy formulation for ForSTI, as elsewhere, one or other model (or sometimes a melange of elements drawn from different models) will always be underpinning and proposing decisions. Modelling may be used to help structure strategies and choices, not least by allowing for some estimation of the consequences of taking different actions (and also, of course, the consequences of unplanned or ongoing exogenous and endogenous influences).

    As noted above, modelling may be limited in various ways—by addressing only a few parts of a large system, by failing to capture emergent phenomena, and so on. Even a huge model that has been developed with the aid of many experts feeding in much data is still only a model, and the conclusions that we draw about its behaviour should not be assumed to be automatically accurate accounts of how the focal object could behave.

  • Testing of assumptions—as when researchers specify what might be expected to apply in particular empirical circumstances, given the theories we are bringing to bear.

    This is more of a general application of modelling than one used extensively in ForSTI. However, in the course of conducting ForSTI work, we may need to test some of our assumptions against data (or against expert opinion). For example, if it is being argued that a particular innovation is going to bring about massive efficiency gains, or reductions in jobs or energy use, can we examine this claim in the light of early data on the innovation, or data concerning analogous innovations in earlier periods? (It may be concluded that such claims often prove to be substantially overstated, or at least to anticipate a much more rapid rate of change than actually materialises.) The results of such tests may also prompt further elaboration of theories, for example with proponents of the case for rapid and large changes having to articulate why it is that they believe the future will differ from the past. Modelling may push practitioners and researchers to clarify their thinking, with more systematic specification of theory and of data requirements. It may allow for detailed analysis of the consequences of minor differences in assumptions (the values of variables or the form of the relationships between them), and of more major changes in assumptions, applications to different situations and data, special cases, etc. One role often played by numerical models is rather basic accountancy—simply telling users that choices will need to be made between investment decisions, for example, because the costs of various proposals do not add up.

  • Communication of assumptions, and of the implications of applying these assumptions.

    Model-building involves explicating and formalising understanding of how a system works—and often this understanding is largely implicit and has been only partly spelled out. The task is to (more or less systematically) determine the main elements of the system to be modelled, and the main relationships between these. Undertaking this task provides an opportunity to discuss points of view on these issues. This can allow for debate within the ForSTI team, and between the modellers and the audience of this part of the ForSTI work. It may involve tricky matters of operationalisation and approximation—for example, about how far existing available indicators can serve as measures of the model parameters. Often the discussions are mainly conducted within an expert group, and are then relayed to a wider audience only in the context of the whole model—where it can be difficult to identify the main assumptions, and how far and in what ways the results are driven by these. Time pressures in ForSTI may limit opportunities to critically reflect upon the models used, but discussion of assumptions can be a real learning experience for experts and wider stakeholders alike.

  • Obfuscation

    Hopefully, it is rare that a modelling activity deliberately aims to conceal its designers’ actual view of the focal object. There may be a conscious decision to omit some issues from the analysis—often because they are hard to deal with (for example, there is no good data on trends that can be drawn on), but sometimes because they could be embarrassing. This is rather frequent in the mobilisation of statistics to make a political case, but can also apply when a model is used in the course of making the case for an STI programme of some sort. (For example, some portions of the population may suffer from a major infrastructure development from which most are expected to benefit—this can be obscured by using only population averages for costs and benefits. In 2014, there was controversy in the UK over the planned construction of a new High Speed railway line, and efforts were made to keep details of cost-benefit modelling secret, since these pinpointed some substantial losers.) Even when obfuscation is not the intention, complex models may be very hard for non-experts to assess. Some people find visualisations difficult to follow, for instance, while many find it hard to follow elaborate mathematical formulations.

    It would thus be helpful for communication and education purposes to have the key assumptions of the model spelled out. Unfortunately, the modellers themselves may not always be aware that some of their assumptions are debatable, rather than the logical statements they assume them to be (in Kuhnian terms, this is because they are working within an unquestioned paradigmFootnote 4). Often, for example, simplistic economic arguments are assumed to be uncontroversial, when in fact they are very problematic (because the neoclassical framework they derive from itself involves all sorts of problematic formulationsFootnote 5); or indicators are assumed to be adequate for purposes far from their original intentions or actual capabilities (for example, the use of per capita GDP as a measure of social wellbeing). Model results may provide dramatic presentations that can “wake up” audiences to important ForSTI issues, as has recently been the case for climate change forecasts. But highly technical language and displays of expertise with sophisticated techniques may create a mystique which leads to uncritical attitudes or apathy on the part of audiences.

A model may be intended to fulfil more than one of these functions. Furthermore, it may be that a model is applied to purposes remote from those it was originally designed for. Since some forms of modelling can be quite costly to undertake, it is not unknown for an established model—developed for other purposes—to be used to set some parameters for a ForSTI study, without its numbers being taken too literally.

8.2.1 Qualitative Modelling Approaches

This section will outline a number of approaches to constructing and using models that do not depend on elaborate mathematical analysis. In practice, such approaches often involve some simple arithmetic—for example, score-keeping in games; and qualitative approaches may be a first step to quantitative modelling. As computer analysis of texts becomes more familiar, powerful and user-friendly, we may also expect to see normal speech and writing more often rendered into quantitative forms—word counts and concordances, for example.Footnote 6

8.2.1.1 Gaming

A game is a sort of model: it consists of a set of rules which players should follow in order to achieve their objectives. In simulation gaming, participants are assigned roles to play—they may be, for example, consumers, financiers, policymakers, pressure groups, companies, etc. They are presented with an outline scenario of the situation they find themselves in, and with specifications of their interests and goals, their resources and capabilities. The designers of the simulation game have prepared these materials, and also created (1) a process whereby “actions” (or “moves”) undertaken by the players result in changes in the situation; (2) a structure for the players to make their moves and interact with each other, over the course of a session of role-playing that is intended to represent a period of time in the real world (potentially years in the case of ForSTI); and often also (3) further interventions in the form of changes in the situation (e.g. the introduction of wild cards). Some games are “turn-based”, with players taking it in turns to choose their moves; some involve periods where they come together and periods for reflection or discussion within groups. The moves permitted may be completely open, with the players needing to communicate or expose what they are doing (to all, or just to selected others); or they may be highly structured, e.g. when the moves are limited to making investments or casting votes. Alliances are often permitted, with players pooling some resources and acting together for periods of time. A minimal sketch of this turn-based structure appears below.
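The following sketch illustrates, in Python, how the turn-based structure just described might be operationalised in a computer-assisted game. The roles, state variables, moves and wild cards are all invented for illustration; a real simulation game would elicit these from the exercise design.

```python
# A minimal sketch of a turn-based simulation game loop. All roles, state
# variables, effects and wild cards here are hypothetical illustrations.
import random

random.seed(42)  # reproducible illustration

# The shared "situation" that players' moves act upon.
situation = {"public_support": 50, "funding": 100, "tech_readiness": 1}

# Hypothetical players, each with one fixed move for simplicity.
MOVES = {
    "policymaker": lambda s: s.update(funding=s["funding"] + 20),
    "pressure_group": lambda s: s.update(public_support=s["public_support"] - 5),
    "company": lambda s: s.update(tech_readiness=s["tech_readiness"] + 1),
}

# Facilitator interventions: wild cards that change the situation.
WILD_CARDS = [
    ("media scare", lambda s: s.update(public_support=s["public_support"] - 15)),
    ("breakthrough", lambda s: s.update(tech_readiness=s["tech_readiness"] + 2)),
]

for turn in range(1, 4):  # each turn might represent a year of real time
    for role, move in MOVES.items():  # players take it in turns to move
        move(situation)
    if random.random() < 0.3:  # occasional wild card intervention
        name, effect = random.choice(WILD_CARDS)
        effect(situation)
        print(f"Turn {turn}: wild card - {name}")
    print(f"Turn {turn}: {situation}")
```

In a real game, of course, the moves would be chosen by human players rather than fixed in advance, and the interest lies in their negotiations and alliances rather than in the bookkeeping that the loop automates.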

Simulation gaming has a long history of application in military contexts—war gaming—and was used in RAND and other environments alongside scenario planning and other forecasting tools, by Herman Kahn and his colleagues (cf. Kahn 1960; Wilson 1970). It may be accompanied by the use of elaborate models representing armies, navies, etc., and as such it was often used for educational purposes; indeed, education and training is a common arena for application of simulation games more generally. Business games—often continuing the metaphor of warfare into the commercial environment—are also used in management and management training contexts. Because of the strong element of novelty and change in most ForSTI studies, and the long time horizons often considered, there is relatively little use of such games in this context.Footnote 7 However, role-playing games may be valuable in scenario development. For example, we might ask workshop members to play the various roles with which they would be confronted in a specific scenario. This could be a stimulating approach for eliciting ideas about the responses and counter-responses of various stakeholders in the context of, for example, breakthroughs in technological potential, or large-scale development projects. It might be particularly useful for examining the possible “reinvention” of technologies for purposes other than those originally intended.

In its applications for training and decision-making, simulation gaming is often known as “serious gaming”, drawing on the terminology used by Clark Abt in his 1971 review of the topic.Footnote 8 Such games have been extensively documented in a variety of guides (the journal Simulation and Gaming is devoted to the areaFootnote 9). One set of practices that has received less attention is “New Games”, designed to promote cooperation and learning, as well as fun. There may well be scope for making use of these ideas in the ForSTI context, too [see Fluegelman (1976, 1981), PGP (n.d.) and Yehuda (2008)].

Role-playing games have entered popular culture on a grand scale in recent years, in the forms of tabletop games (e.g. Dungeons and Dragons), Live Action Role Playing, and computer games. Often the latter are set in science-fictional scenarios—most frequently apocalypses or dystopias of one sort or another—and while they occasionally display flashes of creativity and originality, they rarely deal with plausible near futures, and mainly focus on combat situations. Some games involve multiple players, perhaps large numbers of them, interacting in an environment that they can largely construct themselves. There is clearly scope for designing futuristic scenarios within such environments, for example Second Life, as well as simply using them as places to hold discussions.

Another method widely employed for modelling in ForSTI projects is mind-mapping, which was presented in Chap. 4 as a method of interaction along with Brainstorming.

8.2.1.2 System Mapping

A wide range of approaches can be applied to visually depict a system’s component features (elements or nodes) and the relationships between them. Usually the features are represented by icons or text boxes of some kind, and the linkages by arrows.Footnote 10 System mapping is concerned with causal relationships, as the arrows imply. Usually these are fairly simple relationships: an increase in feature A leads to an increase or decrease in feature B (for example, if an innovation becomes cheaper, it is liable to be adopted more rapidly). Sometimes there will be a curvilinear (inverted U-shaped) relationship, as small amounts of A lead to increases in B up to a threshold, after which more A means less B. There may be reciprocal causality: feature B may exert influence back onto feature A (for example, higher levels of adoption of the innovation lead to increased production, which leads to economies of scale and decreasing costs). If each element influences the other positively, then we have a situation of growth of both, usually known as a virtuous cycle or an example of positive feedback. (In reality, the outcome may be far from virtuous, of course, because unrestrained growth is often harmful to other elements of the wider system.) A system map will contain more than two variables, though a good place to begin constructing such a map can be one such relationship.

A system map can be created in desk- or group-work by using a structured form of mind-mapping. The central starting point is the focal object. What are its important features—and how stable are these? The focal object can be represented as a set of boxes. What factors are influencing, or could lead to (or impede) changes in these features? These are represented as further boxes, with their influences on the focal object indicated by arrows linking them. The next step is to consider what factors might be influencing these influencing factors, and so on. This approach diverges from standard mind-mapping in that the links between the various branches of the mind-map are explored. Factors that are influential in one causal chain can also be influential in another (as opposed to being only related to one chain). As factors and linkages between them accumulate, discussion of which factors are most influential in shaping the system’s evolution may mean that the diagram can be simplified, without great loss, by excluding the less important factors.

Beyond simple arrows, more elaborate analysis of the scale of influence can be provided. For example, estimates of the direction of, and scale of, influence of each factor on the node that is being influenced can be made using, say, 5-point scales (ranging from +1 = small positive influence to +5 = very high positive influence, “positive” meaning increasing the size or likelihood of the nodes influenced; and −1 = small negative influence to −5 = very high negative influence, “negative” meaning decreasing the size or likelihood of the nodes influenced).
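To make these conventions concrete, the sketch below represents a small system map in Python as a signed, weighted directed graph, using the ±1 to ±5 influence scale just described. The nodes and weights are invented, loosely echoing the innovation-cost example above.

```python
# A toy system map: edges carry signed influence scores on the +/-1..5 scale.
# An edge (A, B) with weight -4 means "an increase in A strongly decreases B".
edges = {
    ("unit_cost", "adoption"): -4,           # cheaper innovation -> faster adoption
    ("adoption", "production_volume"): +4,
    ("production_volume", "unit_cost"): -3,  # economies of scale close the loop
    ("regulation", "adoption"): -2,
}

# Which factors exert the most direct (absolute) influence on the system?
influence = {}
for (source, _target), weight in edges.items():
    influence[source] = influence.get(source, 0) + abs(weight)
print(sorted(influence.items(), key=lambda kv: -kv[1]))

# The sign of a feedback loop is the product of the signs of its edges: here
# cost -> adoption -> production -> cost multiplies to a positive sign,
# i.e. a reinforcing ("virtuous cycle") loop.
loop = [("unit_cost", "adoption"), ("adoption", "production_volume"),
        ("production_volume", "unit_cost")]
sign = 1
for edge in loop:
    sign *= 1 if edges[edge] > 0 else -1
print("reinforcing loop" if sign > 0 else "balancing loop")
```

Even this toy example shows how, once a map is encoded, simple analyses (ranking influential factors, classifying loops) fall out mechanically; larger maps simply add more edges.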

An excellent example of system mapping comes from the UK Foresight Programme, in its project “Tackling Obesity: Future Choices”. Obesity may seem an unusual focus for ForSTI: but it is well known that obesity has become a highly problematic issue, especially in Western societies, with huge ramifications for populations’ well-being and the load on health services in particular. It is also a classic “wicked problem”, involving different sorts of professional knowledge and organisational responsibilities. It is, for example, of relevance to the several departments of government and public services concerned with education, food, nutrition, sport and health, as well as to private and voluntary organisations. Among the ForSTI issues that arise here are, for example, ones concerning bioscience (e.g. individual propensities to gain weight), medical treatments (from surgical to pharmaceutical interventions), food and nutritional sciences (analysis of the impacts of foodstuffs and dietary patterns of various kinds), and matters of education, lifestyle and exercise.

Box 8.1 reproduces the broad principles under which this exercise operated. It provides a helpful account of the background to qualitative system modelling, and thus we quote at length from the original text. The stress is very much on “causal loops”, in which (sets of) variables mutually reinforce each other, giving rise to long-term trajectories of change (in this case, the result being increasingly obese societies). The thinking of this project on the value of causal loop approaches is reproduced in Box 8.2. These models were constructed through groupwork during the course of the project, as outlined in Box 8.3.

This exercise was very time-consuming, and drew on a large number of expert and stakeholder inputs: group processes generated ideas as to key variables and relationships; the project consultants worked these up into more systematic and ever more elaborate versions of the system map. The consultants also input many ideas about variables and relationships, deriving from a series of reviews of the various scientific literatures bearing on obesity. (Projects in the third cycle of UK Foresight, like this one, generally commissioned such reviews of the “state of science” at an early stage of investigation of the focal topic.) The eventual map developed and used in the exercise had over 100 variables, with over 300 relationships. It was used as a framework for discussion (not just within the exercise, but in later rolling-out of the results to a wide range of interested parties), in the development and assessment of scenarios, and for other purposes.

The system map (over 100 elements with over 300 arrows linking them) may not do complete justice to the complexity of obesity in modern societies: there could always be important factors that are not recognised, for example. But it succeeds in foregrounding the interdependencies among highly diverse variables (ranging from industry strategies to consumer habits and behaviour change). The dialogue is reported to have helped exchange insights and forge relationships across stakeholders and disciplines, providing support for more “joined-up” policymaking.

(It should be noted that regulation of the food industry and its products remains highly controversial in the UK. Arguments have erupted over the years about food labelling, about the use of sugar and hydrogenated fats, about school meals, about the industrialisation of farming, and about many other related topics. There is widespread suspicion that successive governments have been unwilling to challenge major corporate interests and to support adequate inspections of food supply chains. Difficulties here demonstrate a basic point—policy decisions are ultimately political ones, and influential lobbies may be more important than the best available evidence. Many STI policy decisions are relatively apolitical, in the sense that there is little conflict between major political parties about them, and major lobbyists are less prominent than in many other areas of policy. STI related to food is one of the exceptions,Footnote 11 with debates about the introduction of GMOs in agriculture being a hot issue at the time of writing.)

But the model is highly complex; indeed it is difficult to represent on a two-dimensional surface like a (very large) sheet of paper. This is a common problem with large-scale system maps. As noted, there are over 100 variables ultimately influencing energy balance, each interconnected with others. Variables have varying numbers of inputs and outputs; there are feedback loops in which just two variables mutually influence each other in positive or negative ways, and feedback loops that encompass numerous variables. In one sensemaking effort at simplification, the variables were clustered into seven themes ranging from Food Production to Physiology.

Another approach was also taken, as a response to concerns that such a complex map is not just difficult to assimilate, it may actually deter or disempower some of the intended audience. (People may not know where to start; or they may feel that any action at one point is liable to be overwhelmed by the large number of influences that are depicted.) Finegood et al. (2010) suggested that one way to overcome the problem would be to group elements from the Foresight system map into various themes or clusters, as mentioned above. A quick overview of the map is then created by counting the connections within and between the themes (the numbers of linkages between individual variables situated in the same theme, or each of the other themes), and applying social network analysis software to these counts. The outcome is a simplified map, depicting the strength of linkages within and between themes (reproduced in Fig. 8.1).

Fig. 8.1
figure 1

Network diagram of the obesity system map. Source: Finegood et al. (2010)

In this simplified map, the thick border around Physiology reflects the numerous (33) interconnections among the variables within this cluster on the original Foresight map, whereas the thin border around Physical Activity Environment reflects few (8) interconnections within this theme. The thick arrow from Food Production to Food Consumption reflects the large number (22) of direct influences from Food Production variables on Food Consumption variables—while no variables in the latter cluster impact directly on the former. The complexity of the visual image has thus been reduced, while much critical information is summarised and some key relationships among clusters are highlighted. It makes a great deal of sense for ForSTI exercises to experiment with approaches like this one for simplifying complex maps—and they could be applied to computer simulation models, as well—in order to communicate assumptions and results more intelligibly.
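A minimal sketch of this simplification step, assuming a toy theme assignment and edge list (the real Foresight map has over 100 variables and 300 links), might look as follows: each variable is assigned to a theme, and the links are then tallied within and between themes.

```python
# Counting within- and between-theme links, as in the Finegood et al. (2010)
# simplification. The variables, themes and links below are invented.
from collections import Counter

theme_of = {
    "appetite_control": "Physiology", "satiety_signals": "Physiology",
    "food_prices": "Food Production", "portion_size": "Food Consumption",
    "snacking": "Food Consumption",
}

links = [  # directed (source, target) pairs from the full system map
    ("appetite_control", "satiety_signals"),
    ("food_prices", "portion_size"),
    ("food_prices", "snacking"),
    ("portion_size", "snacking"),
]

counts = Counter((theme_of[a], theme_of[b]) for a, b in links)
for (src, dst), n in counts.most_common():
    kind = "within" if src == dst else "between"
    print(f"{src} -> {dst}: {n} link(s) ({kind})")
```

The resulting theme-to-theme counts are exactly what Fig. 8.1 visualises: within-theme counts as border thicknesses, between-theme counts as arrow widths.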

Box 8.1: System Mapping for UK Foresight Obesity Project

“The system under study in the present project is the ‘obesity system’. Obesity is an attribute of a human being. For a given individual, obesity is associated with being over a normal body weight for their gender, age, height and build....

The key assumption underlying this qualitative mapping exercise is that obesity is the result of the interplay between a wide variety of factors, deriving, for example, from a person’s physical make-up, eating behaviour and physical activity pattern.

The obesity system, therefore, is pragmatically defined here as the sum of all the relevant factors and their interdependencies that determine the condition of obesity for an individual or a group of people.

  • What has been called a ‘factor’ is an attribute (characteristic) of a person or their environment that has an influence on that person’s level of obesity. ‘Factors’ are often referred to as ‘variables’… The term ‘variable’ suggests that the corresponding attribute is measured against a qualitative (ordinal) or quantitative scale and can vary over that scale. Some of the factors are fairly straightforward to describe in measurable quantities (for example ‘energy density’ of food), whilst others are psychological, cultural or environmental attributes that are more difficult to quantify (for example, ‘walkability of living environment’, referring to a physical environment’s suitability for movement on foot). As many of our variables are not readily quantifiable, we do not attempt to use the model to draw quantitatively-based conclusions, rather we consider overall trends and direction.

  • ‘Relevance’ is a pragmatic criterion for deciding which factors belong to the system:

In this project, relevance was chiefly determined by judgments of academic experts. The particular representation of the obesity system that has been developed constitutes our best understanding of the system within the given constraints of time and other resources in the project…

  • The interdependencies that the definition refers to are of a causal nature. In other words, the obesity system is a set of relevant, causally linked variables that determine the condition of obesity. So, any link between variables a and b in the system needs to be interpreted as ‘the level of a is causally linked to the level of b.’ A distinction will be made between positive and negative linkages.

  • The obesity system can be scaled at various levels of aggregation (individual, group, society), depending upon the level of aggregation of the constitutive variables. One can think of an obesity system operating ‘around’ an individual or a group of people. In the latter case, the variables represent average values for a given group…

The causalities discussed so far are linear causalities (from ‘a’ to ‘b’). Circular causalities (e.g. from ‘a’ to ‘b’ to ‘a’) in systems maps are called feedback loops. They are an important feature of causal loop models because they help to explain the dynamic behaviour of the system.

There are two kinds of feedback loops: reinforcing (or positive) and balancing (or negative) loops. Reinforcing loops encapsulate exponential growth whilst balancing loops push the system towards an equilibrium value:

  • An example of a reinforcing loop from the obesity system map is the following: if the ‘demand for convenience’ by consumers increases, the ‘convenience of food offerings’ from food manufacturers is likely to increase in response. If consumers then become habituated to these convenient products, their cooking skills are likely to diminish. Hence, an increase in the ‘convenience of food offerings’ triggers ‘de-skilling’ of people. And this, in turn, can be expected to increase the demand for convenience. And so on, until compromises on taste or price will flatten the dynamic.

  • A balancing loop is at the very core of the obesity system: when human beings’ ‘level of available energy’ decreases, they experience a ‘physical need for energy’. The stronger that need is, the more effort will be invested in ‘acquiring new sources of energy’ or to ‘preserving the energy’ that is already available. This, in turn, will lead to a higher level of available energy, which will finally dampen the physical need for energy. By this means, the system remains in equilibrium. The primary purpose of this exercise is to understand how the broad range of variables influences energy balance, leading to it becoming imbalanced…

The key purpose of building a causal loop model is to gain insight in the underlying structure of a messy, complex situation. A system map shows how ‘variables interrelate’ and where there are opportunities to intervene in the modelled system to influence its behaviour. A secondary objective could be to impart that insight to a wider audience. System maps are arguably one of the most effective tools with which to visualise complexity. In short, the essential contribution of a causal loop model is to summarise and communicate current trends, relationships and constraints that may influence the future behaviour of a system.

However, a causal loop model is not a predictive model. It does not allow future levels of system variables (and, hence, prediction of the level of obesity at a given point in the future) to be foreseen.”

Source: quoted from Vandenbroeck et al. (2007) Chapter 2 (p2 passim)

Box 8.2: The Use of Causal Loops in System Maps (UK Foresight Obesity Project)

“A causal loop model is a device to describe the systemic structure of a complex problem. As such it serves three very general purposes:

  • to make sense of complexity: individuals who have been deeply involved in the construction or study of a causal loop model will appreciate its considerable heuristic power. In particular, once the top-level architecture of a model (rather than its fine detail) has been thoroughly absorbed, it becomes a powerful filter for identifying relevant variables and an aid to thinking about the issue.

  • to communicate complexity: the anatomy of a system map—particularly with a fairly large number of variables and many causal linkages between them—is a clear confirmation of the inescapably systemic and messy nature of the issue under study. This approach highlights the need for broad and diversified policies or strategies to change the dynamics of the system.

  • to support the development of a strategy to intervene in a complex system: careful study of a causal loop model will reveal features that help in deciding where to intervene most effectively in the system… leverage points, feedback loops and causal cascades....

Decision-makers need to focus on a system’s leverage points if they are to effect change. Leverage points are variables in a system map that have an important effect on the system’s behaviour. They can be recognised as ‘hubs’, where many arrows are leaving from and coming into different variables. Leverage points pick up changes from many variables and transfer these on to other parts of the system (first to those variables linked directly to the hub, and then further afield). Particularly important are those leverage points that are directly connected to the system map’s central engine. These are called key variables. They will be sensitive conduits of change to the system’s basic dynamic architecture. In the obesity model, four variables have been identified as key variables:

  • the level of psychological ambivalence experienced by UK citizens in deciding lifestyle choices (food, exercise)

  • the force of dietary habits preventing UK citizens from adopting healthier alternatives

  • the level of physical activity UK citizens engage in

  • the level of primary appetite control in the brain.

These four variables impose themselves as crucial elements of any obesity policy portfolio. It is perhaps noteworthy that there are four key variables rather than just two (eating and exercise) and that the key food variable focuses on habits rather than actual intake.

Feedback loops are a defining feature of causal loop diagrams as they determine the dynamics of the system… Focusing, for example, on undesirable positive feedback loops may suggest useful options for policy by evaluating where balancing loops can be imposed or where linkages in reinforcing loops can be broken. Similarly, inflexible situations which are in equilibrium (lockins) could be countered by removing bottlenecks and restrictions in resources or by stimulating new reinforcing loops to undermine the status quo…

Studying feedback loops and leverage points in system maps can be a fertile basis for developing policy options. However, maps can also be used to study how a given policy option might affect the system. First, an inventory is made of the variables in the system map that are apparently affected by the policy measure. Secondly, verification of how these measures causally propagate through the model and how they affect the feedback loops driving the system. By mapping out these causal cascades, it is often also possible to verify whether a given measure may at some point result in unintended consequences in another part of the system…”

Source: Vandenbroeck et al. (2007) Chapter 2 (p8 passim)

Box 8.3: Development of the Obesity System Map

Vandenbroeck et al. (2007, Appendix B) explain that the development of the obesity system map involved two main phases of work, which are summarised below.

Phase One: This involved focused work undertaken by the Belgian consultancy group facilitating the project (WS—now shiftN), accompanied by four interactive workshops. These involved experts (from a variety of disciplines) and individuals from stakeholder groups, including policy makers and representatives of business and civil society; the workshops generated and evaluated ideas that were then formalised by the consultants.

  • The first workshop laid the foundations of the system map. This involved identifying the system map’s nodal variable, which was defined as ‘energy balance’: the difference between energy intake and expenditure. Using the results of this first workshop, WS proceeded to develop the first version of a causal loop model, “the core system engine”. WS also put together a preliminary database of causal links influencing the elements of this model, drawn from short science review papers (another part of this Foresight project) and grouped into four sets:

    • food and food environment;

    • cultural and psychological;

    • socioeconomic; and

    • physiological influences on energy intake or expenditure, or on obesity.

  • The second workshop reviewed the draft causal loop diagram and the database of causal links (considering factors like size and direction of impact, and uncertainty about and explanations of the link). Subgroups examined the four sets of influence. Twenty-five key linkages were emphasised by the workshop. These were subsequently used by the consultants to develop the “first draft model”, a much more elaborate system map than the draft causal loop diagram the workshop started with.

  • The third workshop reviewed this first draft model, discussing whether important variables and relationships were missing. Again, the workshop results were used to build a more elaborate model. However, it was felt that the physiology subsystem was underdeveloped in this version, so a fourth workshop was set up to provide further input.

Phase Two: Following these rounds of activity, the work involved gradual “streamlining” of the map into the final version of the model. A version of the map was produced that was circulated together with a questionnaire to experts, in this case asking for views on the relative importance of the causal linkages featured in the system map, not for yet more suggestions about influential variables. An interesting further step here was to ask the experts to consider how far the map varies across social groups: to what extent the influences were subject to gender, age, ethnicity and socioeconomic class effects. Thus versions of the map could be produced for specific social groups. Additionally, sections of the overall map could be “pulled out” as derivative maps for examining one or other set of influences in more detail.

8.2.1.3 Relevance Trees and Morphological Analysis

A number of tools that are often described as “normative forecasting” methods are relevant to long-term STI planning. Many of these tools were developed within the context of large managerial and technological programmes, aimed at specific objectives requiring much effort in the development of complex technological systems. Outstanding among these was the US space programme, which in the 1960s was facing such questions as “how can we get an American safely to, and back from, the surface of the Moon?” Roadmapping is one such tool, but it is discussed in a later chapter (Chap. 9), since this widely used technique puts particular stress on the actions that need to be taken to achieve future objectives. The two tools examined here are Relevance Trees and Morphological Analysis. These methods can be applied for various purposes, one of which is to examine a broad range of possible future circumstances and choices, and how different sorts of capability and knowledge requirements would need to be developed and deployed.

A relevance tree is, as its name implies, a tree-like diagram—or, when presented so that the “branches” spread out downwards, more like a diagram of the roots of a tree, resembling a typical organogram. The branches divide a broad topic into increasingly smaller subtopics. Transport might be divided into transport of people and of freight; each of these into land, sea or air transport; and so on. The diagram then represents the various critical aspects of a system—or of a problem and/or its possible solutions. So if the focal object concerned reducing CO2 emissions associated with transport, attention would be directed towards these different modes and functions of transport. Specific relevance trees could be constructed for specific modes of transport, for example. One relevance tree might consider the different technologies used in one mode, leading to consideration of the emissions (and other costs) associated with different technologies. Thus there are relevance trees outlining different options for solar-powered automobiles. Another tree might consider the different purposes to which transport is being put, leading us to consider the various forces influencing transport usage in and across modes. Relevance trees can also be used in the planning and design of ForSTI exercises.
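Because a relevance tree is simply a hierarchy, it is straightforward to encode. The sketch below uses the transport decomposition from the text as a nested Python dictionary; the traversal prints each subtopic indented by its depth, mimicking the branching diagram.

```python
# A relevance tree as a nested dictionary. The upper levels follow the
# transport example in the text; deeper branches would be added as the
# analysis proceeds (empty dicts mark undeveloped branches).
relevance_tree = {
    "transport": {
        "people": {"land": {}, "sea": {}, "air": {}},
        "freight": {"land": {}, "sea": {}, "air": {}},
    }
}

def print_tree(node, depth=0):
    """Walk the tree, printing each topic indented by its depth."""
    for topic, subtopics in node.items():
        print("  " * depth + topic)
        print_tree(subtopics, depth + 1)

print_tree(relevance_tree)
```

Such a structure also makes it easy to attach scores or notes to each branch when the tree is used for appraisal.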

Morphological analysis similarly involves breaking a problem area or focal topic up into various subcategories. A grid structure is created, rather than a tree. In the grid, different options for each of these categories (e.g. land, water, air in the case of transport) form different cells—this is known as the “morphological field”. The grid can be used to explore combinations of different possibilities. Some combinations will prove logically impossible, others may be ruled out as too costly or technically difficult. Other combinations may provide possible routes to an objective, or solutions to a problem. The approach can also be used for constructing scenarios, where different combinations are explored as the basis for distinctive scenarios. In a study of futures for agricultural biotechnology in the UK, specifically concerned with crops used for purposes other than food, the Institute of Innovation Research (2003) developed four such scenarios. The combinations concerned two alternatives for each of two issues. First, would new bioscience be applied to create GMOs; or would it be applied only to other purposes, such as improved plant breeding (the use of GMOs being a controversial theme)? Second, if the GMO route were to be taken, would it be used only in “contained environments” (such as greenhouses) or more widely? This is a relatively simple scenario structure—often the scenario definition is much more complex, with numerous sets of options being related together to form a set of scenario pathways.

Much material on morphological approaches is provided by the Swedish Morphological Society.Footnote 12 This Society has published accounts of the application of these approaches in a scenario study for the Ministry of the Environment in Sweden, concerning Extended Producer Responsibility (EPR). EPR involves making producers of goods responsible for waste or used products: the products will need to be reused, recycled or applied to energy production. Eight major “external” factors were identified that might influence a Swedish EPR system. Over 20,000 configurations of these factors were notionally possible: of these, around 2000 were believed to be internally consistent and thus feasible combinations—and, in principle, these could be the profiles of alternative scenarios. Ritchey (2009), however, argues that usually some 8–12 configurations can be chosen to cover all of the cells and give a good spread of possible scenarios. In this study, eight configurations were chosen by a working group, with every state of the factors being represented in at least one scenario.Footnote 13 Figure 8.2 displays the cells selected for one of these scenarios.

Fig. 8.2
figure 2

A scenario profile identified through morphological analysis. Source: Ritchey (2009)

In a further step, different EPR strategies were developed, concerning “internal” factors (those that can be more or less controlled); these were then examined for their fit to each of the scenarios, giving an assessment of the robustness of strategies across ranges of scenarios. Figure 8.3 depicts how three strategy alternatives (developed from a strategy matrix) are related to the eight scenarios (from a scenario matrix), together with a fourth alternative—“no strategy available”. Here, strategy B is shown to be able to cope with 3 of the 8 hypothesised scenarios, and 2 of the scenarios (light blue) are beyond any of the strategies.

Fig. 8.3
figure 3

Morphological mapping of strategies against scenarios. Source: Ritchey (2009)

Both relevance trees and morphological analyses are ways of thinking systematically about the topic of concern. They are used in planning, but are more than just routine planning tools—the process can allow unexpected possibilities to emerge, new appraisals of the future to be established, and new options to be identified. It requires substantial effort on the part of the people who implement these approaches, since they are far from easy to use. They require in-depth analysis, and usually need to be supported by people familiar with the techniques; expertise in the problem fields will also be required. Lengthy work may be involved, since numerous alternative elements, and combinations of these elements, may arise. The outputs may be rich in technical detail—indeed they run the danger of being overloaded with it, making it difficult for lay people to fully grasp and use them. The Swedish Morphological Society argues that new computer-assisted approaches may make the process both easier to apply and easier to understand. As with system mapping, it is important to maintain the comprehensibility of the assumptions and of the results that flow from applying them.

8.2.1.4 Cross-Impact Analysis

Developed by Theodore Gordon and Olaf Helmer in the USA in the mid-1960s, this method has some relation to system mapping (described above). Its focus is also on interrelations between variables—or rather, on expert views of these interrelations, which we can rarely determine from pure logic or common sense. Cross-impact analysis is labour-intensive. It requires systematic consideration of the links between each pair of variables being considered: the number of pairwise relationships to be examined grows roughly with the square of the number of variables. Since expert judgement is required to estimate how each variable interacts with each other variable, and experts’ time and patience is limited, it is common to work with a much smaller number of variables in cross-impact work than in system mapping.

While the method may be employed in different ways, the typical approach is to ask first, how likely is it that A will happen? That B will happen? And then to ask: if A were to happen, what would now be the likelihood that B will happen? And vice versa. (ForSTI applications of such analysis often deal with events—e.g. a particular technology being developed, or a variable such as the level of adoption of an innovation reaching some threshold value.) The result is a matrix of cross-impacts in the form of conditional probabilities. In effect this is a small system map, represented as a matrix, with each variable connected to each other. The matrix can be analysed in various ways. One use may be to see which variables have the greatest direct and indirect influences overall. Another is to see which variables are having the greatest impact on a parameter of particular concern. Indirect effects, which may be overlooked in less detailed analyses, can be determined through this approach. A further use of the matrix is to examine all potential combinations of variables and estimate the probability of each occurring—these are, in effect, the profiles for a number of scenarios (with some assessment of probabilities). A simple sketch of such an analysis follows.
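The sketch below illustrates, under strong simplifying assumptions, how a cross-impact matrix might be explored by Monte Carlo simulation: events are resolved in random order, and the probability of each event is adjusted (here by crudely averaging the relevant conditionals) according to the events that have already occurred. The events and all the numbers are invented; real exercises elicit them from experts, and production cross-impact algorithms use more careful adjustment rules.

```python
# Toy cross-impact run: initial probabilities plus conditional probabilities,
# explored by Monte Carlo. Events and numbers are hypothetical.
import random

random.seed(1)

events = ["A", "B", "C"]
p_initial = {"A": 0.5, "B": 0.4, "C": 0.3}
# p_given[x][y] is the probability of x given that y has occurred.
p_given = {
    "A": {"B": 0.7, "C": 0.5},
    "B": {"A": 0.6, "C": 0.2},
    "C": {"A": 0.4, "B": 0.3},
}

def run_once():
    occurred = set()
    for e in random.sample(events, len(events)):  # random resolution order
        conditionals = [p_given[e][o] for o in occurred]
        # Crude rule: average the conditionals of already-occurred events.
        p = sum(conditionals) / len(conditionals) if conditionals else p_initial[e]
        if random.random() < p:
            occurred.add(e)
    return frozenset(occurred)

tallies = {}
for _ in range(10_000):
    outcome = run_once()
    tallies[outcome] = tallies.get(outcome, 0) + 1

# Each outcome set is, in effect, a scenario profile with an estimated probability.
for outcome, n in sorted(tallies.items(), key=lambda kv: -kv[1]):
    print(sorted(outcome), n / 10_000)
```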

The cross-impact method thus seeks to apply structured reflection to assess the interrelations between elements of the system we are examining. It has been widely used in the French “la prospective” tradition (and also, apparently, within the CIA and wider US intelligence community), where a high level of commitment has been obtained from participants. Since it involves much estimation of conditional probabilities, the approach can be very demanding of those required to make the judgements. The task requires careful assessment of discrete elements—perhaps too painstaking for some participants’ patience. Cross-impact estimations tend to be completed by means of questionnaires—which reduces the opportunities for discussion and knowledge exchange (except perhaps in the initial specification of key variables).

The approach has the virtue of aiming at a comprehensive mapping of interrelationships, and is thus regarded by some commentators as superior to methods such as Delphi, where individual developments are usually examined in isolation. Just how comprehensive this is may be questioned, however. It is possible that combinations of events will act together in nonlinear ways, even cancelling each other out; in some circumstances one event may shift the conditional probabilities associated with another one. The approach provides seemingly more precise results than a system map, and works with fewer variables. Proponents of the method argue strongly for its usefulness, and again it can be hoped that computer tools will allow for more engaging versions of the technique to be developed. Finally, we should note that in attaching numbers to the various relationships between a set of elements, cross-impact approaches are moving very close towards quantitative modelling, and some of the analytic procedures that are applied to the matrices take this forward further still. We now turn to other quantitative techniques.

8.2.2 Quantitative Modelling

Trend analysis (see Chap. 5) is a basic form of quantitative modelling, though often the only variable that is seen to be influencing the trend is the passage of time. Time is really the stage on which processes—relations between variables, actions of stakeholders—play out, and more sophisticated quantitative approaches will seek to explore the effects of actions and interrelationships over time. Methods like cross-impact analysis already involve some quantification of links between variables, but a range of techniques allow for consideration of many more variables than can feasibly be handled by the cross-impact approach.

As with other methods in ForSTI, the assumptions that are used in modelling are critical to the outcomes of the analysis. All models have to make assumptions about what variables (and what sorts of variables) to include, and similarly about the (sorts of) relationships between variables to work with; all have to face the challenge of finding data (or making “guesstimates”) to calibrate the model with numbers that it can process. There are, however, fundamental differences in assumptions between broad classes of model. For example, a whole family of economic models is based around the notion that economic systems fundamentally tend toward equilibrium conditions, and applies computational procedures aimed at finding what these equilibrium states are. Again, some models focus on stocks and flows between variables, while others (game-theoretic and agent-based approaches) try to model the behaviour of actors who influence each other. Within any approach, choices are being made about what factors to treat as important, and which relationships to take into account. Implications of this basic point will be apparent when we discuss major world modelling efforts below.

Ciarli et al. (2013a—and see also 2013b) provide a useful discussion of quantitative tools used in ForSTI exercises: some 64 tools are identified, and a smaller set of 26 of the most common (and some recent but heavily used) tools is selected for further analysis. The study reviews the advantages and limitations of the tools, which organisations most typically use them, and in what contexts they are deployed. As Fig. 8.4 displays, the techniques can be characterised by the sort of application they find: some are more about describing and making sense of the current situation of the focal object; some are used for more or less sophisticated forecasting of the development and outcome of trends; and some are applied in a fully-fledged ForSTI context. This is a helpful way of considering the range of quantitative tools that may be brought to bear within exercises, and the study provides a provocative and illuminating review of various features of these tools, to which the reader is referred.

Fig. 8.4
figure 4

Quantitative methods used in ForSTI and related activities. Source: Table 1 (p. 22) in Ciarli et al. (2013a, b)

It will be apparent that some quantitative tools—such as survey analysis or bibliometrics—are liable to be more about identifying trends or hot issues than about modelling as considered here. Models appear in their framework as tools that are used more when we are considering such things as alternative futures, the results of different interventions, and the like. (Further along the axis that points towards “Foresight” is roadmapping, reflecting both its normative components and its typical reliance on dialogue within groupwork.)

When we speak of quantitative modelling, we usually refer to the use of computer simulation models of one kind or another. This may be somewhat misleading. Extrapolation of trends, we have argued, is a very simple form of modelling, and it is not difficult to apply more sophisticated methods of trend analysis without computers (though these days just about all analysts will be employing at least spreadsheets, if not more complex tools of mathematical statistics, to organise data, fit curves, and so on). For example, it will often make sense to employ theories, such as those involved in diffusion analysis or product/industry cycle theories, to fit trend data to S-curves or similar “extrapolations”. This is a form of modelling too, but it is not strictly simulation modelling, which involves more dynamic approaches.
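As a sketch of such non-simulation modelling, the following fits a logistic S-curve to a synthetic adoption series using standard curve-fitting tools; the data, the implied saturation ceiling and the initial parameter guesses are all invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic (S-curve) diffusion model: adoption rises slowly, accelerates,
# then saturates at a ceiling K; t0 is the midpoint, r the growth rate.
def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic adoption data (purely illustrative): years and % of adopters.
years = np.arange(2000, 2015)
adoption = np.array([1, 2, 3, 5, 8, 13, 20, 30, 41, 52, 62, 70, 76, 80, 83])

# Fit the curve; p0 gives rough initial guesses for ceiling, rate, midpoint.
(K, r, t0), _ = curve_fit(logistic, years, adoption, p0=[90, 0.5, 2008])

# "Extrapolation" here is simply evaluating the fitted model beyond the data.
print(f"Estimated ceiling: {K:.1f}%, midpoint year: {t0:.1f}")
print(f"Projected adoption in 2020: {logistic(2020, K, r, t0):.1f}%")
```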

8.2.2.1 Computer Simulation

Computer simulations represent a system in terms of its key components and the main relationships between these, in order to project how the system will operate over time, or how it may respond to specific interventions. Many of the issues dealt with in ForSTI exercises are hard to model systematically with quantitative tools, not least because there is often inadequate empirical material to inform the model. Some topics may be relatively easy to examine: diffusion patterns of new products and processes, the labour force implications of training programmes, and the costs of cleaning up pollution. Many topics are more difficult to model—such as the pace and location of technological breakthroughs, the outcomes of R&D funding, or public responses to new technologies. Being able to reliably forecast the rates of adoption of innovations before they enter the market, or the likelihood of one or another design framework achieving dominance, would give firms huge advantages.

Computer models are, not surprisingly, used most extensively to simulate systems that have relatively easily quantifiable properties. Economic forecasts (many of which are issued on a routine basis) use models whose key variables can mainly be expressed in terms of monetary values (e.g. investment, consumption levels) or of headcounts (e.g. people in employment in different sectors, people unemployed). Demographic models also use population statistics (which, along with economic statistics, are regularly produced by the government statistical apparatus), and track and forecast the numbers of people being born, dying, migrating, and so on. Transport and related models draw on data on traffic movements and on the location of dwellings, workplaces, shops, etc. (specialised “gravity models” are employed here, to reflect the relative attractiveness of different locations). Weather forecasts and climate change projections are based upon meteorological models dealing with variables such as temperature and precipitation in particular areas.

The statistical data produced for governments (and sometimes for private organisations) can be used to estimate the relationships between variables—for example, the effect of price increases in one class of goods on prices in another class, or on the demand for these products. Data analyses involve sophisticated tools of mathematical statistics, though these tools often carry latent assumptions about the sorts of relationship that may be described. The models mentioned usually deal with aggregates—economic sectors, population groups, etc.—and consider actors as behaving in predictable ways within these aggregates. For some purposes this may be quite reasonable. But when we are dealing with STI issues we may well find actors seeking to behave in new ways—to innovate, with or without technological innovation—and to learn from experience. “Game theory” models have become quite popular as ways of accounting for the interaction of different actors whose behaviour will result in outcomes that vary according to how the others choose to act. “Agent-based” modelling is still a fairly novel set of approaches, albeit one with considerable promise: it describes systems in terms of different entities that are interacting (and thus the entities themselves have to be modelled in terms of their goals and capabilities).
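The flavour of such game-theoretic reasoning can be sketched with a hypothetical coordination game: two firms choose between competing design frameworks, and we enumerate the pure-strategy Nash equilibria, i.e. profiles from which neither firm gains by unilaterally switching. All names and payoffs below are illustrative assumptions.

```python
import itertools

# Hypothetical coordination game: two firms each choose a design framework.
# Payoffs (firm1, firm2) are invented: both gain most when they adopt the
# same standard, but each would prefer its own technology to win.
choices = ["StandardX", "StandardY"]
payoffs = {
    ("StandardX", "StandardX"): (3, 2),
    ("StandardX", "StandardY"): (0, 0),
    ("StandardY", "StandardX"): (0, 0),
    ("StandardY", "StandardY"): (2, 3),
}

# A profile is a pure-strategy Nash equilibrium if neither firm can do
# better by unilaterally switching its choice.
def is_nash(c1, c2):
    p1, p2 = payoffs[(c1, c2)]
    best1 = all(payoffs[(alt, c2)][0] <= p1 for alt in choices)
    best2 = all(payoffs[(c1, alt)][1] <= p2 for alt in choices)
    return best1 and best2

for c1, c2 in itertools.product(choices, repeat=2):
    if is_nash(c1, c2):
        print(f"Equilibrium: firm1={c1}, firm2={c2}, payoffs={payoffs[(c1, c2)]}")
```

Even this toy example has two equilibria, which is exactly why outcomes depend on how the other actors choose to act: the model cannot say which standard wins, only that coordination on either is self-reinforcing.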

Even when there are no obvious sources of evidence with which to “calibrate” a model, trying to create a simulation model can force us to think systematically about our assumptions concerning the dynamics of a system, and make us search for relevant data with which to test, explicate or elaborate such assumptions. It can also allow us to explore alternative starting conditions, events and interventions, and even to experiment with changing assumptions and to compare the behaviour of models of the same system based on different understandings of how it operates. One of the main claims of the modelling community is that models enable us to deal with numerous variables simultaneously—to explore relationships and the working-out of multiple changes in ways that are quite beyond our normal mental processes. Provided no programming mistakes have been made, a computer can meticulously perform the huge number of calculations needed to process the model and its calibrated data. Finally, quantitative results can be presented in many ways—graphs, charts etc.—allowing us to compare results obtained, for example, from different calibrations, for different interventions, and so on.

However, the quality of a model is only as good as that of the assumptions on which it is based (and the data with which it has been calibrated). One of the main drawbacks of modelling for ForSTI is that these assumptions may be obscured—by the technical language involved, and by the degree of expertise required to pick apart an elaborate model. The public may be less inclined than previously to accept computer output as authoritative, let alone definitive. But the sheer volume of seemingly precise results may well have a dampening effect on critical inspection.

8.2.2.2 Limits to Growth and Global Models

Computer simulations began to attract widespread public interest in the 1970s, when The Limits to Growth study of the Club of Rome (Meadows et al. 1972) achieved considerable publicity. The model used here—quite simple by current standards—employed the simulation technique known as Systems Dynamics, in an effort to describe the long-term future of the world system. Introduced by Jay W. Forrester from the 1950s onwards (Forrester 1961, 1968), Systems Dynamics allows for nonlinear relations (the impact of A on B can be exponential, for example) and for feedback (A can affect B, B can affect A). In the Limits to Growth model, the world was analysed as one single economy, composed of population, agricultural production (needed to support the population), industrial capital and industrial output, non-renewable resources (consumed by industrial activity), and pollution (generated by industrial production, and reducing agricultural production). Figure 8.5 displays the structure of the model. Economic growth is seen as consuming resources and creating pollution; ultimately, population growth and the associated demand would outstrip the capabilities of agricultural production, leading to “overshoot and collapse”—a Malthusian disaster, with a crash of world population. The “standard run” of the model, depicting this, is reproduced in Fig. 8.6.

Fig. 8.5
figure 5

The structure of the Limits to Growth model. Source: online version of The Limits to Growth (Available at: http://collections.dartmouth.edu/teitexts/meadows/diplomatic/meadows_ltg-diplomatic.html#pg-14, accessed on: 09.09.2014)

Fig. 8.6
figure 6

The “Standard Run” of the Limits to Growth model. Source: online version of The Limits to Growth (Available at: http://collections.dartmouth.edu/teitexts/meadows/diplomatic/meadows_ltg-diplomatic.html#pg-14, accessed on: 09.09.2014)
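The flavour of Systems Dynamics, stocks and flows linked by nonlinear relations and feedback, can be conveyed by a toy simulation like the one below. This is emphatically not the World3 model used in Limits to Growth; the variables, equations and parameter values are invented solely to show how overshoot and collapse can emerge from a simple feedback structure.

```python
# A toy stock-and-flow simulation in the spirit of Systems Dynamics:
# a population grows by consuming a non-renewable resource, and growth
# turns to collapse once the resource is depleted. All equations and
# parameters are invented for illustration; this is not World3.
population, resource = 100.0, 1000.0

for year in range(200):
    if year % 20 == 0:
        print(f"year {year:3d}: population={population:7.1f}, resource={resource:7.1f}")
    availability = resource / 1000.0           # fraction of initial stock left
    births = 0.05 * population * availability  # birth rate falls with scarcity
    deaths = 0.01 * population + 0.04 * population * (1.0 - availability)
    population += births - deaths              # the feedback loops close here
    resource = max(resource - 0.1 * population * availability, 0.0)
```

Running this prints a rise, a peak, and a decline: nothing in any single equation “contains” the collapse, which emerges from the interaction of the feedback loops. That emergent quality is what made such models compelling, and what makes them hard to validate.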

The Limits to Growth study came under criticism from various quarters, but intensive marketing, and perhaps the novelty of the approach (at a time when computers were remote “electronic brains”, often seen as more objective and rational than humans), meant that it was widely disseminated and hugely influential. With translations into more than 30 languages, and extensive media coverage, it is probably the best-known futures study to date. Another factor in its success may have been the “common sense” view that resources are finite, and that we cannot continue forever creating more pollution. Recently, a number of reports have concluded, essentially, that Limits was right. Thus Turner (2014) compiled data suggesting that the main trends depicted in that study were proving uncannily prescient, concluding that resource constraints were posing growing problems for the world order—just as many of Limits’ supporters in the 1970s believed.

But Limits was also subjected to one of the most detailed critiques of any futures study, with Cole et al. (1973) dissecting the model and arguing that it failed on a number of critical points (for a detailed history of the first years of the debate, see Moll 1991). Resource estimates were consistently pessimistic, ignoring the scope for discovery and substitution. The “holistic” view of the world as a system was achieved by eliding the difference between rich resource-gobbling societies and poor countries where the “population explosion” was apparent. It was based on an embedded Malthusian view that would always generate cycles of growth, overshoot and collapse—in part because there was no scope for learning in the system. Innovations and structural changes that might decouple growth from resource consumption were not considered. The many criticisms led some commentators to dismiss Limits as an overambitious attempt to grapple with serious issues without sufficient grasp of their complexity. In some quarters this reinforced a view that the global economy would be self-correcting—“if these shortages really were severe, we would see little boys out recycling scrap”, as one well-known economist remarked at a meeting attended by one of the present authors. For other commentators, it meant that we had to build bigger and better models as a matter of urgency. For others still, it was an urgent wake-up call that should not be neglected just because a multitude of “technical” errors were pinpointed by (allegedly) more mainstream researchers.

Following Limits, there was a wave of efforts to build computer models of the world system, some of them addressing (some of) these criticisms (for reviews see Cole’s contribution to Freeman and Jahoda 1978). For example, the Bariloche model (Herrera et al. 1976) took a perspective more attentive to the challenges facing developing countries, while work at IIASA (Mesarovic and Pestel 1974) considered energy options and alternative technologies in much more detail. The Bariloche team set out to demonstrate the physical viability of a more egalitarian world order in the near future, concluding that under reasonable assumptions there was good reason to believe that the basic needs of the population of all world regions could be met without running into severe resource constraints (Herrera et al. 1976). Demonstrating this was taken to be an important task (given the Malthusian controversy) and the Bariloche model accordingly paid little attention to the formal analysis of what we would now call “transition processes” (how such changes are effected in practice). Presumably, models of political and economic affairs would be required for these purposes.

Such “world models” were highly ambitious attempts to simulate economy-environment interactions for the whole planet over decades and even centuries. They required heroic, and highly contentious, assumptions concerning natural resources, technological change (or its absence) and social affairs. They became extremely complex and demanding of computer resources (and of data with which to calibrate them). This raised another problem, because the structure and especially the behaviour of highly complex models can be difficult to understand—sometimes modellers have misinterpreted their own results, for example attributing a trend to one causal process when it is actually driven by another. Often there are a few key relationships driving the major results, and it is important to be able to explicate them: in the climate models discussed below, for instance, global warming is essentially driven by greenhouse gas emissions. The volumes of data required are so extensive that it is very laborious to validate them, too.

The world models succeeded in raising awareness of many key issues, but their quality as forecasts was highly suspect—one common criticism was that the models had been constructed by general-purpose computer or management scientists rather than experts in economic or ecological affairs. The sustained criticism they received cast a long shadow over modelling efforts, though in retrospect we can see that few critics would attempt to create such ambitious long-term analyses themselves.

More recently, large-scale modelling has attracted a great deal of attention in the context of climate change research. The models used here should not be confused with those used for weather forecasting by national and local meteorological offices: climate models are extremely large and complex constructs, developed to examine and project forward trends in global climate.Footnote 14 Since these are the product of large expert teams from many countries, and have attracted the support of many leading researchers in the climate field, the controversy concerning their results has been very different from that associated with Limits. The models involved are often massive, requiring supercomputers to calculate the evolution of conditions in a vast number of volumes of the global atmosphere (and its oceanic and land interfaces) over many points in time across long durations.

The controversies around climate change models have demonstrated that many people no longer view computer models as authoritative and objective. They see the models as constructed by “experts” who may be simply incompetent, or who seek to obfuscate reality. Conspiracy theorists see the modellers as acting in their own self-interest (seeking more funding for their research) or as promoting some elite agenda (e.g. Europeans seeking to curtail American growth). The science may be criticised on the basis of anecdotal evidence (“since the weather is cold today, talk of global warming makes no sense”). Or there may be argument about the meaning of various indicators, or about the difficulty of representing, for example, the role of clouds or of microorganisms in the sea. The response to the modelling efforts of the Intergovernmental Panel on Climate Change (IPCC), and others concerned with climate change, has led to much public uncertainty as to the extent of scientific consensus and the possible role of self-interest in their pronouncements. Critical responses are organised in part through powerful interests in the oil and gas industries, so they receive media coverage much broader—and more vociferous—than that accorded the critics of Limits.

The early climate models in the 1960s mainly concerned air temperature and precipitation, and began to address the consequences of additional levels of CO2 entering the atmosphere. There are now many different models that seek to represent long-term processes in global circulation; essentially these portray the atmosphere (and ocean) as a large number of interacting cubes. Basic physical and chemical processes are captured in each cube, and relations between them are calculated over successive periods. (See Fig. 8.7: because these models involve huge numbers of relationships between the many entities modelled, such visual representations are felt to convey the model structure most adequately.) These atmospheric and oceanic general circulation models are combined into global models, also incorporating factors such as land surface, ocean ice and clouds, the role of aerosols and volcanic activity, together with more detail on the ocean and the carbon cycle (with vegetation reacting to regional climate conditions), the role of rivers, and aspects of atmospheric chemistry (cf. IPCC 2007, 2014).Footnote 15

Fig. 8.7
figure 7

Basic processes modelled in Global Circulation Models. Sources: top figure from IPCC SAR WG1 (1996); lower figure from IPCC (2013)

Numerous models of increasing detail and complexity are incorporated into the IPCC’s forecasts, but the basic message has remained stable over four decades—anthropogenic CO2 emissions are liable to lead to global mean temperature increases; and IPCC work on the impacts of these increases presents us with alarming scenarios (even taking into account the other IPCC work on adaptation and alleviation).

The most elaborate circulation models involve so many calculations that they require huge amounts of computer time to process. It is thus expensive and time-consuming to explore many scenarios with them. “Simple climate models” (reviewed in Harvey et al. 1997) thus provide a great deal less detail, but allow for more exploration of the consequences of different patterns of greenhouse gas emission (e.g. from different levels of use of different energy sources). They relate these and other factors to give estimates of such matters of concern as global mean surface temperature and global mean sea level rise. Harvey et al. (1997) stress that both simple and complex models have their own uses, and contrast the two in the table reproduced as Box 8.4.

Box 8.4 The Use of Simple and Complex Climate Models

Simple models: Generally produce zonally- or globally-averaged results, and only for temperature and temperature changes, not for other variables such as rainfall.
Complex models: Simulate the past and present geographical variation of temperature, as well as other variables of climatic interest such as rainfall, evaporation, soil moisture, cloudiness, and winds; and provide credible continental-scale changes of at least some of these variables.

Simple models: Cannot simulate possible changes in climatic variability, as output consists of the climate change signal only.
Complex models: Have the potential to simulate changes in important modes of interannual variability (e.g., El Niño) as well as mean values.

Simple models: The effects of physical processes are approximated based on globally- or zonally-averaged computations with low temporal resolution.
Complex models: Many physical processes are directly simulated, necessitating the use of a short time-step but allowing resolution of the diurnal cycle.

Simple models: Climate sensitivity and other subsystem properties must be specified based on the results of complex models or observations. These properties can be readily altered for purposes of sensitivity testing.
Complex models: Climate sensitivity and other subsystem properties are computed based on a combination of physical laws and sub-grid scale model parametrizations.

Simple models: Sufficiently fast that multiple scenarios can be simulated, and that runs with a wide range of parameter values can be executed. Can be initialized in a steady state at little computational cost.
Complex models: Computational cost strongly limits the number of cases that can be investigated and the ability to initialize in a steady state.

Simple models: Useful for sensitivity studies involving the interaction of large-scale climate system components.
Complex models: Useful for studying those fundamental processes which can be resolved by the model.

Simple models: Analysis is easy because simple models include relatively few processes. Interpretation of simple model results may give insights into the behaviour of more complex models.
Complex models: Model behaviour is the result of many interacting processes, as in the real world. Studies with complex models indicate what processes need to be included in simple models and, in some cases, how they can be parametrized.

Simple models: One-dimensional models cannot simulate climatic surprises, for example sudden ocean circulation changes. Two-dimensional ocean models can give some insight into such changes.
Complex models: AOGCMs can simulate major changes in ocean circulation, but the timing and nature of such changes may not yet be reliable.

Source: Harvey et al. (1997)
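To convey how “simple” a simple climate model can be, the sketch below implements a zero-dimensional energy-balance model: global mean temperature change responds, with a lag set by an effective heat capacity, to radiative forcing from rising CO2. The parameter values and the 1%-per-year CO2 scenario are illustrative assumptions, not calibrated estimates.

```python
import math

# Zero-dimensional energy-balance sketch: C * d(dT)/dt = F - lam * dT,
# where dT is global mean temperature change, F is radiative forcing,
# lam the climate feedback parameter, C an effective heat capacity.
# All values below are rough illustrations, not calibrated estimates.
C = 8.0          # effective heat capacity (W yr m^-2 K^-1), illustrative
lam = 1.3        # climate feedback parameter (W m^-2 K^-1), illustrative
F_2xCO2 = 3.7    # forcing from a doubling of CO2 (W m^-2)

def forcing(co2_ratio):
    """Logarithmic forcing for a given CO2 concentration ratio."""
    return F_2xCO2 * math.log(co2_ratio, 2)

dT = 0.0
for year in range(140):
    F = forcing(1.01 ** year)      # CO2 rises ~1% per year in this scenario
    dT += (F - lam * dT) / C       # discrete step of the energy balance
    if year % 20 == 0:
        print(f"year {year:3d}: forcing={F:5.2f} W/m^2, dT={dT:5.2f} K")
```

A model of this kind runs in a fraction of a second, which is exactly why simple models are used to sweep across many emissions scenarios and parameter values, leaving the full AOGCMs for the processes they alone can resolve.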

The IPCC reports provide scenarios of climate issues (temperature, sea level, etc.) that—at the very least—a majority of experts consider plausible, with negative outcomes appearing highly likely (unless dramatic policy measures are undertaken). These results may be uncomfortable, but could valuably be taken into account in many ForSTI exercises. It is not the job of ForSTI to be comforting, especially not when we have many signals of impending crisis. Even if we still have an inadequate understanding of global dynamics, the chances of environmental disaster cannot be discounted, and those who would discount them had better come up with their own well-grounded models.

8.2.3 Examples of Models in ForSTI

An example of the use of climate change models to support ForSTI analysis comes from the UK Foresight Programme’s project on Flood and Coastal Defence (Government Office for Science 2004). Here, global climate scenarios produced by the UK Climate Impacts Programme (drawing on IPCC work) were combined with socio-economic scenarios developed in the first cycle of UK Foresight. These integrated scenarios provided quantitative description of climate conditions alongside more economic data (e.g. GDP growth), together with less quantitative specifications of such social dimensions as the system of governance (whether power remains at the national level or moves upwards or downwards, e.g. to the EU or to regional governments) and social values (more or less individualistic or community-oriented).

The combination helped secure wide impact for the study. Maps and charts were used to demonstrate flood risks in the various scenarios (see Fig. 8.8 for example), along with photographic images of possible future landscapes. The report led to considerable media attention, not least because it enabled property owners, developers and insurers to consider the risks associated with building (or living) in particular locations.

Fig. 8.8
figure 8

Future flooding and coastal erosion in the UK. Source: Office of Science and Technology (2004)

Moving on from climate change, it will often make sense to use the sorts of economic and demographic projection routinely produced by national governments and international agencies, to provide at least a context for ForSTI work. Neither type of forecast is infallible, and economic forecasts—even though they are typically shorter term—can be upset very rapidly by financial and other crises. But they can often provide us with some insight into underpinning trends and/or boundary conditions we should attend to.

With the widespread availability of personal computers and associated software, computer simulations in the form of life simulation games, social simulation games and “God games” have allowed players to experiment with evolving societies (as compared to games where players simply interact with one another or with virtual characters—like The Sims—or play some kind of military, crime-fighting, or other action-adventure game). Civilisation was the first well-known example of these genres of simulation game, emerging in the early 1990s and rapidly followed by other games purporting to have more or less historical accuracy. Some models necessarily underlie these games, though they are not usually very evident to users. At the same time, it has become more viable for more people to design and/or modify models, as well as to interact with them, using general-purpose software tools (like spreadsheets) as well as specialised simulation techniques (such as Systems Dynamics). Cole (2008) describes how model-building can be used in educational contexts to deepen understanding and communicate knowledge about critical issues—such a role is often claimed by proponents of computer modelling, but this is an example of its realisation in practice. Too often, we fear, models obfuscate as much as they illuminate—especially when social and political assumptions are built into them.

Finally, we should note that new simulation approaches are being developed which hold considerable promise for application to social and ecological issues. One approach utilises “cellular automata”, of the sort familiar from the computer Game of Life, first introduced in 1970. Here each cell in a space filled by cells behaves according to the cells around it; in Life the rules are very simple: cells live or die according to the prevalence of other cells in their neighbourhood. It is easy to envisage extensions of this, where people’s likelihood of adopting a particular attitude or behaviour is determined by how many of their neighbours are doing the same—indeed, models of the diffusion of innovations that will be familiar to those involved in ForSTI essentially take this form. The simple Game of Life is one in which stable and unstable “social” structures emerge in the space occupied by the cellular automata.Footnote 16
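A threshold variant of such cellular automata can be sketched in a few lines: each cell adopts a behaviour once at least a given number of its eight neighbours have done so, echoing neighbourhood-based diffusion of innovations. The grid size, seed cluster and threshold below are illustrative choices, and the grid wraps at its edges for simplicity.

```python
import numpy as np

# Threshold cellular automaton of behaviour diffusion: a cell adopts
# (becomes 1) once at least `threshold` of its eight neighbours have
# adopted. Grid size, seed cluster and threshold are illustrative.
size, threshold, steps = 20, 3, 8
grid = np.zeros((size, size), dtype=int)
grid[8:12, 8:12] = 1  # an initial cluster of early adopters

def neighbour_counts(g):
    """Count adopters among each cell's eight neighbours (wrapping edges)."""
    counts = np.zeros_like(g)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                counts += np.roll(np.roll(g, dr, axis=0), dc, axis=1)
    return counts

for step in range(steps):
    grid = grid | (neighbour_counts(grid) >= threshold).astype(int)
    print(f"step {step + 1}: {grid.sum()} of {size * size} cells have adopted")
```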

Another (and related) approach goes by the name of agent-based modelling. Instead of treating, for instance, economies in terms of the interchanges between industrial sectors (and a consumption sector), the focus shifts to the interactions of a number of “agents”. These may be people or organisations, and the task is then to represent their objectives, patterns of behaviour, and interactions in the model. This approach has attracted much interest in evolutionary economics and among theorists of technological innovation, as well as being applied to a wide range of issues in social analysis, where it is seen as more realistic than traditional models. Instead of thinking of the world as composed of sectors, it is seen as constituted from multiple actors, agents, each of whom has a measure of bounded rationality and a capability to learn about the others and the environment. The agents can communicate (or at least affect each other through their behaviour); they can be quite dissimilar from each other in terms of resources and capabilities; and they can interact in complex ways. Such simulation techniques are being explored across a wide range of ForSTI-relevant fields, with progress in moving from abstract models to representations of actual empirical situations and their possible developments. Among the topics covered are the transitions of energy systems, the migration of populations, the launch of innovative products, and the evolution of economic and innovation systems.Footnote 17
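A minimal agent-based sketch of diffusion follows: heterogeneous agents, each with its own adoption threshold, sample a few peers each period and adopt once the share of adopters among them is high enough. The population size, threshold distribution and sampling scheme are all illustrative assumptions.

```python
import random

random.seed(1)

# Heterogeneous agents deciding whether to adopt an innovation, based on
# their own resistance (threshold) and on what sampled peers have done.
# All parameters are invented for illustration.
class Agent:
    def __init__(self):
        self.threshold = random.uniform(0.05, 0.5)  # heterogeneous resistance
        self.adopted = random.random() < 0.05       # a few pioneer adopters

agents = [Agent() for _ in range(500)]

for period in range(1, 11):
    for agent in agents:
        if not agent.adopted:
            # Each agent observes ten peers; it adopts once the share of
            # adopters among them reaches its personal threshold.
            peers = random.sample(agents, 10)
            peer_rate = sum(p.adopted for p in peers) / len(peers)
            agent.adopted = peer_rate >= agent.threshold
    adoption = sum(a.adopted for a in agents) / len(agents)
    print(f"period {period:2d}: {adoption:.1%} of agents have adopted")
```

The point of the design is that the familiar S-curve of diffusion is not written into any equation; it emerges (or fails to emerge) from the distribution of thresholds and the pattern of interaction, which is what distinguishes agent-based modelling from aggregate approaches.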

ForSTI addresses very complicated issues, and while models are necessary, the more formalised quantitative models are probably most useful for addressing specific sub-domains of these issues; rather more informal and qualitative models may be required to take a wider and more integrative view. There may be developments emerging from some of the new modelling approaches that will allow us to construct models addressing the possibilities of structural change, for example the emergence of new actors and institutions bearing on our focal objects. At present, most quantitative models used in ForSTI (and in forecasting in general) are far more limited, and should be understood as such. They rarely deal explicitly with technology and technological change (in economic models, for example, this is treated just in terms of levels of investment, productivity and the relative balance of factors of production). They have little ability to encompass discovery, strategy and systemic transitions. However, the development and application of these models can be part of a critical and reflective process that helps ForSTI participants better understand the issues being dealt with, and may allow for improved communication with other stakeholders—as long as we avoid the mystification of modelling! Obfuscation may be reduced by identifying the key relationships that drive the behaviour of the models; it may be most appropriate to demonstrate these by constructing simple models, and allowing users to become familiar with how they operate, before adding layers of complexity.

8.3 Conclusions

Models are inevitable in, and intrinsic to, ForSTI.Footnote 18 To rephrase a point made by J.M. Keynes: unless the assumptions and rationales involved in the models being used are made transparent, even the most influential ForSTI practitioner is usually applying the models of some long-dead economist. The approaches reviewed in this chapter represent ways of making the models that we employ more transparent—more able to be communicated, critiqued, and further articulated. These approaches may take us further still: we may be able to examine the results of complex interactions, and to explore the implications of different assumptions, the effects of small or large variations in the data we input, and so on. As such, they can be valuable aids to the ForSTI process.

As always, the sorts of model we employ will be very much related to the circumstances and contingencies of a specific ForSTI exercise. Some types of model may take much time and considerable expertise to develop—we may be fortunate enough to be able to apply models recently developed by some still-living economist or environmental scientist, or have to make do with much more limited types of trend analysis. ForSTI activities will often help to tell decision-makers what sorts of model they need to develop to better inform future strategies. However, in a given exercise it is likely that one or more models will be developed and applied, even if these are far from perfect. The task is to appreciate the inevitable limitations of our models, while deploying them as effectively as possible. Following from this, we should be aware that models are liable to be employed for learning purposes as much as for forecasting.