5.1 Introduction

Many systems, issues, and grand challenges are characterized by dynamic complexity, i.e., intricate time evolutionary behavior, often on multiple dimensions of interest. Many dynamically complex systems and issues are relatively well known, yet have persisted for a long time because their dynamic complexity makes them hard to understand and to manage or solve properly. Other complex systems and issues—especially rapidly changing systems and future grand challenges—are largely unknown and unpredictable. Unaided human beings are notoriously bad at dealing with dynamically complex issues, whether persistent or unknown. That is, without the help of computational approaches, most people are unable to assess the potential dynamics of complex systems and issues, or the appropriateness of policies to manage or address them.

Modeling and simulation is a field that develops and applies computational methods to study complex systems and to solve problems related to complex issues. Over the past half century, multiple modeling methods for simulating such issues and for advising the decision makers facing them have emerged or have been further developed. Examples include system dynamics (SD) modeling, discrete event simulation (DES), multi-actor systems modeling (MAS), agent-based modeling (ABM), and complex adaptive systems modeling (CAS). All too often, these developments have taken place in distinct fields, such as the SD field or the ABM field, which have developed into separate “schools,” each ascribing dynamic complexity to the particular underlying mechanisms it focuses on, such as feedback and accumulation effects in SD or heterogeneous actor-specific (inter)actions in ABM. This isolated development within separate traditions has limited the potential to learn across fields and to advance faster and more effectively towards the shared goal of understanding complex systems and supporting decision makers facing complex issues.

Recent evolutions in modeling and simulation, together with the recent explosive growth in computational power, data, and social media and other evolutions in computer science, have created new opportunities for model-based analysis and decision making. These internal and external evolutions are likely to break through the silos of old, open up new opportunities for social simulation and model-based decision making, and stir up the broader field of systems modeling and simulation. Today, different modeling approaches are already used in parallel, in series, and in mixed form, and several hybrid approaches are emerging. Not only are different modeling traditions being mixed and matched in multiple ways; modeling and simulation fields have also started to adopt—or have accelerated their adoption of—useful methods and techniques from other disciplines, including operations research, policy analysis, data analytics, machine learning, and computer science. The field of modeling and simulation is consequently turning into an interdisciplinary field in which various modeling schools and related disciplines are gradually being integrated. In practice, this blending process and the adoption of methodological innovations have only just started. Although some ways to integrate systems modeling methods and many innovations have been demonstrated, further integration and large-scale adoption are still awaited. Moreover, other multimethods and potential innovations are still in an experimental phase or have yet to be demonstrated and adopted.

In this chapter, some of these developments are discussed, a picture of the near-future state of the art of modeling and simulation is drawn, and a few examples of integrated systems modeling are briefly presented. The SD method is used to illustrate these developments. Starting with a short introduction to the traditional SD method in Sect. 5.2, recent and current innovations in SD are discussed in Sect. 5.3, resulting in a picture of the state of modeling and simulation in Sect. 5.4. A few examples are then discussed in Sect. 5.5 to illustrate what these developments could result in and what the future state of the art of systems modeling and simulation could look like. Finally, conclusions are drawn in Sect. 5.6.

5.2 System Dynamics Modeling and Simulation of Old

System dynamics was first developed in the second half of the 1950s by Jay W. Forrester and was subsequently elaborated into a consistent method built on specific methodological choices. It is a method for modeling and simulating dynamically complex systems or issues characterized by feedback effects and accumulation effects. Feedback means that the present and future of issues or systems depend—through a chain of causal relations—on their own past. In SD models, system boundaries are set broadly enough to include all important feedback effects and generative mechanisms. Accumulation relates not only to the building up of real stocks—of people, items, (infra)structures, etc.—but also to the building up of mental or other states. In SD models, stock variables and the underlying integral equations are used to group largely homogeneous persons/items/… and to keep track of their aggregated dynamics over time. Together, feedback and accumulation effects generate dynamically complex behavior both inside SD models and—so it is assumed in SD—in real systems.
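
To make these two mechanisms concrete, the following minimal sketch (in Python; all names and values are invented for illustration and not taken from this chapter) shows how an SD model is typically simulated: a stock is the numerical integral of its net flow, and feedback arises because the flow itself depends on the current value of the stock.

import numpy as np

# Illustrative logistic-growth model: one stock, one net flow, and two
# feedback loops (a reinforcing growth loop and a balancing capacity loop).
dt, horizon = 0.25, 100.0           # time step and horizon (e.g., years)
capacity, growth_rate = 1e6, 0.05   # carrying capacity and fractional growth rate
population = 1e4                    # the stock: it accumulates its own net flow

trajectory = []                     # holds the simulated behavior over time
for _ in np.arange(0.0, horizon, dt):
    trajectory.append(population)
    # Feedback: the flow depends on the current value of the stock,
    # so the system's future depends on its own past.
    net_flow = growth_rate * population * (1.0 - population / capacity)
    population += net_flow * dt     # Euler integration of the stock equation

Simulating this structure produces S-shaped growth: the reinforcing loop dominates at first, and the balancing loop takes over as the stock approaches the carrying capacity. This link from structure to behavior is exactly what SD is concerned with.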

Other important characteristics of SD are (i) the reliance on relatively enduring conceptual systems representations in people’s minds, aka mental models (Doyle and Ford 1999, p. 414), as the prime source of “rich” information (Forrester 1961; Doyle and Ford 1998); (ii) the use of causal loop diagrams and stock-flow diagrams to represent feedback and accumulation effects (Lane 2000); (iii) the use of credibility and fitness for purpose as the main criteria for model validation (Barlas 1996); and (iv) the interpretation of simulation runs in terms of general behavior patterns, aka modes of behavior (Meadows and Robinson 1985).

In SD, the behavior of a system is to be explained by a dynamic hypothesis, i.e., a causal theory for the behavior (Lane 2000; Sterman 2000). This causal theory is formalized as a model that can be simulated to generate dynamic behavior. Simulating the model thus allows one to explore the link between the hypothesized system structure and the time evolutionary behavior arising out of it (Lane 2000).

Not surprisingly, these characteristics make SD particularly useful for dealing with complex systems or issues that are characterized by important system feedback effects and accumulation effects. SD modeling is mostly used to model core system structures or core structures underlying issues, to simulate their resulting behavior, and to study the link between the underlying causal structure of issues and models and the resulting behavior. SD models, which are mostly relatively small and manageable, thus allow for experimentation in a virtual laboratory. As a consequence, SD models are also extremely useful for model-based policy analysis, for designing adaptive policies (i.e., policies that automatically adapt to the circumstances), and for testing their policy robustness (i.e., whether they perform well enough across a large variety of circumstances).

In terms of application domains, SD is used for studying many complex social–technical systems and solving policy problems in many application domains, for example, in health policy, resource policy, energy policy, environmental policy, housing policy, education policy, innovation policy, social–economic policy, and other public policy domains. But it is also used for studying all sorts of business dynamics problems, for strategic planning, for solving supply chain problems, etc.

At the inception of the SD method, SD models were almost entirely continuous, i.e., systems of differential equations, but over time more and more discrete and other noncontinuous elements crept in. Other evolutionary adaptations in line with ideas from the earliest days of the field, like the use of Group Model Building to elicit the mental models of groups of stakeholders (Vennix 1996) or the use of SD models as engines for serious games, were also readily adopted by almost the entire field. But slightly more revolutionary innovations were not as easily and widely adopted. In other words, the identity and appearance of traditional SD were well established by the mid-1980s and do not—at first sight—seem to have changed fundamentally since then.

5.3 Recent Innovations and Expected Evolutions

5.3.1 Recent and Current Innovations

A closer look at innovations within the SD field and at its adoption of innovations from other fields shows that many—often seemingly more revolutionary—innovations have been introduced and demonstrated, but that they have not been widely adopted yet.

For instance, in terms of quantitative modeling, system dynamicists have invested in spatially specific SD modeling (Ruth and Pieper 1994; Struben 2005; BenDor and Kaza 2012), individual agent-based SD modeling as well as mixed and hybrid ABM-SD modeling (Castillo and Saysal 2005; Osgood 2009; Feola et al. 2012; Rahmandad and Sterman 2008), and micro–macro modeling (Fallah-Fini et al. 2014). Examples of recent developments in simulation setup and execution include model calibration and bootstrapping (Oliva 2003; Dogan 2007), different types of sampling (Fiddaman 2002; Ford 1990; Clemson et al. 1995; Islam and Pruyt 2014), multi-model and multimethod simulation (Pruyt and Kwakkel 2014; Moorlag 2014), and different types of optimization approaches used for a variety of purposes (Coyle 1985; Miller 1998; Coyle 1999; Graham and Ariza 1998; Hamarat et al. 2013, 2014). Recent innovations in model testing, analysis, and visualization of model outputs in SD include the development and application of new methods for sensitivity and uncertainty analysis (Hearne 2010; Eker et al. 2014), formal model analysis methods to study the link between structure and behavior (Kampmann and Oliva 2008, 2009; Saleh et al. 2010), methods for testing policy robustness across wide ranges of uncertainties (Lempert et al. 2003), statistical packages and screening techniques (Ford and Flynn 2005; Taylor et al. 2010), pattern testing and time series classification techniques (Yücel and Barlas 2011; Yücel 2012; Sucullu and Yücel 2014; Islam and Pruyt 2014), and machine learning techniques (Pruyt et al. 2013; Kwakkel et al. 2014; Pruyt et al. 2014c). These methods and techniques can be used together with SD models to identify root causes of problems, to identify adaptive policies that properly address these root causes, to test and optimize the effectiveness of policies across wide ranges of assumptions (i.e., policy robustness), etc. From this perspective, these methods and techniques are actually just evolutionary innovations in line with early SD ideas. And large-scale adoption of the aforementioned innovations would allow the SD field, and by extension the larger systems modeling field, to move from “experiential art” to “computational science.”
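
Several of the time series classification techniques cited above label simulation runs by their general pattern of behavior. A minimal sketch of that idea follows (in Python; the labels and decision rules are invented for illustration, and published methods such as those of Yücel and Barlas are considerably more refined):

import numpy as np

def behavior_mode(run, t):
    # Coarsely label one simulation run by the signs of its first and
    # second numerical derivatives (growth vs. decline, accelerating
    # vs. saturating).
    d1 = np.gradient(run, t)
    d2 = np.gradient(d1, t)
    if np.all(d1 >= 0):
        return "accelerating growth" if np.mean(d2) > 0 else "saturating growth"
    if np.all(d1 <= 0):
        return "decline"
    return "oscillating or mixed"

t = np.linspace(0.0, 10.0, 200)
print(behavior_mode(1.0 - np.exp(-t), t))   # -> saturating growth

Labels like these make ensembles of thousands of runs searchable by mode of behavior instead of requiring run-by-run inspection.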

Most of the aforementioned innovations are actually integrated in particular SD approaches like exploratory system dynamics modelling and analysis (ESDMA), an SD approach for studying dynamic complexity under deep uncertainty. Deep uncertainty can be defined as a situation in which analysts do not know or cannot agree on (i) an underlying model, (ii) probability distributions of key variables and parameters, and/or (iii) the value of alternative outcomes (Lempert et al. 2003). It is often encountered in situations characterized by either too little information or too much information (e.g., conflicting information or different worldviews). ESDMA is the combination of SD modeling with exploratory modeling and analysis (EMA), aka robust decision making, developed during the past two decades (Bankes 1993; Lempert et al. 2000; Bankes 2002; Lempert et al. 2006). EMA is a research methodology for developing and using models to support decision making under deep uncertainty. It is not a modeling method, although it requires computational models. EMA is useful when relevant information exists that can be exploited by building computational models, but this information is insufficient to specify a single model that accurately describes system behavior (Kwakkel and Pruyt 2013a). In such situations, it is better to construct and use ensembles of plausible models, since ensembles of models can capture more of the un/available information than any individual model (Bankes 2002). Ensembles of models can then be used to deal with model uncertainty, different perspectives, value diversity, inconsistent information, etc.—in short, with deep uncertainty.

In EMA (and thus in ESDMA), the influence of a plethora of uncertainties, including method and model uncertainty, is systematically assessed and used to design policies: sampling and multi-model/multimethod simulation are used to generate ensembles of simulation runs, to which time series classification and machine learning techniques are applied to generate insights. Multi-objective robust optimization (Hamarat et al. 2013, 2014) is used to identify policy levers and define policy triggers, and by doing so, to support the design of adaptive robust policies. And regret-based approaches are used to test policy robustness across large ensembles of plausible runs (Lempert et al. 2003). EMA and ESDMA can be performed with TU Delft’s EMA workbench, an open source tool that integrates multimethod, multi-model, multi-policy simulation with data management, visualization, and analysis.
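
As an indication of what such a workflow looks like in practice, here is a minimal sketch using the open-source ema_workbench package mentioned above (assuming its current Python API; the toy model, parameter names, and uncertainty ranges are invented for illustration and are not the chapter's models):

from ema_workbench import Model, RealParameter, ScalarOutcome, perform_experiments

def toy_flu(contact_rate=50.0, infectivity=0.05, recovery_time=2.0):
    # Toy SIR-style iteration standing in for a real systems model.
    s, i, peak = 0.99, 0.01, 0.0
    for _ in range(500):
        new_infections = 0.01 * contact_rate * infectivity * s * i
        recoveries = 0.01 * i / recovery_time
        s -= new_infections
        i += new_infections - recoveries
        peak = max(peak, i)
    return {"worst_peak": peak}

model = Model("flu", function=toy_flu)
model.uncertainties = [
    RealParameter("contact_rate", 20.0, 120.0),
    RealParameter("infectivity", 0.01, 0.15),
    RealParameter("recovery_time", 1.0, 4.0),
]
model.outcomes = [ScalarOutcome("worst_peak")]

# One call samples the uncertainty space and runs the whole ensemble.
experiments, outcomes = perform_experiments(model, scenarios=1000)

The resulting tables of experiments and outcomes are exactly the kind of ensemble data to which the classification, machine learning, and robust optimization steps described above are applied.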

The EMA workbench is just one of the recent innovations in modeling and simulation software and platforms: online modeling and simulation platforms, online flight simulator and gaming platforms, and packages for making hybrid models have been developed too. And modeling and simulation across platforms will also become reality soon: the eXtensible Model Interchange LanguagE (XMILE) project (Diker and Allen 2005; Eberlein and Chichakly 2013) aims at facilitating the storage, sharing, and combination of simulation models and parts thereof across software packages and across modeling schools, and may ease the interconnection with (real-time) databases, statistical and analytical software packages, and organizational information and communication technology (ICT) infrastructures. Note that this is already possible today with scripting languages and software packages with scripting capabilities like the aforementioned EMA workbench.

5.3.2 Current and Expected Evolutions

Three current evolutions are expected to further reinforce this shift from “experiential art” to “computational science.”

The first evolution relates to the development of “smarter” methods, techniques, and tools (i.e., methods, techniques, and tools that provide more insights and deeper understanding at reduced computational cost). Similar to the formal model analysis techniques that smartened the traditional SD approach, new methods, techniques, and tools are currently being developed to smarten modeling and simulation approaches that rely on “brute force” sampling, for example, adaptive output-oriented sampling to span the space of possible dynamics (Islam and Pruyt 2014), smarter machine learning techniques (Pruyt et al. 2013; Kwakkel et al. 2014; Pruyt et al. 2014c), time series classification techniques (Yücel and Barlas 2011; Yücel 2012; Sucullu and Yücel 2014; Islam and Pruyt 2014), and (multi-objective) robust optimization techniques (Hamarat et al. 2013, 2014).

Partly related to the previous evolution are developments related to “big data,” data management, and data science. Although traditional SD modeling is sometimes called data-poor modeling, that does not mean it is, nor that it should be. SD software packages allow one to get data from, and write simulation runs to, databases. Moreover, data are also used in SD to calibrate parameters or to bootstrap parameter ranges. But more could be done, especially in the era of “big data.” Big data simply refers here to much more data than was until recently manageable, and it requires data science techniques to make it manageable and useful. Data science may be used in modeling and simulation (i) to obtain useful inputs from data (e.g., from real-time big data sources), (ii) to analyze and interpret model-generated data (i.e., big artificial data), (iii) to compare simulated and real dynamics (i.e., for monitoring and control), and (iv) to infer parts of models from data (Pruyt et al. 2014c). Interestingly, data science techniques that are useful for obtaining inputs from data may also be made useful for analyzing and interpreting model-generated data, and vice versa. Online social media are interesting sources of real-world big data for modeling and simulation: as inputs to models, as benchmarks for comparing simulated and real dynamics, and as information for model development or model selection. There are many application domains in which the combination of data science and modeling and simulation would be beneficial. Examples, some of which are elaborated below, include policy making with regard to crime fighting, infectious diseases, cybersecurity, national safety and security, financial stress testing, energy transitions, and marketing.
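
The calibration use mentioned above can be illustrated with a small sketch (Python; the data are synthetic and the parameter values invented, so this is not an example from the chapter) in which the parameters of a simple logistic model are estimated from an observed time series:

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, growth_rate, capacity, initial):
    # Closed-form logistic curve standing in for a small stock-flow model.
    return capacity / (1.0 + (capacity / initial - 1.0) * np.exp(-growth_rate * t))

# Synthetic "observed" data: a known curve plus measurement noise.
rng = np.random.default_rng(1)
t_obs = np.linspace(0.0, 12.0, 25)
y_obs = logistic(t_obs, 0.8, 100.0, 5.0) + rng.normal(0.0, 2.0, t_obs.size)

# Estimate the parameter values that best reproduce the observed series.
(r_hat, k_hat, x0_hat), _ = curve_fit(logistic, t_obs, y_obs, p0=(0.5, 80.0, 1.0))

The same least-squares logic, applied to full simulation models fed from databases or real-time streams rather than to a closed-form curve, is the basic idea behind the model calibration and bootstrapping work cited in Sect. 5.3.1.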

Another urgently needed innovation relates to model-based empowerment of decision makers. Although existing flight simulator and gaming platforms are useful for developing and distributing educational flight simulators and games, and interfaces can be built in SD packages, using them to develop interfaces for real-world, real-time decision making and integrating them into existing ICT systems is difficult and time consuming. In many cases, companies and organizations want these capabilities in-house, even in their boardroom, instead of being dependent on analyses by external or internal analysts. The latter requires user-friendly interfaces on top of (sets of) models, possibly connected to real-time data sources. These interfaces should allow for experimentation, simulation, thorough analysis of simulation results, adaptive robust policy design, and policy robustness testing.

5.4 Future State of Practice of Systems Modeling and Simulation

These recent evolutions in modeling and simulation together with the recent explosive growth in computational power, data, social media, and other evolutions in computer science may herald the beginning of a new wave of innovation and adoption, moving the modeling and simulation field from building a single model to simultaneously simulating multiple models and uncertainties; from single method to multimethod and hybrid modeling and simulation; from modeling and simulation with sparse data to modeling and simulation with (near real-time) big data; from simulating and analyzing a few simulation runs to simulating and simultaneously analyzing well-selected ensembles of runs; from using models for intuitive policy testing to using models as instruments for designing adaptive robust policies; and from developing educational flight simulators to fully integrated decision support.

For each of the modeling schools, additional adaptations can be foreseen too. In the case of SD, these may for example involve a shift from developing purely endogenous to largely endogenous models; from fully aggregated models to sufficiently spatially explicit and heterogeneous models; from qualitative participatory modeling to quantitative participatory simulation; and from using SD alone to combining problem structuring and policy analysis tools, modeling and simulation, machine learning techniques, and (multi-objective) robust optimization.

Fig. 5.1 Picture of the state of science/future state of the art of modeling and simulation

Adoption of these recent, current, and expected innovations could result in the future state of the art of systems modeling displayed in Fig. 5.1. As indicated by (I) in Fig. 5.1, it will be possible to simultaneously use multiple hypotheses (i.e., simulation models from the same or different traditions, or hybrids) for different goals, including the search for deeper understanding and policy insights, experimentation in a virtual laboratory, future-oriented exploration, robust policy design, and robustness testing under deep uncertainty. Sets of simulation models may be used to represent different perspectives or plausible theories, to deal with methodological uncertainty, or to deal with a plethora of important characteristics (e.g., agent characteristics, feedback and accumulation effects, spatial and network effects) without necessarily having to integrate them in a single simulation model. The main advantages of using multiple models are that each model in the ensemble remains manageable and that the ensemble of simulation runs generated with it is likely to be more diverse, which allows for testing policy robustness across a wider range of plausible futures.

Some of these models may be connected to real-time or near real-time data streams, and some models may even be inferred in part with smart data science tools from data sources (see (II) in Fig. 5.1). Storing the outputs of these simulation models in databases and applying data science techniques may enhance our understanding, generate policy insights, and allow for testing policy robustness across large multidimensional uncertainty spaces (see (III) in Fig. 5.1). And user-friendly interfaces on top of these interconnected models may eventually empower policy makers, enabling them to really do model-based policy making.

Note, however, that the integrated systems modeling approach sketched in Fig. 5.1 may only suit a limited set of goals, decision makers, and issues. Single-model simulation serves many goals, decision makers, and issues well enough that multi-model/multimethod, data-rich, exploratory, policy-oriented approaches are not required. However, there most certainly are goals, decision makers, and issues that do require them.

5.5 Examples

Although all of the above is possible today, it should be noted that this is the current state of science, not yet the state of common practice. Applying all these methods and techniques to real issues is still challenging and shows where innovations are most needed. The following examples illustrate what is possible today as well as the most important gaps that remain to be filled.

The first example shows that relatively simple systems models simulated under deep uncertainty allow for generating useful ensembles of many simulation runs. Using methods and techniques from related disciplines to analyze the resulting artificial data sets helps to generate important policy insights, and simulating policies across the ensembles allows one to test their robustness. This first case nevertheless shows that there are opportunities for multimethod and hybrid approaches as well as for connecting systems models to real-time data streams.

The second example extends the first example towards a system-of-systems approach with many simulation models generating even larger ensembles of simulation runs. Smart sampling and scenario discovery techniques are then required to reduce the resulting data sets to manageable proportions.

The third example shows a recent attempt to develop a smart model-based decision-support system for dealing with another deeply uncertain issue. This example shows that it is almost possible to empower decision makers. Interfaces with advanced analytical capabilities as well as easier and better integration with existing ICT systems are required though. This example also illustrates the need for more advanced hybrid systems models as well as the need to connect systems models to real-time geo-spatial data.

5.5.1 Assessing the Risk, and Monitoring, of New Infectious Diseases

The first case, which is described in more detail in Pruyt and Hamarat (2010) and Pruyt et al. (2013), relates to assessing outbreaks of new flu variants. Outbreaks of new (variants of) infectious diseases are deeply uncertain. For example, in the first months after the first reports about the outbreak of a new flu variant in Mexico and the USA, much remained unknown about the possible dynamics and consequences of this possible epidemic/pandemic of the new flu variant, referred to today as new influenza A(H1N1)v. Table 5.1 shows that more and better information became available over time, but also that many uncertainties remained. Even with these remaining uncertainties, however, it is possible to model and simulate this flu variant under deep uncertainty, for example with the simplistic simulation model displayed in Fig. 5.2, since the general mechanisms of flu outbreaks are sufficiently well understood to be modeled.

Table 5.1 Information and unknowns provided by the European Centre for Disease Prevention and Control (ECDC) from 24 April until 21 August

Fig. 5.2 Region 1 of a two-region system dynamics (SD) flu model

Simulating this model thousands of times over very wide uncertainty ranges for each of the uncertain variables generates the 3D cloud of potential outbreaks displayed in Fig. 5.3a. In this figure, the time of the worst flu peak (0–50 months) is displayed on the X-axis, the infected fraction during the worst flu peak (0–50 %) on the Y-axis, and the cumulative number of fatal cases in the Western world (0–50,000,000) on the Z-axis. This 3D plot shows that the most catastrophic outbreaks are likely to happen within the first year or during the first winter season following the outbreak. Using machine learning algorithms to explore this ensemble of simulation runs helps to generate important policy insights (e.g., which policy levers to address). Testing different variants of the same policy shows that adaptive policies outperform their static counterparts (compare Fig. 5.3b and c). Figure 5.3d finally shows that adaptive policies can be further improved using multi-objective robust optimization.

Fig. 5.3 3D scatter plots of 20,000 Latin-Hypercube samples for region 1 with X-axis: worst flu peak (0–50 months); Y-axis: infected fraction during the worst flu peak (0–50 %); Z-axis: fatal cases (0–5 × 10⁷)
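
Generating such an ensemble requires little more than a sampler and a model. The following sketch (Python; a deliberately crude single-region model with invented parameter ranges and fatality ratio, not the chapter's two-region model) shows the basic mechanics of producing a Latin Hypercube ensemble with the three outcome dimensions plotted above:

import numpy as np
from scipy.stats import qmc

# Invented uncertainty ranges: contact rate, infectivity, recovery time.
low, high = [20.0, 0.02, 1.0], [120.0, 0.15, 4.0]
sampler = qmc.LatinHypercube(d=3, seed=42)
params = qmc.scale(sampler.random(n=2000), low, high)  # 20,000 in Fig. 5.3

ensemble = []
for contact, infect, recovery in params:
    s, i, deaths, peak, peak_step = 0.99, 1e-4, 0.0, 0.0, 0
    for step in range(1500):                    # toy stand-in for ~50 months
        new_inf = 0.03 * contact * infect * s * i
        recov = 0.03 * i / recovery
        s -= new_inf
        i += new_inf - recov
        deaths += 0.002 * recov                 # invented case fatality ratio
        if i > peak:
            peak, peak_step = i, step
    ensemble.append((peak_step, peak, deaths))  # the three axes of the 3D cloud

Each tuple in the ensemble corresponds to one point in a cloud like the one in Fig. 5.3a; the classification and machine learning steps described above then operate on this artificial data set.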

However, taking deep uncertainty seriously into account would require simulating more than a single model from a single modeling method: it would be better to simultaneously simulate CAS, ABM, SD, and hybrid models under deep uncertainty and to use the resulting ensemble of simulation runs. Moreover, near real-time geospatial data (from Twitter, medical records, etc.) may also be used in combination with simulation models, for example, to gradually reduce the ensemble of model-generated data. Both suggested improvements would be possible today.

5.5.2 Integrated Risk-Capability Analysis under Deep Uncertainty

Fig. 5.4 Model-based integrated risk-capability analysis (IRCA)

The second example relates to risk assessment and capability planning for National Safety and Security. Since 2001, many nations have invested in the development of all-hazard integrated risk-capability assessment (IRCA) approaches. All-hazard IRCAs integrate scenario-based risk assessment, capability analysis, and capability-based planning approaches to reduce all sorts of risks—from natural hazards and technical failures to malicious threats—by enhancing the capabilities for dealing with them. Current IRCAs mainly allow for dealing with one or a few specific scenarios for a limited set of relatively simple, event-based, and relatively certain risks; they do not allow for dealing with a plethora of highly uncertain and complex risks, with combinations of measures and capabilities with uncertain and dynamic effects, or with divergent opinions about the degree of (un)desirability of risks and capability investments.

Next-generation model-based IRCAs may solve many of the shortcomings of the IRCAs currently in use. Figure 5.4 displays a next-generation IRCA for dealing with all sorts of highly uncertain dynamic risks. This IRCA approach, described in more detail in Pruyt et al. (2012), combines EMA with modeling and simulation in both the risk assessment and the capability analysis phases. First, risks—like outbreaks of new flu variants—are modeled and simulated many times across their multidimensional uncertainty spaces to generate an ensemble of plausible risk scenarios for each of the risks. Time series classification and machine learning techniques are then used to identify much smaller ensembles of exemplars that are representative of the larger ensembles. These ensembles of exemplars are used as inputs to a generic capability analysis model. The capability analysis model is subsequently simulated for different capabilities strategies under deep uncertainty (i.e., simulating the uncertainty pertaining to their effectiveness) over all ensembles of exemplars, to calculate the potential of these strategies to reduce the risks. Finally, multi-objective robust optimization helps to identify capabilities strategies that are robust.
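
One simple stand-in for the exemplar-selection step is sketched below (Python; the clustering recipe and feature choices are illustrative assumptions, not the published method): cluster a large ensemble of simulated risk trajectories and keep the run closest to each cluster center.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

# Placeholder ensemble of simulated risk trajectories (n_runs x n_timesteps).
rng = np.random.default_rng(0)
runs = np.cumsum(rng.normal(size=(5000, 120)), axis=1)

# Summarize each run by a few shape features, cluster the ensemble, and
# keep the run nearest to each cluster center as a representative exemplar.
features = np.column_stack([runs.max(axis=1), runs.argmax(axis=1), runs[:, -1]])
km = KMeans(n_clusters=12, n_init=10, random_state=0).fit(features)
exemplar_idx = pairwise_distances_argmin(km.cluster_centers_, features)
exemplars = runs[exemplar_idx]   # 12 runs standing in for 5000

Feeding a dozen exemplars rather than thousands of raw runs into the capability analysis model is what keeps the downstream robust optimization computationally tractable.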

Not only does this systems-of-systems approach allow one to generate thousands of variants per risk type over many types of risks and to perform capability analyses across all sorts of risks and under uncertainty, it also allows one to find sets of capabilities that are effective across many uncertain risks. Hence, this integrated model-based approach allows for dealing with capabilities in an all-hazard way under deep uncertainty.

This approach is currently being smartened using adaptive output-oriented sampling techniques and new time series classification methods that together help to identify the largest variety of dynamics with a minimal number of simulations. Covering the largest variety of dynamics with a minimal number of exemplars is desirable, for performing automated multi-hazard capability analysis over many risks is—due to the nature of the multi-objective robust optimization techniques used—computationally very expensive. The approach is also being changed from a multi-model approach into a multimethod approach: whereas until recently sets of SD models were used, there are good reasons to extend the approach to other types of systems modeling approaches that may be better suited for particular risks or that, when multiple approaches are used, help to deal with methodological uncertainty. Finally, the settings of some of the risks and capabilities, as well as exogenous uncertainties, may also be fed with (near real-time) real-world data.
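
The diversity-seeking idea behind such output-oriented sampling can be sketched as greedy farthest-point selection (Python; this is only a crude proxy for spanning the space of possible dynamics, and the published adaptive method works differently, steering the sampling itself rather than subsetting an existing ensemble):

import numpy as np

def select_diverse(runs, k):
    # Greedy farthest-point selection: repeatedly pick the run that is
    # farthest from everything selected so far, to span the dynamics space.
    chosen = [0]
    dist = np.linalg.norm(runs - runs[0], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(runs - runs[nxt], axis=1))
    return chosen

rng = np.random.default_rng(3)
runs = np.cumsum(rng.normal(size=(1000, 60)), axis=1)
print(select_diverse(runs, 10))   # indices of ten maximally diverse runs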

5.5.3 Policing Under Deep Uncertainty

Fig. 5.5 (I) Exploratory system dynamics modelling and analysis (ESDMA) model, (II) interface for policy makers, (III) analytical module for analyzing the high-impact crimes (HIC) system under deep uncertainty, (IV) real-world pilots based on analyses, and (V) monitoring of real-world data from the pilots and the HIC system

The third example relates to another deeply uncertain issue: high-impact crimes (HIC). An SD model and related tools (see Fig. 5.5) were developed some years ago with a view to increasing the effectiveness of the fight against HIC, more specifically the fight against robbery and burglary. HICs require a systemic perspective and approach: these crimes are characterized by important systemic effects in time and space, such as learning and specialization effects, “waterbed effects” between different HICs and precincts, accumulations (prison time) and delays (in policing and jurisdiction), preventive effects, and other causal effects (ex-post preventive measures). HICs are also characterized by deep uncertainty: most perpetrators are unknown, and even though their archetypal crime-related habits may be known to some extent at some point in time, accurate time- and geographically specific predictions cannot be made. At the same time, part of the HIC system is well known and a lot of real-world information related to these crimes is available.

Important players in the HIC system, besides the police and (potential) perpetrators, are potential victims (households and shopkeepers) and partners in the judicial system (the public prosecution service, the prison system, etc.). Hence, the HIC system is dynamically complex and deeply uncertain, but also data rich and contingent upon external conditions.

The main goals of this pilot project were to support strategic policy making under deep uncertainty and to test and monitor the effectiveness of policies to fight HIC. The SD model (I) was used as an engine behind the interface for policy makers (II) to explore the plausible effects of policies under deep uncertainty and to identify real-world pilots that could increase understanding of the system and of the effectiveness of interventions (III), to implement these pilots (IV), and to monitor their outcomes (V). Real-world data from the pilots and improved understanding of the functioning of the real system then allow for improving the model.

Today, a lot of real-world geo-spatial information related to HICs is available online and in (near) real time, which makes it possible to automatically update the data and the model and, hence, to increase their value for policy makers. The model used in this project was an ESDMA model; that is, uncertainties were included by means of sets of plausible assumptions and uncertainty ranges. Although this could already be argued to be a multi-model approach, hybrid models or a multimethod approach would be needed to deal more properly with systems, agents, and spatial characteristics. Moreover, better interfaces and connectors to existing ICT systems and databases would be needed to turn this pilot into a real decision-support system that would allow chiefs of police to experiment in a virtual world connected to the real world, and to develop and test adaptive robust policies on the spot.

5.6 Conclusions

Recent and current evolutions in modeling and simulation together with the recent explosive growth in computational power, data, social media, and other evolutions in computer science have created new opportunities for model-based analysis and decision making.

Multimethod and hybrid modeling and simulation approaches are being developed to make existing modeling and simulation approaches appropriate for dealing with agent and system characteristics, spatial and network aspects, deep uncertainty, and other important aspects. Data science and machine learning techniques are being adapted to provide useful inputs for simulation models as well as to support model building. Machine learning algorithms, formal model analysis methods, analytical approaches, and new visualization techniques are being developed to make sense of models and to generate useful policy insights. And methods and tools are being developed to turn intuitive policy making into model-based policy design. Some of these evolutions were discussed and illustrated in this chapter.

It was also argued and shown that easier connectors to databases, social media, other computer programs, and ICT systems, as well as better interfacing software, need to be developed to allow any systems modeler to turn systems models into real decision-support systems. Doing so would turn the art of modeling into the computational science of simulation. It would most likely also shift the focus of attention from building a single model to using ensembles of systems models for adaptive robust decision making.