1.1 Introduction

In 2018, the Summer Computer Simulation Conference (SCSC), one of the conferences organized under the leadership of the Society for Modeling and Simulation International (SCS), marked an important event: it celebrated its 50th anniversary, its golden jubilee. Having the honor to chair the 50th SCSC, I (Umut Durak) used the opportunity to organize a panel discussion about the past and the future of SCSC and, in this context, of simulation.

Scientific disciplines frequently experience hype cycles. Inflated expectations and large resources come together; the disappointment that follows usually leads to funding cuts. The “AI Winter” from the history of Artificial Intelligence (AI) is a famous instance of this pattern. Simulation is not an exception. It is surely true that simulation as a discipline (Tolk and Ören 2017) and simulation-based disciplines (Mittal et al. 2017) have had their ups and downs, some of which were more remarkable than others. Notwithstanding, the golden jubilee of SCSC, as its name suggests, gives us the feeling of a never-ending 50-year summer.

This chapter summarizes the discussion from the SCSC Golden Jubilee Panel. It follows the order of the speakers. We start with Ralph Coolidge Huntsinger, an eminent fellow of SCS and the historian of the society. He introduced us to the first years of SCSC and explained how it evolved into a multi-conference named SummerSim. Then Gabriel Wainer gave a brief history of the later years, with an outlook.

SCSC started 50 years ago as a simulation application conference. The methodology and tools areas were added over the years. I tried to structure the panel discussion in the same way. After the history part, the application domains that evolved along with SCSC were discussed. Jacob Barhak presented the seasons of disease modeling. The following section on methodology and tools consisted of discussions about simulation software from José L. Risco-Martín, standardization and reuse from M S Raunak, and modeling and simulation (M&S) and AI from Andrea D’Ambrogio. The panel ended with the last three speakers focusing on the future of simulation: Gregory Zacharewicz discussed the “future” and simulation, Saikou Diallo promoted simulation for the big problems, and Andreas Tolk emphasized the ubiquitous nature of simulation today and in the future.

1.2 The History of Summer Computer Simulation Conference

1.2.1 The First Years

I (Ralph Coolidge Huntsinger) am the only person left alive who has attended all 50 SCSCs. The first one was organized by Takeshi Utsumi, an emeritus of Columbia University in New York. I met him in 1970 in Denver. Then I got involved in SCSC, presented papers, and later served as a session chairman and a program chairman. Eventually, in 1979, I was the overall program chair of the SCSC and we had a “big” conference. This was really “great” fun because SCSC was the prime conference and our focus was simulation applications. Now we have more philosophical ideas about simulation. At that time, probably more than half of the attendees were coming from industry. This changed slowly with the increasing interest in the conference from academia. The conference focus also evolved towards modeling and simulation methodologies and tools. The methodology and tools group eventually got so big that it became a separate conference co-located with SCSC. This was the foundation of the SummerSim Multi-Conference. Later, Roy Crosbie started the grand challenges of simulation track, which attracted considerable interest from the defense modeling and simulation community. The track still exists in the 50th edition.

1.2.2 Establishment and Beyond

My (Gabriel Wainer) first SummerSim was in 1999, in Chicago. Coming from Argentina, my first SCS conference in the USA was something impressive, in a historic hotel downtown. Mohammad Obaidat opened the conference, and I was able to listen to top presentations on simulation topics from experts who had been involved in the conference during the first 30 years. I had the honor to meet great researchers like Axel Lehmann, François E. Cellier, Hassan Rajaei, Josep Granda, Hans Vangheluwe, and many others. Methodologies and applications of continuous and discrete-event simulation showed the maturity of a research field that was ready to move towards the next stage. At that time, SummerSim hosted SPECTS, the Symposium on Performance of Computer and Telecommunications Systems, where I presented my research on serialization of Cell-DEVS models (Wainer and Giambiasi 1999). I was able to discuss my research with top experts in the area who gave me insight on how to advance in the field. At that time Cellular Automata (CA) were popular for solving complex problems (Wolfram 1986). CA are discrete-time, discrete-state models described as n-dimensional lattices. The use of discrete time poses constraints on the precision and execution performance of these complex models, and Cell-DEVS solved these issues by combining DEVS and CA (Wainer 2009). At that time there was research on how to use multiple processors and on how to prevent inherently parallel models from executing serially.
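
To make the lattice idea concrete, the following minimal sketch (not Cell-DEVS itself, just a plain one-dimensional cellular automaton using Wolfram's elementary rule 110) illustrates the kind of synchronous, discrete-time lattice update described above; the lattice size, rule number, and number of steps are arbitrary choices for illustration.

    # Minimal one-dimensional cellular automaton (Wolfram's rule 110), illustration only.
    RULE = 110
    rule_table = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
                  for a in (0, 1) for b in (0, 1) for c in (0, 1)}

    def step(cells):
        """Apply one synchronous, discrete-time update to the whole lattice."""
        n = len(cells)
        return [rule_table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    cells = [0] * 30 + [1] + [0] * 30   # a single seed cell in the middle
    for _ in range(20):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)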

At that time, there was varied research showing how to transform modeling formalisms. We presented various efforts in this direction at SummerSim: first, numerous CA models (Ameghino and Wainer 2000), Petri Nets (Jacques and Wainer 2002), and Finite State Machines (Zheng and Wainer 2003), and later Bond Graphs (D’Abreu and Wainer 2006) and Modelica (Chechiu and Wainer 2005). At that time, DEVS was proven to be the most generic Discrete-Event System Specification (Vangheluwe 2000). Another important development was the research on modeling continuous and hybrid systems, i.e., systems composed of both continuous and discrete components, aiming to provide a uniform modeling approach. A new idea proposed using quantization of the state variables to obtain a discrete-event approximation of the continuous system (Kofman and Junco 2001).
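
To make the quantization idea concrete, here is a minimal sketch of quantized-state integration for the simple decay equation dx/dt = -x, in the spirit of (but not reproducing) the method of Kofman and Junco; the quantum size, initial value, and end time are arbitrary illustrative choices. Instead of stepping time uniformly, the state is advanced only when it drifts one quantum away from its last quantized value, which yields a discrete-event approximation of the continuous trajectory.

    def quantized_decay(x0=1.0, quantum=0.1, t_end=3.0):
        """Sketch of quantized-state integration of dx/dt = -x (illustrative only)."""
        t, x = 0.0, x0
        q = x                       # quantized state (piecewise constant)
        trajectory = [(t, x)]
        while t < t_end and abs(q) > quantum:
            dx = -q                 # derivative evaluated at the quantized state
            dt = quantum / abs(dx)  # time until x drifts one quantum away from q
            t += dt
            x += dx * dt            # x has now moved exactly one quantum
            q = x                   # re-quantize and schedule the next event
            trajectory.append((t, x))
        return trajectory

    for t, x in quantized_decay():
        print(f"t = {t:5.2f}  x = {x:5.2f}")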

At this time, simulation technology also took another direction: web-based simulation. Our first article in SummerSim (Wainer and Chen 2003) showed a mechanism for remote execution of simulation models using web-based technologies. Shortly after, a large number of researchers turned to model standardization, with the High Level Architecture (HLA) standard taking center stage (Saghir et al. 2004) and hundreds of researchers working on how to coordinate distributed simulations on the web. At present, and in the future, simulation will be ubiquitous, and there are numerous examples of simulation software running in the cloud and on mobile devices (Jeffery et al. 2013). Many of the coordination algorithms for distributed simulation were based on a large body of knowledge in the field of parallel discrete-event simulation, in which a number of conservative and optimistic algorithms and their variations provided the means to guarantee correct execution and to prevent causality errors (Jafer and Wainer 2011).

Many advanced implementations are now built on Web Services to communicate. Nevertheless, building Service Oriented Architecture (SOA)-based simulations is still complex, as the services usually address the interoperability of simulation engines at a low level of abstraction. In recent years, Grid and Cloud computing have introduced new ways of sharing computing power and storage in heterogeneous environments in which resources are virtualized as services consumed on demand (with minimal limitation on resource location). The Representational State Transfer (REST) style can help solve these interoperability limitations, and it eases the development of mashups, which can be built in a shorter period of time. The future of this area lies in making the interaction of these distributed models more efficient through a plug-and-play model and easier reuse of existing models. We had the chance to see this field evolve through the late 2000s and 2010s, in particular at SummerSim 2007 and 2010, which I helped organize. I was able to involve Andreas Tolk and Pieter Mosterman in the organization of the conference, and later we involved younger and energetic emerging leaders like José-Luis Risco-Martín, Saurabh Mittal, Saikou Diallo, Gregory Zacharewicz, Abdy Abhari, Shafagh Jafer, Mohammad Moallemi, and Andrea D’Ambrogio, who have shaped the conference to be in good hands for the next 50 years.

Recent years have also seen an evolution of simulation and real-time applications. Real-Time (RT) systems are built as sets of components interacting with their surrounding environment. These are highly reactive systems, in which not only the correctness of the computations but also the timing of the system tasks is critical. Failing to guarantee that all the computations meet their deadlines could be catastrophic. Most design methods are known to be difficult to apply to large-scale systems, and they do not guarantee error-free systems. In recent years, Modeling and Simulation (M&S) has been used as a practical approach to the verification of these systems with reduced costs and risks (Shang and Wainer 2007; Yu and Wainer 2007; Ahmed et al. 2011). These techniques allow testing the systems in a risk-free environment. In particular, formal M&S provides even better results, as the software artifacts can be built faster and more safely, and the formal models can be used for formal verification. However, M&S techniques often require extra effort to model features of specific target systems, for example, the timing constraints in RT systems.

We thus suggest that, although there have been numerous advances in this field, the following questions still need to be addressed:

  1. How to interface simulation software with Smartphone Application Programming Interfaces (APIs)? How to deal with the inherent performance issues of these devices (power consumption, CPU speed, communication latency)?

  2. How to deal with power and communication disruptions?

  3. How to enable multi-user collaboration between numerous users participating in a joint experiment?

  4. How to integrate different online services and real-time data available in the Cloud?

  5. How to include advanced algorithms and methods for combining discrete-event simulation, cloud computing (with web service interfaces) and mobile devices for distributed simulation and collaboration?

  6. How to build advanced mashup applications using simulation, sharing and reusing models and experiments?

  7. How to use a more abstract approach to deal with these problems? (Instead of dealing with the data and simulation levels, interoperability will be dealt with at the modeling and experimentation levels, improving reuse and providing better ways to mashup models, experiments and other services.)

  8. How to handle the large amounts of simulation data through instrumentation of scenarios, aggregation policies and dynamic adaptation of the simulation to varying computing conditions based on different policies?

  9. How to integrate RT tasks and simulation in a seamless fashion?

  10. How to integrate machine learning methods and simulation? (Floyd and Wainer 2010)

  11. How to interface simulation models for telecommunications and networking protocols in real-time?

50 more years and we will have Simulation Everywhere!

1.3 Summers of Simulation Application

1.3.1 Seasons of Disease Modeling

The burden of disease has been a good incentive for modelers to analyze diseases and attempt to forecast their progression. Chronic diseases that bear long-term economic effects were targeted. Specifically, the burden of heart disease was, and continues to be, in focus to this day. In this section, I (Jacob Barhak) would like to roughly summarize trends and seasons within some disease modeling communities in the last three decades (Table 1).

Table 1 Seasons of disease modeling

An early famous disease model was created by Weinstein for coronary heart disease (Weinstein et al. 1987). It was a fairly complex model for its time, since it included an influx of population, whereas most disease models, even today, use a fixed starting baseline population. Most models at that time were constrained by computational power and therefore simpler modeling techniques were used.

Markov models were considered the standard and even the state of the art. In those models, cohorts of individuals were modeled and each state counted the number of individuals in that state. Transitions between states were governed by transition probabilities, and the correct estimation of those transition probabilities was part of the art of modeling. This kind of modeling was not restricted to chronic diseases; one good example of a Markov disease model came from the mental health perspective (Leff and Dada 1986), and a basic explanation of Markov models for medical prognosis is provided in (Beck and Pauker 1983).
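
As a minimal illustration of the cohort-style Markov modeling described above (with invented states and transition probabilities, not taken from any of the cited models), the following Python sketch redistributes a cohort across states year by year using a transition matrix.

    import numpy as np

    # Hypothetical states and annual transition probabilities; each row sums to 1.
    states = ["Healthy", "Sick", "Dead"]
    P = np.array([
        [0.90, 0.08, 0.02],   # from Healthy
        [0.00, 0.85, 0.15],   # from Sick
        [0.00, 0.00, 1.00],   # Dead is absorbing
    ])

    cohort = np.array([1000.0, 0.0, 0.0])   # start with 1000 healthy individuals

    for year in range(1, 11):
        cohort = cohort @ P                 # redistribute the cohort according to P
        print(year, dict(zip(states, cohort.round(1))))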

However, Markov models were no longer sufficient since modelers wanted to get additional information such as age and gender into the model. Chronic disease modeling therefore started transitioning into more complicated simulation types that allow incorporation of population parameters.

Risk equations that model populations were extracted from large longitudinal cohort studies. One of the most famous studies was the Framingham study, which produced the Framingham model (Wilson et al. 1998; D’Agostino et al. 2008); it represented heart disease complications and was updated multiple times. It is perhaps the most famous disease model to date.

Similar attempts were carried out by the United Kingdom Prospective Diabetes Study (UKPDS) group, which generated multiple models for different aspects of diabetes. These included risk equations (Stevens et al. 2001; Kothari et al. 2002) and cost-effectiveness models (Clarke et al. 2005). Eventually the group assembled the pieces into a more advanced microsimulation model (Clarke et al. 2004; Hayes et al. 2013). Such a model is used to conduct Monte Carlo simulations; it required more computing power but provided richer modeling capabilities.
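
In contrast to the cohort sketch above, a microsimulation samples the trajectory of each individual separately, which is what makes Monte Carlo replication and richer population attributes possible. The following sketch uses an invented risk function and a synthetic population, purely to illustrate the mechanism and not any published UKPDS equation.

    import random

    def annual_event_probability(age, has_diabetes):
        """Hypothetical risk function: probability of a complication in a given year."""
        base = 0.01 + 0.0005 * (age - 50)
        return min(1.0, max(0.0, base * (2.0 if has_diabetes else 1.0)))

    def simulate_individual(age, has_diabetes, years=10):
        """Return True if the individual experiences at least one complication."""
        for year in range(years):
            if random.random() < annual_event_probability(age + year, has_diabetes):
                return True
        return False

    # Monte Carlo run over a synthetic population of 10,000 individuals.
    events = sum(
        simulate_individual(age=random.randint(45, 75),
                            has_diabetes=random.random() < 0.3)
        for _ in range(10_000)
    )
    print(f"Individuals with at least one complication in 10 years: {events}")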

The diabetes modeling community then started comparing these richer models. The Mount Hood challenge was created (Brown et al. 2000; The Mount Hood 4 Modeling Group 2007; Palmer et al. 2013) and diabetes modelers from around the world started meeting on a regular basis to compare and contrast their models. Most interesting were the validation challenges, which repeatedly showed the differences between models and the gaps in our understanding of disease processes.

Regular participants in those challenges included the UKPDS model, IMS model, IHE model, Mikado, and the Michigan Model.Footnote 1 The latter model was interesting since work on it started with Markov modeling in mind with emphasis on parameter estimation based on multiple trials (Isaman et al. 2006), yet over the years it evolved into a micro-simulation model with a set of public tools to support it (Barhak et al. 2010).

Another notable participant in those challenges was the Archimedes model (Eddy and Schlessinger 2003; Schlessinger and Eddy 2002). It went beyond the trend of building models on top of a single population set and was validated against multiple studies. The model was advanced and addressed issues of efficient simulation through discrete-event simulation and dedicated code development. It also addressed issues of baseline population generation.

However, despite the efficiency of the model, it did not emphasize the use of the growing availability of computing power.

The Reference ModelFootnote 2 (Barhak 2017) that was developed in the last decade took advantage of High Performance Computing (HPC) capabilities and could run simulations on the cloud. The Reference Model was a split from the Michigan Model and its open source set of tools. It took the idea of knowledge accumulation to the next level, since the model validates multiple models against multiple populations. Its use of public data sources and open source tools allows simulations that compare models. It uses optimization techniques from machine learning and evolutionary computation to accumulate knowledge.

It is likely that future simulations will use the availability of computing power and open source code to better accumulate knowledge. Yet the current simulation problem is less about modeling technique or modeling approach. The disease modeling community is now facing a crisis of reproducibility. The 2016 Mount Hood Challenge exposed this when multiple teams of modelers around the world could not reproduce two published models.

Therefore, future disease modeling will have to focus on model exchange mechanisms such as the Systems Biology Markup Language (SBML);Footnote 3 initial work in this direction has started (Smith et al. 2016). Moreover, disease databases have recently started to gain traction. Two notable examples are ClinicalTrials.Gov (Ide et al. 2016; Zarin et al. 2016) and the Global Burden of Disease (GBD) database.Footnote 4 Those databases now feed information into newer disease models. Therefore the modeling focus in the next decade will most probably move towards model and data sharing.

1.4 Summers of Methodology and Tools

1.4.1 Simulation Software: A Historical Perspective and Future Trends

Simulation is one of the most multifaceted topics present today in both industry and academia. Simulation has traditionally been used as a tool to increase production and capacity. Nowadays, many other aspects are studied in simulation, like analysis, reliability, scalability, verification, validation, human training, etc. To visualize the future of simulation, we must first analyze its historical perspective. Such a perspective can be presented from several angles: uses of simulation, simulation languages, simulation environments or application domains. Here I (José Luis Risco Martín) offer a brief treatment from the perspective of simulation software, which in a way captures both simulation languages and simulation environments. This section presents the historical perspective of simulation software during the last 50 years. It also provides insights regarding the future directions of simulation software and simulation paradigms and how the SCSC can be a principal witness of this future.

My discussion of the history of simulation software is based on Nance (1995). I have taken, simplified and adapted his original classification periods into a more “modern” point of view, resulting in the following four periods:

  • [1955–1974] First simulation languages

  • [1975–1989] First consolidation and regeneration

  • [1990–2008] Integrated environments

  • [2009–????] Second consolidation and regeneration

1955–1974 First simulation languages

Simulation was first conducted in FORTRAN and other general programming languages. Obviously, there was no support for simulation-specific routines. In 1960, K.D. Tocher and D.G. Owen launched what is considered the first simulation language effort, named the General Simulation Program. Later, from 1961 to 1965, several Simulation Programming Languages (SPL) appeared, like the General Purpose Simulation System (GPSS) developed by Geoffrey Gordon at IBM. Philip J. Kiviat began the development of the General Activity Simulation Program (GASP) in 1961. Harry Markowitz provided the major conceptual guidance for SIMSCRIPT in 1963. In Europe other simulation programming languages appeared, like SIMULA or the Control and Simulation Language (CSL).

From 1966 to 1974, the previous tools were upgraded: GPSS was released as GPSS/360 and later as GPSS/NORDEN, SIMSCRIPT evolved to SIMSCRIPT II, GASP to GASP IV, and CSL to ECSL. SIMULA also added some object-oriented programming concepts, considered the initial steps towards modern object-oriented programming languages.

1975–1989 First consolidation and regeneration

During this period, traditional SPLs were adapted to desktop computers with the advent of the microprocessor era. GPSS/H was released in 1977 for specific IBM mainframes and became the principal version of GPSS in use today. Two major descendants of GASP appeared: the Simulation Language for Alternative Modeling (SLAM II) and the SIMulation ANalysis language (SIMAN), both including multiple modeling perspectives and combined modeling capabilities. SIMAN was the first major simulation language for the IBM PC and was designed to run under MS-DOS.

1990–2008 Integrated environments

This period is remarkable for the growth of SPLs on the personal computer and the creation of many simulation environments with graphical user interfaces, automatic report generation, data analyzers, animation and specific visualization tools. Most of these environments attempt to simplify the modeling process by avoiding the need to learn a programming syntax. Animations range from schematic-like representations to 2D and 3D approximations of reality.

Some of the most popular integrated environments created over this period include Arena, AutoMod, Extend, FlexSim, Micro Saint, ProModel, Quest, Simul8 and Witness.

2009–2018 Second consolidation and regeneration

This period is completely analogous to the first consolidation and regeneration period, but instead of an accommodation of traditional simulation software to the personal computer, in this case we have seen an evolution from multi-processor integrated environments to cloud simulation, or simulation as a service on cloud infrastructures. The evolution from traditional message passing programming techniques (through MPI, for example) to modern cloud programming paradigms has facilitated the appearance of new modeling and simulation paradigms like DEVS/SOA (Mittal 2009). Beyond that, a plethora of new simulation software (or evolutions of the previous integrated environments) has appeared in the last ten years. Some examples are: AnyLogic, Arena, AutoMod, Enterprise Dynamics, ExtendSim, FlexSim, GoldSim, GPSS, MS4, Plant Simulation, ProModel, Simcad Pro, SimEvents, Simio, Simul8, VisualSim, Witness, DESMO-J, Ptolemy II, SimPy, SystemC, etc.

SCSC is 50 years old, which is enough to presume a vast experience. Regarding simulation software, SCSC has seen brilliant papers focused on simulation languages, platforms and tools, such as:

  • J. Leon, C. O. Alford and J. Hammond (1970). DIHYSYS—a hybrid systems simulator.

  • T. I. Ören (1971). A basis for the taxonomy of simulation languages.

  • G. E. Miles, R. N. Peart, and A. A. B. Pritsker (1976). CROPS: A GASP IV Based Crop Simulation Language.

  • R. M. Fujimoto (1985). The SIMON simulation and development system.

  • M. Gourgand and P. Kellert (1992). An object-oriented methodology for manufacturing systems modelling.

  • O. Balci and R. E. Nance (1998). A taxonomy of layout composition techniques for visual simulation.

  • J. Ameghino, E. Glinsky and G. Wainer (2003). Applying Cell-DEVS in Models of Complex Systems.

There is still room for 50 or 500 years of new simulation engines. In a new world full of complex systems of systems and with very demanding time-to-market constraints, SCSC should serve as a vehicle to check the validity of such simulation software, its performance, applicability, scalability, and usefulness to both industry and academia. SCSC must serve as a joint forum to converge towards (why not?) a unified methodology in the art of modeling and simulation, always from a purely practical point of view.

1.4.2 Standardization and Reuse

In the past 50 years, we have had great success in defining the discipline of modeling and simulation. The discipline has steadily grown and found its presence in many disciplines including, but not limited to, engineering, computer science, operations research, and management science. Amongst many highlights from this time period, we can include:

  • Establishing processes and practices for developing effective simulation models and performing useful studies using them.

  • Development of general purpose as well as domain specific languages for creating simulation models.

  • Development of many commercial and non-commercial simulation frameworks.

  • Establishing rigorous and formal approaches to define the simulation models.

  • Establishing techniques and processes for verification and validation of simulation models.

The rapid advancement of technology and its use is likely to facilitate the proliferation of using simulation in many more areas over the next 50 years.

At the Golden Jubilee year, one important question that I (M S Raunak) would like to ask is: How mature does the discipline of modeling and simulation appear to be? An approach towards judging the maturity of the field would require us to contemplate the following questions:

  • Do we have rigorous building blocks for creating models and performing simulation studies with them?

  • Do we have fundamental rules to govern the activities of simulation practitioners?

  • Are simulation studies getting reproduced regularly for corroborating their results or to identify potential problems?

  • Do we have a standardized way of communicating the verification and validation (V&V) performed on a model?

  • Are there building blocks that are readily available for reuse in constructing new models?

The answers to many of these questions would still come out to be negative. This is an indication that the field has not yet matured like some of the other natural science and engineering disciplines.

The simulation community needs to continue working on developing standards for simulation modeling approaches, notations, implementation, and experimentation. At the 2016 Winter Simulation Conference, there was a panel discussion on standards related to smart manufacturing systems, focusing on data, process, and environmental aspects (Beck et al. 2016). More general-purpose standards need to be developed in every area of simulation. One collaboration that could facilitate and fast-track this process is to involve government research agencies such as the National Institute of Standards and Technology (NIST) in the US.

We also need to work on developing the practice of proper use and reporting of the validations performed on simulation models presented in published research. A 2014 survey identified the unusually low validation efforts reported in health-care related simulation research (Raunak and Olsen 2014a). This leads to reduced confidence in published results, which, in turn, reduces the use of these results in real-world policy change. In addition to following established standards for modeling and performing verification and validation, a standard way of communicating about them is also needed. A new line of research has shown ways to quantify and communicate about simulation validation (Raunak and Olsen 2014b; Olsen and Raunak 2015). With the adoption of such standard processes and practices, we are going to increase the confidence in our simulation experiments and results. Verification and validation of simulation models can also benefit from new approaches and techniques from other fields such as software testing (Olsen and Raunak 2016).

A key factor of a mature research field is the practice of reusing models, components, and frameworks, and of reproducing results. The simulation community is still behind in achieving reasonable progress in this regard. There are some application areas, such as network simulation or agent-based modeling, where standard modeling tools have facilitated some level of model sharing and reuse. In many other areas, sharing and reuse of simulation models remains a rarity. Factors including closed or classified environments (e.g. military simulation), intellectual property, and the size and complexity of the models have challenged the proliferation of model reuse and result reproduction. There are encouraging signs in the community, however, as it has recognized the need for reproducible research and the challenges surrounding it (Uhrmacher et al. 2016). What is missing in the discussion so far is the establishment of a repository of simulation models and related artifacts for researchers and practitioners to use in their experimentation and analysis. The software engineering community has greatly benefited from the creation of the Software artifact Infrastructure Repository (SIR)Footnote 5, where researchers get access to many different versions of software programs, test suites, bug reports and other software artifacts to perform rigorous controlled experiments. This has immensely facilitated the reuse and reproduction of research results. Developed and maintained through collaborative efforts from multiple institutions and funding support from the National Science Foundation, SIR artifacts have been used by more than 600 universities and research institutes all over the globe and have resulted in at least 700 software analysis and testing related research studies and publications. To facilitate the path to becoming a more mature discipline, a comparable repository of simulation models is essential and will help us leap-frog in the direction of model reuse and result reproduction.

The field of M&S has made great strides in the last 50 years. M&S now permeates many different areas of scientific research. With the exponential growth of technological advancement, especially in the areas of medicine, robotics, autonomous cars, unmanned aerial vehicles, and smart cities, M&S is likely to see another fifty years of intense activity. However, we need to take stock of the places we are lagging behind in terms of maturing as a scientific discipline. More focus on developing and using rigorous standards in all areas of simulation is an important aspect. Encouraging government standard bodies to get more actively involved in this process will benefit us a lot.

We need to put more emphasis on V&V activities in simulation and on an effective, standard way of communicating about them. Our community needs to look into the factors that have contributed to the lack of reuse and reproducible results. Establishing a simulation-artifact infrastructure repository is a very important need of the community, and it will help the field graduate into a more mature discipline. Finally, with the advent of new technologies, the confluence of disciplines and the challenges that come with them, our community needs to be open to and in search of new ideas, methods and processes. With the right focus and a collaborative effort, the next fifty summers of computer simulation are surely going to be exciting!

1.4.3 M&S and AI: The Odd Couple

Celebrating the 50th anniversary of the SCSC is a significant step that witnesses the relevance and longstanding tradition of the computer simulation field, which is now preferably referred to as the M&S field, so as to emphasize the role of modeling as the essence of any computer-based simulation effort.

Significant contributions can be found that report on the history of M&S in terms of different generations of simulation software, such as (Nance 1995). What I (Andrea D’Ambrogio) would like to focus on instead is the thin (sometimes thick) thread that connects the lifelines of M&S and AI, by specifically looking at some cases in the past 50 years in which M&S has provided support to and/or has been influenced by AI technologies, as well as by looking at how these technologies could have an impact on the next generation of M&S.

A clear example of M&S and AI crossing paths dates back to the 1980s, when expert systems brought renewed enthusiasm to the AI field after one of the so-called “AI winters”, i.e., periods of reduced funding and interest that were experienced as a direct consequence of over-inflated and unmet expectations. Expert systems were seen as systems capable of simulating the knowledge and the analytical capabilities of human beings, and simulators were essentially seen as knowledge-based expert systems using a combination of symbolic reasoning and data processing.

The proceedings of the SCSC editions held in that period provide several contributions that refer both to the use of M&S to improve the prediction properties of expert systems (see, e.g., Vansteenkiste 1985) and to so-called “expert simulation systems”, i.e., systems that result from the combination of expert systems and conventional simulation technology, so as to solve complex simulation problems [3]. In the same period, specific events were organized that focused on the combination of M&S and AI (see, e.g., Xindong 1990 and Uttamsingh and Wildberger 1989). The unmet hype around expert systems led to another AI winter, in the first half of the 1990s, with a consequent drop in interest from M&S researchers and practitioners (Gupta and Biegel 1990).

In the last two decades AI has been revived, mostly due to the availability of incredibly vast amounts of storage and processing resources, which led to the introduction of innovative techniques, specifically those under the umbrella of machine learning and, more recently, deep learning. Analogously, a renewed interest has been observed in the M&S community, with contributions focused on the tremendous potential resulting from the synergy of M&S, big data and deep learning for the next generation of M&S (Tolk 2015).

It is not known whether another AI winter will be observed sooner or later, as skeptics predict in response to some well-known failures of deep learning applications in relevant domains, such as autonomous driving. What is known is that M&S did not experience similar “winters” in its history, as witnessed by the 50 “summers” of simulation success stories reported in the SCSC proceedings. M&S is no longer seen as a variant of the experimental method, but as a novel way of doing science, and is thus recognized as the third pillar of science, alongside the traditional theory and experiment pillars.

The lesson we can learn from the crossing paths of AI and M&S is that M&S should not chase AI technology only to integrate the latest advances into M&S efforts, but rather exploit the potential behind such advances as the driver of foundational innovations that would allow M&S to work out problems that are currently hard or impossible to solve.

In this respect, what I expect for the next generation of M&S is something that goes far beyond the integration of recent and future technology advances and the availability of almost unlimited amounts of data and resources.

What is constantly sought and deserves to be addressed is closing the “reality gap”, which refers to the difficulty of transferring simulated experience into the real world. Significant efforts in the computational biomedicine field, which aim to build the “virtual human” through the modeling and simulation of all aspects of the human body, from the genomic level up to the whole human (Lumley and Pringle 2017), are not intended to produce successful results only by exploiting the most advanced high-performance computing facilities.

To approach this and similar ambitious objectives, the role of modeling approaches for properly representing the observed reality will be essential, with the simulation aspects dealt with by increasingly powerful model transformation and execution platforms (Bocciarelli and D’Ambrogio 2016). The ability to use available data to properly build, map and orchestrate models at various levels of abstraction will be key to nurture an effective and successful M&S development in the next years.

1.5 Upcoming Summers

1.5.1 Is Simulation An Option for the Future?

Maybe my (Gregory Zacharewicz) speculation developed in this session about the future of simulation can start by asking ourselves the following general questions: Do we have a choice? Is simulation an option for the future? The world is changing quickly: thanks to human progress, new technologies appear and help humans become more capable in all domains, but at the same time human activity has a damaging influence on the environment. It appears that simulation will have to be kept at the center of the analysis of past phenomena and the anticipation of the future. It will be a lever for decision-making support, giving clues and answers to anticipate the potential changes (desired and undesired) that are already appearing in the world. From my point of view, one central question that is now on all minds is: Will the world of tomorrow still be livable enough for humans, or will future generations have to spend, in the better case, their lifetimes in “artificial bubbles” of simulated worlds? So simulation is now called upon to anticipate this future and convince people that it is urgent to change. But if we fail, simulation might still be there as the only way to participate in and interact with our lost environment, where a virtual world will progressively replace all the ecosystems that have broken down.

This pessimistic hypothesis of an open-air world that will be almost unlivable (a dystopian world) still seemed very unlikely only 10–20 years ago. But one century of accumulated human impact on the environment has produced rushed effects within a short period of 10 years, demonstrating that the situation is critical. Climate change is warning us more frequently than ever about the potential ruin of our environment. Numerous scientific visions now argue that the future will be difficult (Schiermeier 2018). Scientists predict that the conditions for life could be broken within the next 100–200 years. For instance, microbiologist Frank Fenner postulated in 2010Footnote 6 that “humans will probably be extinct within 100 years, because of overpopulation, environmental destruction, and climate change”. Others, such as (Nolan et al. 2018), report that biodiversity will be drastically affected by pollution emissions. Optimistic scenarios are now rare in the scientific field.

In the domain of literature and the arts, on the one hand, fiction scenarios have provided utopian visions of a future with a mostly positive development. Alexandra Whittington, forecasting consultant at Fast Future,Footnote 7 notes that scientists, including the late Stephen Hawking, have already warned that we have only 100 years of life left on earth. However, she is part of the more optimistic branch that thinks we may still have a desirable, functional and safe ecosystem for future generations if we start reacting now. On the other hand, real life with a broken environment in the next 200 years is depicted in several opinion talks, such as the one given by astrophysicist Aurélien Barrau at the Climax festival,Footnote 8 and also in popular books and movies such as Mad Max and the recent Ready Player One from Spielberg. To convince the people who do not consider this situation critical, simulation can help to run, verify and validate warning scenarios.

Simulation can accurately anticipate this future; it has to be a tool for avoiding irreversible situations.

In addition to this crucial use of simulation in the near future, several domains will call on simulation to provide support and answers for human life.

The industrial domain, for instance under the keyword Industry 4.0, is full of perspectives only reachable thanks to simulation. Through simulation, we want not only to glue many different things (technologies, software, concepts, …) together, but also to make them interoperable. Nowadays, a huge amount of information exists, but the challenge is to link it, give it a meaning that can be shared among the different potential stakeholders, and connect them in a big simulation world (Zacharewicz et al. 2017b).

The semantics of things in the simulation world will be another challenge. It is clear that, in simulation, big data technologies can be utilized to deal with huge amounts of data. But will we still be able to understand the meaning of the data produced by simulation? It has to be correctly captured to profit from simulation. AI and semantics already help with the matching of concepts. In the future, this will go further: not only proposing matchings of concepts, for instance to couple different simulation concepts, but also creating new information corpora and new concepts, moving from detail, by aggregation, to more general views and vice versa, thanks to the capacity to create information based on observed and documented similar situations.

I also believe that model-driven approaches are part of future simulation (Zacharewicz et al. 2017a). These approaches can guide the transformation from concepts and human understanding to the implementation of models and simulations. Thanks to AI approaches, these transformations will become self-driven and able to deduce missing information, which today is a barrier when transforming concepts into executable models.

Computational power will permit the massive replication of simulations, allowing us to repeat, train and prepare better before acting in real life. The leitmotiv will be to never give up on testing and anticipating situations before reaching an almost zero-defect solution. As NASA famously put it, failure is not an option.

With the urgency of surviving in a hostile world, computers will be focused on calculating the continuous evolution of the environment. Developing simulations that warn us by anticipating important issues is maybe one of the most urgent priorities to tackle.

Immersive environments will have the potential to train people, and they will be everywhere. The use of augmented reality will prevent or reduce environmentally destructive training.

To open up the conclusion and end with a more optimistic discussion, we can consider that all the artworks since the appearance of humans on earth are models and simulations of the world or of a desired world. It is our responsibility to keep designing and planning what we want for the future. Simulation only provides support; our future is still in our physical hands.

1.5.2 Big Theory and Big Simulation

The ubiquitous presence of simulation technology is undeniable. Simulations are so pervasive and embedded within our lives that they are invisible to everyday people. The journey to where we are has been long, arduous and full of fits and starts. As a discipline, the M&S journey is in lockstep with that of Computing Sciences and Systems Sciences. This symbiotic relationship is so ingrained in the collective psyche of M&S practitioners that we tend to think of modeling as developing a system and simulation as only computer simulation. The SCSC embodies this worldview of M&S as a practice rooted in engineering and measured by the utility it delivers to mankind. In this section, I (Saikou Diallo) argue that while M&S has been successful in many application domains in terms of providing tools and solutions in the past fifty years, our biggest challenges in the next fifty years lie in our ability to tackle societal problems such as universal access to science and technology for people across all spectrums (sight, hearing, mental and physical), large scale migrations, child sex trafficking and other big problems that affect human beings at the local, national and global scale. In other words, we need methods for modeling and simulating humans and societies with the level of complexity necessary to allow us to represent and study problems that humans care most about. We discuss necessary advances in M&S theories, frameworks and tools necessary to achieve this goal in the next fifty years.

The idea of using M&S to study societal problems is not new. However, there is no integrated theory that allows us to build artificial humans whose emotional, cognitive and affective states are consistent with accepted theories in psychology and the cognitive sciences. Similarly, we do not have a consistent way to develop artificial societies at the scale of real human societies (billions of people and objects) where the mechanisms of social dynamics between people and groups reflect accepted theories in the social sciences and the humanities. This type of comprehensive big theory can only be achieved in transdisciplinary teams of equals where the only concern is a meaningful blending of theories and methods from all disciplines with a shared understanding of each other’s epistemological constraints. From an M&S standpoint, the contributions to big theories are in the areas of:

  • Collaborative Model: Big theory requires a collaborative environment where engineers, social scientists, humanists and computer scientists can come together to construct a universe. Since this universe is shared, it has to be understandable by people from all disciplines, including how to use it as a means to investigate questions of interest, but also how to understand and even empathize with the simulation. For simulation engineers, it means additional training in elicitation, soft systems methodology and even design thinking has to take place in order to make simulations that are more appealing to a wider audience.

  • Computable Models: Big theories have a narrative, mathematical and logical component. While it is possible to derive a computational model from big theories, it might not be possible to implement it using one framework, tool or paradigm. Early attempts at big theory implementation have shown that a multi-stage, multi-simulation or multi-paradigm approach was best suited to reflect the theory. In addition, a “computation only” approach might be limiting, which means a virtual and live component might be necessary to account for non-computable aspects of big theory.

  • Verification and Validation: For large-scale societies with complex cognitive processes, how do we guarantee in a reasonable amount of time that a simulation is correct, i.e. that it is a correct representation of the model? Our current approaches are mostly informal and will not scale in light of the number of processes involved. This observation points us towards semi-automation, which means that we have to be able to decide which parts of the verification process are best to automate. Consequently, formal theories of verification that can be implemented in tools, or useful model transformation techniques, need to be developed to deal with the size and complexity of the problem space we are dealing with.

  • Experimentation and Analysis: Large scale societies have the potential to generate large scale (even “big”) data. Current techniques for analysis and experimentation are inadequate to successfully convey the leading causes of change in such simulations. As a result, we need new techniques that immerse and engage the observers such that they can achieve the same level of insight as they currently do. Visual analytics combined with virtual and augmented reality need to be further investigated as a potential way to deliver useful and reliable insight from a simulation study of millions if not billions of people.

  • Universal Access: The idea of providing access to all users across all spectrums is an important component for future simulation design. Currently, we rely heavily on human computer interface design principles rooted in task-based applications. Principles of aesthetics, inclusion, multi-sensory feedback and presence are not always taken into account when designing simulation interfaces.

Ultimately, the goal is to have several artificial societies operating around the world. These societies should be open and accessible to all investigators. Ideally, the implementation of these societies will be validated by independent teams of researchers, and alternative artificial societies implementing competing theories will be available. Within these societies, researchers will be able to study societal challenges in a safer environment and should be able to look at alternatives and compare potential policies within and across societies. The engineering work that is required to achieve such artificial societies demands collaboration from engineering, the sciences, and the humanities. Results and lessons learned will affect how we design, present and analyze simulations in the future. It has the potential to affect the way we design M&S curricula by putting more emphasis on effective modeling and communication across disciplines and problem domains. In the next fifty years, if we are successful, M&S can contribute to the important debate on the future of humanity that started with the advent of the internet and social media.

1.5.3 Ubiquity of Simulation

Today, simulation is literally everywhere, although it may not always be called simulation. In my plenary presentation for this 50th SCSC, I (Andreas Tolk) show the close relation between modeling and simulation and the computational sciences (Tolk 2018). In this book celebrating the 50th anniversary of the summer simulation conference, the question of simulating complex adaptive systems is addressed in detail (Tolk 2019). The broader picture is presented in a new book on simulation-based disciplines (Mittal et al. 2017); this guide deals with engineering and architecture, natural sciences, and social science and management applications of simulation methods. The application of simulation continues to thrive. In parallel to these application-driven activities, the work on understanding modeling and simulation as a discipline continues as well. In the recent book on the profession of modeling and simulation (Tolk and Ören 2017), the various chapters deal with ethics, education, vocation, societies, and economic questions, providing an overview of current activities.

The success of modeling and simulation is furthermore shown by the many conference anniversaries we have been able to celebrate: the Annual Simulation Symposium, the Interservice/Industry Training, Simulation, and Education Conference, and the Winter Simulation Conference all celebrated 50 years of supporting our community. Simulation continuously pushes the boundaries of what could only be done theoretically yesterday to what we can accomplish with practical tools supporting the researchers and scientists today.

But despite all these success stories, modeling and simulation did not make it into the mainstream of scientific success stories. The reason is that even many simulation experts see modeling and simulation mainly as a computational tool that helps to make better decisions, which can be technical or managerial in nature. Other sciences recognize the power of simulation, but as a supporting method that extends the discipline being supported, not as a discipline in itself. As a result, insights in such applied domains are hardly generalized and shared. Even more important, known validity and applicability limits and constraints are not shared either. Too often, simulation stays in the shadow of the supported discipline, with the focus on providing a specific solution. Instead, the general supporting methods and the theory from which such methods can be derived should be at the center of the simulationist’s attention, allowing the development of a general simulation theory to drive simulation thinking. In a recent study, Chen and Crilly (2016) evaluated the commonality of issues between practitioners in the fields of synthetic biology and swarm robotics and showed that these practitioners shared more complexity-related issues with each other than they did with colleagues in their original domains. Nonetheless, the sharing of information and reuse of solutions was hindered by the different terms and concepts used to describe them within their home domains. The application of a cross-domain framework made it possible not only to identify shared issues, but also to align available solutions. This is a typical problem for simulation practitioners supporting different domains and disciplines as well: they are divided by the language and methods of the supported field, and they do not have a language of their own to share their knowledge.

Describing the need to think as a simulationist, the obvious similarity to systems theory and systems thinking, as presented by Arnold and Wade (2015), is intentional. They propose the following definition:

Systems thinking is a set of synergistic analytic skills used to improve the capability of identifying and understanding systems, predicting their behaviors, and devising modifications to them in order to produce desired effects. These skills work together as a system.

(Arnold and Wade 2015, p. 675)

Simulation thinking must also be such a set of synergistic analytic skills, utilizing the various modeling paradigms, both modeling methodologies (such as discrete-event systems, system dynamics, and agent-based approaches) and model types (such as ordinary differential equations, process algebra, and temporal logic), as explained in detail in Fishwick (2007). The development of a simulation theory will allow for a consistent definition of hybrid simulation in support of various application domains (Mustafee et al. 2017). While systems thinking focuses on identifying and understanding systems, simulation thinking focuses on predicting their behavior, as simulation allows the numerical evaluation of the dynamic behavior of systems by generating quasi-empirical data, as long as a valid simulation is used. As such, systems thinking and simulation thinking are mutually supportive, whereby simulation thinking provides additional insights into the simplification and abstraction of systems for the purpose of simulation.

In general, there is no other way to predict but to simulate, no matter what other term we use: smart inter- and extrapolations are simulations, estimates are simulations, etc. Furthermore, scientific work is tightly connected to modeling, as shown in the already mentioned paper by Tolk (2018). How can we as a community and a discipline step into the light, as we are doing fantastic things that deserve more recognition and that should be celebrated as simulation success stories? Here are just a couple of examples: Robotics and autonomous systems are in high demand, and simulation is a core piece of their planning and control functions. If a robot has to make a decision, it has to evaluate how this decision will influence the situated environment in the foreseeable future, which means it has to simulate. The same is true for cyber-physical systems. Mustafiz et al. (2016) observe that “the engineering of a complex cyber-physical system involves the creation and simulation of hybrid models often encompassing multiple levels of abstraction and combining different formalisms, often not expressible in any single existing formalisms.” This matches easily with the application of our findings as modeling and simulation experts in hybrid simulation in support of cyber-physical systems, which immediately translates into Industry 4.0 and the Internet of Things. One of the most challenging topics of today is to better understand and manage complex systems. When looking through the recommended methods in the complexity primer for systems engineers (Sheard et al. 2015), many simulation methods are enumerated.

The era of modeling and simulation has just begun, and many of the computational application domains are using simulation without being truly aware of it. Simulation is more powerful than data science, as it adds causality to correlations. Simulations enable robots to think. Simulation is part of the solution set for the big challenges of today’s complex decision environments. We should be proud and active. By pushing the boundaries from what is theoretically possible to what is practically feasible, we are directly contributing to the progress of science and research. Just as a picture says more than a thousand words, a simulation says more than a thousand pictures. Just as a telescope allows us to observe far more than the naked eye, simulation extends the use of mathematics to a multitude of alternative evaluations. We create immersive, virtual worlds enabling a new way to share knowledge and reach people of all educational and disciplinary backgrounds. Simulation is everywhere, and our role is strengthening. The simulation winter may be coming someday, but for now we are still in the early days of a beautiful summer.

1.6 Summary

This chapter provides a summary of what was discussed at the Summer Computer Simulation Conference Golden Jubilee Panel. It compiles the topics presented by eminent members of the SummerSim community in their talks. The chapter not only gives a historical perspective, but also includes bold statements and controversial viewpoints from panelists who have something to say about the today and tomorrow of simulation.