Introduction

For some time, Geographic Information Systems (GIS) have steadily advanced to a position of significance and broad reach in handling large and complex datasets. They have played a significant role in spatially integrating the social sciences (Goodchild et al. 2000) through diverse spatial data models (Goodchild et al. 2007) and analyses (Anselin and Getis 2010); in instantiating new platforms for citizen-volunteered data (Elwood 2008); in supporting location-aware technologies (Brimicombe and Li 2006) and spatially engineered cyber-physical systems (Torrens 2008; Wright and Wang 2011); in marrying often unstructured social media (Sui and Goodchild 2011) with semantic context (Egenhofer 2002); and in catalyzing the movement toward place-based GIS (Gao et al. 2013), among other successes. Key to the support that GIS provide is the elasticity of their systems in adapting to data, software, and applications, which has afforded them significant popularity as a substrate for building computer models and simulations (Benenson and Torrens 2004). Nevertheless, as GIS continue to advance in scope, they are often brought (back) into contact with some long-standing and thorny issues regarding the ways in which they traditionally model the world (Goodchild 2008, 2009). Three perennial conundrums manifest, in particular, when GIS are considered as platforms for modeling and simulation. The first problem centers on how GIS should relate to process and process models (Torrens 2009). The second puzzle presents in the task of reconciling GIS developed as computer cartography with the immersive three-dimensional nature of the world as we experience it (Cowen 1988). Third, it remains difficult to dock GIS and models, particularly as both advance in complexity and functionality (Goodchild 2005; Wen et al. 2013; Zhang et al. 2015).

Table 1 The general form of the search heuristic (Torrens 2012)

In this paper, we will introduce a novel framework for addressing relationships between GIS, models of physical and human processes, and immersive three dimensionality (Moore and Drecki 2013) via a connected scheme for running agent-based models (Crooks et al. 2012) and physical process models in a Virtual Geographic Environment (VGE) (Lin et al. 2013), with free-form capabilities for data and processes to course back and forth, up-scale and down-scale within the VGE. The main innovation inherent to the scheme that we will present is in functionally integrating human–environment processes in the immersive setting directly: agents therefore act, react, and interact natively within the VGE with agencies that map to real-world strategies for acquiring geographic information and processing it cognitively ahead of behavior. Similarly, realistic physical processes operate on the built environment to lend it dynamics that are often missing from urban VGEs. We will illustrate the usefulness of the scheme with application to the problem of modeling human responses to and amid built environment collapse: a situation in which seamless transition between data, process, and representation is important in producing actionable understanding.

Background

The development of Virtual Geographic Environments (VGEs) has significantly broadened the potential reach of Geographic Information Systems (GIS) and the explorative science that they support. VGEs constitute a “virtual geography” (Batty 1997)—a sandbox for playing with ideas, hypotheses, plans, and what-if scenarios in richly immersive digital worlds that can also be reconciled to strongly referenced geographic frames. In this way, they support “imagineering” (Aluminum Company of America 1942; Marling et al. 1998), by blending the vicarious explorative appeal of virtual worlds and immersive gaming (Bainbridge 2007; Lin et al. 2009a; Shiode and Torrens 2008) with the flexible and strong empiricism provided by the cartographic and geomatic substrate of GIS (Batty et al. 2001). VGEs can also be accessed and shared via diverse paths of entry, and here again they function to support imagineering. For example, the environments can be traversed as data by machine-facing cyberinfrastructure (for spatial data access and sharing, as Web services, or by grid computing) (Xu et al. 2013; Zhang et al. 2007; Zhu et al. 2007); as user-focused virtual worlds that can be navigated vicariously by avatar-actors (Lin and Batty 2011); or as combined schemes that offer the functionality of both (Torrens 2015a, 2015b).

Initially, VGEs saw use as collaborative workspaces for exploratory spatial data analysis (ESDA) (Haining et al. 1998), particularly on three-dimensional data. Implementations by Chen et al. (2011, 2012) on lunar data and Xu et al. (2011, 2013) and Lin et al. (2009b) on results of air pollution models exemplify this approach. A parallel use of VGEs developed to enable vicarious exploration of three-dimensional data by avatars (really, of the natural, physical, urban, and eventually human environments constituted by those data) (Chen et al. 2013). Exemplars of this work include virtual tourism of campus environments (Lin et al. 2013) and what-if analysis of disease transmission among closed “virtual populations” (Gong et al. 2006). The ESDA functionality of VGEs has brought them into alignment with digital globes (Butler 2006; Goodchild et al. 2012), while VGE support for avatars connects them to work on virtual worlds (Bainbridge 2007; Crooks et al. 2009). A perhaps logical next step in the development of VGEs as a platform for virtual geography would be to interact with gaming engines (Eberly 2005), which can provide physical processes (often as “serious games” (Barnes et al. 2009; Zyda 2005)) and avatar–human interaction (Baillie-deByl 2004; Champandard 2003; Coco 1997; Laird 2002; Millington 2006) in entertainment media, albeit usually in illusory or simplified form (Scott 2002) and without the geomatic anchoring that GIS can provide (Batty et al. 2001). Building these synergies would, however, require that VGEs functionally align with human, physical, and natural process models, and this remains an outstanding issue, unresolved in part because of some quite hard constraints on building the requisite functionality via GIS.

Our contention, in this paper, is that this gap might be usefully closed by employing geographic automata as a process engine that can move nimbly and extensibly between immersive virtual environments, process models, and GIS. Underlying this argument is the idea that GIS, at least in their traditional form, may not be the best foundation for VGEs.

Methods

In what follows, we focus on the methodological design of the components of our framework that enable functional synergy between GIS and geometrically sound, three-dimensional environmental representations, as well as on how process models for rigid body physics and behavior-driven human agency can be built within the sandbox that they provide.

To demonstrate the usefulness of our approach, we will introduce a working example that includes a synthetic (but geographically accurate) immersive model of downtown Salt Lake City, UT, populated with 100,000 agent-actors that are endowed with the ability to acquire geographic information natively within that virtual environment (from both their physical and social surroundings) and to use it to plan and execute movement behaviors that scale from the corporeal to the street and on up to wide areas of the downtown setting. In parallel, we will introduce a scheme for mimicking the damage caused by an earthquake in the downtown, and the resulting physical response of the built environment and human population as they interact and co-develop in tandem.

We have developed a modular pipeline to accomplish this. A set of intertwined data models is used to store the diverse objects in the system and their attributes. A base GIS is used to manage the static positions of two-dimensional objects as feature classes (and the two-dimensional footprints of three-dimensional objects) in the VGE, and these positions are mirrored as geometry in a scene graph, where three-dimensional polygon meshes are mapped to the GIS base layer. (For the purposes of collision detection and resolution, another mirroring to a Bounding Volume Hierarchy (BVH) is maintained.) A traversal graph is used to coordinate path planning in the model, and this is also connected to the GIS and scene graph. Finally, rigs are used to represent human bodies as kinematic chains, which are resolved relative to a bounding plane (with expression in the GIS, scene graph, and traversal graph) and which are enveloped as polygon meshes and collision proxies.
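To make this mirroring concrete, the sketch below (in Python, offered purely as an illustration rather than the implementation used here) shows one way the linked structures could be laid out: a GIS feature for the two-dimensional footprint, a scene-graph mesh node, a bounding-volume proxy for collision checks, a handle into the traversal graph, and a kinematic rig for human bodies. All class and field names are hypothetical.

```python
# Illustrative sketch of the mirrored data model described above; the class
# and field names are assumptions, not the pipeline's actual structures.
from dataclasses import dataclass, field
from typing import List, Tuple, Optional

Point2D = Tuple[float, float]
Point3D = Tuple[float, float, float]

@dataclass
class GISFeature:
    """Two-dimensional footprint stored in the base GIS layer."""
    feature_id: int
    footprint: List[Point2D]                  # polygon vertices in map coordinates
    attributes: dict = field(default_factory=dict)

@dataclass
class MeshNode:
    """Three-dimensional polygon mesh mirrored into the scene graph."""
    vertices: List[Point3D]
    faces: List[Tuple[int, int, int]]         # triangle indices into vertices
    gis_feature: Optional[GISFeature] = None  # link back to the GIS layer

@dataclass
class AABB:
    """Axis-aligned box used as a Bounding Volume Hierarchy leaf / collision proxy."""
    lo: Point3D
    hi: Point3D

@dataclass
class RigJoint:
    """One joint in an agent's kinematic chain (rig)."""
    name: str
    position: Point3D
    children: List["RigJoint"] = field(default_factory=list)

@dataclass
class Agent:
    agent_id: int
    rig_root: RigJoint                        # articulated skeleton
    proxy: AABB                               # collision proxy used by the crowd model
    graph_node: Optional[int] = None          # current node in the traversal graph
```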

In the text that follows, we will describe a set of interlocked methods that animate the actions, reactions, transactions, and interactions of these components dynamically relative to applied scenarios. The key pieces include: (1) a tessellation scheme for mimicking earthquake damage by fracturing and imprinting; (2) a rigid body physics simulator that animates the resulting objects under simulated global and local forces, performs collision detection, and resolves collisions; and (3) a detailed agent-based model that represents individual and collective movement of agent-rigs by path-planning, way-finding, synthetic vision, steering, and kinematic locomotion.

Generating fractures in the built environment fabric

We assume that an earthquake has affected the synthetic (but geographically “correct” relative to its real-world analog) urban environment, passing geophysical energy through the foundations of buildings in a few-block area of downtown Salt Lake City, UT. This results in damage to the built fabric of those buildings, which we simulate by imposing tessellated patterns of fracturing on their representative solid polygonal mesh geometry. Fractured pieces, in turn, have the potential to mobilize, thereby causing those portions of the mesh to rub, fall, collide, and pound adjacent portions of the built setting that they come into contact with. As we will illustrate later in the paper, these then-mobilized components of the simulated scene also factor into agents’ decision making through their vision and movement.

The tessellation of the destruction mesh is based on Voronoï (1907)/Thiessen (1911) polygons as the point-pattern solution to the location-allocation problem of crack formation in rigid materials. This form of tessellation has been well known in geometry for over one hundred years (perhaps longer), but it is worth detailing the formulation again here to demonstrate how the geometry of the objects that form (and form in) the VGE allies with well-known data models and spatial analysis routines common across computational geometry (De Berg et al. 2000), GIS (Abdul-Rahman and Pilouk 2007; Okabe et al. 1992), computer graphics (Shirley 2005), and game engines (Eberly 2007).

We consider a finite set of locations for interruption of the material, among a set \(S\) of “generating points” (Okabe et al. 1992), \(p_{n(i,j)} \in S\), on the half-plane formed by voxelized and polygonal components of the built structure. Generating points may be pre-defined to produce particular or stochastic fracture patterns (Fig. 1), or they may be the sites of object–object collisions. We then consider all other points, \(\{x\}\), on the plane that are nearest neighbors of a given \(p_{n}\) as being encapsulated in a region, \(P_{n}\), within which the Euclidean distance to the \(p_{n}\) within \(P_{n}\) is shorter than the distance to generating points beyond it (Eq. 1, shown for \(p_{1}\)) (Boots 1986):

Fig. 1 Given an input set of generating points (left), the plane is tessellated into Voronoï polygons (right)

$$V_{1} \left( p_{1} \right) = \left\{ x \mid d\left( x, p_{1} \right) \le d\left( x, p_{j} \right); \quad j \ne 1;\; p_{j} \in S \right\}$$
(1)

Above, \(V\) denotes the Voronoï polygon for generating point \(p_{1(i,j)}\). Term \(S\) references the set of generating points in which \(p_{n(i,j)} \in S\) is a given generating point; \(x\) is a candidate location in the plane; and \(d\) is the Euclidean distance. \(V_{1}\) is a given Voronoï/Thiessen polygon that contains all \(x\) that are closer to \(p_{1}\) than to any other \(p_{n}\) on the given ground plane (in this case a face section of the building exterior). We consider the plane as being exhaustively covered (Okabe et al. 1992) by a set of \(V_{n}\) regions, which together constitute a Voronoï diagram, which we denote with \(U = \left\{ V_{1}, \ldots, V_{n} \right\}\). The boundaries between each \(V_{n}\) will form our fracture lines, and the \(V_{n}\) are convex (see Fig. 1 for a two-dimensional example and Fig. 2 for the corresponding imprinted three-dimensional geometry on the relevant mesh).

Fig. 2 Altering the parameters of the tessellation scheme can produce fracture patterns that mimic varying material reactions and stresses. The upper row shows 2D patterns, which are imprinted on 3D meshes (lower row) and subjected to physics simulation

Variations on this general idea can be extended to consider generating points in three-dimensional spaces, or three-dimensional objects formed by taking a given \(V_{n}\) and imprinting the surface of the built structure by projection from the face to a point (or several points) at some distance interior to the object (Fig. 2). The distance between candidate locations and generating points \(p_{n}\) can also be weighted (\(W_{d}\)) to form Voronoï polygons of different sizes, or Voronoï diagrams with varied distributions of Voronoï polygons. k-order Voronoï polygons can also be created by considering the k nearest candidate locations around generating points (Edelsbrunner 1986); indeed, the data access criteria for checking these relationships among many points have long been efficient in GIS, following Samet (1984), and clusters of such objects can be well maintained in, and fetched from, spatial databases (Fayyad et al. 1996). Moreover, an array of processes (point-pattern, stochastic, or other) can be considered for producing generating points (Boots 1986; Okabe et al. 1992).
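As a concrete illustration of the tessellation step, the short sketch below (using NumPy and SciPy, which are assumptions on our part rather than the tools used in our pipeline) scatters generating points across a building face and extracts the finite Voronoï edges as candidate fracture lines; clipping of unbounded cells to the face boundary, weighting, and k-order variants are omitted.

```python
# Minimal sketch: Voronoï tessellation of a building face as fracture lines.
# Face dimensions, point count, and the random seed are illustrative only.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(seed=42)
face_w, face_h = 20.0, 30.0                       # face extent in metres (assumed)
generators = rng.uniform([0.0, 0.0], [face_w, face_h], size=(25, 2))

vor = Voronoi(generators)

# Each finite ridge (a pair of Voronoï vertices) is a candidate fracture segment.
fracture_lines = [
    (vor.vertices[i], vor.vertices[j])
    for (i, j) in vor.ridge_vertices
    if i >= 0 and j >= 0                          # skip ridges that run to infinity
]
print(f"{len(fracture_lines)} finite fracture segments from {len(generators)} generators")
```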

In engineering and materials science, the physical processes that might generate fracture patterns for different structures, environments, and conditions have long been explored. Aspects of this knowledge are finding their way into computer graphics research (see Muguercia et al. (2014) for a recent review), where fracturing heuristics for different materials or outcomes are emerging (see Parker and O’Brien (2009) for a well-known soft-body implementation in gaming). Also, recent work in computer graphics has explored how fracture patterns from imagery can be mapped to meshes directly, such that they obtain “examples” from observed cracking and displacement phenomena, i.e., as detail maps (Bosch et al. 2011), texture maps (Hsieh and Tai 2006), or by matching statistical and/or geometrical properties of real cracks (Glondu et al. 2012). Our scheme for fracture, therefore, has some utility in allying the VGE to models in civil and structural engineering, materials science, construction, urban development, and architecture (Li et al. 2007).

Rigid body physics for fracture dynamics in the built fabric

To animate the effect of the fractures on the built environment fabric, we subjected the destroyed mesh geometry to a rigid body physics simulator (NVIDIA 2012). We considered two global forces: gravity and friction (Newton 1687). We also treated local forces of collision and impulse produced in response to collision. We used a scheme developed by Vadim (2009) to calculate local forces as impulses, which are then handed to the rigid body simulator for resolution system wide.

Details of how we coupled this scheme to massive numbers of interacting objects in GIS are available in Torrens (2014b). We assume that an earthquake event has initiated fracture in the built fabric, the geography of which we calculate by tessellation, as described. The gravitational and frictional forces will cause the decomposed mesh geometry to separate and move along the fracture patterns imposed on its geometry, depending on position, mass, and timing. These dynamics are initiated for very small subsets of time (<0.03 s) as global forces that (may) displace the objects held in the urban mesh geometry. We perform a preliminary check for likely collision on (simpler) bounding-box envelopes for the meshes. For likely collisions, we then evaluate fine-grain inter-penetration and contact between colliding object-pairs based on the point-set that the mesh geometry presents. As per Eqs. 2–6 below, this evaluation yields the amount of space between colliders (\(p_{1}\) and \(p_{2}\)); the separating velocity (\(\dot{p}_{\text{separating}}\)) along the path of traversal for the collision, considering the coefficient of restitution (R) and the contact normal (\(\widehat{{p_{1} - p_{2} }}\)); the linear component of the velocity change (\(\dot{p}_{d}\)); and the angular component of the velocity change (θ), which includes torque and inertia (\(I^{-1}\) in Eq. 6 denotes the inverse of a tensor containing moments of inertia about the XYZ axes and the products of inertia). These collisions are then resolved by examining the amount of penetration and the collision direction along the contact normal. Taking the coefficient of restitution for the colliders, the linear and angular velocities are then used to update the mesh points to a collision-free position. These are then passed to the physics engine, with impulses on the objects’ centers of mass to resolve in a subsequent small bundle of time; the positioning of those points is updated, and the mesh is repositioned ahead of an iterative evaluation. We can also designate thresholds for colliding force, above which objects will sub-fracture (in these cases, fracture tessellations can be pre-applied and lie in wait for activation on the objects that exceed this force).

$$\Delta p_{1} = \frac{{m_{2} }}{{m_{1} + m_{2} }}\left( {\Delta p_{1} + \Delta p_{2} } \right)\left( {\widehat{{p_{1} - p_{2} }}} \right)$$
(2)

(Millington (2007), pp. 106, 113–114; Torrens (2014b), p. 971)

$$\Delta p_{2} = \frac{{m_{1} }}{{m_{1} + m_{2} }}\left( {\Delta p_{1} +\Delta p_{2} } \right)\left( {\widehat{{p_{1} - p_{2} }}} \right)$$
(3)

(Millington (2007), pp. 106, 113–114; Torrens (2014b), p. 971)

$$\dot{p}_{\text{separating}} \left( {t + 1} \right) = - R\left( {\left( {\dot{p}_{1} \left( t \right) - \dot{p}_{2} (t)} \right)\left( {\widehat{{p_{1} (t) - p_{2} (t)}}} \right)} \right)$$
(4)

(Millington (2007), pp. 104–106; Torrens (2014b), p. 971)

$$\Delta \dot{p}_{d} = \left( {m_{1}^{ - 1} + m_{2}^{ - 1} } \right)$$
(5)

(Millington (2007), pp. 314–315; Torrens (2014b), p. 971)

$$\Delta \dot{\theta } = I^{ - 1} \left( {\left( {q - p} \right) \times \widehat{d}} \right)$$
(6)

(Millington (2007), pp. 314–315; Torrens (2014b), p. 971)
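To convey the flavor of the contact resolution summarized in Eqs. 2–6, the sketch below resolves a single, simplified contact between two point masses: the separating velocity along the contact normal is reflected by the coefficient of restitution, and the resulting impulse is shared between the bodies in inverse proportion to their masses. It omits the angular terms of Eq. 6 and is an illustration only, not the engine used in our pipeline.

```python
# Simplified impulse-based resolution of one contact between two point masses.
# Follows the general form of Eqs. 2-5 (linear terms only); values are illustrative.
import numpy as np

def resolve_contact(x1, x2, v1, v2, m1, m2, restitution=0.4):
    """Return post-collision velocities for two colliding point masses."""
    normal = (x1 - x2) / np.linalg.norm(x1 - x2)   # contact normal (cf. Eq. 4)
    v_sep = np.dot(v1 - v2, normal)                # separating velocity along the normal
    if v_sep > 0:                                  # bodies are already separating
        return v1, v2
    new_v_sep = -restitution * v_sep               # reflect with restitution (Eq. 4)
    delta = new_v_sep - v_sep                      # required change in separating velocity
    total_inv_mass = 1.0 / m1 + 1.0 / m2           # velocity change per unit impulse (Eq. 5)
    impulse = (delta / total_inv_mass) * normal
    return v1 + impulse / m1, v2 - impulse / m2    # mass-weighted split (cf. Eqs. 2-3)

# Example: a 2 kg fragment falling onto a 5 kg fragment at rest.
v1_new, v2_new = resolve_contact(
    x1=np.array([0.0, 1.0, 0.0]), x2=np.zeros(3),
    v1=np.array([0.0, -3.0, 0.0]), v2=np.zeros(3), m1=2.0, m2=5.0)
```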

Behavioral agency binding agents and environments

We make use of a behaviorally focused agent-based model to drive the human population in the model environment. Our key innovation in its design is in opening up agents’ behavior to dynamics from the shifting conditions of the built environment, and in doing so directly through agents’ acquisition of ambient geographic information. An earlier version of the model is described in Torrens (2012) and we will not repeat all of the details here; to summarize, the model drives agents’ movement through space and time (Morris et al. 2000) using a series of geographic automata systems (Torrens and Benenson 2005) that account for information acquisition and action/interaction at various scales of geography, from the corporeal up to the building, street, and downtown.

At the macro-level for the simulation (a section of an urban downtown), agents are supplied with agendas and draw origins and destinations in the model city from them. In Torrens (2015b), these have also been supplemented with time-of-day models that additionally take into account where agents are likely to be and whether they might avail of particular destinations in particular places and times. Movement at this scale takes place primarily by path planning (Latombe 1991), with agents acquiring information about how traversable the space between origins and destinations is for their specific needs, the distance required to move, and the meaning of that distance relative to their affordances for space and time.

The shortest paths that this portion of the model generates are then passed on to a way-finding module (Torrens 2014a) that partitions them into agent-relevant way-points and sub-paths for navigation and street-scale movement relative to the layout of sidewalks, barriers formed by the geographic information that they glean from buildings, landmarks, crossings at street curbs, and so on.

While way-finding, agents also engage in collision detection with the built environment and with the ambient crowd of pedestrians that they encounter. This is handled primarily through steering (Torrens 2012) and makes use of agents’ visually acquired or deduced information about the present and short-term vectors (Hammam et al. 2007), linear acceleration, and angular acceleration of their own movement and of the objects and people around them.
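A generic flavor of this kind of steering is sketched below (seek toward the current way-point plus separation from nearby pedestrians); the steering model actually used here (Torrens 2012) is considerably richer, and the function, parameters, and values shown are illustrative assumptions.

```python
# Minimal seek-plus-separation steering sketch; parameter values are assumed.
import numpy as np

def steer(position, velocity, waypoint, neighbours,
          max_speed=1.4, avoid_radius=1.0, dt=0.1):
    """Advance one agent by one time step; vector arguments are 2D NumPy arrays."""
    # Seek: desired velocity points toward the current way-point at walking speed.
    to_goal = waypoint - position
    desired = max_speed * to_goal / (np.linalg.norm(to_goal) + 1e-9)

    # Separation: push away from neighbours that intrude on the avoidance radius.
    push = np.zeros(2)
    for other in neighbours:
        offset = position - other
        dist = np.linalg.norm(offset)
        if 0.0 < dist < avoid_radius:
            push += (offset / dist) * (avoid_radius - dist)

    steering = (desired - velocity) + push        # combined steering adjustment
    new_velocity = velocity + steering * dt
    speed = np.linalg.norm(new_velocity)
    if speed > max_speed:
        new_velocity *= max_speed / speed         # clamp to a walking-speed ceiling
    return position + new_velocity * dt, new_velocity
```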

The steering routines are then delivered to a motion control geographic automata system that partitions them into small bundles of space and time that agents’ skeletons (the rigged kinematic chains referenced at the outset of this section) will be asked to move through. Thus, at footstep-scale, agents engage in collision detection with fixed and mobile objects (so that they avoid bumping into things in their immediate surroundings) and use inverse kinematics, forward kinematics (Badler et al. 1987), and motion blending to articulate their skeletons as locomotion to fulfill their movement. Agents’ step-size for locomotion is governed by motion capture data, which yield gait and body language information. In this way, then, our geographic automata system scales behavioral agency between the urban and the corporeal.

The steering and locomotion components of the model are well described in Torrens (2012) and the navigation and way-finding system is articulated in Torrens (2014a). Here, we focus on the path-planning routine. The path-planner provides a direct connection between agents and the environment and requires special handling through GIS, because it invokes agents’ polyspatiality to achieve the human–environment connection. This necessitates treatment of diverse spatial data types that reference the geographic information as ingredients in determining coupled dynamics between agents and the environment. In invoking the term polyspatial, we mean to describe the ability of agents to process a diverse array of spatial data (and data models) at different scales of space and time, and their ability to make sense of those data in a geographical context that appropriately situates their place- and time-specific appreciation of their surroundings and predicament (Torrens and McDaniel 2013; Torrens and Nara 2013).

For path planning, a graph of traversable space is imposed on the simulated city, and the user can specify what is traversable and at what granularity the graph should take shape. (In the examples that we will illustrate, we regard traversable as implying all portions of the urban space that are not covered by a building or fractured building piece.) Path planning proceeds as a search over the graph (see Table 1), the product of which is a shortest path that is then supplied to the geographic automata system for further refinement of movement through the space by way-finding, steering, and locomotion. As the environment changes, the traversability of the graph is updated and the path can, therefore, change. Similarly, each agent will have a point of entry and exit in the graph that is unique to their agenda, such that the path-planning scheme can deliver agent-specific paths. These can be further refined by considering different interpretations of “shortest” as well as by considering different neighborhood filters for the search procedure, and these can be specified per-agent or for classes of agents. In the following applications, we use a mixture of Dijkstra (1959) and best-first A* (Hart et al. 1968) heuristics with immediate-neighbor Moore and von Neumann search filters. The merits of one approach over another are discussed with relevance to agent-based movement in Torrens et al. (2012), where comparisons to real movement data are presented.
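For readers unfamiliar with the search itself, the sketch below shows a minimal A* over a grid of traversable cells with a selectable von Neumann (4-cell) or Moore (8-cell) neighborhood filter. In the model proper, the traversal graph is derived from GIS geometry rather than a regular grid, so the boolean grid, heuristic, and unit costs here are simplifying assumptions.

```python
# Minimal A* search with von Neumann or Moore neighbourhood filters.
# The boolean grid stands in for the traversal graph; unit move costs are assumed.
import heapq

def astar(traversable, start, goal, moore=False):
    rows, cols = len(traversable), len(traversable[0])
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]            # von Neumann neighbourhood
    if moore:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]     # add diagonals for Moore
    h = lambda c: max(abs(c[0] - goal[0]), abs(c[1] - goal[1]))  # admissible for unit costs
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        for dr, dc in steps:
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and traversable[r][c] and (r, c) not in seen:
                heapq.heappush(frontier, (cost + 1 + h((r, c)), cost + 1, (r, c), path + [(r, c)]))
    return None                                           # no traversable route exists

# Example: 0 marks cells blocked by rubble.
grid = [[1, 1, 1, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1]]
print(astar(grid, (0, 0), (2, 3), moore=True))
```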

A worked application scenario

To demonstrate the utility of our approach in connecting agents and environments with process models, we developed an applied simulation of an earthquake, building collapse, and evacuation scenario on real built environment data and using real motion capture data of human locomotion. We regard an earthquake scenario as a useful test case for the framework because the processes that drive the environment and human dynamics are largely incommensurate in their spatial scale, temporal scale, and governing rules. Urban earthquake scenarios also generally require different types of data (human and built, for example) to treat in computer simulation, and these often come with different data models and data types. Moreover, the phenomenological systems that form in built response and human response to urban earthquakes are quite different, but interact critically when the form of the built environment changes and people must negotiate it to move to safe locations. Finally, intertwining both in a virtual geographic environment requires direct treatment of the processes that drive them at varying characteristic scales of space and time.

We generated a synthetic built environment for downtown Salt Lake City, UT using parcel and building footprint data; building height data; and the United States Geological Survey National Elevation Dataset (http://ned.usgs.gov/) Digital Elevation Model (DEM) data. We produced a synthetic representation of the downtown, with 2419 buildings, represented by an 84.4 million-triangle mesh (Fig. 3). These data were georeferenced to a common GIS model (which can be used to layer-in land uses, activity sites from parcel data, street and road networks, and so on). We have also developed a scheme for building graphs of traversable space based on two-dimensional shape data, or from two-dimensional cross-sections of the mesh, by slicing (projecting the plane through the three-dimensional mesh and imposing point-polygons where the geometry intersects with that plane) (Fig. 4). These schemes were used to co-register a graph for path planning in the model (Fig. 5). The graph was supplied to agents when they were determining their origin–destination movement and when they were assessing the traversability of the urban environment.
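The slicing operation can be illustrated with a short sketch that intersects mesh triangles with a horizontal plane and collects the resulting two-dimensional segments, which would then be assembled into footprint polygons for the traversal graph. The vertex/face layout and the slicing height are assumptions made for illustration; this is not the production routine.

```python
# Minimal mesh-slicing sketch: intersect each triangle with the plane z = h and
# keep the 2D segments where the plane cuts it; degenerate cases are ignored.
import numpy as np

def slice_mesh(vertices, faces, h):
    """vertices: (N, 3) array; faces: iterable of vertex-index triples; h: slice height."""
    segments = []
    for tri in faces:
        pts = np.asarray(vertices)[list(tri)]        # 3 x 3 array of triangle corners
        crossings = []
        for a, b in ((0, 1), (1, 2), (2, 0)):        # the three triangle edges
            za, zb = pts[a, 2], pts[b, 2]
            if (za - h) * (zb - h) < 0:              # edge straddles the slicing plane
                t = (h - za) / (zb - za)
                p = pts[a] + t * (pts[b] - pts[a])   # interpolated crossing point
                crossings.append(p[:2])              # keep only x, y
        if len(crossings) == 2:
            segments.append((crossings[0], crossings[1]))
    return segments
```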

Fig. 3 The fracture patterns are imprinted on building meshes generated for Salt Lake City, UT, and subjected to physical simulation, with pounding and follow-on fracturing between interacting meshes. Panel a shows conditions 11.73 s into collapse; panel b after 6.07 s

Fig. 4 The outer hull of the three-dimensional built geometry is converted into two-dimensional polygons at the plane that intersects with the vision of agents. Red points indicate vertices created in the slicing procedure

Fig. 5 Our path-planning model can generate a graph of traversable and non-traversable space from the 2D geometry of the visual plane. a Path planning before the earthquake and b after

We populated the synthetic downtown with 100,000 behaviorally endowed, independently acting, geographically aware agents. Collectively, they represent the day-time population of workers in a ~90-acre area of the central business district. At the onset of the simulation, agents have origin locations at various buildings in the central downtown, and after the earthquake they egress to assembly points in the urban space (designated plazas and sidewalks). In some instances, the earthquake has damaged the space that falls between the origin and destination, and so agents must acquire geographic information to build new “mental maps” (Gould and White 1974) of the downtown as they move. Similarly, a substantially larger-than-usual crowd of people now fills the streetscape as buildings empty of their populations all at once, and agents must negotiate these crowds while determining their movement.

When the simulation runs, agents are tethered to the (shifting) built environment directly through their movement: they must avoid destroyed parts of the space to get to where they would like to go. This enters into agents’ decision making in a few ways and provides opportunities for them to exercise their spatial behavior (Torrens 2007). The earthquake alters the way-finding space that agents must consider, as traditional built features may no longer be present (because they have been destroyed) or may not be accessible (because an agent cannot get to them or cannot see them). To allow agents to adjust to these conditions, we take shortest paths and parse them by distance, so that agents use intermediate goals at fixed lengths along the paths as their way-points. This is a known form of way-finding, although Lappe and colleagues (Lappe et al. 2007, 2011) have shown that it is “leaky”: it is prone to errors as distances increase and as path complexity grows, and it requires either visual cues or referencing against visual flow (Lappe et al. 1999) for calibration during movement. We, therefore, allow agents to interact with the built environment on a step-by-step scale, using vision to detect collisions and to avoid them by both steering (at street scale) and inverse/forward kinematics (at very close range, “footstep scale”). This further allows agents to fine-tune their movement as they encounter debris and obstructions in the built environment between origin and destination, and to prioritize them relative to the interactions that they also have with agents in the ambient crowd around them as they move.
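A minimal sketch of that distance-based parsing is shown below: a planned path (a polyline of two-dimensional points) is resampled into intermediate goals at a fixed spacing along its length. The spacing value and function name are illustrative assumptions.

```python
# Resample a planned path into way-points at a fixed spacing along its length.
import numpy as np

def waypoints_along(path, spacing=5.0):
    """path: (N, 2) array of polyline vertices; spacing: way-point interval in metres."""
    path = np.asarray(path, dtype=float)
    seg_len = np.linalg.norm(np.diff(path, axis=0), axis=1)   # length of each segment
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])         # cumulative distance
    points = []
    for d in np.arange(0.0, cum[-1], spacing):
        i = min(np.searchsorted(cum, d, side="right") - 1, len(seg_len) - 1)
        t = (d - cum[i]) / seg_len[i] if seg_len[i] > 0 else 0.0
        points.append(path[i] + t * (path[i + 1] - path[i]))  # interpolate along segment i
    points.append(path[-1])                                   # always keep the destination
    return np.array(points)
```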

As illustrated in Fig. 4, our GIS scheme can convert the three-dimensional mesh geometry of the fractured built fabric into two-dimensional polygon layers at ground level and vision level, handling a large number of points. These can then be read directly into agents’ “mental maps” (Fig. 5) for path-planning purposes, and the path-planning heuristics can produce shortest paths under varying interpretations of “shortest” that reflect the newly updated traversability graph of the downtown. With these two-dimensional paths on hand, agents can then begin trips between origins and destinations, acquiring geographic information about the three-dimensional geometry around them and of interest to them in particular places and times as they move; they can steer to avoid collisions with the built fabric and the dynamic crowds around them; and they can articulate their skeletons along those paths without bumping into people or things. This allows agents to generate paths that circumnavigate the debris field that the damage produces, and it allows them to exercise movement that adapts to the new geography of the downtown after the earthquake and building damage (at all scales from the downtown to the step) (Figs. 6, 7). It is important to note that this can all be handled with rich realism in the built fabric, in the dynamics of collapse scenarios, and with massive crowds of 100,000 agents (Fig. 8).

Fig. 6 Planned shortest paths for eleven origin–destination pairs before/after the earthquake. a Shortest paths for a selection of agents before the earthquake damage. b Shortest paths for a selection of agents after the earthquake damage

Fig. 7 Egress paths through the damaged area (the height represents the speed at which the agent was traveling; the color represents the building from which their egress originated)

Fig. 8 The simulation, run as an immersive virtual geographic environment with 100,000 agents at a macro-, b meso-, and c local view

Validation and verification of the movement produced by the model are perennial issues in simulation. For the scenario that we have just described, this is a somewhat difficult proposition, because we generally have little data on earthquake egress movement for crowds with which to generate “ground truth”. Moreover, this is a what-if scenario built on realistic data, but for an imagined event. In that sense, it is “calibrated” to real data (of cities, buildings, streets, and motion capture of real humans), but we do not have a direct scheme for assessing whether the movement that it produces is “valid” relative to a real earthquake in this place, because that event has never occurred. Elsewhere, however, we have spent considerable time investigating the validity of the movement processes and movement patterns produced by the agent model. For different scales of movement, and across scales, we have shown that it produces reasonable movement across the trip-to-locomotion spectrum when compared to real data (some of which were collected for Salt Lake City, UT). Details of those tests are available in Torrens et al. (2012).

Conclusions

As Virtual Geographic Environments have developed in sophistication, they have emerged as a candidate platform for uniting process models, GIS, and virtual representations of the world. Achieving connectivity between these components can be challenging, particularly for the task of intertwining agents and environments, in several respects. First, figuring out how humans act in, react to, transact with, and interact in environments can be burdensome, particularly for critical situations such as the earthquake scenarios that we have illustrated in this paper, and here one often enters a conundrum in which models are required to generate what-if scenarios about which little (if any) real observation can be made. As a result, one must often consider how to build those models in something of an evidentiary vacuum, and extensibility to experiment with different drivers and parameters is desirable so that questions can be posed flexibly in simulation. Second, cobbling together the data to represent both humans and environments in an integrated system requires data models from both computer-aided design (CAD) and Geographic Information Systems, which are often incommensurate in the way that they frame their constituent components. As the volume and variety of data that are available for geographic inquiry grow (Crooks et al. 2013; Crooks 2015; Graham and Shelton 2013), the demands on systems to integrate them across sources and uses may grow even further (Torrens 2010). Third, the processes that operate to shape the built environment during earthquake scenarios are quite distinct from those that shape human response. When the two intertwine, it is often for fleeting moments, and the two systems “touch” at multiple points in space and time, across multiple considerations of “space” and “time”. For example, agents may plan a path to avoid a wide-area debris field, or they may make a small adjustment to route around a piece of rubble; in other cases they will steer to avoid that rubble when it presents in their vision; or they may twist to avoid bumping into the rubble because they came into contact with it while trying not to bump into an adjacent person in a crowd. Fourth, there are varying complexities and non-linearities in the connections that form between physical processes in earthquakes and human processes in the population’s responses, and interactions between the two can generate knock-on phenomena: when a piece of a building lands on the ground and a person veers to avoid it, they may come into contact with other people, who steer to avoid them, establishing a chain reaction that transforms the physical movement of a small piece of rubble into a broader crowd response of congestion and jamming. This requires a poly-scale and polyspatial approach that can flit with ease between representative units of space and time.

In this paper, we have addressed these challenges by building a methodology that can treat each in simulation, while also presenting a unified pipeline for exploration of human–environment phenomena in a VGE. By weaving three-dimensional mesh modeling and GIS into the base layer of the VGE, we were able to accommodate a much richer representation of the world in simulation. Moreover, by adopting geographic automata as the base vehicle for dynamic simulation, we were able to animate a diverse set of physical and human processes in the VGE without having to dilute the fidelity of either.

While GIS provide significant glue to bind all of the data in the models that we presented, our experience in building a platform to connect agents and environments was, to a large degree, one of producing schemes to “tease” GIS into delivering functionality for which they are traditionally ill-suited. Indeed, traditional GIS may not be the right platform for supporting model processes in this way, and bending them to “fit” dynamic and often incommensurate process dynamics may be a stretch for cartography-based GIS in particular. VGEs, which provide a blend of the geomatic empiricism of GIS with the immersive experience of virtual reality, may become a more popular medium for mimicking the richness of the real world, rather than abstracting from it.