The Achilles’ heel of societal models, nearly universally, is their inability to convey their computational results to the human user. Data sets can be enormous and diverse, the uncertainties large and subtle, the dependencies complex and convoluted, and the products of the estimates often obscure and insubstantial and, therefore, difficult to convey and contrast. This chapter is about making the “invisible” visible.

Recognizing the diverse roles played by visualization, we divide them here into two categories. First is interactive data or information visualization, which is defined as the graphical expression of large amounts of data to enable efficient human perception and comprehension (Card et al. 1999). Second is visual analytics, which is the joining of interactive visualization with analytical reasoning and computational methods in order to derive and convey insight from massive, dynamic, ambiguous, and often conflicting data (Thomas and Cook 2005). This chapter pays equal attention to the two categories.

The chapter begins by introducing mental models and explaining why it is so important that they be made visible. Next, it turns to the challenges faced in understanding complex behavior, and indicates how effective visualization can help. The chapter then delves into the challenges associated with groups of diverse users, diverse model types, and disparate model authoring schemes, and again indicates the important role that visualization can play. Having uncovered the issues, the chapter presents a case study involving a large suite of PMESII models and discusses how (some of) the aforementioned challenges were successfully attacked. The chapter concludes with a set of tips for aspiring visualizers.

1 Models and Visualizations

1.1 The Importance of Mental Models

Analysts using PMESII models to gain insight into a complex geopolitical situation bring prior and tacit expert knowledge to the task. Through discussion with other experts, and through research, this “conceptual model” of the situation evolves. Visualizing these conceptual models aids the experts in sharing, testing, and evolving their concepts. In addition, the computer–human interface for computational PMESII models should be expressed in terms similar to these conceptual models, or linked to them, in order to maximize the usability of PMESII tools in supporting and extending human expert knowledge. Effective communication of concepts requires that the conveyance align well with the receiving analyst’s mental model.

The problem of relating computational outputs to mental models is a relatively new one. Computational methods and models in the realms of engineering, physics, and finance have seen few challenges in this area. Although they are applied to everything from weather forecasting to portfolio performance estimation, it has nearly always been possible to depict their outputs in terms of tangible, measurable observables, things that relate well to our mental models. The output of physical or financial models, for instance, is visualized using a simulated enactment of physical behavior or tabulated spreadsheet charts. These forms are consistent with real-world experiences of the phenomena and are, therefore, a natural representation for communication and comprehension.

The situation with social and political models is different. The subjects of primary modeling interest here are attributes such as satisfaction, attitudes, beliefs, goals, and cultural identity, things that cannot always be directly observed. Thus, there is no obvious or natural way of depicting them. Furthermore, the field is relatively young; therefore, there are few standards.

A case in point can be found in game-theoretical models. These have been used in economics, psychology, and sociology to model and understand attributes such as trust and reputation (Mui et al. 2001) and trust development over time (Axelrod 1987). However, few attempts have been made to visualize these interactions beyond the mathematical models themselves (Fig. 1). There are no standards, no precedents, and no common procedures for visualization of such models.

Fig. 1 Bayesian model visualization means little to the nonspecialist

Another example is visual anthropology, in which emphasis is placed on creating a visual record of cultural interactions and leaving it to the analyst to interpret them (Collier and Collier 1986). No attempt is made at formulating an underlying visual expression. McCormick summarizes the state of affairs quite well with his observation that to visualize sociopolitical systems and their relationships to other social science domains such as economics and security requires “a method for seeing the unseen” (McCormick et al. 1987).

In the relatively short history of applying computational approaches to societal modeling, little emphasis has been placed on creating or adapting visual representations for such models. In fact, a survey of modeling and simulation applications reveals sets of time series graphs, often devoid of context, narrative, and summary, which do not tend to map well to the users’ conceptual constructs or mental models (Card et al. 1999; Sears and Jacko 2007).

To compensate for this gap, during model initiation and results analysis, human technical interpreters are often introduced between the model and the subject matter expert (SME). These interpreters view model results, translate them into summaries, and sometimes even draw preliminary conclusions. This is undesirable. The distancing of SMEs and decision-makers from the computational results not only impacts timelines and workloads, but also introduces the distinct possibility that the quality and accuracy of conclusions will be degraded. This diminishes the power of the models to explore issues and extend knowledge for the SME.

To eliminate this gap, tool builders need to provide a means for easily interacting with a modeled situation and communicating information about it directly to the decision-makers. The expert decision-makers need to be able to use the model tools to express the pertinent factors of the situation as they see them and then see the model results in the same terms. The modeling environment should be able to present the insights provided by the models in forms consistent with domain SMEs’ mental models.

Despite repeated appeals for support in this area, progress thus far has been relatively limited. Orford and coworkers (1999), for instance, provide a review of approaches and techniques across a variety of social science disciplines and find that adaptation of visualization techniques is for the most part limited to fields with strong ties to the physical sciences, such as geography.

However, progress is being made in the area known as conceptual social modeling. Conceptual models, similar to computational models, visually express aspects of a situation, but without the quantitative formulas and values required to simulate behavior under different conditions. As a result, conceptual modeling tools and techniques provide useful reference examples for visualizing a situation, albeit without going so far as to provide the ability to visualize change in that situation over time due to either the natural course of events or the effects of an intervention.

One such class of conceptual modeling tool is link analysis. Applications such as Analyst’s Notebook™ (i2 Inc., Fig. 2) and VisuaLinks™ (Visual Analytics Inc.) provide capabilities for building and maintaining diagrammatic visual representations that help acquire a rapid snapshot of key actors, interactions, and communications. These applications see heavy usage in the law enforcement, intelligence, and military communities, where they are employed to build visual maps of connections (e.g., transactions, phone calls, “is-related-to” relations, etc.) between various organizations, people, and concepts.

Fig. 2 Analyst’s Notebook provides conceptual modeling using link analysis

Link analysis tools tend to provide a meaningful snapshot of current reality, perceived or otherwise, but they do not generally provide capabilities for considering alternative or dynamic realities. An example of a tool that attempts to support the analysis of alternative social and political realities is nSpace (Fig. 3). nSpace includes integrated tools for information gathering and analysis (Jonker et al. 2005; Wright et al. 2006) and the nSpace Sandbox® component that allows analysts to conceptualize and evaluate alternative hypotheses. Links to original source evidence in the analysis provide a means to verify or re-evaluate evidence and assumptions.

Fig. 3 nSpace helps rapid exploration of data

A component missing from many conceptual modeling implementations is that of time. Few tools actually support analysis of how a situation evolves over time. Animation, while a seemingly logical solution, provides a poor means of visualizing change over time. Underlying this shortcoming are the limitations of human visual memory: human visual perception is strong, but human visual memory is weak.

Human perception of change over time tends to be improved when change is displayed simultaneously instead of sequentially (Wickens and Hollands 2000; Parasuraman and Mouloua 1987). GeoTime Configurable Spaces™ exploits this phenomenon (Kapler et al. 2008). It extends established two-dimensional (2D) X, Y forms of expressing conceptual models by adding time as a third visual dimension along the Z-axis (Fig. 4). This makes it possible to visualize a social network of entities on a 2D plane, with communication and transaction events between those entities represented in time above that plane in 3D space (Fig. 5).
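As a simple illustration of this X, Y, T idea (a minimal sketch for the reader, not GeoTime itself), the following Python example places a few hypothetical entities on a ground plane and lifts their timestamped interactions along the time axis using matplotlib:

```python
# Minimal sketch of an X, Y, T event space (illustration only, not GeoTime):
# entities sit on the X-Y ground plane; interaction events between them are
# drawn at their time value along the Z (time) axis.
import matplotlib.pyplot as plt

# Hypothetical entities with fixed planar positions.
entities = {"A": (1.0, 1.0), "B": (3.0, 2.0), "C": (2.0, 4.0)}

# Hypothetical interaction events: (source, target, time).
events = [("A", "B", 1.0), ("B", "C", 2.5), ("A", "C", 4.0)]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

# Draw the social network entities on the ground plane (t = 0).
for name, (x, y) in entities.items():
    ax.scatter(x, y, 0, color="black")
    ax.text(x, y, 0, name)

# Draw each event as a line lifted to its time above the plane.
for src, dst, t in events:
    (x1, y1), (x2, y2) = entities[src], entities[dst]
    ax.plot([x1, x2], [y1, y2], [t, t], color="tab:blue")

ax.set_xlabel("X (geography)")
ax.set_ylabel("Y (geography)")
ax.set_zlabel("T (time)")
plt.show()
```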

Fig. 4 GeoTime represents events within an X, Y, T coordinate space in which the X and Y axes represent geographic space, and the Z-axis represents temporal space

Fig. 5 Temporal view of activity in a social network in GeoTime. Note the summary network image on the ground plane, with constituent activity as events above in the time Z-axis

Another good example is the commercial game known as SimCity™ (Maxis, Electronic Arts). The player’s objective in SimCity is to manage a city. Exogenous events occur throughout the game, some at random and some scheduled specifically to present challenges, and as the state of the city evolves, the player adjusts parameters in order to improve or stabilize the city’s health. Health is gauged in terms of numerical metrics and is visualized in terms of graphics and tables. A neighborhood displayed in bright red, for example, indicates to the game player a current undesirable (as defined by the game) state in that region that may require further investigation. Events are presented to the user in a scrolling bulletin fashion as they occur and accumulate in list form for detailed examination. SimCity offers a diverse model of geopolitical growth over time and allows the user to explore many types of scenarios (Fig. 6). Since its purpose is to simulate experience in real time rather than to analyze it, the game neither attempts to address visualization of the causes and structural systems underlying behavior, nor does it generally include the dimension of time in its displays.

Fig. 6 SimCity 3000 allows fine control over a geo-political simulation, visually characterizing civic health on a map at a single point in time

The bottom line is that while some relevant precedents can be found in conceptual modeling and in the realm of computer games, little has been done in the world of PMESII modeling to express models and simulation results visually in ways that are compatible with an expert’s mental model. In our experience, when we ask experts to describe visually their understanding of a complex societal situation, the result is likely to be a diagram of shapes, arrows, and words, and almost certainly not a series of charts and graphs. While the visual language used in conceptual models and in simulation games may serve as a useful precedent in addressing this problem, the gap that must be bridged is the integration of time and causality into these vocabularies.

1.2 Explaining Complex Model Behavior

Another shortcoming of current visual vocabularies is the lack of effective techniques for expressing model behavior. When analysts are unable to assure themselves that they understand the causal relationships and interactions associated with simulated events, they are unable to assign confidence to their observations and projections. A lack of insight into causes can also handicap analysts’ ability to respond to projections with effective mitigating strategies. In turn, analysts are limited in their ability to support decision-makers.

In the case of societal behaviors, calibration is rarely possible. The factors involved in a situation are not only immensely complex but also ever-changing and often immeasurable. Even when measurement is physically or conceptually possible, it is often not viable due to political or cost considerations. As a result, societal models are not likely to become precisely tuned instruments of exact forecasting. They will remain, rather, repositories into which human experts may collectively “encode” their understanding of the society’s behavioral structure and dynamics, albeit on a scale that is not limited by one person’s mental capacity.

Accepting that PMESII simulation results will likely never be worthy of blind trust, the question remains: how does one go about assessing confidence in the results? How can a system help to build (or temper) confidence in observations that have not been anticipated through unaided human reasoning? One approach is to provide capabilities for analysts to explore a model until they are able to ascertain the cause of nonintuitive results. Once the causes of a computed result are understood, the analysts may decide whether it is the model that is representing a situation incorrectly or too simplistically, or whether it is the analysts who are reading a situation incorrectly or too simplistically. Either way, a basis is provided on which to judge the result and respond accordingly. In the former situation, the analysts may make note of future areas for improvement of the model and in the short term take steps to work around the shortcoming, while in the latter case, they may benefit from an improved situational understanding and adjust their strategies to make them more effective.

Insights into model behavior help to explain unexpected results and frame conclusions. Today, however, there are few robust methods for visualizing the causes behind model behavior. One of the more common strategies used is to display cause and effect chains using nodes and links. This technique is seen almost exclusively with causal models such as Bayesian networks. In other modeling paradigms, the causes of an effect are often more complex, with many factors contributing in various ways and to various degrees over time. In these cases, often no explanation is offered for behavior other than the one given by an expert intimately familiar with the model.

One technical development in societal modeling that holds promise for more intuitive visualization of the model structure and behavior is that of agent-based simulations. Visualization of the structure and behavior of agent-based simulations may intrinsically be more natural, due to their structural similarities to traditional social networks and the potential for reduction of behavior to the “decisions” of individual agents.

1.3 Accommodating Diverse Users

Analysis of intervention effects brings together people with knowledge of the diplomatic, military, aid, and nongovernmental organization (NGO) agencies involved, as well as those who have expertise in the humanities, economics, military, national policy, social sciences, and technical aspects of modeling. These stakeholders and analysts come with their own expectations, work styles, and doctrines. Consequently, designing the human–computer interface for a tool to be employed by such a broad user community can be a complex and inherently conflicted endeavor.

A key challenge for a user interface designed for this environment is to accommodate different methodologies and preferences within a collaborative process. Moreover, the process and the tools provided must meet the needs of professional social scientists and modelers as well as experienced and pragmatic leaders. These challenges have been explored in the field of Computer Supported Cooperative Work (Neale et al. 2004), although that field focuses on methods to overcome the separation of group work in time and space rather than differences due to diversity of background, objectives, language, terminology, and experience.

Computer-supported collaboration can be distributed or co-located, synchronous or asynchronous, each combination with its own challenges. Asynchronous, distributed collaboration, for example, will have difficulties in communicating workflow and intent, while synchronous, co-located collaborators will encounter difficulties with shared screen space and interaction methods. Metrics have been proposed for evaluating these systems that combine coordination, communication, work coupling, and contextual factors into an activity awareness model (Fig. 7).

Fig. 7 Activity awareness model (Neale et al. 2004). Activity awareness is a key goal in collaborative environments

Although research from this field has been slow to reach mainstream analysis applications, SharePoint™ (a Microsoft product), InfoWorkSpace (an Ezenia product), wikis, and social-networking sites are excellent steps in the right direction. Consider, for instance, the InfoWorkSpace (IWS) tool, used extensively by the military. IWS provides shared white boards, bulletin boards, chat, and shared views, all based on a common physical office metaphor. Although this approach has become a de facto standard, Swanson et al. (2004) demonstrated that there were a number of areas in which IWS fell seriously short of the effectiveness of face-to-face collaboration.

Another example of a system for collaboration is the U.S. Army’s Command Post of the Future (CPOF). This system provides distributed situation awareness and planning capabilities. CPOF provides an environment in which different disciplines and perspectives can work productively together. An essential reason for this success is CPOF’s ability to visualize people’s work in progress and to make this intermediate product easily available to others. CPOF work products include real-time situation monitoring on maps, analyses on maps, plans, and analysis using interactive charts and tables.

However, CPOF relies on a similarity of purpose and background in its community of users. Standard symbology and graphics, pervasive in the military, are an example of the common visual expressions used to support the exchange of information. Integrated PMESII models, in contrast, must support professionals in political, social, economic, health, justice, and law enforcement domains as well as military agencies: a truly diverse community. For this reason, PMESII visualization must be able to support different vocabularies, disciplines, and methodologies.

1.4 The Challenge of Heterogeneous Models

In modeling the effects of interventions, the political, social, economic, and security aspects of a situation are intertwined. Therefore, diverse models from different disciplines must operate in an integrated fashion. However, modeling paradigms vary widely from domain to domain. Whereas an agent-based model may be the natural choice for modeling key actors in a situation, a system dynamics model may be the best choice for an economic model. And the diversity does not end there. Even within the same problem domain, there may be multiple instances of each type of model, applied to different subregions, subgroups, and time delineations. This diversity vastly complicates user interface design.

To discuss the resulting user interface challenges, it is necessary to first describe what model integration entails. If models of different types are to be integrated generically, such that one can affect another during simulation without possessing knowledge of the other’s technical implementation, then a common technical language for expressing behavior must be established.

There are two common approaches to this problem. The first is to express and link behavior in the form of discrete and intermittent causal events. The second is to express and link behavior in the form of continuous scalar values over time. In the former option, noncausal models must be adapted to produce logical events at scalar thresholds, while in the latter, causal models must be adapted to produce scalar changes based on logical events. Note that an implication of either adaptation is that time between cause and effect must be resolved in some way for models which may not otherwise account for time.

Since scalar values offer a greater level of precision by representing any degree of change at any point in time, it is often advisable to integrate models at this level so as not to handicap model classes that are able to work together at this level of granularity. Another potential advantage of this approach is that in a generic system of models, less semantic interpretation is often required for one model to “understand” the nature of a behavior produced by another model if the behavior is quantitative in nature. In addition, adopting the scalar value approach does not preclude translating or aggregating scalar values into nonquantitative yet significant logical events for user consumption of simulated effects.
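To make the two integration styles concrete, the following sketch (hypothetical functions and thresholds, not the interface of any particular system) shows how a continuous scalar series might be adapted into discrete threshold-crossing events, and how events might be rendered back into a step-shaped scalar series:

```python
# Hypothetical adapters between the two integration styles:
# continuous scalar values over time <-> discrete causal events.

def scalars_to_events(series, threshold):
    """Emit an event each time the scalar series crosses the threshold.
    series: list of (time, value) pairs, sorted by time."""
    events = []
    prev_above = None
    for t, v in series:
        above = v >= threshold
        if prev_above is not None and above != prev_above:
            events.append({"time": t,
                           "type": "rose_above" if above else "fell_below"})
        prev_above = above
    return events

def events_to_scalars(events, low=0.0, high=1.0, start=0.0, end=10.0, step=1.0):
    """Render on/off events as a step-shaped scalar series over [start, end]."""
    series, level, t = [], low, start
    pending = sorted(events, key=lambda e: e["time"])
    i = 0
    while t <= end:
        while i < len(pending) and pending[i]["time"] <= t:
            level = high if pending[i]["type"] == "rose_above" else low
            i += 1
        series.append((t, level))
        t += step
    return series

unrest = [(0, 0.2), (1, 0.4), (2, 0.7), (3, 0.6), (4, 0.3)]
crossings = scalars_to_events(unrest, threshold=0.5)
print(crossings)                          # one "rose_above", one "fell_below"
print(events_to_scalars(crossings, end=4))  # step-shaped reconstruction
```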

The homogenizing effect of a solution that dynamically integrates heterogeneous models into a single abstracted supermodel presents a significant challenge. Because the structure and properties of models vary so greatly, much texture and detail can be lost in the abstraction process.

Simply putting the onus on models to supply their own visualizations for display in the user interface is problematic for both users and model developers. The lack of consistency and visibility across models, as well as the reliance on modeling technology experts to supply visualization techniques and technology (an area of modeling that has traditionally been underdeveloped) often leads to poor results. Effective model integration at the end-user level requires a unified user interface.

Designing a unified user interface is inherently difficult. If the system is to be truly open to all models to be plugged in with minimal effort, and the variety of models is limitless, then there is typically little design that can be accomplished in advance for each model’s integration. In many cases, given not much more than a large, abstract, unified set of labeled numbers that may change completely from situation to situation, the user must be able to construct meaningful visualizations on the fly: to reassemble, as it were, the mental model and supporting narrative that went into the development of these models.

Numerous tool kits offer capabilities to construct a “mash-up” of charts and graphs for dashboard-style analytics, including Cognos Visualizer™ and, more recently, the Google™ Visualization API. However, these are designed for technical staff to use in creating data-driven report templates for end users, not for end users to construct supermodel inquiries and monitor model execution results. Thus, there is a large capability gap in extending these techniques to nontechnical users. The ideal tool for constructing model visualizations on the fly would offer the flexibility and ease of narrative and graphical expression provided by a story-based report-building tool (Eccles et al. 2007).

While postintegration visualization assembly can go a long way toward recapturing the theory and narrative aspects of a model for the benefit of other users (the “big picture”), visibility into other, more detailed aspects is not as easily regained in this way. If a user cannot audit the evidence behind a conclusion made during model development, it becomes difficult to assess a level of confidence in that model.

For this reason, a system capability whereby model assertions can be tagged in a consistent way with evidentiary document references, comments, dates, and sources is useful for making this information generally available to the end user for the purposes of validation. Unfortunately, many of today’s model editors do not provide capabilities for easily capturing this information such that it could be provided to a system. While not always practical or possible, the most efficient and thorough means of capturing this information for display would be to provide capabilities for authoring models in intuitive and visual ways from within the integrated system itself.
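As one possible shape for such evidence tagging (hypothetical field names sketched for illustration, not taken from any specific model editor), a tagged assertion might look like the following:

```python
# Hypothetical record for tagging a model assertion with its evidence,
# so that end users can audit the basis for a modeled relationship.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvidenceTag:
    document_ref: str      # citation or URL of the source document
    source: str            # originating agency, author, or dataset
    date: str              # when the evidence was collected or published
    comment: str = ""      # analyst's note on relevance or reliability

@dataclass
class ModelAssertion:
    subject: str           # e.g., "Faction A"
    relation: str          # e.g., "funds"
    obj: str               # e.g., "Militia B"
    evidence: List[EvidenceTag] = field(default_factory=list)

assertion = ModelAssertion(
    subject="Faction A", relation="funds", obj="Militia B",
    evidence=[EvidenceTag(document_ref="report-1234.pdf",
                          source="field interview", date="2009-03-11",
                          comment="single source; low confidence")])
print(assertion.relation, "backed by", len(assertion.evidence), "evidence item(s)")
```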

1.5 Model Authoring

Development of societal models involves knowledge of both the domain of interest and the appropriate modeling paradigms and systems. Currently, the latter requires trained technical expertise in specialized and relatively complex simulation tools. The parameters that govern a computational model and the tools used to configure them are often incomprehensible to the political or social science expert. As a result, domain experts must rely on technical staff for this expertise and become separated from the construction of the models they conceptualized. This is problematic. By divorcing experts from their models and their models’ products, we introduce the possibility of invalid translation from theory to code and of misinterpretation of model results.

Accordingly, a key challenge for user interface designers – one that is not even close to being met – is to develop methods that enable the SMEs to own and direct the model-authoring process. In particular, the SME must be able to author and interactively manipulate models in an intuitive, visual manner that closely aligns with his or her mental model.

In addition to these expert-driven “top-down” approaches, “bottom-up” machine learning approaches attempt to infer models automatically from raw data. Bayesian and neural networks can represent many complex systems, and dynamic versions of these algorithms can model processes over time (Antunes and Oliveira 2001). The difficulty of these approaches is often the opposite of that of expert-crafted models; the resulting models and predictions may not be transparent to the users of these models.

2 Technical Approaches: A Case Study

To illustrate some of the challenges and approaches involved in designing user interfaces for international intervention analysis, we now consider a specific case. The following study focuses on the user interface built for the COMPOEX system (Kott and Corpac 2007; Waltz 2008). Developed around the aforementioned nSpace framework, this visualization system was designed by the authors of this chapter.

2.1 Expressing Mental Models

A key objective of COMPOEX was to develop approaches for communicating the details of complex simulations to SMEs and decision-makers. It did not fully succeed in this endeavor, but it did make significant progress. One of these successes entailed the conceptualization and design of a set of constructs known as forms and panels. Forms are graphical building blocks such as event timelines, graph frameworks, node and link diagrams, geospatial maps, and flow diagrams. Panels are groups of forms populated with data and arranged so as to explain or summarize a situation. Panels are created via a drag-drop process and can be developed by SMEs without the support of software personnel or technicians. The process entails dragging elements and variables of interest into forms and arranging the forms into panels to fit problem-specific information needs. Using forms and panels, a user is able to rapidly build a live visual window into the modeled world and tailor it to the topics of interest. When the user then saves the assembled panel, the detailed form of presentation is maintained, but the data displayed in the panel remain live and updated as the model produces new data (Fig. 8).
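A greatly simplified sketch of the forms-and-panels idea (hypothetical classes, not the COMPOEX implementation) is shown below: a panel groups forms, each form references named model variables, and rendering always reads the latest values so the display stays live:

```python
# Simplified sketch of live forms and panels: forms reference named model
# variables, and rendering reads the current values at display time.

class ModelStore:
    """Stands in for the simulation's latest output values."""
    def __init__(self):
        self.values = {}
    def update(self, name, value):
        self.values[name] = value
    def get(self, name):
        return self.values.get(name, "n/a")

class Form:
    def __init__(self, title, variable_names):
        self.title = title
        self.variable_names = variable_names
    def render(self, store):
        rows = [f"  {name}: {store.get(name)}" for name in self.variable_names]
        return "\n".join([f"[{self.title}]"] + rows)

class Panel:
    def __init__(self, name):
        self.name = name
        self.forms = []
    def add_form(self, form):          # the "drag-drop" step
        self.forms.append(form)
    def render(self, store):
        return "\n".join([f"== {self.name} =="] +
                         [f.render(store) for f in self.forms])

store = ModelStore()
panel = Panel("Regional stability")
panel.add_form(Form("Economy", ["unemployment", "inflation"]))
panel.add_form(Form("Security", ["incident_rate"]))

store.update("unemployment", 0.18)
store.update("incident_rate", 42)
print(panel.render(store))   # the panel stays live: re-render after each update
```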

Fig. 8 Forms provide a variety of frameworks for expression

To facilitate shared understanding when collaboratively conceptualizing or visualizing a problem, we developed a common visual language of expression. Modeled entities such as people, places, and organizations can be rendered using a common symbology. To help users from a variety of backgrounds, we designed intuitive icons crafted specifically for rapid identification of key archetypes such as political, military, criminal, media, and social groups, among others (Fig. 9).

Fig. 9 Example elements of a common language of expression

The common visual language developed for this system also includes time series state graphs and entity attribute thumbnails (Fig. 10). Time series graphs may be dragged into any of the visual frameworks in order to display the state of key named indicators. Entity attribute thumbnails permit a user to visualize common entity properties, such as sociopolitical power, across an entire panel, simply by dragging the property of interest into the panel’s graphic legend. In this display option, small thumbnail charts appear to the left of each entity in either time series or scaled pie chart form, depending on the number of properties being visualized.

Fig. 10 Entity attribute thumbnails characterize power over time

To accommodate the need to communicate information at higher levels of summation, and to navigate the information efficiently, varying levels of detail and aggregation are applied throughout the system. Outcomes of simulations are available for display at a summary level, and users are able to view lower levels of details for further information. For instance, a computational method of detecting significant effects from scalar data in a simulation result permits users to plot a summary of all effects as discrete event nodes on a timeline together with descriptive labels (Fig. 11). Hovering over an effect in the timeline provides further details in the form of a tooltip, and double-clicking the effect permits users to display a time series graph depicting the detailed behavior in relation to previous behaviors.
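The effect-detection step can be illustrated with a simple sketch (an assumed heuristic, not the actual COMPOEX algorithm) that scans a scalar result series and emits labeled effect events wherever the step-to-step change is large relative to the series’ average level:

```python
# Hypothetical effect detector: scan a scalar result series and emit a
# labeled "effect" wherever the change from the previous value exceeds a
# fraction of the series' typical level.

def detect_effects(name, series, rel_threshold=0.25):
    """series: list of (time, value); returns timeline-style event dicts."""
    baseline = sum(v for _, v in series) / len(series)
    effects = []
    for (t0, v0), (t1, v1) in zip(series, series[1:]):
        change = v1 - v0
        if baseline and abs(change) / abs(baseline) >= rel_threshold:
            direction = "rise" if change > 0 else "drop"
            effects.append({"time": t1,
                            "label": f"{name}: sharp {direction}",
                            "magnitude": change})
    return effects

power_supply = [(0, 100), (1, 98), (2, 60), (3, 62), (4, 95)]
for effect in detect_effects("Power supply", power_supply):
    print(effect)   # e.g., a sharp drop at t=2 and a sharp rise at t=4
```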

Fig. 11 The effect timeline summarizes actions and resultant effects over time

Effect summaries are also available in the aforementioned forms for users who prefer a geographic or conceptual context. In these forms, thumbnail pie charts (Fig. 12) are used to indicate the number and nature (beneficial, undesirable) of effects on each entity (a region or an industry, for example). Double-clicking the effect thumbnail displayed beside any of the entities in this context invokes a detailed list of individual effects. These techniques provide effective interactive methods for a user to visualize the bigger picture as well as to explore more detailed information.

Fig. 12 Effect summaries on a map characterize localized impact of actions

Another objective of COMPOEX was to develop methods for assessing, capturing, and visualizing uncertainty within inputs and outputs. After experimenting with several approaches and finding them impractical, we evolved an approach whereby the user may choose to assert a hypothetical behavior that overrides the model-computed behavior for a certain subset of phenomena. The user then reruns the simulation to compare the detailed impact of modified assumptions. These assumptions are visually flagged with warnings. This technique proved to be an important capability, enabling an SME to set aside a disagreement with the model and continue to make effective use of the results.
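A minimal sketch of that override mechanism (hypothetical data structures, for illustration only): the analyst’s asserted series replaces the computed one, and the substitution carries a flag so displays can show a warning:

```python
# Sketch of a user-asserted override: the analyst replaces a computed
# behavior with a hypothetical one, and the substitution is flagged so
# downstream displays can carry a warning. Hypothetical structure only.

computed = {"oil_exports": [1.0, 1.1, 0.4, 0.5]}   # model output per step
overrides = {"oil_exports": [1.0, 1.1, 1.2, 1.3]}  # analyst's asserted behavior

def effective_behavior(name):
    """Return the series to simulate with, plus a flag for the display."""
    if name in overrides:
        return overrides[name], "USER OVERRIDE - verify assumption"
    return computed[name], ""

series, warning = effective_behavior("oil_exports")
print(series, warning)
```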

The challenges in providing methods for the user to express actions that the system could simulate led to the development of a design principle referred to in this case study as “actions, effects, and desired effects anywhere.” Using this principle, users can express desired effects and invoke actions from any context or view of the situation. For instance, when viewing a graph of a key economic indicator in a geographic context, a user can define a desired effect by simply drawing the desired change on the graph and moving on from there to a list of suggested actions for achieving that effect. This method significantly helped to ease the burden of communicating a conceptualized action to the modeling system.

Another important principle revolved around the need for planners to be able to compare and see change. Experience demonstrated that even subtle changes in the environment can be important at times, so a level of granularity is required whereby these changes can be detected and clearly displayed. Time-series graphs, with multiple instances and consistent scales, all displayed in context, became an important tool in the system for analyzing detailed differences. Several simulation runs (with different actions or assumptions) are overlaid on each graph for easy comparison (Fig. 13).

Fig. 13 Multiple results are shown together so differences can be seen clearly

For summaries of simulation results, change difference algorithms detect and highlight areas of significant change across the entire supermodel to guide detailed inspection by SMEs. Focusing on significant change helped improve the communication of effects of a plan simulation by eliminating information of lesser interest.
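As an illustration of such a change-difference pass (a simple heuristic sketch, not the algorithm actually used), the comparison below scores each shared variable by its maximum divergence between two runs and surfaces the largest for inspection:

```python
# Hypothetical run-to-run difference check: compare two simulation runs
# over the same set of named variables and flag the largest divergences
# for detailed inspection by an SME.

def significant_changes(run_a, run_b, top_n=3):
    """run_a, run_b: dicts of variable name -> list of values over time."""
    scored = []
    for name in run_a.keys() & run_b.keys():
        a, b = run_a[name], run_b[name]
        diff = max(abs(x - y) for x, y in zip(a, b))
        scored.append((diff, name))
    scored.sort(reverse=True)
    return scored[:top_n]

baseline = {"gdp": [1.0, 1.1, 1.2], "unrest": [0.3, 0.3, 0.4]}
with_plan = {"gdp": [1.0, 1.15, 1.4], "unrest": [0.3, 0.2, 0.1]}
for diff, name in significant_changes(baseline, with_plan):
    print(f"{name}: max divergence {diff:.2f}")
```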

The provision of annotation capabilities satisfies another important principle. The users’ ability to record their assumptions, thoughts, and observations as they work has proved to be an important aid in framing and communicating their thinking and conclusions. This has enabled team members to brief decision-makers directly from the live application, with the ability to view lower levels of detail to answer questions.

2.2 Insights into Model Behavior

The models’ inner logic should be transparent and comprehensible to a nonspecialist user. When something unexpected is observed in a model-computed result, it must be possible for analysts to rapidly view details and determine the sources of the surprise. Failure to provide such a capability can lead to misperceptions or the outright rejection of the model results. Since model authors will not always be available to explain the behavior of a model, analysts or decision-makers must be able to develop their trust in the model through effective interaction with the model and its visualization capability.

In our user interface, a causal investigation function provides cause-effect transparency via a view into model influences (Fig. 14). For any model behavior, this function displays the behaviors that influence it (to the left) and the behaviors it influences downstream (to the right). By dragging a behavior graph from the left or right into the middle, a user can follow the chain of influence downstream or upstream, to “follow ripples in the pond” and to locate root causes. The display of both influence relationships and behaviors enables users not only to see that a relationship exists but also to observe the detailed nature of that relationship by comparing the pattern and degree of effects.
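The upstream traversal can be pictured with a small sketch (hypothetical influence data, not a COMPOEX API): given a directed cause-to-effect map, walking against the arrows from a surprising behavior lists its candidate root causes:

```python
# Sketch of "following ripples in the pond": given a directed influence
# map (cause -> effects), walk upstream from a surprising behavior to
# list candidate root causes. Hypothetical data, not a COMPOEX API.

influences = {                      # cause -> list of influenced behaviors
    "fuel_shortage": ["power_supply"],
    "power_supply": ["factory_output", "public_satisfaction"],
    "factory_output": ["employment"],
    "employment": ["public_satisfaction"],
}

def upstream_causes(effect, influences, depth=0, seen=None):
    """Print the chain of influences leading into `effect`."""
    seen = set() if seen is None else seen
    causes = [c for c, effects in influences.items() if effect in effects]
    for cause in causes:
        print("  " * depth + f"{effect} <- {cause}")
        if cause not in seen:
            seen.add(cause)
            upstream_causes(cause, influences, depth + 1, seen)

upstream_causes("public_satisfaction", influences)
```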

Fig. 14 Causal explanation provides insight into behavior

A second level of detail is provided in the system for investigating the logic behind model behavior. This involves providing a hyperlink above the variable of interest in the causal view that, when clicked, displays a detailed written document describing the theory behind the model responsible for that behavior. These documents are prepared by the model authors and include references used in their research.

These functions provide a useful first step but do not go far enough. The above system needs improvement in, for example, the ability to trace many steps of variable interactions and to distinguish causal from correlative relationships as easily as in conceptual models. Thus, more work is required in this area, and corresponding efforts are ongoing.

In addition to the generic approaches for investigating behavior that apply across the collection of models, specialized extensions are provided for particularly important model types, displaying key structures and properties of those models. For the agent-based Power Structure model, which models the power and influence of key actors in the situation, the system shows relationships in the form of an interactive social network diagram. Positive or negative influence is indicated in this treatment using an arrowhead symbology (Fig. 15).

Fig. 15 Power and influence. Positive (arrowhead) and negative arrows (“X” head) indicate the type of relationship

The ability to view actor properties in the application, such as their goals and role in the conflict, provides important narrative background information and clues as to their behavior.

2.3 User-Authored Narrative

The COMPOEX system relies on a modeling and simulation backplane that provides generic integration of heterogeneous models (Waltz 2008). At its core is a state vector consisting of scalar variables that models can both read from and write to at simulated time intervals. Models developed or gathered for a particular situation are integrated by plugging their inputs and outputs into the state vector. The homogenized state vector, potentially tens of thousands of variables in the form of time series of named values produced by the collection of models, is available to the user interface, which must use the data to produce meaningful visualizations.
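The backplane concept can be sketched in a few lines (toy models and update rules assumed purely for illustration, not the COMPOEX implementation): a shared collection of named scalars that each plugged-in model reads and writes at every simulated time step, with snapshots kept for the user interface:

```python
# Simplified sketch of a model-integration backplane: a shared state
# vector of named scalar variables that plugged-in models read from and
# write to at each simulated time step. Hypothetical interfaces only.

state = {"unemployment": 0.20, "unrest": 0.30, "police_presence": 0.50}
history = []                               # state vector snapshots over time

def economy_model(s):
    # toy rule: high unrest slowly raises unemployment
    s["unemployment"] += 0.05 * s["unrest"]

def security_model(s):
    # toy rule: unemployment raises unrest, police presence lowers it
    s["unrest"] += 0.10 * s["unemployment"] - 0.05 * s["police_presence"]
    s["unrest"] = max(0.0, min(1.0, s["unrest"]))

models = [economy_model, security_model]   # heterogeneous models plug in here

for step in range(5):                      # simulated time intervals
    for model in models:
        model(state)                       # each model reads and writes the vector
    history.append((step, dict(state)))    # snapshot for the user interface

for step, snapshot in history:
    print(step, {k: round(v, 3) for k, v in snapshot.items()})
```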

One of the challenges of producing meaningful visualization in this context is the loss of texture and detail that can occur when integrating an abstract and diverse set of models. The problem is that while quantitative values produced by the model are readily available, the mental model or thinking that went into the design of the model is not. Compounding the challenge is the sheer volume of information available. The absence of data to express the conceptual aspects that go into the building of a model is not a problem unique to COMPOEX. Computational model interfaces today are almost universally highly technical; they are not designed for sense-making.

To overcome this shortcoming, in our interfaces, users are given the ability to assemble and organize panels with live data from simulations and add layers of meaning and narrative expression for the human user. Through arrangement and annotation with words, links, images, and other visual elements, a conceptual model can be expressed and shared, with live computational elements (Fig. 16).

Fig. 16 Layers of narrative expression improve communication of information

2.4 Model Authoring by Decision-Makers

Model-authoring tools are most effective in the hands of the domain SME: the individual who possesses the detailed mental conception of the situation being modeled. While layering conceptual model aspects with computational data after the fact does help, the ideal solution would be to capture the mental model at the time when the computational model is being crafted. Unfortunately, this facility is absent from the vast majority of computational model-building tools, and the degree of technical knowledge required to use these building tools presents a significant barrier for the SME.

Accordingly, a tool that proved particularly useful was the Power Structure model builder. SMEs and analysts (users without model-building skills) used this tool to create key actors, define their goals, and suggest how they influenced each other in the real-world environment being modeled. A targeted development effort produced both the principal constructs of the model and the user interface for the model in a way that closely fits the way an SME might think about these aspects of the situation.

The COMPOEX user interface provides visual exploration of the properties of the model, such as influence networks of actors, their roles, and their goals, in the larger context of the situation. In addition, users have editing capabilities that enable direct and intuitive model authoring in this context (Fig. 17), tied into the full suite of features provided for information gathering, research, and embedding of evidence and supporting narrative within model entities and relationships. These capabilities provide new levels of transparency and control that enable SMEs to author models directly; however, much work remains to be done.

Fig. 17 Editing model of an actor’s goals in the Sandbox

In addition to the precedents it provides in addressing particular challenges, the COMPOEX case study serves to emphasize the importance of a number of key principles for the visualization of computational models in the social sciences. These are summarized in the following Practical Tips.

3 Practical Tips

  • Look for ways to streamline the process of inputting intervention actions. Consider using system intelligence to translate or interpret the actions that a user wishes to take, or to suggest actions that might be appropriate for a desired effect.

  • Provide end users who are not expert modelers with the means to author and manipulate models, using a language of expression that fits the users’ mental model. Likewise, streamline model output by finding methods of expression that are natural to the way that domain experts conceptualize a situation.

  • Use the simplest, most universal, and most accessible visual language. Account for the fact that methodologies and associated terminology can vary widely between user communities and change frequently.

  • Provide means to present varying levels of detail and aggregation of computed data. Enable users to provide rich summary-level information to decision-makers, and, when appropriate, enable senior leaders to interact with the tools directly.

  • Characterize intervention effects in summary form, indicating for instance the degree, potential desirability, scope, and nature of the effects, and enable analysts to perform comparisons without relying on visual memory.

  • Recognize that time is a critically important dimension in analyzing PMESII effects. Thus, when possible, provide tools for interpreting the temporal sequence and spacing of the effects, and for examining short- and long-term effects.

  • Consider permitting users to annotate computational results with descriptions, diagrams, and illustrations to communicate situations, actions, and anticipated effects.

  • When possible, allow users to see the inner workings of a model by presenting the model’s basic elements and the relationships among these elements.

  • Develop capabilities that assist in understanding simulation outcomes – what chain of causes led to a particular effect – and in quickly diagnosing and understanding model operations.

  • Expect and encourage a healthy level of skepticism from users and design computer-user interactions that are able to accommodate users’ disagreements with the model.

4 Summary

Data visualization expresses data in concise and elemental graphical formats. Information visualization uses higher-level organizational structures in the graphic forms. Visual analytics combines visualization with analysis and further computation to derive meaning from large datasets. Typically, social science data has few features that can be depicted in physical form; consequently, subject matter experts (SMEs) are often left to analyze and interpret displays of computational model output in complex, nonintuitive forms. This is inefficient and is a source of potential error. Conceptual models, which describe relationships qualitatively rather than quantitatively, have seen progress in visualization, e.g., link analysis and tools with additional dimensions, such as time.

Several challenges are particularly strong in PMESII modeling visualization. Because PMESII models involve significant uncertainty, better visualization approaches are needed to depict uncertainty and causal relationships. Another challenge involves the need to provide sufficient accessibility and adaptability to accommodate a wide range of intervention partners and organizations, with standard symbols, vocabularies, and protocols. There is the need to integrate models from multiple social science domains operating as one system and user interface, and the need to capture and present the underlying theory and evidence behind the models.

The COMPOEX user interface is a relevant case study. Key constructs and principles of this interface include forms (templates) for the display of model data, including timelines, graph frameworks, link diagrams, and geospatial maps; user-created panels that can combine and spatially arrange groups of forms to convey meaning; a common visual language; levels of summary and drill-down; and tools for handling uncertainty and for detecting changes in outcomes. A causal investigation function is also provided, as well as capabilities for visual annotation, markup, and hyperlink references at both the summary and detail level within large and complex societal models.

5 Resources

VisualComplexity.com http://www.visualcomplexity.com/vc/index.cfm?domain=Social%20Networks

Connectedness http://connectedness.blogspot.com/

Datawocky blog (2008). Report on an interview with Peter Norvig of Google Research. http://anand.typepad.com/datawocky/2008/05/are-human-experts-less-prone-to-catastrophic-errors-than-machine-learned-models.html (accessed March 11, 2009)

Bertin, J. (2001). Matrix Theory of Graphics. Information Design Journal, 10(1), 5–19.

Bertin, J. (1983). Semiology of Graphics, Diagrams, Networks, Maps. University of Wisconsin Press.

Cohen, M.D., March, J.G. & Olsen, J.P. (1972). A Garbage Can Model of Organizational Choice. Administrative Science Quarterly, 17(1), 1–25.

Eick, S. & Wills, G. (1995). High Interaction Graphics. European Journal of Operational Research, 84, 445–459.

Epstein, J. M. (1999). Agent-Based Computational Models and Generative Social Science. Complexity, 4(5), 41–60.

Freeman, L. C. (2000). Visualizing Social Networks. Journal of Social Structure, 1(1).

Harris, R. L. (1996). Information Graphics. Management Graphics.

Hearst, M. (1999). User Interfaces and Visualization. In R. Baeza-Yates & B. Ribeiro-Neto, Modern Information Retrieval (Chapter 10). Addison-Wesley-Longman.

Herman, D. (1999). Spatial Cognition in Natural-Language Narratives. In Proceedings of the AAAI Fall Symposium on Narrative Intelligence.

Mullet, K. & Sano, D. (1995). Designing Visual Interfaces: Communication Oriented Techniques. Mountain View, CA. SunSoft Press/Prentice Hall.

Larkin, J. & Simon, H. (1987). Why a Diagram is (Sometimes) Worth Ten Thousand Words. Cognitive Science, 11(1), 65–99.

Scholtz, J. (2006). Beyond Usability: Evaluation Aspects of Visual Analytic Environments. In IEEE Symposium on Visual Analytics Science and Technology (pp. 145–150).

Tufte, E. R. (1990). Envisioning Information. Cheshire, CT: Graphics Press.

Tufte, E.R. (1983). The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.

Tufte, E. R. (1997). Visual Explanations: Images and Quantities, Evidence and Narrative. Cheshire, CT: Graphics Press.

Cleveland, W. S. (1993). Visualizing Data. Hobart Press.

Ware, C. (2000). Information Visualization – Perception for Design. Morgan Kaufmann.

Wise, J. A., Thomas, J. J., Pennock, K., Lantrip, D., Pottier, M., Schur, A. & Crow, V. (1995). Visualizing the Non-Visual: Spatial Analysis and Interaction with Information from Text Documents. In Proceedings of the IEEE Symposium on Information Visualization (pp. 51–58).

Wood, D. (1992). The Power of Maps. New York: Guilford Press.