7.1 The Sustainability Framework and Complex Systems Approach to Policy Analysis

The basic idea of the sustainability framework is to (1) assess the state of a system, using multiple variables; (2) understand causal mechanisms, i.e., how human agents act and interact with one another to shape the state of the system; and (3) explore how to influence individual decisions such that they collectively move the system toward desired states. These steps need to be repeated over time to provide insight for policy to steer a system gradually toward desired states. This idea is applicable to other complex adaptive systems.

Most important for policy interventions in complex systems is to ensure that a system is moving on the right track. After all, it is difficult to make long-term point predictions for a complex system because its state is shaped by the adaptive actions and interactions of many agents and can change in unforeseen ways. Nor is there an optimal policy that will move a system linearly from its current state to a desired state in one step. Adjustments will have to be made along the way to correct the system's course, or to accelerate or slow certain effects.

This kind of “adaptation mentality” is essential to the policymaking process. To use an analogy from Brian Arthur, the policy-maker is like a captain of a paper boat drifting down a river; at his best, he watches the currents and the changing flow, and uses his oar to “punt from one eddy to another” (see Mitchell 1992). And this is precisely why agent-based modeling is useful: it offers insights about the directions of “flow.” In the next section, I will try to illustrate how to develop agent-based models that generate new, useful, and convincing insights for policy analysis.

7.2 Agent-Based Modeling for Policy Analysis

7.2.1 Design Useful Models and Ask Meaningful Questions

An agent-based model simulates the decisions of heterogeneous agents in a complex adaptive system, and is an analytical tool for studying these systems. To develop a useful agent-based model, we need to ask good research questions—without them, the model can easily become a mechanical simulation that does no more than mimic a real system. Mechanical simulations may look realistic, but they are not particularly useful.

In any field, theories guide us to ask questions. So, too, theories of complex adaptive systems (Holland 1995, 1998, 2012) will help us to pose meaningful questions about these systems. Understanding their key features and relevant concepts can be useful for policy interventions in a broad sense, and for modeling in particular (OECD 2009).

In a complex adaptive system, the agents learn and adapt through interactions with other agents, leading to adaptability of the system. This means that policy needs to adapt over time to suit new situations and nudge a system toward more desired states, and how policy should adapt is an important research question. Agent-based models can offer useful insights for adaptive policymaking.

Complex adaptive systems often exhibit non-linearity, i.e., novel system-level patterns cannot be predicted just by summing the properties and actions of individual agents in the system. Policy may produce unintended consequences if it does not account for the adaptive interactions of agents that have distinctive characteristics and experiences, and for their coevolving behaviors. These systems can have lever points at which a small intervention produces large changes in system-level outcomes. Such lever points can be exploited by policy to influence the system cost-effectively. Agent-based models can be used to explore policy levers and the unintended consequences of a given policy.

A complex adaptive system usually has a large state space. The system can evolve in many different directions, and sometimes a robust policy that delivers satisfactory results across plausible future scenarios is more desirable than a policy that produces the best outcomes only for some scenarios (Lempert 2002). A system can exhibit non-equilibrium or multiple equilibria, with tipping points that propel it into a sudden phase transition. Tipping points may present policy challenges if a system is currently in a desirable state; they may present opportunities if other attractors represent more desirable states. In such cases, we can use agent-based models to simulate future scenarios, explore the state space of a system, and identify tipping points or robust policies.
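The idea of a robust policy can be made concrete with a toy comparison across scenarios. The policies, scenarios, and payoffs below are purely hypothetical, and the maximin rule shown is just one simple robustness criterion, not Lempert's full method:

```python
def robust_policy(policies, scenarios, outcome):
    """Pick the policy with the best worst-case outcome across scenarios
    (a maximin criterion, one simple notion of robustness)."""
    return max(policies, key=lambda p: min(outcome(p, s) for s in scenarios))

# Hypothetical payoffs: policy A is best in one scenario but fragile;
# policy B is merely satisfactory, but satisfactory everywhere.
payoffs = {
    ("A", "boom"): 10, ("A", "bust"): -5,
    ("B", "boom"): 6,  ("B", "bust"): 4,
}
choice = robust_policy(["A", "B"], ["boom", "bust"], lambda p, s: payoffs[(p, s)])
print(choice)  # B: its worst case (4) beats A's worst case (-5)
```

In an actual study, the payoff table would be replaced by agent-based simulation runs under each policy-scenario pair.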

The behavior of a complex adaptive system is path-dependent, i.e., dependent upon its initial conditions and previous states. It can therefore be locked into a long-term undesirable path. Inferior technologies, for example, sometimes prevail because they had an early advantage, while innovations in general are difficult to introduce early on. Policy can influence the future path of a system by helping to break existing patterns and promote the adoption of an innovation at the initial stage. Timing is important for these interventions, and models can explore when they are best applied.

Complex adaptive systems tend to self-organize, often without central control. But individual actions and interactions in a system do not necessarily lead to optimal system-level outcomes—just think of the Prisoner’s Dilemma and the Tragedy of the Commons. This is why policy is necessary; policy can effect change in a system most effectively by setting up incentives that induce individual decisions to collectively lead to desired outcomes. Agent-based models can be used to explore the potential effects of alternative policies.

Although coherent behaviors can and often do emerge from individual actions and interactions, complex systems can fall into a state of chaos. Policy could play a role in preventing the disastrous outcomes associated with chaos. Agent-based models cannot prove that certain things will happen, but they can demonstrate possible outcomes. Identifying the conditions that lead to disastrous outcomes could be a powerful use of models, and would provide insights for policy interventions to prevent them.

These are some of the policy insights a complex systems perspective offers, and some potential uses of agent-based modeling for policy analysis. Central to all of these is the need to understand the micro-level processes and dynamics of complex adaptive systems.

7.2.2 Meet the Challenge of Conceptualization

The strength of agent-based modeling lies in its ability to capture agent diversity, interactions between agents, and the feedback between individual behaviors and global states (Epstein and Axtell 1996; Gilbert 2007; Manson and Evans 2007; Miller and Page 2007; Farmer and Foley 2009; Railsback and Grimm 2011; Cioffi-Revilla 2014; Walsh and Mena 2016). This is also why agent-based models can generate new and sometimes surprising insights about a system.

For example, Schelling’s classic segregation model (1971) illustrates an important insight: neighborhood segregation can happen even if individuals have only a slight preference for being near people of their own race. The segregation pattern generated by his model could not have been predicted by simply adding up individual attitudes; it emerged from their interactions.
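For readers who want to see the mechanics, the following is a minimal Schelling-style sketch in Python. The grid size, agent density, and the 30% preference threshold are illustrative choices, not Schelling's original setup; unhappy agents relocate to random empty cells until all are satisfied or a step limit is reached:

```python
import random

def run_schelling(size=20, n_agents=300, threshold=0.3, max_steps=100, seed=42):
    """Minimal sketch of a Schelling-style segregation model on a grid.

    Each agent is happy if at least `threshold` of its occupied neighbors
    share its type; unhappy agents relocate to a random empty cell.
    Returns the mean same-type neighbor share before and after.
    """
    rng = random.Random(seed)
    cells = [(r, c) for r in range(size) for c in range(size)]
    grid = {cell: rng.choice(("A", "B")) for cell in rng.sample(cells, n_agents)}

    def neighbors(cell):
        r, c = cell
        return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0) and (r + dr, c + dc) in grid]

    def unhappy(cell):
        nbrs = neighbors(cell)
        if not nbrs:
            return False
        return sum(grid[n] == grid[cell] for n in nbrs) / len(nbrs) < threshold

    def mean_similarity():
        ratios = [sum(grid[n] == grid[cell] for n in neighbors(cell)) / len(neighbors(cell))
                  for cell in grid if neighbors(cell)]
        return sum(ratios) / len(ratios)

    start = mean_similarity()
    for _ in range(max_steps):
        movers = [cell for cell in list(grid) if unhappy(cell)]
        if not movers:
            break  # everyone is satisfied
        empties = [cell for cell in cells if cell not in grid]
        for cell in movers:
            dest = rng.choice(empties)
            empties.remove(dest)
            empties.append(cell)          # the vacated cell becomes empty
            grid[dest] = grid.pop(cell)   # agent relocates
    return start, mean_similarity()

start, end = run_schelling()
print(f"mean same-type neighbor share: {start:.2f} -> {end:.2f}")
```

In runs like this, the mean share of same-type neighbors tends to rise well above its initial level, even though each agent only mildly prefers its own type—the macro pattern emerges from interaction.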

The farmer household model in this study also shows some interesting patterns of change in farm sizes as nonfarm wages rise. These patterns emerge from interactions, particularly the interacting influences of wages and the land rental market. That rising nonfarm income may not naturally lead to farmland consolidation and an increased scale of farming operations in the countryside, as economists would expect, has policy implications.

However, because agent-based models represent the micro-level processes of real systems, they create challenges for conceptualization, validation, and communication with non-ABM modelers (Parker et al. 2003). Conceptualization in particular is crucial to modeling success: where to draw the system boundaries, what components to include, how to represent agents and their decision making, and what level of abstraction is appropriate.

Meeting these challenges is even more critical for policy analysis. To convince policy-makers, we need high levels of confidence in our models. To develop a credible model, model conceptualization should be based on a good understanding of the system in question. A good understanding of a specific system can, in the first place, help us ask research questions that are important and meaningful for that system. Generally speaking, a conceptual model should capture the real system sufficiently to address intended research questions.

7.2.2.1 Use Empirical Methods to Inform the Development of Models

A variety of empirical research methods are available to increase our understanding of complex systems and inform the development of agent-based models. These methods can (1) provide insights into the micro-level processes and dynamics of a system, including agent decision making; (2) provide data for setting a model’s parameters, and for initializing various components of the model, e.g., agent types, distributions of agent attributes, environmental attributes, values of exogenous entities; and (3) provide insights and data, including qualitative or quantitative macro-level patterns, for model validation. In-depth case studies, large N statistical analyses, experiments used in behavioral economics, participatory research that involves stakeholders, and qualitative approaches can each give us valuable, albeit different, insights into a system (Janssen and Ostrom 2006; Robinson et al. 2007).

Case studies, as used in this Poyang Lake project, can provide detailed information about the processes and dynamics of a system. But case studies tend to be system-specific and lack generality. Large N data analyses can be used to derive general patterns of individual motivations and behaviors, providing detail on how to populate agents in a model. They are not so good, however, at revealing mechanisms and processes. Nonetheless, they are attractive because data can be readily available from a census and, increasingly, from electronic sources, in addition to surveys.

Experiments can test specific hypotheses about human behaviors, informing the decisions of agents in a model. But in general they are vulnerable to weaknesses in subject representativeness, contextual information, controlled experiment environments, and credibility of the answers (e.g., Berg et al. 1995; Kurzban and Houser 2005; Houser et al. 2008; Cotla 2016). Participatory approaches enable researchers to discover rich information about agent decisions and interactions, and even to uncover policy from the bottom up, but they can be costly and are often limited to relatively small scopes (e.g., Castella et al. 2005; Van Berkel and Verburg 2012).

Qualitative approaches can be very useful, too. For example, Jane Jacobs (1961) provides a convincing account, based on her intense observations of urban life, of how economic prosperity and public safety emerge from mixed land use and the interactions of a city’s inhabitants. What she describes is essentially a qualitative agent-based model, with detail beyond the capacity of a computer simulation. In this PLR study, field observations and qualitative analysis of the interviews also yield important insights about farmer households’ decisions concerning land use and livelihoods.

Despite all the capabilities of agent-based models, we should not expect to discover important insights solely from computer experiments. Of greater importance from the outset is that we develop a good understanding, even an intuition, about the system we wish to explore, based either on our own empirical research or on the theories and empirical work of others. Models are analytic tools we use to formalize our intuitions and improve our understanding of a system. While we should try to make modeling technically rigorous, we need a broad and deep grasp of an issue to convince policy-makers of a model’s usefulness, and ultimately to influence policymaking.

7.2.2.2 Decision Theory and the Representation of Agent Decision Making

Understanding how the agents in a system make decisions is particularly important for policy analysis. It is this understanding that enables policy-makers to improve macro-level processes for individual agents, or to design “smart” policy to influence individual behaviors, facilitating change toward more desired states. From a complex systems perspective, the role of policy is not to impose central control, but to introduce incentives that induce individual decisions and actions such that they collectively lead to desired system-level outcomes. In addition, top-down interventions have become increasingly unpopular and tend to provoke bottom-up resistance, leading to difficulties in implementation and high enforcement costs.

Researchers in various disciplines examine human decision making through different lenses. Economists, for example, have developed rational choice theory, according to which people weigh costs and benefits and choose the option that gives them the highest utility, assuming people have complete information about the choices and consistent preferences (Hogarth and Reder 1987). Psychologists, however, emphasize the irrationality of human behavior and consistently find bias in human decisions, especially those involving heuristics (Tversky and Kahneman 1975). Behavioral economists try to bridge the economists’ rationality and the psychologists’ irrationality, and their experiments have mostly illustrated the foundation of human rationality, with some exceptions (Smith 2005).

Coming under the general framework of rational choice is the notion of bounded rationality, which argues that individuals are rational decision makers, but they may not always have complete information about their options, possess consistent preferences over choices, or have the computational power to make optimal choices (Simon 1956). Individual choices, however, are hardly made independently; rather, they are influenced by social and cultural forces. Social economists thus see social influences on individual decisions everywhere (Becker and Murphy 2009). Sociologists, with deep roots in empiricism, and development economists in the field often find that societal structures play a large role in shaping or constraining individual choices (Scott 1977; Susan 1977; Sen 1981; Blaikie et al. 1994).

So what should we take from these divergent theories and perspectives? We may start with the assumption that people are rational decision makers, and look for empirical evidence to verify this assumption. If the evidence suggests otherwise, that people are not making rational choices, we will need to investigate further. Are they trying to optimize? Do they have unusual or different preferences? Are they constrained by a lack of information or computational capabilities?

People can still be rational decision makers, even when they do not appear to be rational or seem to use simple heuristics. The majority of farmer households in the Poyang Lake Region, for example, appear to rely on a few heuristic rules in labor allocation: young male adults work in the city, while older people and some women cultivate rice on the farm. Yet in conversation, the farmers show that they are actually rational decision makers: they are trying to achieve the optimal economic result and have done what they can.

Farmers in the PLR are aware of other land-use and livelihood options, and the costs and benefits associated with these options. They can explain how they derive the costs and benefits. Not much calculation is needed for labor allocation to optimize total income, either; household members have just two choices—work in the city or work on the farm. Because migratory work tends to produce higher returns, a household member chooses to work in the city as long as possible. Members who cannot find work in the city naturally stay on the farm and cultivate rice. It happens that young people and male adults are more likely to find work in the city.

Thus, while many empirical cases contradict perfect rationality, there is plenty of evidence to suggest that a peasant’s behaviors exhibit an attempt to improve the household livelihood (Strauss and Thomas 1995). What appears to be irrational may be the result of a complex exercise in rationality, and can often be explained with deeper probes into the nature of constraints or preferences.

Of course, not all decisions are “rational,” as defined by rational choice theory. We have all analyzed the pros and cons of some decisions in our lives; but we have also relied on “rules of thumb” or “gut feelings” to make some other (even important) decisions. There is now empirical evidence suggesting that heuristics and gut feelings may not be poor “second best” methods for decision making. Rather, they are flexible and effective decision-making processes formulated through life experience—which is to say, based on our interactions with a dynamic environment (Gigerenzer and Brighton 2009).

Furthermore, the individual decision maker can probably rationalize each choice he or she makes from his or her perspective, with a range of factors, including emotions, figured into that rationale. Utility is a rather broad concept that ultimately means happiness, which can incorporate emotions.

Considering all this, can we make the bold assertion that decision making is all about trying to optimize some kind of utility? Again, utility may mean different things to different people and in different contexts. Each person’s utility function reflects individual experiences, and we may at times have difficulty formulating it because our experiences are qualitative and rich. If so, examining the heterogeneity of human experience to understand how human agents value different things and make decisions can be theoretically enlightening, and it also brings far more useful insight for policy than the notion of irrationality.

Much like the dual perspectives of decision theory, the representation of decision making in agent-based models falls into two general categories: optimization with a utility or objective function, and non-optimization. In the optimization category, there are variations in how agents in a model find solutions to their optimization problems. Some use mathematical programming (e.g., Berger 2001; Berger et al. 2006), which is optimization in the ultimate sense. Others use approximation; an approximate solution can be achieved by (1) using a genetic algorithm (e.g., Manson 2006) or, more generally, an evolutionary approach that makes adjustments based on experience (e.g., the farmer household model in this study); or (2) sampling a limited solution space (e.g., Robinson and Brown 2009). When agents use these techniques to find approximate solutions to their optimization problems, the models are representing bounded rationality.
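The second approximation strategy can be sketched in a few lines. The labor-allocation options and income function below are invented for illustration; they are not the PLR model's actual specification:

```python
import random

def bounded_choice(options, utility, sample_size, rng):
    """Pick the best of a limited random sample of options.

    A boundedly rational agent does not enumerate the full solution
    space; it evaluates only `sample_size` sampled candidates.
    """
    candidates = rng.sample(options, min(sample_size, len(options)))
    return max(candidates, key=utility)

# Hypothetical example: allocate 4 household workers between farm and city.
rng = random.Random(1)
allocations = [(farm, 4 - farm) for farm in range(5)]  # (farm workers, city workers)

def income(alloc):
    farm, city = alloc
    # Diminishing returns on a fixed plot of farmland; linear city wage.
    return 30 * farm - 5 * farm**2 + 20 * city

optimal = max(allocations, key=income)                    # full enumeration
sampled = bounded_choice(allocations, income, 3, rng)     # bounded search
print("optimal:", optimal, "bounded:", sampled)
```

The sampled choice may miss the true optimum—which is exactly the point: the agent optimizes, but over a limited view of its options.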

The representation of agents as non-optimizers reflects the psychological perspective. Non-optimizing agents often apply heuristic rules in decision making (e.g., Deadman et al. 2004; Kennedy et al. 2014). The psychological framework of belief, desire, and intention (BDI) has also been implemented to represent agent decision making in ABMs (e.g., Drogoul et al. 2016). A hybrid design of heuristics and utility calculation can be useful as well to simulate household-level decisions (e.g., Evans et al. 2011). Agent-based models may even employ cognitive architectures developed in artificial intelligence, such as SOAR and ACT-R, to represent agent decision making (Kennedy 2011).

In general, the representation of agent decision making in an agent-based model needs to be based on how people actually make decisions. The modeling purpose is also important for the choice of representation. Representations based on psychological and cognitive frameworks are thought to be more realistic than those based on optimization, and there is a general desire to enhance the cognitive aspects of agents (see Epstein 2014). However, with cognitive representations like BDI, it can be difficult to understand what is going on in a model, and their usefulness for policy analysis is not obvious. Besides, the deep cognitive mechanisms underlying human decision making are not yet well understood. Heuristics, while useful for explaining existing patterns, may not be suitable for policy analysis, because heuristic rules reflect what people do in the immediate present and may change when situations change.

Optimization can be a useful representation of decision making for policy analysis, especially if we consider utility in a broader sense (with constraints) and limit the ability of the agents to find perfect solutions in a model. Even implementing agents with perfect rationality could be appropriate for policy analysis. Schreinemachers and Berger (2006) argue that a representation of perfect rationality “seeks to identify inefficiencies not in the limited cognitive capacity of the human mind but in structural factors external to the decision maker, which may be addressed through policy intervention.” Using mathematical programming to represent and solve optimization problems also allows modelers to include a large number of variables and constraints, capturing full agent heterogeneity (Schreinemachers and Berger 2006).

The PLR model can be used to illustrate the differences between heuristics and optimization. The households in the model could use the following heuristic rules: (1) if a member is older than age X, do farming; (2) if a male member is younger than age X, do migratory work with probability Y; (3) if a female member is younger than age X, do migratory work with probability Z; (4) if extra labor is available for farming, subcontract additional farmland; (5) if labor is insufficient for farming, rent out farmland. The model could still reproduce the land use and livelihood patterns observed in three different villages by calibrating X, Y, and Z. It would not, however, be so useful for exploring policy effects; the decisions of the agents would not even be sensitive to changes in wages or policy incentives.
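A sketch of what these heuristic rules might look like in code (the ages, probabilities, and per-worker land capacity are placeholder values, not the calibrated X, Y, Z):

```python
import random

def allocate_labor_heuristic(members, x=50, y=0.8, z=0.5, rng=None):
    """Assign each household member to farming or migratory work using
    fixed heuristic rules (1)-(3); x, y, z are calibration knobs.

    `members` is a list of (age, sex) tuples; sex is "m" or "f".
    Note that the rules never consult wages, so the resulting allocation
    cannot respond to wage changes or policy incentives.
    """
    rng = rng or random.Random(0)
    allocation = []
    for age, sex in members:
        if age >= x:                               # rule (1): older members farm
            allocation.append("farm")
        elif sex == "m" and rng.random() < y:      # rule (2): young men migrate w.p. y
            allocation.append("city")
        elif sex == "f" and rng.random() < z:      # rule (3): young women migrate w.p. z
            allocation.append("city")
        else:
            allocation.append("farm")
    return allocation

def adjust_land_heuristic(allocation, farmland_mu, capacity_per_worker_mu=5.0):
    """Rules (4)-(5): take on extra land if farm labor exceeds current need,
    rent out land if it falls short (capacity per farm worker is assumed)."""
    capacity = allocation.count("farm") * capacity_per_worker_mu
    if capacity > farmland_mu:
        return "subcontract_in", capacity - farmland_mu
    if capacity < farmland_mu:
        return "rent_out", farmland_mu - capacity
    return "no_change", 0.0

household = [(62, "f"), (58, "m"), (28, "m"), (24, "f")]
alloc = allocate_labor_heuristic(household)
action, amount = adjust_land_heuristic(alloc, farmland_mu=8.0)
print(alloc, action, amount)
```

No wage or price appears anywhere in these rules, which makes concrete why such a model cannot explore how agents would respond to rising wages or new incentives.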

Furthermore, the heuristic rules describe what agents do at the present time and may not be valid for exploring future scenarios unless we can make agents adapt their rules in the model. In contrast, income optimization represents the more fundamental principles of household decisions. The heuristic rules deduced from our observations of agent behavior are manifestations of those fundamental decision principles in the current situation. Fundamental decision principles are more likely to remain the same than heuristic rules, and they may manifest as different choices and heuristics in different situations. Currently, the heuristic rules implemented in most agent-based models are fixed. John Holland’s classifier system, which allows the adaptation and creation of new rules, could be further explored to make truly adaptive agents.

7.2.2.3 Appropriate Level of Abstraction

Agent-based modelers must consider many elements of a real system when designing an ABM. It can therefore be difficult to decide what details to include (or exclude) in the model, and determining the appropriate level of abstraction has been a persistent challenge for the ABM community (Parker et al. 2003). Agent-based models exhibit a gradient of abstraction levels, ranging from extremely abstract to extremely realistic representations. Schelling’s (1971) segregation model, Axelrod’s models of culture dissemination and cooperation (1997a, b), and Epstein and Axtell’s Sugarscape model (1996) are classic abstract models that bring profound insights about social dynamics. As an example of extremely realistic design, An et al.’s model (2005) represents every household in the Wolong National Nature Reserve, plus a full range of demographic and economic dynamics, to examine the influence of human activities on the giant panda habitat. Because of its realism, the authors are able to interpret and compare their modeling results with those of other models in absolute quantitative terms, whereas most agent-based models look at trends or patterns and discuss results in relative terms.

Note that as the level of detail increases in an ABM, the model’s ability to support general inferences decreases. One argument made against agent-based modeling is that it is intractable; more details make it even more difficult to understand model outcomes (Axtell and Epstein 1994). In addition, agent-based models can overfit, i.e., fit too closely to one specific system (Brown et al. 2005).

The increasing power of computers and big data present opportunities for more “realism” in agent-based models. Large, realistic models can be useful and are necessary in some cases, especially for applied studies, but we need to keep in mind that realism is not always equivalent to usefulness (see also Paola and Leeder 2011). “In searching for powerful models, this temptation to inclusiveness should be resisted,” wrote Holland (2012). “A model’s clarity and generality directly depend on how much detail has been set aside.”

Large, realistic models can also increase the chance of errors and exacerbate the modeling issues discussed previously. Steve Bankes (1993) offers a fantastic fictional account of building, for a fictional Joint Chiefs of Staff, the ultimate combat simulation; as more and more details are demanded and added to the model, it becomes quite useless in the end. Models are useful because they are abstractions of the real world, just as maps are useful because they simplify geography. In his book Dreamtigers, Jorge Luis Borges tells the ironic story of cartographers driven by the “rigor of science” to create maps of increasing precision:

“In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.”

The appropriate level of detail is largely determined by the research question a model is intended to address (see also An et al. 2014); different questions about the same system may require different model designs. Let us use modeling the brain and the mind as an illustrative example. The human brain is an extremely complex system comprising billions of neurons and numerous physical, chemical, and biological processes that somehow give rise to higher-level cognitive functions and human intelligence (Baars and Gage 2010). Assume that our modeling goal is to explore how the brain gives rise to the mind. At the crudest level, a simple model of the left and right brain can bring us some understanding of human cognition. When we differentiate the frontal, parietal, temporal, and occipital lobes in the model, we can understand more of the brain’s functions. However, this model is not yet sufficient to explain how the brain gives rise to the mind. To understand the brain-mind relation, it is probably necessary to include neurons and neural networks in the model. But since neurons are supported by many chemical processes, should those also be represented? My thinking is no. Humans and other animals share similar chemical processes, and therefore these processes are probably not critical for explaining human intelligence.

Let’s suppose now we have developed a model that simulates how neurons form networks through learning mechanisms, and that this is the process that gives rise to human intelligence. Even so, can this model explain cognitive problems, such as autism and Alzheimer’s disease? I do not think so, for these disorders involve important physical, chemical, and biological processes that are not included in this model.

A good agent-based model captures a system’s key elements and dynamics, with a level of detail that is sufficient to address the research question. Modeling is not only a technique—it is an art. The art is to capture the essence of a system, as a painter captures the spirit of a subject with a few strokes. The modeler, like the artist, must decide what details to include and how to capture them. Yes, there are painters who include such fine detail that we get lost in the intricacies. There are also painters in whose few strokes we can barely recognize the subject. Modeling is useful if we do it right. After all, there are the impressionist masters, but none of them painted solely from imagination; they all made intense observations of reality.

7.2.3 Strengthen a Model’s Credibility

Conceptualization, based on a good understanding of the system in question, is the first step toward building a credible model for policy analysis. Several techniques can help us test and enhance a model’s credibility: validation, sensitivity analysis, and robustness analysis.

7.2.3.1 Validation

We can address model validation on three levels: the conceptual, the micro, and the macro (Robinson 1997). Conceptual validation involves capturing the right processes and dynamics in a model. Empirical research and theory can provide insight into the processes and dynamics of the real system and are part of conceptual validation. On the micro level, empirical data are useful for initializing model parameters and populating agents (Brown et al. 2008). At the macro level, comparing simulated patterns with observed patterns, either qualitatively or quantitatively, constitutes a formal validation (Axtell and Epstein 1994). A model could implement different mechanisms that all lead to the same macro-level pattern; the likelihood that the model captures the right mechanism is increased, and its credibility strengthened, if it can reproduce multiple observed patterns (Grimm et al. 2005).
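At its simplest, a quantitative macro-level comparison is a tolerance check of simulated statistics against observed ones, pattern by pattern. The pattern names and numbers below are hypothetical, not actual PLR data:

```python
def pattern_match(simulated, observed, tolerance=0.1):
    """Check whether each simulated macro-level statistic falls within
    a relative tolerance of its observed counterpart."""
    results = {}
    for name, obs in observed.items():
        results[name] = abs(simulated[name] - obs) <= tolerance * abs(obs)
    return results

# Hypothetical macro patterns (illustrative values only).
observed  = {"mean_farm_size_mu": 7.4, "migration_rate": 0.46, "rented_land_share": 0.21}
simulated = {"mean_farm_size_mu": 7.8, "migration_rate": 0.44, "rented_land_share": 0.27}

checks = pattern_match(simulated, observed)
print(checks)  # the rented-land share misses the 10% tolerance band
```

Matching several independent patterns at once is far stronger evidence for a mechanism than matching any single pattern, which is the thrust of the Grimm et al. (2005) argument.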

Validation at all three levels gives a model increasing credibility, and the levels at which we confirm validation affect what we can claim from the model’s experiments. Also, different modeling purposes may require different levels of validation. To further illuminate the issue of model validation, it is helpful to quote John Holland on modeling (personal comm.):

“I think of the model as a kind of axiom system. First, I try to make the basis of the model (the axioms) as clear as possible. I actually try to write an explicit list of assumptions. Then I try to make sure that the construction adheres to just these assumptions and no others. This is hard, but possible. The whole purpose of setting up axioms is to move all questions of interpretation to them. From that point onward, the rules of deduction, or the program, are a “mechanical” working out of consequences, with no interpretation involved in that part (unlike arguments of rhetoric and persuasion). That is what, in my mind, separates the scientific method from other methods (say, philosophical argument). In short, when the ‘axiomatic’ approach can be followed, the art and interpretative cleverness are concentrated in selecting the axioms. Then consequences are ‘proved’ without resort to interpretation. Note, however, that intuition usually guides us in what consequences we would LIKE to show. But you cannot ‘cheat’ the deductive method—the consequences may, or may not, follow from the axioms chosen.”

Let’s build upon Holland’s “axiom systems.” In general, we may think of three types of agent-based models. In the first, not much is known about the system’s processes, and the modeling purpose is to explain the mechanisms underlying macro patterns. In this case, the modeler can list any axioms, including any assumptions about the mechanisms; the modeler may even choose to “manipulate” the axioms. As long as the model reproduces the observed patterns, the modeler can claim that the postulated mechanism is plausible. Even such plausible mechanisms are useful and can guide the direction of empirical studies. Craig Reynolds’s bird flocking model (Boids) and Holland’s language model, which explores how grammar emerges and how languages evolve, fall into this category. I would call these “existence proof” models.

A second type of model is used to explore and test abstract ideas. The modeler assumes or has some intuition that a system works in a certain way and seeks to “prove” that assumption, using a model capable of reproducing some stylized patterns. The modeling purpose, however, is not to prove the assumption or intuition but to illustrate further insight about the system. In this case, it is appropriate to list all the assumptions as axioms and then let the program work out the consequences. My simple model on Towns, Cities, and the Happiness of Humanity (see personal.umich.edu/~qtian/HappinessOfHumanity.htm), and some of the early exploratory agent-based models, such as Robert Axelrod’s (1997a) culture dissemination model, fall into this category. I tend to think of such models as intellectual exercises that attempt to illustrate some insight.
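Axelrod’s culture dissemination model can be sketched in a few lines. The version below is a minimal illustration, assuming a torus grid and von Neumann neighbors; it is not Axelrod’s original implementation, and the grid size and trait counts are chosen only for speed.

```python
import random

def axelrod_step(grid, size, features, rng):
    """One interaction: a random agent copies one differing cultural feature
    from a random neighbor, with probability equal to their similarity."""
    x, y = rng.randrange(size), rng.randrange(size)
    dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    a = grid[x][y]
    b = grid[(x + dx) % size][(y + dy) % size]
    differing = [i for i in range(features) if a[i] != b[i]]
    similarity = (features - len(differing)) / features
    if differing and rng.random() < similarity:
        f = rng.choice(differing)
        a[f] = b[f]

def run_culture_model(size=5, features=3, traits=4, steps=20000, seed=0):
    """Return the number of distinct cultures remaining after `steps`."""
    rng = random.Random(seed)
    grid = [[[rng.randrange(traits) for _ in range(features)]
             for _ in range(size)] for _ in range(size)]
    for _ in range(steps):
        axelrod_step(grid, size, features, rng)
    return len({tuple(cell) for row in grid for cell in row})

remaining = run_culture_model()
```

The stylized pattern of interest, whether local convergence produces global polarization, is read off from how many distinct cultural regions survive.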

The third type of model is used for prediction (e.g., An et al. 2005) or, as in this study, has clear policy implications. For these models, validation at all three levels is essential to achieve sufficient credibility to persuade policy-makers. In other words, the axioms must largely reflect facts. This is close to Steve Bankes’s (1993) notion of “consolidative models.” Bankes (1993) offers an interesting discussion of the important role of “exploratory models” for policy analysis. However, even the exploratory capability of a model needs to rely on a certain level of understanding about a system to be useful for policy analysis. In implementing agent-based models, we almost always make some assumptions, but the more our axioms rely on assumptions rather than facts, the less credible will be our inferences from the model experiments. There are also technical issues associated with using too many assumptions, which I will discuss later.

These three model types are intended for different purposes, and the validation levels required for them differ as well. To make agent-based modeling a rigorous research method, we should be clear about the modeling purpose and our assumptions, just as mathematicians explicitly list their axioms. We should also discuss how the assumptions may affect our conclusions. For important assumptions, it may be necessary to do additional experiments to examine their potential impacts on model outcomes. Two analytical tools especially useful for analyzing axioms are sensitivity analysis and robustness analysis.

7.2.3.2 Sensitivity Analysis

Sensitivity analysis tests how changes in a model’s parameters or variables can affect outcomes (Railsback and Grimm 2011). We can apply sensitivity analysis when we lack reliable or accurate estimates about a model parameter or variable. If the results are sensitive to small changes in a model parameter or variable, we need to collect additional data to improve the estimates. Sensitivity analysis can also be used for model verification and validation (e.g., An et al. 2005). We can vary the parameter or variable values to explore how this affects outcome variables. If the patterns of change do not conform to our expectations (based on our theoretical understanding or empirical work), we need to examine model design and implementation to make sure the computer code is correct and the conceptual model is “right.”

Scenarios that combine extreme values of parameters or variables are particularly useful because it is relatively easy to discern how the simulated system should behave under them. As the numbers of parameters and variables increase, it can quickly grow burdensome to conduct systematic model experiments using all possible combinations; sensitive parameters or variables identified by sensitivity analysis can help narrow the range of possible scenarios (e.g., Happe et al. 2006).
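The logic above can be sketched as follows. The `toy_model` response surface is purely hypothetical (in practice each call would run the full agent-based model), but the one-at-a-time sweep and the corner-scenario enumeration are the general techniques.

```python
from itertools import product

def toy_model(params):
    """Hypothetical stand-in for a full simulation: maps parameters to one
    outcome value (illustration only)."""
    return params["subsidy"] * params["labor"] - 0.5 * params["subsidy"] ** 2

def extreme_scenarios(param_ranges):
    """Run the model at every combination of parameter extremes
    (2^k corner scenarios) and report the outcome for each."""
    names = list(param_ranges)
    results = {}
    for corner in product(*[(lo, hi) for lo, hi in param_ranges.values()]):
        results[corner] = toy_model(dict(zip(names, corner)))
    return results

def sensitivity(param_ranges, name, n=5):
    """One-at-a-time sweep: vary one parameter over its range, hold the
    others at their midpoints, return (value, outcome) pairs."""
    base = {k: (lo + hi) / 2 for k, (lo, hi) in param_ranges.items()}
    lo, hi = param_ranges[name]
    sweep = []
    for i in range(n):
        v = lo + i * (hi - lo) / (n - 1)
        sweep.append((v, toy_model(dict(base, **{name: v}))))
    return sweep

ranges = {"subsidy": (0.0, 2.0), "labor": (0.0, 1.0)}
corners = extreme_scenarios(ranges)
sweep = sensitivity(ranges, "subsidy")
```

With k parameters the corner enumeration grows as 2^k, which is exactly why sensitivity results are used to prune the scenario space first.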

Sensitive parameters or variables can be useful for policy interventions. For example, An et al. (2005) identify several variables to which household electricity consumption and, consequently, panda habitat in the Wolong National Nature Reserve, are sensitive. Among them, the age at which people marry and the price for electricity could help formulate policy interventions for habitat conservation. The PLR model shows that in villages with average farmland, the decisions of households to rent out farmland are sensitive to the size of the rental subsidy. This insight could be used by policy-makers to choose a subsidy amount, for example, one that influences land rental markets cost effectively, or one that allows for farmland concentration synchronized with rural labor transfer to the urban sector.

7.2.3.3 Robustness Analysis

Robustness analysis tests how a specific component of a model’s implementation affects model outcomes. For example, we can test alternative representations of agent decision making or alternative distributions of agent attributes. If we know the distribution of a parameter or variable, we can explore the distribution of an outcome variable to understand the uncertainty of model outcomes. Every assumption is theoretically subject to robustness analysis. In practice, however, it is impossible to test every one because agent-based models usually make a great many assumptions.

We should at least try to examine the major assumptions. If the model still produces the same outcomes with alternative implementations, the model results are robust and the assumptions are not problematic. Otherwise, we would need to do additional research to learn more about the real system. Despite all the effort made to understand rural development in the PLR through empirical research, there are still some unknown elements in the system. The robustness tests against two major assumptions—that current grain subsidies are based on actual areas planted with rice, and that all farmland rental contracts involve payments—not only enhance the credibility of the model but also improve our understanding of policy effects.
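The mechanics of such a test can be sketched as follows. The two decision rules below are hypothetical stand-ins, not the PLR model’s actual rules; the point is the comparison of outcomes across alternative implementations of the same component.

```python
import random

def rule_threshold(income_gap, subsidy, rng):
    """Hypothetical rule 1: rent out land if the subsidy covers the gap."""
    return subsidy >= income_gap

def rule_probabilistic(income_gap, subsidy, rng):
    """Hypothetical rule 2: rent out with probability rising in the subsidy."""
    return rng.random() < subsidy / (subsidy + income_gap)

def simulate(decision_rule, n_households=1000, subsidy=0.5, seed=1):
    """Fraction of households renting out land under a given decision rule."""
    rng = random.Random(seed)
    gaps = [rng.random() for _ in range(n_households)]
    return sum(decision_rule(g, subsidy, rng) for g in gaps) / n_households

def robustness_check(rules, tolerance=0.1):
    """Results are robust if alternative implementations yield similar outcomes."""
    outcomes = [simulate(rule) for rule in rules]
    return max(outcomes) - min(outcomes) <= tolerance, outcomes

robust, outcomes = robustness_check([rule_threshold, rule_probabilistic])
```

The tolerance encodes a judgment call: how much divergence between implementations we are willing to accept before concluding that the assumption matters.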

We can also use robustness analysis as an analytical tool to understand our creations. What is the specific contribution of a given component to model outcomes? What is the relative importance of a model’s major components? We can remove a component from a model to understand its contribution to model outcomes. This allows us to look into the black box and unravel the inner workings of an agent-based model, and helps us explain why a model behaves in a certain way or produces certain outcomes. Such explanation also helps us to communicate with non-ABM modelers and convince policy-makers.

These analysis results may be used to simplify a model as well. The Einstein principle is a good guideline for modeling: models should be made as simple as possible, but not simpler. Robustness analysis is a useful technique for finding that “right” model by teasing out relevant but unimportant components. For example, social relations in the farmer household model are relevant to the negotiation of farmland contracts and make the model appear more realistic. But they carry little weight for model outcomes, and the model could be simpler without them. In fact, a parallel model implemented in Python without social relations produces the same dynamics and results.

7.2.4 Models as Projection Systems

Holland’s notion of models as axiom systems is very useful; we may further think of all models as projection systems from some elements to outcomes. Mathematicians start with axioms (elements) and use logical deduction rules (the projection system) to infer system behavior. For regression models, and mathematical models more generally, the elements are state variables, and the projection system is a formula; to define a mathematical model, the modeler needs to choose the form of the formula and the variables.

For agent-based models, the elements are many and diverse, including agent attributes, agent decision making, the attributes and dynamics of the environment, interactions, feedback, and often some stochastic elements. The computer program that weaves all these elements together is the projection system. The modeler must decide which elements of the real system to include and how to relate these elements to one another in the model, necessarily making numerous assumptions. The projection system is thus not as straightforward as a mathematical formula or as clean as deduction rules. From this perspective, we can see more clearly why the benefit of using an agent-based model to represent micro-level processes also creates challenges for its modeler.
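Seen this way, even a minimal agent-based model makes the distinction concrete. The sketch below is a standard wealth-exchange toy model, not any model from this study; it separates the elements (agent attributes, a decision rule, stochasticity) from the projection system (the code that weaves them into a macro outcome).

```python
import random

class Agent:
    """An element: attributes plus a decision rule."""
    def __init__(self, wealth, rng):
        self.wealth = wealth
        self.rng = rng

    def step(self, others):
        # Decision rule: give one unit of wealth to a random other agent.
        if self.wealth > 0 and others:
            self.rng.choice(others).wealth += 1
            self.wealth -= 1

class Model:
    """The projection system: weaves agents, interaction, and stochasticity
    into a mapping from initial elements to a macro outcome."""
    def __init__(self, n=100, seed=0):
        self.rng = random.Random(seed)
        self.agents = [Agent(1, self.rng) for _ in range(n)]

    def run(self, steps=1000):
        for _ in range(steps):
            agent = self.rng.choice(self.agents)
            agent.step([a for a in self.agents if a is not agent])
        return self

    def gini(self):
        """Macro outcome: wealth inequality across agents (0 = equal)."""
        w = sorted(a.wealth for a in self.agents)
        n, total = len(w), sum(w)
        weighted = sum((i + 1) * x for i, x in enumerate(w))
        return 2 * weighted / (n * total) - (n + 1) / n

model = Model().run()
```

Even here the macro outcome (an unequal wealth distribution) is not written anywhere in the axioms; it is produced by the projection system working out their consequences.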

On the other hand, as projection systems, agent-based models are not so different from other types of models. In fact, an agent-based model can be approximated by a mathematical model (most likely nonlinear) that directly relates model parameters and variables to model outcomes, ignoring agents and their actions and interactions (e.g., Happe et al. 2006). For all model types, model outcomes depend on nothing more than the elements we select and the projection system we use. How much truth we attach to axioms, model elements, and mechanisms affects our confidence in the model and what we can claim from modeling results.

We know that for mathematical models, and for regression models in particular, more variables increase goodness of fit—but the best-fitting model may not be the most useful. We know that higher-order mathematical formulas generally fit data better—but the model’s predictive ability may decrease, as shown by Gigerenzer and Brighton (2009). Similarly, more details in agent-based models do not necessarily improve the model; too many details can make a model lose generality and become less useful for explaining other systems, or make it problematic for predicting future scenarios, because those relevant but inessential details can vary among similar systems or easily change in the future. Again, robustness analysis is helpful for finding the “right” agent-based model, just as stepwise techniques are useful for finding the “right” regression model.
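The fit-versus-prediction trade-off can be demonstrated directly. The sketch below is a self-contained illustration with synthetic data, not drawn from Gigerenzer and Brighton’s study: it fits polynomials of degree 1 and degree 6 to noisy linear data. The higher-degree polynomial necessarily fits the training data at least as well, but that extra fit captures noise rather than structure.

```python
import random

def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations
    (adequate for small degrees; illustration only)."""
    n = degree + 1
    # Normal equations A c = b for the monomial basis 1, x, ..., x^degree.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting, then back substitution.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coefs = [0.0] * n
    for i in range(n - 1, -1, -1):
        coefs[i] = (b[i] - sum(A[i][j] * coefs[j]
                               for j in range(i + 1, n))) / A[i][i]
    return coefs

def mse(coefs, xs, ys):
    """Mean squared error of the fitted polynomial on (xs, ys)."""
    preds = [sum(c * x ** k for k, c in enumerate(coefs)) for x in xs]
    return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

def true_line(x):
    return 2 * x + 1

rng = random.Random(0)
train_x = [i / 10 for i in range(10)]
test_x = [i / 10 + 0.05 for i in range(10)]
train_y = [true_line(x) + rng.gauss(0, 0.3) for x in train_x]
test_y = [true_line(x) + rng.gauss(0, 0.3) for x in test_x]

errors = {}  # degree -> (training MSE, holdout MSE)
for degree in (1, 6):
    c = polyfit(train_x, train_y, degree)
    errors[degree] = (mse(c, train_x, train_y), mse(c, test_x, test_y))
```

Comparing training error against holdout error for the two degrees is the regression analogue of comparing a detail-rich agent-based model against a leaner one.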

Modeling is essentially about exploring the unknowns of a system based on what we know—we build a model based on what we know to learn new things about a system. The model’s ability to bring new understanding therefore rests on what we know. With agent-based modeling, we can gain new understanding by exploring scenarios, and we can do experiments to explore plausible scenarios. But when too little is known about the real system, the number of scenarios we must test grows exponentially with our assumptions. Systematic model experiments become overwhelming, and even techniques like sensitivity analysis and robustness analysis can become ineffective.

To model for policy analysis, then, it is essential to learn as much as possible about a system. This helps us to ask meaningful questions about the system and provides insight about how to design alternative policies to influence the system. This is important for model conceptualization and validation, and can also mitigate the practical issue of experiment analysis just described.

7.2.5 Unlock the Modeling Potential for Policy Analysis

Agent-based modeling is useful for evaluating policy effects, but we can take it further, using models to explore policy levers, tipping points, adaptive policy, robust policy, unintended consequences, and disastrous future outcomes. Agent-based models are particularly powerful for addressing “what if” questions. Goolsby and Cioffi-Revilla (2011) raise many great “what if” questions about development and disaster response in sub-Saharan Africa, where social conflicts, unstable governments, and climate all contribute to low levels of development and human well-being.

Models are excellent adjuncts to human intellect, and we can combine models and human intellect to better inform policy decisions (Lempert 2003). Humans have an incredible ability to recognize patterns and make inferences with limited information. We also possess contextual and qualitative knowledge that is difficult to implement in a model. A computer cannot capture the richness of human experience but is capable of computing a large number of scenarios. If we offer policy-makers modeling results about the performance of multiple policy options, rather than just one, across many scenarios, policy-makers can integrate their unique human capabilities and other sources of information as they consider policy choices. In this study, for example, the model provides insights into the effects of different subsidies on rural development at different stages of development across multiple outcome variables. This gives policy-makers the flexibility, under a variety of scenarios, to consider and choose appropriate options and to draw on contextual information—such as generational changes, which are not represented in the model but play an important role in influencing the success of subsidies to large farms.

We can combine agent-based modeling with other methods to enhance its capabilities for policy analysis (see also O’Sullivan et al. 2016). For example, we can combine mathematical tools developed in systems dynamics (LaSalle and Lefschetz 1961; Martynyuk 1998; Bramson 2009, 2010) and bring in data-mining techniques, such as evolutionary algorithms, to explore the model parameter space and data produced by agent-based models (e.g., Miller 1998). This can help identify conditions that lead to disastrous outcomes, generate insights about robust policy, and inform adaptive policymaking. We can integrate GIS within an agent-based model to explore spatial effects (see Torrens 2010; Heppenstall et al. 2012; Malanson and Walsh 2015). Geospatial agent-based models are particularly useful for disaster evacuation and rescue planning (e.g., Crooks and Wise 2013; Crooks et al. 2015). We can also integrate social network analysis (see Wasserman and Faust 1994; Barabási 2002; Newman et al. 2006) with agent-based models to explore social influences (e.g., Andrei et al. 2014). Real social network data are often difficult to collect, and modeling can help explore situations associated with incomplete information or uncertainty.
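To illustrate the parameter-space exploration, here is a minimal evolutionary search. The `model_outcome` function is a hypothetical stand-in; in a real analysis each evaluation would run the agent-based model and read off an outcome variable. The select-and-mutate loop is the core of the technique.

```python
import random

def model_outcome(params):
    """Hypothetical response surface standing in for an ABM run;
    peak at s = 0.7, t = 0.3 (illustration only)."""
    s, t = params
    return -(s - 0.7) ** 2 - (t - 0.3) ** 2

def mutate(params, rng, sigma=0.05):
    """Perturb each parameter with Gaussian noise, clamped to [0, 1]."""
    return tuple(max(0.0, min(1.0, p + rng.gauss(0, sigma))) for p in params)

def evolve(generations=200, pop_size=20, seed=0):
    """Minimal evolutionary search: keep the best half of the population
    each generation and mutate it to refill the population."""
    rng = random.Random(seed)
    pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=model_outcome, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(p, rng) for p in parents]
    return max(pop, key=model_outcome)

best = evolve()
```

The same loop can be pointed at a minimization target instead, for example searching for the parameter combinations that produce disastrous outcomes.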

Social network analysis, as another technique for analyzing complex systems, brings unique insights about policy interventions. Social network-based principles have long been used to effect change in the real world. Such interventions may aim to control or accelerate diffusion processes in social networks (e.g., to contain contagious disease or promote innovation), stabilize or destroy system structure (e.g., enhance the stability of electrical grids, eliminate criminal or terrorist networks), or improve system performance (e.g., increase voting participation, improve organizational efficiency). Social network-based interventions can target nodes, links, groups, or the overall network structure to influence system-level outcomes (see Valente 2012). Social network analysis is an area where the complexity approach has been relatively successful in influencing policy, particularly in epidemiology.
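Node targeting can be sketched minimally as follows, assuming only an edge list and using degree as the simplest centrality measure; real interventions typically weigh richer measures such as betweenness or eigenvector centrality (Valente 2012). The example network is hypothetical.

```python
def degree_centrality(edges):
    """Node degree computed from an undirected edge list."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return degree

def top_targets(edges, k):
    """The k most-connected nodes, e.g., to seed an innovation or to
    prioritize for vaccination (ties broken alphabetically)."""
    degree = degree_centrality(edges)
    return sorted(degree, key=lambda n: (-degree[n], n))[:k]

# Hypothetical village network.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]
targets = top_targets(edges, 2)
```

Targeting the most-connected nodes first is the simplest diffusion-acceleration strategy; the same ranking, inverted, identifies nodes whose removal fragments a network.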

Social network analysis and social network-based interventions are large topics, beyond the scope of this book. The point here is that social interactions could be policy levers for influencing individual behavior to curb negative outcomes or foster positive changes (e.g., Centola 2010; Rand et al. 2011; Bond et al. 2012). Social network-based interventions are therefore an important part of “smart” policy. As social media and smart devices become more popular, social networks in cyberspace will likely exert increasing influence over individual behavior and could be used for policy purposes. To make “smart” use of social media for policy interventions, again, we need to understand how these virtual relationships affect individual behavior in the first place.

7.3 An Unfolding End

Agent-based modeling has become increasingly popular in a growing number of fields to simulate various systems, but advances in the theoretical understanding of complex adaptive systems are slow. According to Murray Gell-Mann and the late John Holland, founding fathers of CAS research, these systems are difficult to study and we are only just beginning to understand them.

I have no doubt that the science of complexity is the science of the twenty-first century, as Stephen Hawking said. But I think it may be helpful if we shift away from the broader notion of complexity, focus instead on some of the specific properties of complex systems, and emphasize the CAS approach to examining micro-level processes. (The very notion of complexity, to some skeptics, suggests something that is unknowable, contributing to suspicions about the science of complexity.)

Sustainability is a common property of many complex adaptive systems, from social organizations to economic systems and human civilizations, and can be an organizing concept for studying CAS more generally. These systems all “grow” in some way. And it is generally desirable for them to exhibit resilience. However, they also seem to share a common cycle of fast growth, stagnancy, decay, and collapse. It appears that growth and resilience are somehow intertwined, and even at odds with each other at times.

We can characterize the sustainability of complex adaptive systems in terms of growth and resilience, and define it as continuous, resilient growth. Investigating the fundamental mechanisms underlying sustainability will expand and deepen our general understanding of complex adaptive systems, and also bring profound insight into how policy can foster changes to promote growth and enhance resilience in such systems. I believe that niches, which interested John Holland late in his life (see Holland 2012, 2014), play a large role in such mechanisms. And I can envision how niches are responsible for those common evolutionary patterns in complex adaptive systems—but this is for future research.

Thus, my inquiry about the sustainability of coupled human-environment systems, which started a decade ago, has arrived at this point, an unending end. Our quest for understanding human-environment systems, and complex adaptive systems more generally, will continue to unfold and expand. It is in that quest, that examination of the deep unknown, that one discovers the purpose and the joy behind all scientific inquiry.