
1 Introduction

There is a wide array of simulation methods that mimic the mechanisms of human intelligence to achieve one or more objectives. Analytical simulation approaches use equations that explain data, while statistical ones work primarily with probabilities; an iterative combination of the two uses feedback to tackle problems too complex to be solved by a single equation. Most of these equation-based mathematical models identify system variables and evaluate or integrate sets of equations relating those variables. One variant of such equation-based models is based on linear programming (Howitt 1995; Weinberg et al. 1993) and is potentially linked to geographical information science (GIS) data (Chuvieco 1993; Cromley and Hanink 1999; Longley et al. 1994). In practice, however, only limited levels of complexity can be built into these models (Parker et al. 2003).

To incorporate complexity, sets of differential equations linked through intermediary functions and data structures are sometimes used to represent stocks and flows of information (Gilbert and Troitzsch 1999). Although they include human and ecological interactions, these systemic models tend to have difficulties accommodating spatial relationships (Baker 1989; Sklar and Costanza 1991). Given their power and ease of use, statistical simulation approaches have been widely accepted, largely because they encompass a variety of regression techniques applied to space as well as more tailored spatial statistical methods (Ludeke et al. 1990; Mertens and Lambin 1997). However, according to Parker et al. (2003), unless tied to theoretical frameworks, statistical models tend to downplay decision-making and social phenomena. Other simulation approaches express qualitative knowledge in a quantitative fashion by combining expert judgement with probability techniques such as Bayesian or artificial intelligence approaches (Parker et al. 2003).

The gaps and inconsistencies left by these modeling approaches led to the proliferation of cellular automata (CA) in combination with Markov models. In CA, each cell exists in one of a finite set of states, and future states depend on transition rules based on a local spatio-temporal neighborhood (Kamusoko et al. 2009), while in Markov models, cell states depend probabilistically on temporally lagged cell state values. These cellular models (CMs) underlie many land-use studies in which Markov–CA combinations are common (Balzter et al. 1998; Li and Reynolds 1997; Kamusoko et al. 2009). Many CMs assume that the actions of human agents are important, and others assume a set of agents coincident with lattice cells and use transition rules as proxies for decision-making; both, however, fail to simulate decisions expressly and explicitly (Parker et al. 2003). In the latter case, the actor is not tied to locations and, as Hogeweg (1988) observed, this introduces problems of spatial orientation to the extent that the intrinsic neighborliness of CA relationships does not reflect the actual spatial relationships. This highlights the main challenge faced by CMs and most of the aforementioned modeling approaches when it comes to incorporating individualistic human decision-making (Parker et al. 2003). When the focus is on human actions, agents become the crucial components in the model. While cellular models focus on landscapes and transitions, agent-based models (ABMs) focus primarily on humans and their actions. It is therefore not surprising that ABM is more of a mindset, one that describes a system from the perspective of its constituent units, than a technology.
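
To make the distinction concrete, the short sketch below (a minimal Python illustration, not any published model) first applies a CA-style neighborhood rule and then a Markov transition whose probabilities are assumed purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative cell states: 0 = non-forest, 1 = forest.
grid = rng.integers(0, 2, size=(50, 50))

def ca_step(grid, threshold=5):
    """CA rule: a cell becomes forest when at least `threshold` of its
    eight neighbors are forest, otherwise it becomes non-forest."""
    h, w = grid.shape
    padded = np.pad(grid, 1)
    neighbors = sum(
        padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return (neighbors >= threshold).astype(grid.dtype)

# Markov component: the next state of each cell depends only on its
# current (temporally lagged) state via a transition matrix (assumed values).
P = np.array([[0.9, 0.1],   # row 0: P(stay non-forest), P(become forest)
              [0.2, 0.8]])  # row 1: P(become non-forest), P(stay forest)

def markov_step(grid):
    stay = np.where(grid == 0, P[0, 0], P[1, 1])
    return np.where(rng.random(grid.shape) < stay, grid, 1 - grid)

grid = markov_step(ca_step(grid))   # one combined Markov-CA update
```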

The benefits of ABMs over other modeling techniques can be expressed in three statements: (1) they capture emergent phenomena; (2) they provide a natural description of a system; and (3) they are flexible. It is clear, however, that the ability of ABMs to deal with emergent phenomena is what drives the other benefits (Bonabeau 2002). Emergent phenomena result from the interactions of individual entities and cannot be reduced to the system’s parts: the whole is more than the sum of its parts because of the interactions between the parts (Bonabeau 2002). In the geographical context of level and scale, Auyang (1998) understands “emergence” to mean that emergent phenomena at one level constitute the units of interaction, or drivers of change, at a higher level.

There is a wide range of literature discussing the application of ABMs to a number of global environmental challenges, in which agents have been used to represent entities as varied as atoms, biological cells, animals, people, and organizations (Conte et al. 1997; Epstein and Axtell 1996; Janssen and Jager 2000; Liebrand et al. 1988; Weiss 1999). In this chapter we seek to add to the current discussion about ABMs in land-use modeling, some of which follow the conceptual framework shown in Fig. 10.1. The rest of the chapter is organized as follows. We begin by presenting the history of ABMs, followed by the concepts of agent modeling and the tools available for simulations, with a bias towards land-use modeling. We then outline the work carried out so far in agent-based land-use modeling and discuss a selected set of applications. In conclusion, we discuss the advantages and limitations currently facing ABMs, and try to anticipate their future use.

Fig. 10.1 A conceptual framework for a farm-based decision-making ABM (adapted from Deadman et al. 2004)

2 History of ABMs

Agent-based modeling can be traced back, in some cases hundreds of years, to ideas that include Adam Smith’s invisible hand in economics, Donald Hebb’s cell assembly, and the blind watchmaker of Darwinian evolution (Axelrod and Cohen 2000). In each of these early theories, simple individual entities interact with each other to produce complex new phenomena that seemingly emerge from nowhere (Heath 2010). Because of Newton’s reductionist philosophy (Gleick 1987) and the lack of tools to adequately study and understand emergent phenomena, it was not until the theoretical and technological advances that led to the invention of the computer that scientists began building models of these complex systems and gaining a better understanding of their behavior (Heath 2010). The pioneering work was carried out by Alan Turing with the invention of the Turing machine around 1937. By replicating any mathematical process, the Turing machine showed that machines were capable of representing real-world systems (Heath 2010). The theoretical scientific belief that machines could recreate the non-linear systems observed in nature received a further boost when Turing and Church later developed the Church–Turing hypothesis, which stated that a machine could duplicate not only the functions of mathematics but also the functions of nature (Levy 1992). Premised on von Neumann’s heuristic use (von Neumann 1966), these machines have since moved from theoretical ideas to the real computers that we are familiar with today (Heath 2010).

Once computers were here to stay, the scientific focus shifted towards synthesizing the complexity of natural systems. Influenced by a reductionist philosophy, most scientists took a top-down approach (Heath 2010). Evidence of this is seen in early applications of artificial intelligence, where the focus was more on defining the rules of the appearance of intelligence and creating intelligent solutions than on the structure that creates intelligence (Casti 1995). This approach was skewed towards the idea that systems are linear, and thus it failed to enhance our understanding of the complex non-linear systems found in nature (Langton 1989). A U-turn towards a bottom-up approach followed when Ulam suggested that von Neumann’s self-reproducing machine could be represented more easily using cellular automata (CA) (Langton 1989). CA are self-operating entities that exist in individual cells adjacent to one another in a 2D space, like a checkerboard, and are capable of interacting with the cells around them. According to Heath (2010), the impact of the CA approach was profound for two reasons: (1) because the cells in CA act autonomously and simultaneously with other cells in the system, the simulation process changed from serial to parallel representation; and (2) CA systems are composed of many locally controlled cells that together create global behavior. The former was important because many natural systems are widely accepted to be parallel systems (von Neumann 1966), while the latter led to the bottom-up approach, as the CA architecture requires engineering a cell’s logic at the local level in the hope that it will create the desired global behavior (Langton 1989).

After scientists had learned how to synthesize complex systems and discovered some of their properties using CA, complex adaptive systems (CASs) began to emerge as the direct historical root of ABMs (Heath 2010). Drawing much of their inspiration from biological systems, CASs were mainly concerned with how complex adaptive behavior emerges in nature from interactions among autonomous agents (Dawid and Dermietzel 2006). Much of the early work in defining and designing CASs resulted from Holland’s efforts to identify the properties and mechanisms that compose all ABMs as we know them today (Buchta et al. 2003). Holland reported the three main properties of CASs to be aggregation; non-linearity, the idea that the output of the whole system is greater than the sum of the outputs of its individual components; and diversity, meaning that agents do not all act in the same way when stimulated by the same set of conditions.

It is evident that ABMs emerged from the scientific effort to understand non-linear systems, and this helps to explain why ABMs are a useful research tool. In summary, many subject areas played an important role in developing the multidisciplinary field of ABMs.

3 Agent Modeling

Parker and Meretsky (2004) noted that ABMs often model complex dynamic systems and focus on the macro-scale, or “emergent,” phenomena that result from the decentralized decisions of, and interactions between, the agents. The concept behind ABMs, which was borrowed from the computer sciences, is to mimic human- or animal-like agents interacting at the micro-scale in a computer simulation in order to study how their aggregation leads to complex macro-behavior and phenomena (Berger 2001).

ABMs build on a successful specification of the agent itself, its behavior, the representation of the environment, and the interactions between them. The term agent refers to any individual or group of individuals who exist in a given area and are capable of making decisions for themselves or for that area. Generally, an agent can represent any level of organization (a herd, a village, an institution, etc.) (Verburg 2006). In land-use modeling, these agents couple a human system making land-use decisions with an environmental system represented by a raster grid (Deadman et al. 2004; see Fig. 10.1).

The specification of the behavior of agents demands a proper description of the actual actions of the agents and of the basic elements that cause modifications in their environment and in other agents (Bandini et al. 2009). It also demands mechanisms by which agents can effectively select the actions to be carried out. The mechanism of an agent refers to the internal structure responsible for the selection of actions (Russell and Norvig 1995), while the actions of agents pertain to descriptions of what agents do, for instance state transformation, environmental modification, an agent’s perception and responsiveness, and the physical displacement of an agent in the environment. The description of the environment of an agent (see Weyns et al. 2007 for a detailed definition) should, among other things, primarily define and enforce the rules of behavior of an agent and maintain the internal dynamics of the system to avoid chaos. At the same time, it should also support an agent’s perception and localized actions by embedding, and supporting access to, objects and parts of the system that are not necessarily modeled as agents (Bandini et al. 2009). Interaction, both with other agents and with the environment, is a key aspect of agent design. Several definitions of interaction have been provided, and most of them focus on the ability of agents to engage with the environment and with other agents in a meaningful problem-solving or goal-oriented scheme, achieving particular objectives according to the coordination, cooperation and competition practices of natural phenomena.
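
A deliberately simplified sketch of these ingredients is given below: an agent with a perceive, select-action, and act cycle operating on a raster environment. The class names, the soil attribute, and the threshold rules are hypothetical choices made for illustration and are not drawn from any of the frameworks cited in this chapter.

```python
from dataclasses import dataclass
import random

@dataclass
class Environment:
    """Raster environment: each cell holds a land cover and a soil quality."""
    size: int = 20

    def __post_init__(self):
        self.cover = [["forest"] * self.size for _ in range(self.size)]
        self.soil = [[1.0] * self.size for _ in range(self.size)]  # 0 = degraded, 1 = fertile

@dataclass
class HouseholdAgent:
    """An agent with a perceive, select-action, act cycle on its own cell."""
    row: int
    col: int

    def perceive(self, env):
        # Perception: read the state of the cell the agent occupies.
        return env.cover[self.row][self.col], env.soil[self.row][self.col]

    def select_action(self, cover, soil):
        # Mechanism: a simple internal rule for choosing among actions.
        if cover == "crop" and soil < 0.3:
            return "fallow"                 # rest degraded land
        if cover in ("forest", "fallow") and soil >= 0.3:
            return "clear_and_crop"         # bring fertile land into cultivation
        return "do_nothing"

    def act(self, action, env):
        # Actions modify the shared environment (and, indirectly, other agents).
        if action == "clear_and_crop":
            env.cover[self.row][self.col] = "crop"
            delta = -0.2                    # cultivation depletes the soil
        elif action == "fallow":
            env.cover[self.row][self.col] = "fallow"
            delta = 0.1                     # resting restores the soil
        else:
            delta = -0.1 if env.cover[self.row][self.col] == "crop" else 0.05
        env.soil[self.row][self.col] = min(1.0, max(0.0, env.soil[self.row][self.col] + delta))

env = Environment()
agents = [HouseholdAgent(random.randrange(env.size), random.randrange(env.size))
          for _ in range(50)]
for _ in range(10):                          # ten simulation steps
    for agent in agents:
        agent.act(agent.select_action(*agent.perceive(env)), env)
```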

These concepts have been the subject of experiments on many platforms, the choice of which tends to depend largely on the researcher’s preference, the computational requirements, and the overall objectives of the study. Most ABM platforms follow the “framework and library” paradigm (Railsback et al. 2006): a framework is a set of standard concepts for designing and describing ABMs, while a library is a set of software implementing the framework and providing simulation tools. Without trying to be exhaustive, we present some of the commonly available agent modeling platforms. The earliest of these is Swarm (Minar et al. 1996; www.swarm.org), whose libraries were written in Objective-C, with later updates adding Java Swarm in order to allow the use of Swarm’s Objective-C library from Java (Railsback et al. 2006). The recursive porous agent simulation toolkit (RePast) (Collier 2000; http://repast.sourceforge.net/) was first developed as a Java implementation of Swarm, but has since evolved into a fully fledged stand-alone Java platform. MASON (Luke et al. 2005; http://cs.gmu.edu/%7Eeclab/projects/mason/) was developed later, also as a Java-implemented tool. Although these platforms provide standardized software designs and tools without limiting the type or complexity of the models they implement, they have well-known limitations (Railsback et al. 2006). According to Tobias and Hofmann (2004), their weaknesses include difficulty of use; insufficient tools for building models, especially for representing space; insufficient tools for executing and observing simulation experiments; and a lack of tools for documenting and communicating software. The Logo family evolved in response to such limitations, with the aim of providing a high-level platform that allows model building and learning from simple ABMs (Railsback et al. 2006). Although built on elementary-level principles primarily to aid student learning, NetLogo (http://ccl.northwestern.edu/netlogo/) now contains complex capabilities and is arguably the most widely used platform (Railsback et al. 2006). Figure 10.2 is a screenshot of the NetLogo platform, which comes with its own programming language (claimed to be simpler to use than Java or Objective-C), an animation display automatically linked to the program, and optional graphical controls and charts.

Fig. 10.2 A NetLogo ABM platform

A model agent is an abstract representation of the real world: the landscape, individuals or groups, and the processes that link these components. Model agents are developed at varying levels of complexity and scales of representation, but their development should offer a level of realism that does not inhibit the validation techniques to be used later (Deadman et al. 2004). An agent tends to act as an interface that assimilates broader macro-information into the decision-making process at the grid level, generating an action in response to natural and economic stimuli (Rajan and Shibasaki 2000). In land-use modeling, this macro-information comes in the form of the biophysical conditions in the area and the prevailing economic conditions at a given location and time.
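
One compact way to picture this interface is to compute, for every grid cell, an expected return that blends biophysical and economic stimuli, as in the sketch below; the yield formula, prices, and costs are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Macro-information supplied to the agent as raster layers (synthetic values).
soil_quality = rng.uniform(0.2, 1.0, size=(30, 30))   # biophysical condition
slope = rng.uniform(0.0, 0.5, size=(30, 30))           # biophysical condition
maize_price = 180.0        # prevailing market price per tonne (assumed)
cultivation_cost = 60.0    # cost per cell per season (assumed)

# Grid-level decision stimulus: expected net return of cultivating each cell.
expected_yield = 3.0 * soil_quality * (1.0 - slope)    # tonnes per cell (toy formula)
net_return = expected_yield * maize_price - cultivation_cost

# The agent responds to the stimulus, e.g. by cultivating only profitable cells.
cultivate = net_return > 0
best_cell = np.unravel_index(np.argmax(net_return), net_return.shape)
print(f"{cultivate.sum()} of {cultivate.size} cells are profitable; "
      f"best cell is {best_cell} with return {net_return[best_cell]:.0f}")
```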

Mismatches between the units of analysis and the units of actual decision-making have been widely acknowledged, and attention is slowly shifting from pixels to agents (Verburg et al. 2005). In land use/cover change (LUCC) modeling, for instance, the overarching problem has been linking agents capable of decision-making to land areas: i.e. linking “people and pixels” (Geoghegan et al. 1998; Rindfuss et al. 2003). An expanding group of models has recently used individual agents as units of simulation (see Berger 2001; Bousquet and Le Page 2004; O’Sullivan and Haklay 2000; Parker et al. 2003). While agent-based approaches have specific strengths in describing and exploring decision-making by agents in a variety of fields (see Malleson 2010), they face difficulties in adequately representing the spatial patterns in LUCC models owing to difficulties in representing the feedback between the behavior of the agents and the land units (Verburg 2006). In some ABMs, a cellular automata (CA) approach is used, in which the state of a pixel is determined by the states of the neighboring pixels (Ligtenberg et al. 2004; Manson 2005). Although CA methods are often seen as a type of multi-agent approach because of their explicit treatment of interactions between (spatial) entities, it is hard to regard the pixels themselves as representations of the agents (Couclelis 2001).

In current practice, a cellular component that represents the landscape is coupled with an agent-based component that represents human decision-making (Schreinemachers and Berger 2006; Parker et al. 2003). As the debate leans progressively towards agents and away from pixels, the challenge of how to represent real-world decision-making becomes more apparent. The decision-making structure of an agent falls into two broad categories: optimizing and heuristic. The key difference is that heuristic agents have neither the information to compare all feasible alternatives nor the computational power to select the optimum (Schreinemachers and Berger 2006). Heuristics are relatively simple rules that build on the concept of a search process guided by rational principles (Simon 1957), while optimization requires the ability to process large amounts of information about all feasible alternatives and always select the best one (Schreinemachers and Berger 2006). The intuitive nature of heuristics makes them more transparent and therefore easier to validate; however, constructing a decision tree that is representative of the thought processes of a human being is not easy. A variety of optimization approaches are available, the most common being mathematical programming (see Balmann 1997; Berger 2001; Becu et al. 2003; Happe 2004) and genetic programming (see Manson 2005). Mathematical programming (MP) is a computerized search for the combination of decisions that yields the highest objective function value (Schreinemachers and Berger 2006). Unlike the heuristic approach, MP requires the explicit specification of an objective function. In LUCC modeling, the objectives of the agents, which include cash income, food, and leisure time, tend to be similar for both MP and heuristic approaches. Figure 10.3 gives an example of a heuristic decision-making tree.

Fig. 10.3 A heuristic structure of subsistence farm-based decision-making (adapted from Deadman et al. 2004)
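
To make the contrast concrete, the sketch below implements both decision structures for a hypothetical household allocating land between a subsistence food crop and a cash crop. The land, labor, and price figures, and the use of SciPy's linprog for the mathematical-programming variant, are assumptions of this example rather than details taken from the models cited above.

```python
from scipy.optimize import linprog

# Hypothetical household resources and crop parameters.
land_ha, labor_days = 4.0, 300.0
food_need_ha = 1.5                       # area needed to meet subsistence food demand
labor_per_ha = {"food": 60.0, "cash": 90.0}
income_per_ha = {"food": 0.0, "cash": 500.0}   # food crop is consumed, not sold

def heuristic_decision():
    """Rule-based tree in the spirit of Fig. 10.3: secure food first,
    then allocate remaining land and labor to the cash crop."""
    food = min(food_need_ha, land_ha, labor_days / labor_per_ha["food"])
    labor_left = labor_days - food * labor_per_ha["food"]
    land_left = land_ha - food
    cash = min(land_left, labor_left / labor_per_ha["cash"])
    return {"food": food, "cash": cash}

def optimizing_decision():
    """Mathematical programming: maximize cash income subject to the same
    land, labor, and subsistence constraints (linprog minimizes, so negate)."""
    c = [-income_per_ha["food"], -income_per_ha["cash"]]
    A_ub = [[1.0, 1.0],                                    # land constraint
            [labor_per_ha["food"], labor_per_ha["cash"]],  # labor constraint
            [-1.0, 0.0]]                                   # food area >= requirement
    b_ub = [land_ha, labor_days, -food_need_ha]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    return {"food": res.x[0], "cash": res.x[1]}

print("heuristic :", heuristic_decision())
print("optimizing:", optimizing_decision())
```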

4 ABM Applications

Using a mountainous region in Laos, Wada et al. (2007) developed a micro-scale ABM to simulate the spatial and temporal patterns of shifting cultivation, with the aim of understanding how it expands in space. While ABMs recognize and take advantage of the fact that human decision-making is heterogeneous, decentralized and autonomous (Parker et al. 2003), this is a representative case in which individual behavior is conspicuously less heterogeneous and less decentralized. The base unit in the model was a cluster of villages rather than individual households (see Deadman et al. 2004; Evans and Kelley 2004). The choice of a cluster of villages in the Laotian model was made partly because of the limited availability of spatial data (village boundary data), and partly because decisions to expand and/or relocate shifting cultivation were found to be made at the village level rather than by individual households (Wada et al. 2007).

Underscoring the sporadic, incomplete and mostly non-existent market context of subsistence agriculture, Walker (1999) attempted to account for land allocation beyond the extensive margins of permanent agriculture. He built on the notion of peasantry, in which subsistence farmers require a wide selection of natural commodities to survive and to pursue their cultural activities. In the absence of markets, such commodities tend to be obtained from the forest environment or through agricultural activities of limited scope. When population increases naturally, the pressure brought to bear on land resources results in technological intensification which, in its initial phases, involves a reduction in the rotation times of shifting cultivation. As a result, the nutritive requirements of a household, combined with the accelerated rotations, explained the diversity of crop selection for most households, while the reduced areas of cultivation accounted for the magnitude of production (Walker 1999).

In the field of policy analysis and planning, much work has been done, for example, to evaluate the impact of a number of agricultural policies on regional structural change (Happe 2004), and the impacts of free trade policies on the diffusion of innovation in agricultural regions of Chile (Berger 2001). The pioneering work by Balmann (1997) demonstrated the existence of path dependence in the evolution of land use, which he later used to investigate the effect of reducing price support and introducing compensation payments (Balmann et al. 2002). Several studies have attempted to use ABMs to explore the likely impacts of specific real-world policies (see Weisbuch and Boudjema 1999; Deffuant et al. 2002; Sengupta et al. 2005; Janssen 2001), while others have examined the influence of generic and abstract policies on the behavior of an agent within a system (Janssen et al. 2000).

Deadman et al. (2004) presented a simulation model that explored human understanding of the spatial, social and environmental concerns related to LUCC. Based on a heuristic decision-making strategy, the model utilized household characteristics, among other factors, and the interaction of agents was effected through a labor pool. While subsistence labor demands may not always be significant, significant gender differences have been reported with respect to farm labor within households (Siqueira et al. 2002), much as for population age. Although it varied gender randomly, the LUCITA model (see Deadman et al. 2004) did not pay particular attention to overall population age. Evans and Kelley (2004) analyzed scale and how it affects the design and implementation of LUCC ABMs at the household micro-level. The analysis revealed differences in land-use preference weights that helped to identify scale considerations in the design, development, validation and application of ABMs in LUCC analysis. In their discussion, Evans and Kelley (2004) highlight the complexities of spatial scale and computational capacity limitations, and acknowledge the non-monetary influences on decision-making.

Using ABMs to describe the decision making of land-use parcel managers and cellular automata to represent the landscape, the SLUDGE model explored the impact of distance-dependent spatial externalities and transportation costs on patterns of urban development and land use (Parker and Meretsky 2004). A similar test on the mechanisms behind the growth and spatial patterns of cities was conducted by Torrens and Alberti (2000) in order to address issues of local decision-making in determining urban sprawl. In this study, several metrics were developed to quantify the sprawl patterns. Brown et al. (2004), Loibl and Toetzer (2003), Rajan and Shibasaki (2000), Sanders et al. (1997), Dean et al. (2000), Kohler et al. (2000), Hoffmann et al. (2002), Huigen (2004), and Otter et al. (2001) have all contributed significantly to the use of ABMs by explicitly simulating human decision-making processes rather than using empirical approaches (Mathews et al. 2007).

5 Conclusions

Agent-based modeling is an approach that continues to receive attention in studies of many geographical phenomena. As Mathews et al. (2007) note, this is because it offers a way of incorporating the influence of human decision-making on land use in a mechanistic, formal and spatially explicit way. ABM is therefore a handy tool in developing a greater understanding of the natural world.

Empirical illustrations of observed outcomes have been shown to be a sufficient end for ABM (Epstein 1999). However, Parker et al. (2003) note that it is retrogressive to limit the potential and appropriateness of ABM to such illustrations, especially when the design and implementation prospects of ABMs are very promising, with reported success in many different fields of human significance. It has been argued that, as simulation models, ABMs are limited. First, they cannot be sufficiently deductive to give confidence in the outcomes derived from the model parameters; however, as Judd (1997) counter-argued, an almost complete understanding of the dynamic system under study is achievable through sensitivity analysis. Second, ABMs are said to be sensitive to small perturbations in model parameter values at the micro-scale or a lower level, thereby producing a multitude of outcomes; but, as Parker and Meretsky (2004) stated, the focus of ABMs is on the macro-scale or emergent patterns, and although there may be significant differences at the micro-level, the outcomes tend to be similar at the macro-level. While ABMs address individualism in the mechanics of system behavior, validating them has proved to be a difficult task. Nevertheless, Berger (2001) justified his choice of an ABM by pointing out that ABMs allow for a pragmatic treatment of data availability. He cited the exchange of information between farming households, the cumulative effects of experience and the observation of neighbors’ experience, and technical and financial constraints as factors that affected the diffusion of innovations and that could be explicitly defined and controlled within an ABM.

ABMs are implemented at varying levels of stakeholder involvement. Parker et al. (2003) highlighted three cases: stakeholders are involved right from the beginning of the modeling process, they are involved in the final stages of testing and running the model, or models are presented as ready-made applications to policy makers. With the majority of ABMs falling into the first two categories, Mathews et al. (2007) ascribed the failure of end-users to use ABMs directly as decision support systems to a poor understanding by researchers of the actual process of decision-making and of the role that decision support tools may play in this process. Several other factors contribute to the lack of success of decision support systems, since failures at the first two levels are equally common (Mathews et al. 2007). Faced with such limitations, Stephens and Middleton (2002) stated that simulation models are probably more useful as research tools that provide insights into constraints, which can later be transformed into rules of thumb, than as operational decision support tools. Lempert (2002) followed a similar line when he argued that much of the failure has to do with the predictive, as opposed to explanatory, approach that many modellers adopt. He suggested that model runs ought to compare the robustness, resilience and stability of alternative policies.

Agents interact indirectly through a shared environment and/or directly with each other through markets, social networks and institutions. Higher-order variables such as commodity prices and population dynamics are usually expressed as emergent outcomes (Mathews et al. 2007). Starting from relatively abstract representations, ABMs have gradually progressed to exploring the conceptual aspects of spatially explicit systems in real-world situations (see Epstein and Axtell 1996), and on to more complex representations of socio-ecological systems (see Berger and Ringler 2002; Hoffmann et al. 2002). With the addition of empirical data, recent versions of these models are now being applied to specific real-world situations (see Deadman et al. 2004). Complex environmental problems tend to be multidisciplinary, temporally dynamic, and spatially referenced. As a result, the nature of the interactions in these systems often makes it difficult to predict the outcomes of particular management actions, socio-economic conditions, or environmental processes (Deadman et al. 2004). However, recent advances in computing technology have further enhanced the use of computer-based models and analyses, which have in turn expanded interest in computational approaches to the study of human geographic systems with the aim of providing meaningful solutions. ABM has tapped into these advances, and there is still plenty of room for growth and improvement.