Introduction

A distinctive feature of every economy is that it consists of a large number of interacting units who pursue their private interests in an uncertain environment within particular circumstances of time and space.Footnote 1 Even though it seems very complex from the outside, a lot of economic activity depends upon the very simple question of what individual members of a society know. Economic analysis has traditionally been done under the rationality assumption of, loosely speaking, perfect knowledge, perfect foresight and the agent’s best possible selection, although the environment in which economic units make decisions is inherently one of incomplete and frequently contradictory knowledge that is dispersed among all economic units and evolves over time, as argued by Hayek (1937, 1945). Some presume that dispersion of beliefs is required for the market to function at all (Milgrom and Stokey 1982). Such decentralized systems represent the basis for extensive economic interactions among boundedly rational individuals. In an environment that consists of many such individuals, the individual-specific characteristics and imperfections determine the peculiar problem of a rational economic order. The problem of selecting and allocating available resources is thus not merely a technical one that could be represented by a closed-form solution but a complex one, making the economy a complex system. Generally, every complex system that is characterized by repeated nonlinear interactions among its constituents, where one agent’s decision affects and is affected by the decisions of others, can induce coherent large-scale collective behaviors with a very rich structure that is impossible to foresee (Sornette 2004).

An agent-based approach in which boundedly rational agents represent the driving force of the aggregate behavior that evolves over time takes this into account. Agents in an agent-based model are modeled as software entities and include data together with behavioral characteristics that act on these data. They are goal-oriented individuals of different characteristics who are able to learn over time and are subject to different constraints. The definition of an agent is not restricted to human agents. Agents may also include social groupings, biological entities and physical entities. They can range from active data-gathering decision-makers with sophisticated learning and cognitive capabilities to passive world features with no cognitive functioning (Tesfatsion 2002; Tesfatsion and Judd 2006). The network then consists of a group of mutually connected agents who may be very different in their structure. Starting from an initially specified system state and the rules of conduct, these agents are constantly engaged in local interactions by which they produce the outcome of the entire group, which in turn affects individuals’ future behavior. Many of these applications borrow from evolutionary game theory, which helps us examine how the behavior of individual entities within a group changes over time according to the behavior of their counterparts (Maynard Smith 1982; Weibull 1995).

The present chapter examines social networks in a social science context, especially in economics and finance. Early attempts at using networks in economics were due to Myerson (1977) and Kirman (1983, 1997). Since then, agent-based techniques have been increasingly used in economics. Our principal objectives are the following: to review the field of interaction-based models in economics and finance, to describe the properties of these models, and to present some applications.

The interaction-based approach provides a multidisciplinary tool for exploring many different phenomena and can easily incorporate elements from other fields, such as game theory, psychology, neurology, sociology and biology, which makes these models highly applicable. They have helped us understand many open questions from different research fields within and outside economics, especially in cases of complex models that are mathematically intractable. At the very least, these models have provided us with complementary arguments to the questions at hand; not only about the solutions as such but also about how these solutions evolve over time and how they might change as the circumstances are slightly perturbed. In an interaction-based model, individual nodes, links or other model attributes can be affected by different types of shocks while the modeler examines the consequences. The modeler can also run the model under different rules of conduct, which makes the agent-based approach well suited for examining very complex and evolving systems.

The chapter proceeds as follows. In section “Properties of the Interaction-Based Models”, we present social networks and cellular automata as two baseline models on which interaction-based games can be applied. Both represent an infrastructure which agents use to interact with each other and share information with one another. In addition, both are capable of encompassing many different theoretical aspects that might be relevant in interaction-based applications and agents’ decision-making. The section ends with a brief discussion of the network types that would best fit a model’s characteristics. Subsequent sections offer a thorough overview of interaction-based models in economics and finance, by which we give some ideas of the range of ways in which these models can be used. We start with diffusion models because other interaction-based models borrow many concepts from this group. A special class of interaction-based models is represented by game theoretic models. In section “Applications Outside Economics”, we review some interaction-based models from outside economics and finance. Many solutions from these applications can be and have been very effectively used in examining various phenomena in economics and finance. Although very extensive, the review is far from complete. Section “Simulation-Based Experiments” presents two examples of interaction-based applications, by which we show how such applications can be conducted and how even minor changes in parameter values or in the network structure can lead to highly different outcomes of the entire system. These simulation experiments are followed by a short discussion in section “Discussion”, while the last section concludes.

Properties of the Interaction-Based Models

The Network

A model for a social network is a graph. We can also say that a graph is a mathematical representation of a network. By definition, a graph \(G=\left( {V,E}\right) \) is composed of a nonempty finite set of nodes (or vertices) \(V\), representing the units, and a nonempty finite set of edges (or links) \(E\), representing their pairwise relations (Fig. 10.1). Depending on the application, a node can be a single individual, a firm, a country, a group, or some other autonomous unit. Nodes and links may include a variety of properties, which may be numerical or qualitative. For instance, in a network of friends, nodes represent individuals and the links their friendship relations. In a banking network, nodes may represent different banks and the links the interbank exposures. Extensive reviews on social networks and the network-based models are given in Boccaletti et al. (2006), Goyal (2008), Jackson (2008, 2010), Wasserman and Faust (1994), Brock and Durlauf (2001a).

Fig. 10.1 A graph

It is very common to denote the presence of a link between nodes \(i\) and \(j\) simply as \({ {ij}}=1\), and its absence as \({ {ij}}=0\). Two nodes that are joined by a link are referred to as incident, neighboring or connected nodes. The presence of a link is a necessary condition for the information flow between the two nodes, but not a sufficient one. In an undirected graph, edges are unordered pairs of nodes, which means that \({ {ij}}=1\Leftrightarrow { {ji}}=1\) and \({ {ij}}=0\Leftrightarrow { {ji}}=0\). This applies to situations where two nodes are either in a relationship with each other or not. In a directed graph, edges have directions. An edge \(\left( {i,j}\right) \) allows us to move only from \(i\) to \(j\) but not from \(j\) to \(i\). We say the network is finite if it has a finite number of nodes.

In most applications, graphs do not contain loops or reflexive ties, by which single nodes would be linked to themselves, nor multiple edges, by which a pair of nodes would be linked more than once. If multiple edges exist, the elimination of a single edge between two nodes does not eliminate the link between them. An example of a multiple-edge banking network would be one in which banks hold several different instruments issued by the same counterparty. A mixed graph is a graph with both undirected and directed links.

In a graph, individual nodes that are not directly linked may be reached through a sequence of nodes and links. A graph is connected if for every pair of nodes \(\left( {i,j}\right) \) there exists a walk from node \(i\) to node \(j\).Footnote 2 The distance \(L\left( {i,j}\right) \) from node \(i\) to node \(j\) is equal to the length of the shortest path from \(i\) to \(j\). Often, the shortest path between two nodes is referred to as a geodesic. If there is no path from \(i\) to \(j\) then \(L\left( {i,j}\right) =\infty \). The eccentricity of a node is the largest geodesic distance between the node and any other node in the graph, i.e. \(Ecc_i =\max _j L\left( {i,j}\right) \). The maximum eccentricity of any node is \(\left( {n-1}\right) \). The diameter \(D\) of a graph is the largest geodesic distance between any pair of nodes, \(D=\max _{i,j} L\left( {i,j}\right) \); equivalently, the diameter is the largest eccentricity. The term “degree of separation” is usually used in the context of the diameter.
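To make these notions concrete, the following minimal Python sketch (our illustration; the adjacency mapping and node labels are hypothetical) computes geodesic distances by breadth-first search and derives eccentricities and the diameter from them.

```python
from collections import deque

def distances_from(graph, source):
    """Geodesic distances from `source` via breadth-first search.
    `graph` maps each node to the set of its neighbors."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in dist:          # first visit = shortest path
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist                               # unreachable nodes are omitted (distance infinity)

# A small undirected example graph as an adjacency mapping (hypothetical).
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}

# Eccentricity of each node and the diameter of the (connected) graph.
ecc = {i: max(distances_from(graph, i).values()) for i in graph}
diameter = max(ecc.values())
print(ecc, diameter)   # {1: 2, 2: 2, 3: 1, 4: 2} 2
```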

The degree \(k\) of a node is the number of edges incident with it. It represents the number of nodes linked to it. A node degree can range from 0 for an isolated node to \(n-1\) for a node that is linked to every other node in the network. The set of nodes that are linked with node \(i\) is called the neighborhood of \(i\). In directed graphs, every node has an in-degree, the number of incoming links, and an out-degree, the number of outgoing links. A bridge is a link in a graph whose elimination splits the graph into several unconnected sub-graphs, i.e. components or islands. A node whose removal disconnects the graph is called a cutpoint. Connectivity is an important element in defining network behavior and may induce different consequences when the network is used for different purposes. For instance, connectivity can work either contagiously or as a channel of risk-sharing in a financial network, while epidemiological networks do not have the risk-sharing potential.

The most basic topological characterization of a graph can be obtained in terms of the degree distribution \(P\left( k\right) \). It relates to the statistical distribution of the nodes’ degrees and is defined as the probability that an arbitrarily chosen node has degree \(k\). Equivalently, it is defined as the fraction of nodes in the graph having degree \(k\). In homogenous networks, such as random networks and small-world networks, nodes with degrees significantly higher than others do not exist. However, such networks are rather exceptional in reality. On the contrary, it has been argued that most real-world networks exhibit a power-law degree distribution, which means that there exist some nodes with high degrees and a vast majority of nodes with a small number of adjacent links (Albert and Barabasi 2002 and references therein). Such networks are referred to as scale-free networks. In scale-free networks, the rate at which individual nodes increase their degrees depends on their fitness to compete for the links of other nodes. This observation is very general, because the fitness of a node may be determined by many factors, such as its degree, age, reputation, distance or some other competitive factor that attracts other nodes. By the same token, some nodes may avoid linking to some of the others. Not all nodes are identical in terms of fitness, while each node increases the number of its connections according to the fitness it possesses over time. Barabasi and Albert (1999) have demonstrated that if the network develops according to this principle, which they call preferential attachment, it exhibits a power-law degree distribution. The preferential attachment model and, under certain conditions, also the fitness models allow for endless link formation, which is possible only theoretically. Most networks are subject to serious constraints, either due to the nodes’ limited degree capacities or due to the aging of nodes or links, for which the nodes’ degrees have an upper bound (Amaral et al. 2000). In addition, communities of bounded sizes have been observed in reality.
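As an illustration of the growth principle just described, the following sketch (ours, with hypothetical parameter values) grows a network by Barabasi–Albert-style preferential attachment, in which new nodes link to existing nodes with probability proportional to their current degree.

```python
import random

def preferential_attachment(n_nodes, m=2, seed=None):
    """Grow a network in which each new node attaches m links to existing
    nodes with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]                      # start from a single link
    targets_pool = [0, 1]                 # each node appears once per incident link
    for new_node in range(2, n_nodes):
        targets = set()
        while len(targets) < min(m, new_node):
            targets.add(rng.choice(targets_pool))   # degree-proportional sampling
        for t in targets:
            edges.append((new_node, t))
            targets_pool.extend([new_node, t])      # update degree weights
    return edges

edges = preferential_attachment(1000, m=2, seed=1)  # degrees follow a heavy-tailed distribution
```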

One additional network-based characteristic that is very common in socio-economic networks is assortativity. It describes the phenomenon whereby nodes tend to be connected to other nodes that are similar to themselves. Kremer (1993) identifies such a pattern in a production process and argues that workers of the same skills are matched together in equilibrium, which makes high-skill workers even more productive. Patterns of assortativity can be found in various grouping models, such as marriage models, models that build on trust, mating models, etc. Disassortativity (or negative assortativity) is the opposite case, when, for instance, high-degree nodes tend to be connected to low-degree nodes. Similarly, one can find homophily in networks, and heterophily, its opposite.

Although the notion of a node degree is compelling, it is by no means sufficient, let alone exhaustive. The behavior of a network depends on the role, influence and importance of single members within bigger communities. A node degree measures the number of nodes to which an individual node is adjacent, but this says nothing about the importance of these links. A node can be linked to many unimportant nodes or to a few highly important ones. There is no clear definition of a node’s importance, which depends upon the structure of the network and the context as such. One measure is represented by the node’s centrality, the other, when directed networks are applied, by its prestige (see Ballester et al. 2006; Wasserman and Faust 1994). In Steinbacher et al. (2013), a node’s importance is measured through the damage that its elimination causes to the system. Hence, each node is assigned an alpha-criticality index, with the criticality level measuring the extent of the damage.

Cellular Automata

An alternative to network-based experimentation is the cellular automaton (Wolfram 1983). The cellular automaton was originally introduced by von Neumann (1966). It is a discrete-time and discrete-state system on a D-dimensional lattice which consists of the cell space, cell size, neighborhood size and type, transition rules and temporal increments. In the schematic representation of Fig. 10.2, individual agents are colored black and move throughout the lattice according to the state of the lattice, their preferences and decision rules, and general rules of conduct.

Fig. 10.2 Cellular automata model

Cellular automata on complex topologies are systems in which each agent can be in only one of a finite number of states. At each time step, the next state of each agent is computed as a function of its own state and of the states of its neighbors on the network. Agents can move only in the neighborhood of the cell which they occupy. Agents’ dynamics on the lattice may be limited by the outer bordering cells, but this does not have to be the rule. For instance, in a 2-dimensional lattice we can assume that the lattice represents a globe. An agent exiting on the left then re-enters the lattice on the right, and each cell has eight neighboring cells regardless of its position.
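A minimal sketch of such a lattice with wrap-around (toroidal) boundaries is given below; it is our illustration, and the majority-rule transition is only one hypothetical choice of the rules of conduct.

```python
import random

def step(grid):
    """One synchronous update of a binary cellular automaton on a torus:
    each cell adopts the majority state of its eight (Moore) neighbors.
    The majority rule is an illustrative transition rule only."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = sum(grid[(i + di) % n][(j + dj) % n]     # % n wraps the lattice edges
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            new[i][j] = 1 if s > 4 else 0 if s < 4 else grid[i][j]
    return new

random.seed(0)
grid = [[random.randint(0, 1) for _ in range(20)] for _ in range(20)]
for _ in range(10):
    grid = step(grid)
```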

Applications of cellular automata are many. Cellular automata have been extensively used to simulate the evolution of self-organizing systems and particularly for urban modeling. Langton (1990) finds that by manipulating the parameters of a cellular automaton, the aggregate behavior of the system exhibits a phase transition between highly ordered dynamics and chaos. Some applications will be presented in the sequel.

The Game Structure

Many of the methodological concepts that we use are taken from the game theory literature (see Osborne 2002). By referring to games, we do not necessarily have the usual game theoretical framework in mind but computer-based experiments. Hence, the games on networks could alternatively be referred to as activities on networks. We call them games to highlight the connection to games as we usually know them. Each game consists of a finite set of agents \(i=\left( {1,2,\ldots ,N}\right) \), preferences and objectives \(P_i \) for each agent, payoffs (or utilities) \(\Pi _i \in {\mathbb {R}}\) for each agent, a set of actions \(\mathrm{A}_i \) for each agent and a set of rules \(R_i \). In the network terminology, the properties of nodes are usually called attributes and the properties of links are called weights.

As we have said before, an agent may be an abstract version of a single individual, a firm, a country, a hub or some other autonomous unit, while a multiagent system is a system that contains multiple agents who interact with each other within the environment in which they live and, in some applications, also with the environment. Russell and Norvig (2010) define agents as anything that can perceive its environment through sensors and act upon that environment through effectors. The usual assumption here is that agents are heterogeneous and adaptive both in their attributes and the ways in which they react to the environment.

Because heterogeneous agents interact with one another, the system virtually always evolves over time dynamically and evolutionarily, and often also stochastically. The agents’ heterogeneity is an important factor which adds complexity to the course of the game. Moreover, when using an agent-based framework, it becomes apparent that most of the problems we try to solve require heterogeneous agents.

The game usually proceeds as follows. The model is first constructed, then the initial conditions and the system states are specified and the rules of conduct defined. Afterwards, autonomous agents are constantly engaged in local interactions according to their characteristics and the predefined rules. During the game, agents affect and are affected by others, by which at every point in time the aggregate outcome of the entire group arises. Perpetual activity is thus integrated into the model structure, although the participants can also be just passive creatures.
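Schematically, the procedure just described can be sketched as follows; the agent attributes, the ring network and the averaging rule are hypothetical placeholders rather than part of any particular model.

```python
import random

class Agent:
    """A placeholder agent: an attribute (state) plus a rule of conduct."""
    def __init__(self, state):
        self.state = state
    def act(self, neighbor_states):
        # Illustrative rule: move the own state towards the local average.
        return self.state + 0.1 * (sum(neighbor_states) / len(neighbor_states) - self.state)

def run_game(agents, neighbors, n_steps):
    """Generic loop: local interactions produce an aggregate outcome that
    feeds back into the agents' future behavior."""
    history = []
    for _ in range(n_steps):
        new_states = [a.act([agents[j].state for j in neighbors[i]])
                      for i, a in enumerate(agents)]
        for a, s in zip(agents, new_states):
            a.state = s
        history.append(sum(a.state for a in agents) / len(agents))  # aggregate outcome
    return history

random.seed(1)
agents = [Agent(random.random()) for _ in range(10)]
neighbors = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}   # ring network
print(run_game(agents, neighbors, 50)[-1])
```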

Modeling Agents

Once the agents’ attributes are defined, they can be freely manipulated to meet the structure of the problem and to describe the agents’ characteristics. Computational agents can be very broadly defined and can range from simple scalars to complex functions.Footnote 3 The agent-based approach is very robust in this respect and allows the modeler to define different types of agents with different knowledge and preferences, objective functions and endowments, or to model agents who follow different strategies and pursue different selection criteria, or agents who are omniscient or non-omniscient, rational or irrational, unsuspicious or suspicious, autonomous or subservient, far-sighted or short-sighted, patient or impatient, conservative, etc. All this may bring the modeled agents much closer to how they look in reality than the neoclassical ones.

A special class of games is represented by those in which agents refer to human beings. Much of what has been said in the previous paragraph relates to agents as human beings. Human agents differ from other types of agents in a cognitive component. When we talk about the cognitive component, we particularly have in mind the agents’ preferences, their communication skills, knowledge and learning mechanisms, their abilities to set up their goals and build expectations, to gather and process information, to maintain their (social) role in society and, finally, also their selection patterns. The cognitive component makes the decision-making of human agents a complex task and their aggregate behavior a complex system of interdependent subjectivisms. Non-human agents are generally passive in nature and their actions instantaneous, without the agents’ control. However, this does not mean that passive agents are not subject to different types of constraints. An electric hub, for instance, can transmit only a limited amount of electricity. On the other hand, not all models of human-like agents include a cognitive component.

In dynamic and evolutionary games, agents’ initial attributes are set through a vector of characteristics for each agent. Agents usually also have some prior knowledge about the problem they face. If learning is applied, then the evolutionary dynamics for these attributes has to be specified. The ability to include learning is an important factor in interaction-based modeling, by which we make agents active units who are able to respond reasonably to changing circumstances. This gives the games a new dimension. Different situations spur different learning processes (Brenner 2006; Fudenberg and Tirole 1998; Russell and Norvig 2010). Agents can learn either when they are repeatedly faced with the same or a related problem, or when they play the same or a related game repeatedly, or from observing the changing environment. In the latter case, the dynamics of the environment inhibits the learning process. In the extreme case when the environment is chaotic, agents do not have many opportunities to learn. However, in a two-agent repeated game, each agent should also acknowledge that his actions may affect the future plays of his opponent. In games that include learning, different types of reinforcement learning methods have been used very extensively (Sutton 1988). In the regret-matching model, agents decide upon the regret caused by the non-selected alternative relative to the selected one; others follow the experience-weighted attraction model proposed by Camerer and Ho (1999) or copy the actions of their neighbors (Erev et al. 1998); in Q-learning, agents learn from delayed rewards. The method of temporal differences is a method of incremental learning in which learning is driven by the difference between the outcomes predicted from past experience and the data, and the actual outcomes. Reinforcement learning is a kind of feedback learning. In reinforcement learning games, agents’ actions gradually approach the most efficient ones. Agents either try different actions over time or get information from their neighbors. In either case, this is done through trial-and-error search or through reward-based search.
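As a simple illustration of reward-based trial-and-error search with a temporal-difference-style update (a sketch in the spirit of the reinforcement learning methods mentioned above; the payoffs and parameters are hypothetical):

```python
import random

def reinforcement_choice(rewards, n_rounds=1000, alpha=0.1, epsilon=0.1, seed=0):
    """An agent repeatedly tries actions and reinforces those that performed well.
    `rewards[a]` is the mean payoff of action a (hypothetical values)."""
    rng = random.Random(seed)
    q = [0.0] * len(rewards)                       # estimated value of each action
    for _ in range(n_rounds):
        if rng.random() < epsilon:                 # exploration: trial-and-error search
            a = rng.randrange(len(rewards))
        else:                                      # exploitation: current best action
            a = max(range(len(rewards)), key=lambda i: q[i])
        r = rewards[a] + rng.gauss(0, 0.1)         # noisy payoff
        q[a] += alpha * (r - q[a])                 # temporal-difference-style update
    return q

print(reinforcement_choice([0.2, 0.5, 0.4]))       # action 1 (mean payoff 0.5) typically ends with the highest estimate
```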

Alternatively, Cowan and Jonard (2004) model knowledge diffusion as a barter process between agents, in which agents exchange different types of knowledge with their adjacent links. In their model, agents repeatedly meet their neighbors and trade if mutually profitable trades exist. In this way knowledge diffuses throughout the economy.

Anderlini and Ianni (1996) study the long-run properties of a class of locally interactive learning systems. A finite set of players at fixed locations play a two-by-two symmetric normal form game with strategic complementarities, with one of their neighbors selected at random. Their model exhibits emergent phenomena and a high degree of path dependence, which is induced by the endogenous nature of the model and the noise. Baumol and Benhabib (1989) argue that due to the huge sensitivity of an economic system to microscopic changes in parameter values, a chaotic system may reach the long-run at different points, producing very complex time paths, despite the simple and even deterministic relationships among its constituents as long as they are nonlinear. Such a local nature of search can explain price dispersion in a search model, wherein different agents sell the same good at different prices in a given market.

Although agents interact with their peers and exchange information with them, the interaction-based experiments are not bound only to such inter-personal links, but may also include information from the environment in which they live.

The last phase of a selection process is the selection as such. The selection is viewed as a mapping of agents’ knowledge into a decision from a set of available alternatives, given the feasibility constraints. Although this is the universal description of the agent’s problem, the assumptions about its constituents are the key to understanding agents’ behavior. Traditionally, researchers have assumed that agents make perfect selections so as to maximize a utility function upon the perfect knowledge and time-consistent preferences.

The conceptual breakthrough that traced the path towards modeling cognitive agents was initiated by the work of Herbert Simon, who substituted the optimization principle with the notion of satisficing agents (Simon 1957). The enormous literature on psychological (or behavioral) economics, later initiated by Kahneman and Tversky (1979) and Tversky and Kahneman (1986), expanded the scope of agent-based modeling, particularly with respect to individual agents. The behavioral economics research suggests that expected utility theory does not adequately describe agents’ behavioral and selection patterns, because agents violate the fundamental axioms of the theory. Often, “agent-based” agents are described as boundedly rational agents. Rubinstein (1998) provides a thorough discussion of modeling boundedly rational agents.

In the behavioral approach, an agent’s decision-making is subject to imperfections along the entire selection process, from preferences, the set of alternatives, knowledge about the alternatives, data gathering and data processing, to the choice rules and the selection as such.Footnote 4 Nothing of this is static over time. It is standard to assume that agents face a hard budget constraint, while the behavioral literature also assumes that agents have incomplete and asymmetric knowledge about the available alternatives, that they do not have transitive preferences, violate the rational expectations hypothesis, are time-inconsistent, learn, apply different learning rules, and that their behavior is also affected by the behavior of others, etc. (see Barberis and Thaler 2003; Hirshleifer 2001). Agents’ decisions may be subject to various types of “errors” in the selection process, some of which might also be induced by confusion (Selten 1975). In some instances, they gamble, speculate, use heuristics and even make blind guesses. In addition, agents can make decisions simultaneously or jointly as part of different groups. Agents’ decisions may either be perpetual solutions to their choice problems or one-shot actions. In order to include these agent-based specifics without making the model too precise, which could easily make it inappropriate, Steinbacher (2012) combines all the subjective specifics that are relevant for the agent’s selection into a residual variable which he denotes the level of suspiciousness. Each agent is thus allowed to make sub-perfect selections for whatever reason.

The emerging literature of a new approach that is commonly referred to as neuroeconomics introduces neuroscience, i.e. knowledge about brain mechanisms, into modeling economic agents’ decision-making (see Camerer et al. (2005) for an overview). Neuroeconomics represents a very ambitious approach. Another ambitious and fast-developing area of research relates to sentiment analysis and opinion mining (Pang and Lee 2008). This strand of research is concerned with the analysis of what individuals think, and it could bring some new insights on how to give information an economic value and benefit from it.

Agent-based agents may either contain behavioral characteristics or be optimizers. The modeler is free to incorporate the neoclassical type of maximizing agents into the model. In fact, agents might range from zero-intelligence agents who make random guesses (Gode and Sunder 1993) to agents who decide upon very detailed procedures, as in the ASM model, where agents’ rules include more than 100 parameters. Agents might have perfect recall, which means that they remember entire histories of their actions and the relevant data, or they may have imperfect recall. In addition, agents’ decision-making and their expectations can be affected or even constrained by their cultural or religious characteristics, which have proved to be significant (see Guiso et al. 2006 for a survey). Culture is particularly relevant in relation to trust and economic activities that go beyond mere market mechanisms (Akerlof 1970).

Altogether, agents’ selections might depart from the seemingly most promising alternatives, if such ever existed. With such a shift away from a perfectly rational and omniscient agent, Homo Economicus becomes bounded and begins losing IQ, evolving into Homo Sapiens (Shiller 2000). Of course, it would be mistaken to think that computational agents do not try to think rationally or even to make optimal decisions if such existed. The behavioral approach gives agents a “non-automata”, human characteristic, in which their selections capture cognitive and social features.

Which Networks to Use?

In recent years, there has been remarkable interest in network structures. Over the years, many different networks have been developed, identified and, in the end, also used for different purposes. Some networks are very complex, with a specific architecture that gives them very specific and unique characteristics. Following Newman (2003), networks can be divided into four broad categories: socio-economic, technological, information and biological networks.

From the economist’s perspective, socio-economic networks are usually applied, especially in game theoretic or agent-based applications. These networks consist of a set of people or groups of people who are connected together in pairs by links. Links signify some pattern of contacts or interactions between these individuals and represent a channel which these individuals use to share their private information with one another. Because individual units usually communicate with each other, these networks usually take the form of communication networks. Technological networks include tangible objects. Typically, they are designed to represent the distribution of some commodity or resource. Some textbook examples of technological networks include networks of roads, airline routes, pedestrian traffic, etc. In economics and finance, an example of a technological network would be a banking network in which banks are linked to each other through the interbank market of mutual exposures. Typical examples of information networks are citation networks and the World Wide Web. Information networks are also referred to as knowledge networks. A class of preference networks in which people express their preferences over objects, e.g. book or stock recommendations, also belongs among information networks. Finally, biological networks model biological systems as networks and examine their activities in a network-based setting. Networks may include elements from different categories. The network of trading relationships among countries has elements of both socio-economic and technological networks (Jackson 2008).

Networks can be further classified into those in which objective data, such as risks, viruses or other events, are transmitted, and networks in which subjective beliefs, such as ideas and opinions, are transmitted. The difference between the two is not just methodological but also technical, as applications of the latter are much more complex than those of the objective-data models.

Furthermore, when links are necessarily reciprocal, it will generally be the case that mutual consent is needed to establish and maintain a link. Most economic applications fall under the reciprocal-link framework. In such cases, undirected networks should be adopted. When the direction of a link is important, such as in credit risk models, directed networks should be applied. In weighted networks, nodes and links possess some attributes and weights. Usually, nodes possess some values whose dynamics is governed by the weights of the links. Granovetter (1973) used such a weighted socio-economic network, in which links are assigned the strength of the friendship between two persons according to the closeness of the existing link and the frequency of interaction. Hence, interpersonal links can be either strong or weak, with the latter illustrating casual acquaintances. Granovetter then demonstrates that although individuals get very useful information from their closest ties, group homogeneity exhausts the level of unknowns within the group, which makes weak ties indispensable for the propagation of new information into a highly homogeneous group. Although Granovetter studied the job search market, weak ties can be instrumental elsewhere as well. Goldenberg et al. (2001) argue that weak ties overcome the effect of strong ones in all stages of the product life cycle. On the other hand, by definition, weak ties are less accessible than strong ones and also less willing to share their knowledge and information, which may limit their value.

In addition, networks can be classified into static and dynamic ones. In static networks, all nodes and the links between them are fixed over time. In evolving networks, new nodes emerge over time and some of them die off, while nodes make new links and sever some of the existing ones. Many systems in reality would be best described by evolving networks. Models of evolving networks need to include a mechanism by which the network grows and develops (Albert and Barabasi 2002; Boccaletti et al. 2006; Jackson 2008, 2010; Newman 2003).

We do not exaggerate by saying that economics has developed into a highly interdisciplinary science with very broad research interests. Within this scientific development, social networks represent an additional methodological tool to examine questions bottom-up from their substance. Furthermore, in tackling complexity and to simplify the problem, economists have often used conformist assumptions in their models. The agent-based approach is, of course, not immune to such simplifications. On some occasions, a modeler uses simplifying assumptions in order to isolate the effects of particular factors of the model and simulate the model under these specific circumstances. In others, increased complexity of the model structure seems redundant. Still, many models could not be solved without such simplifications. Finally, there are some open questions for which we still do not have a satisfying idea of how to tackle them successfully. In this respect, Gibbard and Varian (1978) argue that models can be either approximations, which aim to describe reality, or caricatures, which seek to give an impression that they describe reality. From this perspective, one could argue that it is not so much a question of which network types are generally more appropriate for economics and finance, but which types better fit the specific problem at hand and satisfy the modeler’s aims. This may be true, although the appropriate network is required if one would like to defend the argument.

In the following sections we review the literature on agent-based models in economics with an emphasis on social interactions and discuss the models from several different aspects.

Diffusion Through the Networks

Spread of a Disease

We begin this overview of interaction-based models with the epidemic diffusion models for a simple reason: they represent a platform for other diffusion models. They are also very intuitive and easy to understand, while the connection between networks and epidemic models is very straightforward.

Let us presume a group of people which consists of a portion of infected individuals and the rest. A network can be formed if we imagine that nodes represent individual people and links their pairwise connections along which the infection can spread (Newman 2002; Pastor-Satorras and Vespignani 2001). Each individual has a finite set of contacts, while over time these individuals interact with one another through interpersonal contacts. Individuals may make new contacts and sever some of the existing ones. The network structure allows the modeler to examine the epidemic dynamics over time. Namely, diseases spread by contact and pass from infected individuals to others, and the modeler is able to monitor the speed and the extent of the progression under various circumstances. Applications of epidemic models usually include the propagation of human and electronic viruses or other diseases.

Generally, three different types of epidemic model are examined. In a SIS model, individuals exist in one of two discrete states: susceptible (S) or infected (I). At each time step, each susceptible node gets infected with some rate if it is connected to one or more infected nodes. A node that is connected to a larger number of infected nodes has a higher probability of becoming infected. At the same time, infected nodes are cured and become susceptible again with some rate. Susceptible individuals who become infected become potential virus transmitters.

The extended model includes a group of recovered individuals (R) and is thus referred to as a SIR model. Recovered (or dead) are those who have been infected and are immune for life (or are dead). Such individuals cannot be transmitters anymore. SIR models can be further extended to include the case when a recovered individual, if not dead, can again become susceptible and infected. This kind of model is usually referred to as a SIRS model.

An important property of epidemic models is the epidemic threshold, a critical value of the effective spreading rate of the infection. It is important because it tells us whether the infection becomes endemic or dies out. In SIS models, the effective spreading rate is the quotient between the infection rate and the rate at which infected nodes are cured.
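A minimal SIS simulation on a given network might look as follows; the ring network, infection rate and curing rate are hypothetical and serve only to illustrate the mechanics.

```python
import random

def sis_step(graph, infected, beta, delta, rng):
    """One SIS time step: susceptible nodes get infected with rate `beta`
    per infected neighbor; infected nodes are cured with rate `delta`."""
    new_infected = set()
    for node in graph:
        if node in infected:
            if rng.random() > delta:                         # stays infected
                new_infected.add(node)
        else:
            k_inf = sum(1 for nb in graph[node] if nb in infected)
            if rng.random() < 1 - (1 - beta) ** k_inf:       # more infected neighbors -> higher risk
                new_infected.add(node)
    return new_infected

rng = random.Random(0)
graph = {i: {(i - 1) % 100, (i + 1) % 100} for i in range(100)}   # ring network (hypothetical)
infected = {0, 50}
for _ in range(200):
    infected = sis_step(graph, infected, beta=0.3, delta=0.1, rng=rng)
print(len(infected))   # endemic when the effective spreading rate beta/delta exceeds the threshold
```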

Credit Contagion in Financial World

Epidemic models can easily be applied to the banking world to examine issues that relate to credit risk. Credit risk can be defined as the risk of changes in value associated with unexpected changes in the credit quality of counterparties. As such, a credit event spreads like a “virus” across the network. Credit contagion has been extensively studied within network models (Allen and Gale 2000; Gai et al. 2011; Haldane and May 2011; Lelyveld and Liedorp 2006; Steinbacher et al. 2013; see Allen and Babus 2008 for a survey).

The financial system has a natural representation as a network, in which nodes represent individual financial institutions (or banks) and links the interbank positions. Banks may be modeled through their balance sheets. A financial network would usually be directed and weighted, reflecting the fact that banks are either debtors or creditors and that interbank positions are of different sizes. Banks may be heterogeneous across types and sizes, which makes the banking network very complex. It has been argued that the network of major international financial institutions exhibits an increasingly scale-free characteristic in which a few large banks interact with many others, although the system is strongly interdependent (Iori et al. 2008; Schweitzer et al. 2009). In the banking network, bank capital serves as a cushion to absorb losses.

By using the network-based approach, the modeler is able to stress the model with either idiosyncratic or macrostructural shocks and examine how these events affect the stability of the network and the banking system. The latter are considered systemic events because no particular bank that holds an asset that has been hit by the shock can avoid the consequences. The two events may be co-integrated and correlated.

When a credit event occurs and borrowers are unable or unwilling to fulfill their obligations, the interdependent banking system may induce credit contagion, where the failure of a single bank triggers subsequent failures of counterparty banks. Contagion is a typical network effect and represents the counterparty risk. Following a bank default, adjacent banks are infected first and, if capital buffers are not sufficient to cover the losses, the shock propagates through the chain of links. Contagion in the presence of a systemic event differs from that of an idiosyncratic one in that the systemic shock itself reduces the capital of each bank, thus making banks more vulnerable to additional writedowns due to the counterparty risk. Correlated exposures of banks to a common source of risk can propagate systemic risk through the banking system. In addition, the extent of a credit event depends upon the level of recovery rates and the time delay from the time the bankruptcy of a bank is acknowledged until the time when recovery rates are applied.
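A stylized default-cascade sketch, which is our illustration rather than a reproduction of any of the cited models, shows the mechanics: banks hold capital buffers, a failed counterparty imposes a loss of (1 − recovery rate) times the exposure, and defaults propagate along the directed, weighted links. All balance-sheet numbers are hypothetical.

```python
def default_cascade(capital, exposures, initially_failed, recovery_rate=0.4):
    """Propagate counterparty losses through an interbank network.
    `exposures[i][j]` is bank i's exposure to bank j (a directed, weighted link);
    a failed counterparty imposes a loss of (1 - recovery_rate) * exposure,
    which is absorbed by capital until the buffer is exhausted."""
    capital = dict(capital)
    failed = set(initially_failed)
    frontier = set(initially_failed)
    while frontier:
        next_frontier = set()
        for i, row in exposures.items():
            if i in failed:
                continue
            loss = sum((1 - recovery_rate) * x for j, x in row.items() if j in frontier)
            capital[i] -= loss
            if capital[i] <= 0:                    # capital buffer exhausted -> default
                failed.add(i)
                next_frontier.add(i)
        frontier = next_frontier
    return failed

capital = {"A": 5, "B": 3, "C": 8}
exposures = {"A": {"B": 10}, "B": {"C": 4}, "C": {"A": 2}}
print(default_cascade(capital, exposures, {"B"}))   # B's failure drags A down; C's buffer holds
```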

Hoarding models extend the perspective of the interbank market.Footnote 5 Liquidity hoarding refers to a situation when single institutions start to hoard liquidity from other banks which are exposed to them. In the best case, they hoard long-term liquidity, thus making the interbank market extremely short-term. There are mixed views on the reasons for liquidity hoarding (see Acharya and Merrouche (2010) and Acharya and Skeie (2011) for the precautionary motive and Taylor and Williams (2009) for the counterparty risk). The hoarding model of Allen et al. (2009) includes the central bank, which can provide required liquidity to illiquid banks by using open market operations.

Risk propagation models can be further extended so that credit events exacerbate uncertainty, loss of confidence or panics. In addition, positive and negative news may also be transmitted from one place to another even when the two are not directly connected. Following positive or negative news about a certain entity, the relevant market participants may reassess their priors about entities that are similar to it or come from similar environments, and form expectations about them similar to those about the entity they have examined. Historically, the transmission of the Thai crisis of 1997 from Thailand to Brazil and Russia was largely psychological. In Russia, it induced the collapse of the stock market and then also of the ruble in 1998. Some other cases of international contagion are thoroughly examined in Kindleberger and Aliber (2011).

By the same token, credit contagion may also denote propagation of economic distress from one firm to another or from one country to another or from firms to banks and to countries’ budgets and vice versa and so on. Such interdependence makes the effects of credit events nonlinear and complex, making the interaction-based approach even more appropriate.

The interbank market has parallels to epidemic networks; it acts as a transmission channel for spreading credit events from infected banks to their counterparties and hence across the network. However, credit contagion models differ from epidemic models in transmissibility. As we have said before, node connectivity may work as a channel for risk propagation or risk-sharing in the banking world, whereas in epidemic networks single infected units do not have the risk-sharing potential and would infect the entire component.

Spread of Ideas and Opinion-Building

The third class of diffusion models relates to the propagation of ideas through a social network and the related opinion-sharing. It presumes that agents’ beliefs are affected by the influence of others. The spread of ideas may also be referred to as information contagion. In these models, we implicitly assume that each agent makes a decision regarding some issue. Individual agents usually have some prior beliefs on the issue they decide about and they regularly update their knowledge (Bala and Goyal 1998; Banerjee 1992; Bikhchandani et al. 1992; Blume 1995; Castellano et al. 2009; Ellison and Fudenberg 1995; Steinbacher 2012). Technically, either undirected or directed networks may be applied. However, because correspondents are likely to respond differently to direct communication than to indirect communication, the choice of either of the two network types may imply different consequences for how beliefs progress over time.

The usual framework is the following. Agents are represented by nodes and their pairwise connections by links. Agents are split into sub-groups with different priors. They meet (randomly or systematically) with each other and share their beliefs with one another. Depending on their own characteristics and those of the others, agents may get persuaded with some probability and adopt the priors of their fellows or remain with the same belief.

Heterogeneity in agents’ attributes may refer not only to the diversity of beliefs but also to their strength. Priors may be either strong or weak, which affects the evolutionary dynamics. Specifically, agents with stronger priors are more likely to convert other agents to their beliefs. The size of the information cascade then depends upon the network structure and the proportion of highly persuaded individuals who never change their initial beliefs. Glaeser et al. (1996) offer an interaction-based model to examine crime rates as a function of individuals’ attributes and those of the neighborhood. In the model, there are individuals who influence and are influenced by their neighbors and those who influence their neighbors but cannot themselves be influenced. Each individual faces a choice of whether or not to engage in criminal activity upon the behavior of his closest neighbors and the average behavior of the neighborhood. Although the network-based effects are identified in petty and moderate crimes, they are almost negligible in the most serious crimes. Golub and Jackson (2012) apply the network-based approach to examine the effects of homophily on the speed of learning when agents apply best-response techniques.Footnote 6 They argue that when agents’ beliefs or behaviors are developed by averaging what they see among their neighbors, then homophily slows down the convergence to a consensus. Aral et al. (2009) examine peer effects in a dynamic network of social interaction and distinguish between influence-based contagion and homophily-driven diffusion of ideas. A sort of homophily-driven model was introduced by Mullainathan and Shleifer (2005) and Gentzkow and Shapiro (2011). The model of Mullainathan and Shleifer presumes that individuals prefer news which is more consistent with their prior beliefs. This implies that the media segment their audience according to beliefs and that individuals prefer sources that are likely to confirm their own views. Gentzkow and Shapiro claim that individuals with some prior belief may process the information they receive very differently depending on the source and their priors. Similar to these is the network-based bounded confidence model introduced by Deffuant et al. (2000), in which agents can influence each other’s opinion only if the two opinions are close enough. Agents start with some opinion, while at each time step an agent shares his opinion with a randomly selected neighbor. If the two opinions differ by more than a threshold parameter, both opinions remain unchanged; otherwise each opinion moves in the direction of the other. In the experiment, either both agents change their opinion or neither does. In the end, for a given difference in initial opinions, higher thresholds increase the probability that two opposing opinions converge towards an average opinion, while low thresholds result in several opinion clusters. Yet the ants example of Kirman (1993) demonstrates that agents can change their beliefs autonomously with no influence of others.
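A minimal sketch in the spirit of the bounded confidence model of Deffuant et al. (2000) is given below; the ring network, the threshold and the convergence parameter are hypothetical choices.

```python
import random

def bounded_confidence(opinions, neighbors, threshold=0.2, mu=0.5, n_steps=10000, seed=0):
    """Deffuant-style dynamics: a random agent meets a random neighbor; if their
    opinions differ by less than `threshold`, both move towards each other by `mu`
    times the difference, otherwise nothing happens."""
    rng = random.Random(seed)
    x = list(opinions)
    for _ in range(n_steps):
        i = rng.randrange(len(x))
        j = rng.choice(neighbors[i])
        if abs(x[i] - x[j]) < threshold:
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
        # otherwise both opinions remain unchanged
    return x

rng = random.Random(1)
opinions = [rng.random() for _ in range(100)]
neighbors = {i: [(i - 1) % 100, (i + 1) % 100] for i in range(100)}   # ring network
final = bounded_confidence(opinions, neighbors, threshold=0.2)
# Low thresholds leave several opinion clusters; high thresholds drive opinions towards consensus.
```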

In the classical model of learning and consensus formation proposed by DeGroot (1974), agents put weights on the opinions of others. In each time period, weights are assigned to each individual according to the trust and the level of confidence the individual enjoys among other agents, while the opinion of each agent is then defined as the weighted average of the opinions of others. Holme and Newman (2006) offer an opinion model in which agents form their beliefs either by joining a group of individuals with a similar belief or by influencing each other’s opinion which, as a result, becomes similar. By controlling the balance of the two processes, they identify a phase transition from a regime in which opinions are diverse to one in which most individuals hold the same opinion.
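Returning to the DeGroot framework, the weighted-average updating can be written compactly; the notation below is ours and follows the standard formulation. Let \(w_{ij} \ge 0\), with \(\sum _j w_{ij} =1\), be the weight (trust) that agent \(i\) places on the opinion of agent \(j\). Opinions then evolve as \(x_i \left( {t+1}\right) =\sum _j w_{ij} x_j \left( t\right) \), or in matrix form \(x\left( {t+1}\right) =Wx\left( t\right) \). If the row-stochastic matrix \(W\) is strongly connected and aperiodic, opinions converge to a consensus given by a weighted average of the initial opinions.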

Acemoglu et al. (2010) examine how the presence of forceful individuals, who influence the beliefs of others but are not willing to change their own, interferes with information aggregation. Their main result is that the worst outcomes are obtained when there are several forceful agents and the forceful agents themselves update their beliefs only on the basis of information they obtain from individuals most likely to have received their own information previously. Watts and Dodds (2007) examine the “influentials hypothesis” and argue that large cascades of influence are driven not by opinion leaders but by a critical mass of easily influenced individuals. Kreps and Wilson (1982) argue that in multistage and imperfect information games, agents may try to acquire a reputation in the early stage of the game and use it as the game proceeds. Burnside et al. (2011) present a heterogeneous agent belief model to examine the housing market and explain variation in housing prices. Agents have different priors about the long-run fundamentals, meet randomly and change their expectations following the interaction with others. The tighter the priors of an agent, the more likely it is that the agent will convert other agents to his beliefs. Sood and Redner (2005) examine the voter model and study its dynamics on heterogeneous graphs. Vazquez et al. (2003) examine a version of the voter model with three states: rightists, leftists and centrists, in which only the latter are involved in interaction and subject to opinion change.

Hong et al. (2004, 2005) examine the effects of word-of-mouth information on individuals’ stock market participation and find that local networks of “friends” affect their decisions. Goldenberg et al. (2001) use cellular automata to demonstrate how the presence of weak and strong ties contributes to the spread of information through word-of-mouth and the acceptance of a new product. In their model, a purchase of a product by an agent induces a non-zero probability that an adjacent agent decides to purchase it as well, which makes the strength of ties highly effective in the product life cycle after the introduction stage is over.

Opinion-sharing differs from the usual diffusion models in that it relates to one’s beliefs, whose dynamics depends on many factors, such as prior beliefs, knowledge and expectations, incentives, the reputation of the agent who would like to spread his belief, and the willingness of an agent to change his prior belief. Many times the belief dynamics is contextual, subject to the changing circumstances of time and space, the general mood in society and similar. Often, after some new information arrives, the group holding the predominant opinion is enlarged by the group of fast adopters, who either have the least tight priors or the most similar priors. The group is then enlarged by those who decide upon the size of the group, and finally by the late adopters. Some individuals remain outside this process altogether.

The most salient feature of this class of diffusion models is that agents refer to human agents with a strong cognitive component. This is an important distinction from the previous two classes of diffusion models.

Agent-Based Models in Finance

The agent-based framework is also highly applicable to finance. Financial markets are inherently occupied with issues that involve time and uncertainty. What is even more important, the market is characterized by a large number of micro agents who differ in many respects: knowledge, preferences, objectives, attitude towards risk, expectations building, learning capabilities, endowments, patience, friends, and very subjective and mostly indeterminate factors such as daily mood, eureka moments, coincidence, the level of luck and similar. Additionally, agents on the markets are repeatedly engaged in local interactions and exhibit non-standard behavior, by which they produce aggregate outcomes that are path dependent, with very complex time paths that go beyond the predicted outcomes.

A usual agent-based model of finance consists of a group of, presumably heterogeneous, agents who interact with each other and hence determine the dynamics of asset prices. The price dynamics is a perpetual activity caused by the agents’ actions, which in turn affect agents’ future actions. In some cases, the aggregate behavior of the whole can induce huge oscillations on the markets, including highly unexpected outcomes such as market bubbles and crashes. As demonstrated by Lux (1995), these outcomes are attributed to the herding of interacting market participants and cause market instability. Kindleberger and Aliber (2011) provide a thorough historical overview of how manias, panics and crashes have shaped the financial world over time. They also consider herding a central factor in price fluctuations. Therefore, financial markets are highly appropriate for modeling in an interaction-based fashion. The Handbook of Computational Economics, edited by Tesfatsion and Judd (2006), provides a thorough review of recent agent-based models in economics and finance.

One of the first agent-based models of financial markets with heterogeneous agents is attributed to Zeeman (1974). The model is populated with fundamentalists and chartists and explains switching phenomena in the proportion of the two types of traders between bull and bear markets. Fundamentalists and chartists represent two typical groups of traders within financial modeling. The former base their decisions upon market fundamentals, such as dividend return and economic growth, and the latter upon the historical pattern of stock prices. There is no common rule of how to model fundamentals, let alone the trend behavior. Zeeman argues that in a bull market the proportion of chartists who follow the trend increases, which pushes prices even higher. The uptrend continues until fundamentalists perceive prices as too high and start selling, which in turn leads to price drops (a bear market) and reduces the proportion of chartists. The downtrend provokes fundamentalists to start buying the stocks, which again turns the trend around. DeLong et al. (1990) used a finite horizon financial market model to demonstrate that a constant fraction of chartists may on average earn a higher expected return than fundamentalists and may survive in the market with positive probability. In the model of Day and Huang (1990), fundamentalists trade the more aggressively the farther the market price is from the fundamental value. In the models of Lux (1998) and Lux and Marchesi (1999), chartists pursue a combination of imitative and trend-following strategies and switch between an optimistic (bullish) and pessimistic (bearish) mood, depending upon the majority opinion and the prevailing price trend. Boswijk et al. (2007) is among the recent heterogeneous agent models with fundamentalists and chartists.
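The following stylized sketch, which is our generic illustration rather than a reproduction of any of the cited models, shows how fundamentalist and chartist demand can jointly drive the price; all parameter values are hypothetical.

```python
def simulate_prices(p_fund=100.0, n_steps=500, n_f=0.5, n_c=0.5, phi=0.05, chi=0.9):
    """Price impact of the two trader groups: fundamentalists push the price
    towards the fundamental value, chartists extrapolate the last price change."""
    prices = [105.0, 106.0]                         # start above the fundamental value
    for _ in range(n_steps):
        p, p_prev = prices[-1], prices[-2]
        demand_f = phi * (p_fund - p)               # mean-reverting (fundamentalist) demand
        demand_c = chi * (p - p_prev)               # trend-following (chartist) demand
        prices.append(p + n_f * demand_f + n_c * demand_c)
    return prices

prices = simulate_prices()
# Depending on phi, chi and the group shares, the price path converges, cycles or explodes.
```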

A slight deviation from these models has been proposed by Kim and Markowitz (1989), whose simulated market contains two types of investors, rebalancers and portfolio insurers, and two assets, stocks and cash. This model is one of the earliest models of multi-agent dynamics. Hong and Stein (1999) propose a model in which the market is populated with newswatchers and momentum traders. Newswatchers make forecasts based on private information without conditioning on past prices, whereas momentum traders’ forecasts are based on the most recent price change.

Brock and Hommes (1998) develop a discounted value asset pricing model with agents of heterogeneous beliefs. In the model, agents pursue an adaptive behavior and tend to switch towards strategies that have performed better in the past. Depending on the parameter values, the resulting system is nonlinear and capable of generating the entire spectrum of complex behavior, from local stability to high-order cycles and even chaos.

Levy et al. (1994) present an early econophysics approach in finance. Their simulations exhibit rich phenomena, including cycles, booms, and crashes. Cont and Bouchaud (2000) develop a model of stock market returns by using tools from statistical physics. The model is built on interacting agents and demonstrates how herding, spurred by the communication structure between agents and by imitation, induces heavy tails in stock returns. Iori (2002) develops a model with heterogeneous agents, in which agents' interactions are restricted to nearest neighbors, to examine large fluctuations in stock market returns and volatility clustering.

The Artificial Stock Market model consists of an auctioneer, a risky and a riskless asset, and an arbitrary number of traders (Arthur 1994; LeBaron et al. 1999; Palmer et al. 1994). At the beginning of each time period, each trader selects a portfolio to maximize his expected utility in the next period. Agents interact with each other, individually form their expectations of stock prices over time and continually introduce new rules into their decision-making. Agents' actions form a continuous activity: each agent monitors the stock price and, based on it, submits bids and asks by which the agents jointly determine tomorrow's price. In the model, agents learn and modify their forecasting rules by a genetic algorithm, succeeded later by the method of swarms. Following these rules, they eliminate the worst-performing rules and replace them with new rules that are formed as variants of the retained rules.
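To give a flavor of this learning step, the sketch below implements a genetic-algorithm-style update on a pool of simple linear forecasting rules. The rule form, the pool size, the mutation scale and the random-walk stand-in for the market price are our own illustrative assumptions, not details of the Artificial Stock Market itself.

```python
import random

# Minimal sketch of rule learning: each agent keeps a pool of linear forecasting
# rules forecast = a * price + b, scores them on past accuracy, and periodically
# replaces the worst rules with mutated copies of the best ones.
random.seed(3)
POOL, REPLACE = 20, 5

def new_rule():
    return {"a": random.uniform(0.8, 1.2), "b": random.uniform(-1, 1), "error": 0.0}

def forecast(rule, price):
    return rule["a"] * price + rule["b"]

def mutate(rule):
    return {"a": rule["a"] + random.gauss(0, 0.05),
            "b": rule["b"] + random.gauss(0, 0.05), "error": 0.0}

rules = [new_rule() for _ in range(POOL)]
prices = [100.0]
for t in range(200):
    price = prices[-1] * (1 + random.gauss(0, 0.01))   # stand-in for the market price
    for r in rules:                                    # score rules on forecast error
        r["error"] += abs(forecast(r, prices[-1]) - price)
    prices.append(price)
    if (t + 1) % 50 == 0:                              # genetic-algorithm style update
        rules.sort(key=lambda r: r["error"])
        rules = rules[:POOL - REPLACE] + [mutate(random.choice(rules[:5]))
                                          for _ in range(REPLACE)]

print("best retained rule:", round(rules[0]["a"], 3), round(rules[0]["b"], 3))
```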

Steinbacher (2012) proposes an interaction-based model run on a social network to study agents' portfolio decisions. In the model, stock prices are given and unknown to agents. At each point in time, agents interact with adjacent counterparts, share information with them and make regular decisions. Following the idea of Selten (1975), the decisions of suspicious agents are subject to selection errors in the selection phase itself, be they intentional or accidental. Agents' decisions are thus bound not only to imperfect knowledge about asset prices but also to imperfect selection. In the model, an agent's inaction is also treated as a decision. The model is simulated under different circumstances, including bull and bear markets.

Another category of agent-based models in finance comprises the order book models. These are models of price formation in which agents post their buy or sell orders (Rosu 2009 and the references therein). Orders come in two classes. A limit order is an order to trade a certain amount of a security at a given price. A market order is an order to buy or sell a certain quantity of the asset at the best available price in the limit order book. The lowest offer is called the ask price and the highest bid is called the bid price. When a market order arrives, it is matched with the best available price in the limit order book, and a trade occurs.
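As an illustration of this mechanism, the following minimal sketch implements a limit order book with price-time priority matching; the class and method names and the numeric values are our own illustrative assumptions rather than features of any particular model cited above.

```python
import heapq

class LimitOrderBook:
    """Minimal limit order book sketch: single asset, price-time priority."""

    def __init__(self):
        self._asks = []  # min-heap of (price, seq, qty): lowest ask first
        self._bids = []  # min-heap of (-price, seq, qty): highest bid first
        self._seq = 0    # arrival counter used for time priority

    def limit_order(self, side, price, qty):
        """Post a limit order to trade `qty` units at `price`."""
        self._seq += 1
        if side == "buy":
            heapq.heappush(self._bids, (-price, self._seq, qty))
        else:
            heapq.heappush(self._asks, (price, self._seq, qty))

    def best_ask(self):
        return self._asks[0][0] if self._asks else None

    def best_bid(self):
        return -self._bids[0][0] if self._bids else None

    def market_order(self, side, qty):
        """Execute immediately against the best available prices."""
        book = self._asks if side == "buy" else self._bids
        trades = []
        while qty > 0 and book:
            key, seq, avail = heapq.heappop(book)
            price = key if side == "buy" else -key
            filled = min(qty, avail)
            trades.append((price, filled))
            qty -= filled
            if avail > filled:  # put back the unfilled remainder
                heapq.heappush(book, (key, seq, avail - filled))
        return trades

lob = LimitOrderBook()
lob.limit_order("sell", 101.0, 5)
lob.limit_order("sell", 100.5, 3)
lob.limit_order("buy", 99.5, 4)
print(lob.best_bid(), lob.best_ask())   # 99.5 100.5
print(lob.market_order("buy", 6))       # fills 3 units at 100.5, then 3 at 101.0
```

Full order book models would add, among other things, stochastic order arrival and cancellation processes; the sketch only shows the matching step described above.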

Game Theoretic Applications

Game theoretic models are another class of particularly appealing applications for the network-based approach. A game is an abstract formulation of an interactive decision situation with possibly conflicting interests. In a general form, it consists of the set of agents, payoffs for each agent and a set of rules and strategies for each agent (Osborne 2002). A traditional game theoretic application is a finite two-person simultaneous-move game in which each agent individually decides whether to cooperate (\(C\)) or to defect (\(D\)), while neither agent knows what the other will do (Table 10.1).

Generally, for a given matrix structure, the game is named according to the values of the payoff parameters in the matrix.

Table 10.1 The payoff matrix of the game

For instance, in a prisoner's dilemma game, defection yields a higher payoff than cooperation. However, if both defect, both are worse off than if both had cooperated. By contrast, in the stag-hunt game the player is better off doing whatever the co-player does (Santos et al. 2006). With the payoff matrix given, the usual game theoretic framework tries to answer a simple question: when should a person cooperate and when defect in an ongoing interaction with another person or a group of persons?
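As a small illustration of how the payoff values determine the name of the game, the sketch below classifies a symmetric two-by-two game from its payoff parameters. The conventional labels R, S, T, P and the numeric values are illustrative assumptions of ours, since Table 10.1 leaves the payoffs generic.

```python
# Minimal sketch of a symmetric 2x2 game. The payoff labels follow the usual
# convention: R (both cooperate), T (defect against a cooperator), S (cooperate
# against a defector), P (both defect). Numeric values are illustrative only.
def classify(R, S, T, P):
    """Return a rough label for the symmetric game defined by (R, S, T, P)."""
    if T > R > P > S:
        return "prisoner's dilemma"   # defection dominates, yet mutual defection is worse
    if R > T and P > S:
        return "stag hunt"            # best reply is to match the co-player's action
    return "other"

print(classify(R=3, S=0, T=5, P=1))   # prisoner's dilemma
print(classify(R=4, S=0, T=3, P=2))   # stag hunt
```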

Evolutionary game theory extends these classical games with evolutionary aspects such as uncertainty, learning, adaptation and a dynamic component. In evolutionary games, large populations of agents repeatedly engage in strategic interaction, which allows them to learn over time and change their behavior based on previous experience, communication with others and the developments of individual games (Camerer 2011; Maynard Smith 1982; Weibull 1995). In the evolutionary setting, agents, who are usually heterogeneous in nature, adapt their behavior over the course of repeated plays. Some models are reviewed in Chakraborti et al. (2011), Goyal (2008), Jackson (2008, 2010), and Szabo and Fath (2007).

The El Farol bar problem explores the dynamics of attendance (Arthur 1994). Each week agents independently decide whether to go to the popular bar or not, while the bar is enjoyable if it is not too crowded. The game could be denoted a prediction-based model, because agents, who are not allowed to communicate to each other, predict how many entered the bar the previous week. If an agent predicts more than a certain number will attend he stays home, otherwise he goes. Upon the success of their prediction, agents continuously adapt their predicting model and corresponding parameters. The game thus exhibits a non-linear behavior. The evolutionary perspective of the game was provided by Challet and Zhang (1997) and Challet et al. (2004).
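A minimal sketch of the attendance dynamics is given below, assuming 100 agents, a comfort threshold of 60 and a small pool of naive predictors; these numbers and predictor forms are illustrative only and do not reproduce Arthur's original predictor set.

```python
import random

# Minimal El Farol sketch: each agent holds a random subset of naive attendance
# predictors and keeps using whichever one has been most accurate so far.
random.seed(1)
N, THRESHOLD, WEEKS = 100, 60, 52
PREDICTORS = [
    lambda h: h[-1],                                        # same as last week
    lambda h: sum(h[-4:]) / len(h[-4:]),                    # recent average
    lambda h: 2 * h[-1] - h[-2] if len(h) > 1 else h[-1],   # extrapolated trend
    lambda h: N - h[-1],                                    # mirror of last week
]

history = [44]                                              # arbitrary seed attendance
pool = [random.sample(range(len(PREDICTORS)), 2) for _ in range(N)]
error = [[0.0] * len(PREDICTORS) for _ in range(N)]

for week in range(WEEKS):
    forecasts = [PREDICTORS[min(pool[i], key=lambda k: error[i][k])](history)
                 for i in range(N)]
    attendance = sum(f < THRESHOLD for f in forecasts)      # go only if the bar looks quiet
    for i in range(N):                                      # score each agent's predictors
        for k in pool[i]:
            error[i][k] += abs(PREDICTORS[k](history) - attendance)
    history.append(attendance)

print(history[-5:])   # attendance tends to fluctuate around the threshold
```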

Minority games have gained widespread popularity. Chow and Chau (2003) propose a variation of the minority game in which every player has more than two options. Bianconi et al. (2008) propose a version of the minority game in which agents may invest in different assets (or markets) and find that the likelihood that agents trade in a given asset depends on the relative amount of information available in that market, with agents preferring to trade in the stock with less information.

A large body of research has examined the evolution of cooperation among agents in evolutionary prisoner's dilemma games under different circumstances (Nowak and May 1992, 1993; see Szabo and Fath 2007 for an extensive overview).

Nowak and May argue that the spatial version of the prisoner's dilemma game, with no memory among players and no strategic elaboration, can generate chaotically changing spatial patterns in which cooperators and defectors both persist indefinitely. In these models, we can assume that the decision to cooperate or defect depends upon the given payoffs; as the reward for defection increases, the probability of cooperation decreases. Nowak (2006) has argued that cooperation can evolve through kin selection, direct reciprocity, indirect reciprocity, network reciprocity, and group selection. Axelrod (1984, 1997a) has demonstrated that “Tit-for-Tat” is often the optimal strategy in the iterated prisoner's dilemma. Abramson and Kuperman (2001) study an evolutionary prisoner's dilemma game, played by agents on different network topologies, in which agents change their strategies over time by imitating that of their most successful neighbor. They find that different network topologies produce a variety of emergent behaviors. Helbing and Yu (2009) argue that success-driven migration helps to establish cooperation and, besides the ability for strategic interaction and learning, plays a crucial role in the evolution of large-scale cooperation and social behavior.
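The following sketch, in the spirit of the Nowak–May setting, lets agents on a lattice play the prisoner's dilemma with their nearest neighbors and then imitate their most successful neighbor; the lattice size, temptation payoff and synchronous update scheme are illustrative assumptions.

```python
import random

# Minimal spatial prisoner's dilemma sketch: agents on an L x L torus play against
# their four nearest neighbors and then copy the strategy of the most successful
# agent in their neighborhood (including themselves). Payoff values are illustrative.
random.seed(0)
L, ROUNDS, B = 30, 50, 1.7                 # lattice size, rounds, temptation payoff
PAYOFF = {("C", "C"): 1.0, ("C", "D"): 0.0, ("D", "C"): B, ("D", "D"): 0.0}
grid = [[random.choice("CD") for _ in range(L)] for _ in range(L)]

def neighbors(x, y):
    return [((x + dx) % L, (y + dy) % L) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

for _ in range(ROUNDS):
    score = [[sum(PAYOFF[(grid[x][y], grid[nx][ny])] for nx, ny in neighbors(x, y))
              for y in range(L)] for x in range(L)]
    new = [[grid[x][y] for y in range(L)] for x in range(L)]
    for x in range(L):
        for y in range(L):
            best = max(neighbors(x, y) + [(x, y)], key=lambda p: score[p[0]][p[1]])
            new[x][y] = grid[best[0]][best[1]]          # imitate the best performer
    grid = new

coop = sum(row.count("C") for row in grid) / L ** 2
print(f"fraction of cooperators after {ROUNDS} rounds: {coop:.2f}")
```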

By using an n-person binary choice game in which agents can choose either to cooperate or to defect, Axelrod (1986) studied the emergence of behavioral norms among boundedly rational agents. He concludes that norms that have proved more effective are used more often in the future than less effective ones. Young (1993) examined a repeated n-person stochastic game to study the evolution of conventions and demonstrated that in an environment where agents' decisions are subject to mistakes, societies occasionally switch from one convention to another, while the society converges in probability to a single convention as the probability of mistakes approaches zero.

The evolutionary approach makes it possible to include different stochastic elements in the usual game frameworks, such as errors in agents' decisions, signaling or screening, imperfect recall, impatience, reputation, learning methods, network topologies and the like. In the evolutionary perspective, agents may learn over time and modify their behavior in response to the developments of the game.

Evolutionary Macroeconomics

Macroeconomic models have traditionally been analyzed with a top-down approach under the rationality condition and solved as constrained optimization problems (see Ljungqvist and Sargent 2004). Although these models treat the economy from the top down, uncertainty and asymmetric information, imperfect competition and rivalry among many heterogeneous economic units, learning by doing and knowledge spillovers, (uneven) initial conditions, increasing returns, diffusion processes and imitation, incentives, strategic interaction, cooperation and collusion, transaction costs, institutional frameworks and social norms, heterogeneous economic environments and the time component have all been highlighted as important elements of the production process (see Barro and Sala-i-Martin (2004) for an overview of endogenous growth models). Sargent (1993) and Simon (1997) provide surveys of bounded rationality in macroeconomics. Research in behavioral science suggests that agents differ in their preferences, especially in relation to risk, expectations and time, and that their behavior is often time-inconsistent and subject to errors, mistakes and regret. Simon argues that the optimization maxim, i.e. the choice of the best available alternative, which is a building block of the standard approach, is simply not feasible in most real-world situations and has to be replaced with satisficing, i.e. the choice of an alternative that meets specified criteria but is not necessarily the best. In their actions, individuals are often led by irrational exuberance or fads (Bikhchandani et al. 1992; Shiller 2005). As demonstrated by Schelling (1971), the outcome of a group of such interacting individuals with cognitive abilities can differ substantially from the outcome that would be aggregated from their priors. Kirman (1992) provides a discussion against the use of the representative agent in economics.

The roots of the evolutionary approach to economic growth can be traced back at least to the late 18th century and Adam Smith's division of labor and invisible hand dynamics, which transform an environment of selfish, interacting individuals into an ordered system in time and space that goes beyond the initial intentions of any individual. Hence, the market outcome of a decentralized economy is the intersection of the individual self-interests of market participants, with the price system being an integral part of the market order. In the 20th century, Joseph Schumpeter (1934, 1947) described the economy as a system characterized by the perpetual creation of new ideas, products and firms and the decline of existing ones that have proved less efficient. The entrepreneur was put at the center of Schumpeterian economic development. Processes led by creative destruction and entrepreneurial experimentation make the economy inherently dynamic, stochastic and evolutionary. In the early 1980s, Nelson and Winter (1982) wrote a seminal book on the evolutionary approach to economic growth.

Delli Gatti et al. (2011) offer an agent-based approach to macroeconomics. Delli Gatti et al. (2010) model the economy as a network consisting of households, firms and banks, and simulate the behavior of the modeled economy for different parameter values. They explain the cyclical behavior of the economy as a consequence of the complex interaction of the agents' financial conditions, and argue that a shock to the economy or to a significant group of agents in the credit network can be followed by a bankruptcy avalanche if agents' leverage is critically high. Gabaix (2011) and Acemoglu et al. (2012) examine how productivity shocks that hit different sectors at the micro level translate into macro fluctuations and argue that firm-level idiosyncratic shocks translate into aggregate fluctuations when the empirical distribution of firm sizes exhibits fat tails.

Within economics, social networks have been extensively used in job search models to explain many phenomena that were previously considered anomalies. A typical job search model consists of job postings and job candidates. Montgomery (1992) was among the first to study the labor market as an evolutionary process and highlights the importance of social connections for employees' salaries. Calvo-Armengol and Jackson (2004, 2007) use Granovetter's notion of weak ties (Granovetter 1973) to develop a model in which agents obtain information about job vacancies through social interaction. Ioannides and Datcher Loury (2004) use social interaction to study job-market outcomes. Bramoulle and Saint-Paul (2010) build a model on the assumption that the probability of forming a new link is higher between two employed individuals than between an employed and an unemployed individual, which generates negative duration dependence in exit rates from unemployment. Goyal and Moraga-Gonzalez (2001) examine the evolution of R&D networks of inter-firm collaboration on costly and human-capital-intensive research and development activities.

Other Applications

In one of the earliest simulation-based models, Thomas Schelling applied cellular automata to demonstrate that an integrated society will generally turn into a rather segregated one even though no individual agent strictly prefers this (Schelling 1971). This segregation seemed to be due to the spontaneous dynamics of economic forces, with all individuals following their incentives to move to the most attractive locations. The model was later generalized by Fagiolo et al. (2007), who conclude that mild proximity preferences are an important possible explanation of segregation not only in regular spatial networks, but also in more general social networks.

Nagel and Schreckenberg (1992) use cellular automata to simulate freeway traffic and the related traffic congestion patterns. Epstein and Axtell's sugarscape model is an interaction-based model run on a lattice (Epstein and Axtell 1996). Each cell is filled with a different amount of sugar. Sugar is the commodity agents need to survive; those who run out of it die off. In addition, agents who reach a pre-defined maximum age die off as well. Agents move sequentially in random order from cell to cell, consuming the sugar as they go. Agents have different metabolic rates. Each cell can be occupied by at most one agent at a time. When an agent occupies a cell, he increases his sugar supplies by the amount of sugar in the cell. Sugar then regrows on an empty cell at a given rate. Agents also differ in their lateral vision, which helps them decide which cell to occupy. Agents move to the best available location. Interactions in the model are endogenous because they depend upon the moves of agents throughout the lattice. There is no learning in the game. An extended version of the game includes spice, which agents can trade with their neighboring agents. Agents can only interact and trade with their direct neighbors. How much sugar and spice agents trade with each other depends upon the utility functions of the two agents and a pre-defined bargaining rule. Additional extensions of the game include different replacement rules for deceased agents, sex and the birth of offspring, credit relations between agents, etc.
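The basic move-harvest-metabolize-die cycle can be sketched as follows, with illustrative lattice size, vision ranges and metabolic rates; the sketch deliberately omits the spice, trade and reproduction extensions described above.

```python
import random

# Minimal sugarscape-style sketch: agents look along the lattice axes up to their
# vision, move to the unoccupied cell with the most sugar, harvest it, burn their
# metabolism, and die when their sugar stock turns negative.
random.seed(2)
SIZE, GROWTH = 20, 1
sugar = [[random.randint(0, 4) for _ in range(SIZE)] for _ in range(SIZE)]
agents = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE),
           "wealth": 5, "metabolism": random.randint(1, 3),
           "vision": random.randint(1, 4)} for _ in range(40)]

def visible_cells(a, occupied):
    cells = [(a["x"], a["y"])]
    for d in range(1, a["vision"] + 1):
        for dx, dy in ((d, 0), (-d, 0), (0, d), (0, -d)):
            cell = ((a["x"] + dx) % SIZE, (a["y"] + dy) % SIZE)
            if cell not in occupied:
                cells.append(cell)
    return cells

for step in range(100):
    random.shuffle(agents)                      # sequential moves in random order
    occupied = {(a["x"], a["y"]) for a in agents}
    for a in agents:
        occupied.discard((a["x"], a["y"]))
        best = max(visible_cells(a, occupied), key=lambda c: sugar[c[0]][c[1]])
        a["x"], a["y"] = best
        occupied.add(best)
        a["wealth"] += sugar[best[0]][best[1]] - a["metabolism"]
        sugar[best[0]][best[1]] = 0
    agents = [a for a in agents if a["wealth"] >= 0]     # starved agents die off
    sugar = [[min(4, s + GROWTH) for s in row] for row in sugar]

print(f"surviving agents after 100 steps: {len(agents)}")
```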

By using an interaction-based model, Föllmer (1974) was among the first to demonstrate that even simple interactions among individuals can generate sophisticated behavior at the macro level, including a breakdown of price equilibria. Currarini et al. (2009) develop a model of friendship formation in which individuals have different types and derive type-dependent benefits from friendships. Bramoulle and Kranton (2007) analyze networks in relation to public goods.

Axelrod (1997b) uses social networks to study cultural dynamics. Corominas-Bosch (2004) uses a bipartite network to study a repeated bargaining game between buyers and sellers who are connected by an exogenously given network. In the game, players can make repeated alternating public offers that may be accepted by any of the responders linked to each specific proposer. Chang and Harrington (2006) provide a survey of various agent-based models of organizations. Bramoulle et al. (2009) consider a model in which interactions are structured through a social network in order to identify peer effects. Brock and Durlauf (2001b) develop an externality model to examine aggregate outcomes when social interactions are embedded in the individual decisions of agents.

Applications Outside Economics

Social networks and interaction-based models have been extensively used to explain many phenomena in the natural and social sciences. Listed below are just some of them.

Kirman (1993) uses this approach to examine the behavior of ant colonies in exploiting two identical sources of food and characterizes a switching potential that is defined by the self-conversion probability and the probability of being converted. Pastor-Satorras and Vespignani (2001) use the network approach to study the spread of diseases, while Bullmore and Sporns (2009) use it to study the complexity of the brain's structural and functional systems. Barabasi and Oltvai (2004) use networks to study the cell's functional organization. Helbing (2001) uses it to examine traffic dynamics and demonstrates that the behavior of panicking pedestrians in a smoky room leads to an inefficient use of the available escape routes. Helbing's paper also delivers an extensive review of the main approaches to traffic and related models. Leskovec et al. (2005) examine the dynamics of viral marketing. They observe the propagation of recommendations and the cascade sizes and analyze how user behavior varies within user communities defined by a recommendation network. Epstein (2001) presents two variants of an agent-based computational model of civil violence in which agents, who differ in their private level of grievance, and cops interact on a lattice. In the first variant, a central authority seeks to suppress decentralized rebellion; in the second, a central authority seeks to suppress communal violence between two warring ethnic groups. Nowak et al. (1999) extend the basic framework of evolutionary game theory to examine the evolution of language. Christakis and Fowler (2007) use social networks to examine the spread of obesity over time and link it to social ties.

A special class of models comprises those that study the evolution of social networks (see Boccaletti et al. 2006; Goyal 2008; and Jackson 2010 for surveys). Callaway et al. (2000) and Albert et al. (2000) examine network fragility under different types of attacks and argue that only intentional attacks aimed at eliminating some of the most important nodes or links can destroy the network. Marriage networks have been used to explain the rise of the Medici family in medieval Florence (Padgett and Ansell 1993).

Simulation-Based Experiments

In this section, we present some applications of agent-based games on social networks. The principal aim is to demonstrate how these games can be conducted, and to show that even small perturbations of different parameters may end in highly different outcomes. The first application is an example from evolutionary game theory and examines a modified principal-agent inspection game. In the second, we propose a network-based model of credit contagion in financial markets and examine the effects of idiosyncratic and macroeconomic credit events on the banking system for various network topologies.

Game Theoretic application: Principal-Agent Inspection Game

Model

We extend the principal-agent inspection game of Dresher (1962) by introducing social interaction among agents. In the principal-agent game, the principal assigns a task to the agent, for which the latter, if it is successfully accomplished, receives a payment. Because the two participants have opposite interests, the arrangement between them results in the principal-agent problem (Grossman and Hart 1983). In particular, while the employer wants his task accomplished, the employee tries to receive his payment with as little effort as possible. The dilemma is tackled by a costly inspection, paid for by the employer and intended to reveal the true effort of the employee. If the employee is caught shirking, he does not get paid. We extend this basic framework by adding a credible and powerful institution to the game, a labor union, which guarantees shirking workers who are members of this institution a partial pecuniary compensation. This institution does not have to be a labor union; it can be any credible and powerful institution.

The game consists of a principal \(P\) who employs a finite set of employees (agents) \(A_i\), \(i=\left\{ {1,2,\ldots ,1{,}000}\right\}\), who are located on the vertices of a small-world network (Watts and Strogatz 1998). The average connectivity of the network is \(k_{i}\left( g\right) =6\) and the rewiring probability is \(p=0.1\). In every time period, each agent simultaneously chooses between two discrete choices, either to work \(W\) or to shirk \(S\). When working, each agent produces output \(v\) for the principal, receives the payment \(w\) and bears some work-related costs \(g\). For simplicity, we assume that agents are homogeneous in this respect. A fraction \(u\) of agents is unionized, while the remaining \(1-u\) are not. Unionized agents are randomly placed among the others and the principal does not know who they are. A unionized agent pays a membership fee \(f\) and receives a fraction \(c\) of the wage if found shirking.

On the other hand, the principal may opt to inspect \(\left\{ I\right\}\) the agents or not \(\left\{ N\right\}\). It is assumed that \(P\) cannot condition the wage on the observable output \(v\). If \(P\) decides to inspect, this brings him an additional cost \(h\). In every time period, each agent is inspected with a given probability \(r\in \left[ {0,1}\right]\), which agents do not know. Every agent who is not caught shirking gets the payment \(w\). A unionized agent who is caught shirking gets a portion of the wage.

Time is discrete. During a single iteration of the game, each agent \(A_i\) plays the game with the principal \(P\), where both choose their strategies simultaneously at the beginning of every time period, which means that they do not know what the opponent has selected. An agent \(A_i\) has four different strategies available: shirking (\(S\)) and working (\(W\)), as well as shirking while being a union member (\(SU\)) and working while being a union member (\(WU\)). The principal chooses whether or not to inspect. Payoffs for each of them are given in the matrix in Table 10.2.

Table 10.2 The payoff matrix of the game

After each full iteration of the game, when \(P\) interacts with all \(A_i \), agents compare their accumulated payoffs with a randomly chosen adjacent agent.

In every iteration, each agent \(A_i\) randomly selects one of the adjacent agents \(A_j\) and reports to him the level of his wealth and the strategy he played. The two agents then compare the strategies they have played and their accumulated wealth, \(e_i\) and \(e_j\), and independently choose the strategy for the next period. Hence:

$$\begin{aligned} e_i (t)=s\sum _{h=0}^{t-1} {q(h)} +q(t) \end{aligned}$$
(10.1)

where \(s\) is the workers' savings rate, and \(q(h)\) and \(q(t)\) are the payoffs of \(A_i\) at iterations \(h\) and \(t\), respectively. The agents' choice function is determined as:

$$\begin{aligned} \wp =\frac{1}{1+\exp \left[ {{\left( {e_i -e_j }\right) }/\kappa }\right] } \end{aligned}$$
(10.2)

The parameter \(\kappa \in \left( {0,1}\right)\) is an uncertainty parameter and denotes a nonnegative probability that an agent \(A_i\) will depart from adopting the more promising of the two alternatives being compared. If \(ran>\wp\), an agent keeps his alternative; otherwise he adopts the alternative of the adjacent agent. The parameter \(ran\sim U\left( {0,1}\right)\) is a uniformly distributed IID random number (Press et al. 2007). In the model, the choice depends upon the expected benefit differential \(\left( {e_i -e_j }\right)\) and the suspiciousness parameter \(\kappa\). The scheme relates to the preferential attachment model in that agents have a preference to “attach” to the most profitable alternative, but may fail to do so for different reasons. In general, the lower the \(\kappa\), the higher the probability that an agent adopts the more promising alternative, and vice versa. Agents also decide whether or not to get unionized. The principal's profit \(\pi\) depends upon the value produced by the workers and the expenditures for wages and inspection. In the games, we examine the profit rate and the optimal inspection rate for the principal under different circumstances.
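A minimal sketch of the adoption step defined by Eq. (10.2) is given below; the function name and the wealth values are hypothetical, and \(\kappa=0.1\) follows the baseline parameterization used later.

```python
import math
import random

def adopts_neighbour_strategy(e_i, e_j, kappa=0.1, ran=None):
    """Return True if agent i adopts the neighbour's strategy, following Eq. (10.2)."""
    p = 1.0 / (1.0 + math.exp((e_i - e_j) / kappa))   # adoption probability
    if ran is None:
        ran = random.random()
    return ran <= p        # if ran > p, the agent keeps his own alternative

random.seed(4)
# Illustrative wealth levels: a poorer agent is very likely to copy a richer neighbour.
print(adopts_neighbour_strategy(e_i=0.3, e_j=0.9))   # almost surely True
print(adopts_neighbour_strategy(e_i=0.9, e_j=0.3))   # almost surely False
```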

Results

All inspection games are iterated forward in time, using a synchronous update scheme. Unless stated otherwise, we use the following values for the corresponding coefficients. The output level of each agent is \(v=1\), or zero if the agent shirks, while the other figures are set relative to the level of \(v\). Each agent earns \(w=0.4\) and bears work-related costs of \(g=0.125\). Agents save 10 % of the wage, thus \(s=0.04\), and the union membership fee equals 5 % of the wage, thus \(f=0.02\). Inspection costs the principal \(h=0.16\). The unionization rate, where applicable, equals \(u=0.4\), and \(\kappa =0.1\). Parameters \(r\) and \(c\) may vary within \([0, 1]\) in steps of \(0.02\). The outcomes presented in the figures are average values after \(10^{5}\) iterations of \(20\) independent runs of the game.

Figure 10.3 shows how \(\pi\) varies with \(r\) and \(c\). The color palette of the heat-map visualization presents the profit of the principal. What matters for the principal's payoff is clearly the influence of the union, which is directly correlated with its bargaining power \(c\). In particular, as the authority of the union increases (\(c\rightarrow 1\)), the maximal average income of the firm per iteration (\(\pi\)) decreases steadily. At \(c=1\), the maximal \(\pi\) is obtained at \(r=1\), where \(\pi =250\), which is slightly more than 50 % lower than the peak value of \(\pi\) in the no-union case at \(c=0\). A union without bargaining power cannot affect the performance of the firm but merely lowers the net income of its members by \(f\).

Fig. 10.3 Performance of the principal under exogenous unionization

The results in Fig. 10.4 relate to an endogenous unionization rate, in which agents are allowed to adopt the union status of an adjacent agent as well, not only the corresponding strategy. The worst-case scenario, a loss of \(\pi =-560\), is obtained at \(c=r=1\), when everyone is inspected with probability \(1\) and no one works (recall that \(f\) is strictly less than \(w\)), while the principal is obliged to pay out full wages.

Fig. 10.4 Performance of the principal under endogenous unionization

Inspection is a necessary condition for the principal to push agents to work, and also a sufficient one in the no-union environment. If agents cannot be backed by the union, the principal does not need to inspect every agent in order to force them to work. On the other hand, inspection is neither a necessary nor a sufficient condition in the environment of a powerful union and endogenous unionization. Albeit to a somewhat lesser extent, the union has an indirect effect even on non-members, in particular by imposing a tendency to shirk even when this is neither optimal nor desirable.

Epidemic Games: Credit Contagion

In this experiment, we examine the propagation of credit events through a system of interconnected financial institutions, which we call banks. Assume that the financial system consists of \(n\) banks, which are connected through the interbank market into a banking network. By definition, such a network is weighted and directed, with the weighted links indicating the exposure of bank \(j\) to bank \(i\) and representing a counterparty risk for bank \(i\), with strength given by the weight. Exposures can also be mutual.

Each bank that is exposed to other banks is vulnerable to the losses of its counterparty banks. While banks with many outgoing links have financial positions that are very sensitive to the operations of other banks, banks with high in-degrees may provoke contagion if they default. A high out-degree entails two opposing effects: it may work as a channel for shock propagation or as a channel for risk-sharing. In addition to direct links, banks may be connected to each other through several different paths, all of which determine how they are affected by the credit event of a distant bank.

Model

The banking network consists of \(n=40\) banks, numbered from \(1\) to \(40\). Banks retain the same number in all settings. Each bank is defined through its balance sheet. The sample includes 13 big banks with total assets exceeding 900 bn USD each. The total assets of 17 banks range from 100 to 700 bn USD each, while the total assets of the ten small banks do not reach 100 bn USD per bank. The cumulative initial value of the banks' assets is 25,951.16 bn USD. The banks represent real banks from different geographical regions. They were chosen arbitrarily.

Fig. 10.5 Banks' total assets versus Tier 1 ratios

We use the banks’ 2011 Annual Reports to get the data on the banks’ total assets and Tier 1 capital ratios as of December 31, 2011, from which we calculated each bank’s initial Tier 1 capital level. Figure 10.5 plots banks’ initial Tier 1 ratios to their total assets. The figure demonstrates that the smallest banks have both the lowest and the highest capital ratios, with the medium and the largest banks being in-between. Concentration towards the origin signifies that the sample consists of mostly undercapitalized banks. Some descriptive statistics of initial banks’ positions are further provided in Table 10.3.

Table 10.3 Descriptive statistics of banks’ initial positions

Time is discrete and defined over \(t=\left\{ {1,2,\ldots ,252}\right\}\), which resembles one business year. The financial position of each bank is defined and reflected in its balance sheet. The value of the assets of bank \(i\) at time \(t\) is then given as:

$$\begin{aligned} A_{i,t} =H_{i,t} +B_{i,t} +N_{i,t} +\sum _{ij=1} {{ {IB}}_{i,t}^j } \end{aligned}$$
(10.3)

\(A_{i,t}\), \(H_{i,t}\), \(B_{i,t}\) and \(N_{i,t}\) denote the values of bank \(i\)'s total assets, mortgage loans, bonds and non-trading assets at time \(t\), while \({ {IB}}_{i,t}^j\) denotes the value of bank \(i\)'s assets held against bank \(j\) at time \(t\). In the equation, \({ {ij}}=1\) designates the link from bank \(j\) to bank \(i\). Let us assume that the liabilities side of each bank is confined to the level of its capital \(C_{i,t}\). The capital of each bank evolves according to the profit or loss \(\Pi _{i,t}\) the bank generates on the trading part of its assets as \(C_{i,t+1} =C_{i,t} +\Pi _{i,t}\). The banks' assets develop over time according to the dynamics of the value of their equity portfolios and through the dynamics of the interbank market.

Banks are not allowed to rebalance their balance sheets over time or to raise additional capital. A bank defaults when its Tier 1 capital ratio falls below 4 %. Bank capital thus represents the bank's capacity for absorbing losses. The default of a bank deteriorates the balance sheets of its counterparties by the \(\left( {1-{ {RR}}}\right)\) proportion of the exposure at default, where \(0\le { {RR}}\le 1\) designates the recovery rate assigned to each bank. For each bank, \({ {RR}}\) is drawn from a uniform distribution on the interval \([0.3, 0.6]\) and is fixed for all repetitions and network topologies. The capital dynamics for each bank over time is thus given as:

$$\begin{aligned} C_{i,t+1} =C_{i,t} +\Pi _{i,t} -\sum _{ij=1\left| {C_j \le 0}\right. } {\left[ {\left( {1-{ {RR}}_j }\right) \cdot IB_{i,t}^j }\right] } \end{aligned}$$
(10.4)

We test the model against idiosyncratic and systemic shocks. An idiosyncratic shock is represented as the sudden default of an individual bank. Generally, individual banks may default due to failed business decisions, malpractice, fraud or any other bank-specific event. A systemic shock is represented by a sudden drop in the value of mortgage loans. The two shocks induce different outcomes. Contagion in the presence of a systemic shock differs from the idiosyncratic case in that the shock itself reduces the capital of every bank, leaving each bank more vulnerable to additional writedowns due to counterparty risk later on. As the first banks default, the shock may be followed by a sequence of idiosyncratic events. All shocks are applied at \(t=10\). They are unexpected events against which banks cannot protect themselves. We assume that the shock affects no other parameters.
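A minimal sketch of the default cascade implied by Eq. (10.4) is given below. The three-bank exposure matrix, capital levels, total assets and recovery rates are illustrative assumptions of ours, and the sketch abstracts from the trading profit term \(\Pi_{i,t}\) and from any revaluation of total assets.

```python
# Minimal sketch of the cascade in Eq. (10.4): when a bank's Tier 1 ratio falls
# below 4 %, its creditors write down (1 - RR) of their exposure to it, which may
# push further banks below the threshold.
def cascade(capital, assets, exposure, recovery, threshold=0.04):
    """exposure[i][j] = assets bank i holds against bank j."""
    n = len(capital)
    defaulted = set()
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i not in defaulted and capital[i] / assets[i] < threshold:
                defaulted.add(i)
                changed = True
                for j in range(n):          # write down creditors' claims on bank i
                    capital[j] -= (1 - recovery[i]) * exposure[j][i]
    return defaulted

capital = [2.0, 5.0, 6.0]                      # bank 0 has already been hit by a shock
assets = [100.0, 100.0, 100.0]
exposure = [[0, 0, 0], [8, 0, 0], [1, 0, 0]]   # banks 1 and 2 lend to bank 0
recovery = [0.4, 0.5, 0.5]
print(cascade(capital, assets, exposure, recovery))   # {0, 1}: bank 1 is dragged down
```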

Results

We first consider the consequences for the banking network of the sudden default of the bank numbered 1. Bank 1 is a big bank with 2,129 bn USD in assets and an initial Tier 1 capital ratio of 12.40. Figure 10.6 plots the time evolution of the net defaulted assets within the banking system for 20 random network topologies. The network topologies determine the structure of the interbank market. We obtain the net defaulted assets per scenario by subtracting the benchmark evolution of the game, in which the system is subject to no shock, from the corresponding shock evolution. The figures thus represent pure differences in defaulted assets within the system that are due only to the different network topologies.

Fig. 10.6 Time evolutions of net defaulted assets after a default of bank number 1 in 20 random network topologies

Although the same shock magnitude has been applied in all network topologies, the plots clearly exhibit differences in the dynamics of the banks' defaulted assets, which is a consequence of different credit contagion paths. This means that the effects of a bank default on the banking network depend upon the network topology. In some constellations the interbank market works as a shock absorber, while in others it becomes a channel of contagion.

We now examine the effects of a systemic shock. It is represented as a one-time drop in the value of housing by a specified percentage. The simulations start with a shock of one percentage point, and across repetitions the housing default rate increases in increments of 1 % up to 40 %. In addition to the direct effects of the shock, it can also become contagious if it induces bank defaults.

Again, we use 20 random network topologies. The heat-map visualizations in Fig. 10.7 present the amount of net defaulted assets within the banking system over time (X-axis) for different levels of housing default (Y-axis). The color palettes (Z-axis) progress from red (low value) to black (the highest value).Footnote 7

Fig. 10.7 Time evolutions of net defaulted assets after a default in housing for a specified percentage (Y-axis) in 20 random network topologies

Contagion in the presence of a systemic shock again differs from the idiosyncratic case in that the shock itself reduces the capital of every bank, leaving each bank more vulnerable to additional writedowns that arise either from counterparty risk or from losses on the equity portfolio in subsequent periods.

Discussion

In the preceding sections, we have presented some arguments, theoretical, methodological and empirical, in favor of the agent-based approach in economics and finance. The approach is very ambitious and gives us some novel techniques and methods to model and examine old questions from a new perspective. Its multidisciplinary nature makes it highly applicable for exploring complex models that exhibit nonlinear dynamics.

A distracted individual, in the sense of his multiple imperfections, is placed at the center of the agent-based approach. Although we refer to agents, an agent need not represent only a human; it can be any entity that possesses some data and is endowed with behavioral rules.

A fundamental presumption of the agent-based approach pertains to decentralized markets populated with heterogeneous agents with cognitive abilities, who interact with each other and with the environment, thereby regularly changing the environment in which they live and adapting to the changes which they as a group create. Heterogeneous agents may respond differently to these developments, which may end in aggregate outcomes with a very rich structure. This may induce highly extreme aggregate outcomes, such as market bubbles that are followed by crashes, the tragedy of the commons as argued by Hardin (1968), or segregation in urban communities as argued by Schelling (1971, 1978). Some of these are highly undesired and, very likely, contrary to the personal interests of most of the egoistic individuals involved. There is a “divergence between what people are individually motivated to do and what they might accomplish together” (Schelling 1971). Markets are thus considered complex and adaptive systems in an uncertain environment and regularly exhibit nonlinearities.

Interaction-based techniques are more capable of explaining these outcomes than equilibrium-based models that presume a representative agent. The latter almost completely disregard the complex nature of economics, which arises from the microstructure, uncertainty, non-optimization, emergent behavior, etc. Although they have provided many helpful insights and reduced the sensitivity of many models to parameter estimates, equilibrium-based models have been subject to severe critique. The deep dissatisfaction with the inability of equilibrium-based models to explain some empirical facts is reflected in the words of LeRoy and Werner (2001), who have noted that these placid equilibrium-based models “bear little resemblance to the turbulent markets one reads about in the Wall Street Journal” and have called for improvements.

Interaction-based methods provide methodological improvements and include a great part of the micro-structure that was missing in previous models. The interaction-based approach offers a multidisciplinary tool for exploring many complex systems that are built on interacting units from different fields. It is especially useful for examining systems that consist of heterogeneous agents who exhibit non-standard behavior, or systems that are characterized by evolution and path dependency. By using simulation-based experiments, we are able to observe and examine how autonomous agents behave over time, how egoistic agents cooperate with each other, and how they respond to the different circumstances which their behavior creates. Significant features of interacting agents who are able to observe and imitate are herding and information cascades, which may induce large, unexpected and often undesired aggregate outcomes.

Interaction-based models may include all the specifics of other computational models in which agents' information sets include histories of observable and some hidden states. The related uncertainty is ingrained in the structure of the agent's selection. When agents make decisions under uncertainty, which is usually the case, they may rely on the probability which they assign to each alternative and then act according to the expected payoff. However, behavioral theory firmly suggests that choices among risky alternatives exhibit a pattern which is inconsistent with mere probability analysis. This is even more relevant when one adds the evolutionary perspective, in which certain events either occur or not; it has been demonstrated that events that occur induce much larger consequences than events that might have happened but did not.

From the methodological perspective, the interaction-based approach is a positive one, addressing the question of which actions and strategies agents actually use, not which they should use. Agents' decisions are not considered right or wrong, but rather decisions that bring them lower or higher payoffs. Tversky and Kahneman (1986) argue that normative approaches are doomed to failure, because people routinely make choices that are impossible to justify on normative grounds, in that they violate dominance or invariance. The behavior of cognitive agents is nonlinear and can be characterized by thresholds, if-then rules, nonlinear coupling, memory, path dependence, hysteresis, non-Markovian behavior and temporal correlations, as well as learning and adaptation. These assumptions are even more relevant given the very subjective nature of information, which is never (or extremely rarely) objective and never available to everyone, but is rather highly dispersed and dynamic. As argued by Hayek (1937), “it is important to remember that the so-called “data”, from which we set out in this sort of analysis, are (apart from his tastes) all facts given to the person in question, the things as they are known to (or believed by) him to exist, and not in any sense objective facts.”

Methodological individualism and subjectivism, together with interaction between heterogeneous economic agents, go beyond the equilibrium that is so common in the economics community. The robustness of simulation-based modeling allows us to test, evaluate and challenge economic theories and models against different assumptions and data, which can be either real or imaginary. Models and theories always simplify, and usually the assumptions on which they are built are very restrictive. The agent-based approach also simplifies. However, it allows us to examine the robustness of these assumptions and simplifications, as they may be relaxed, modified and challenged. Once the model is constructed, a modeler can very easily perturb (or stress) different parameters and then monitor and analyze the effects, which may be remarkable or insignificant.

By using the interaction-based approach, we are not stuck in the equilibrium, but neither do we rule it out in the long run. If an equilibrium exists, we are able to see the adjustment process and examine the speed of convergence. The approach allows us to find the conditions under which theories and theorems are supported and to provide some arguments about the anomalies. In order to obtain reliable statistical estimations, equilibrium-based models regularly exclude extremes in spite of all the effects they produce and the content that they include. In this respect, we are able to identify critical points within the system, whose elimination might well ruin the system as such (see Albert et al. 2000), or to explain rare outcomes that occur under very specific circumstances and which an econometrician, for instance, would simply regard as outliers.

Laboratory experiments are similar in spirit to the interaction-based approach. Gode and Sunder (1993) and LeBaron et al. (1999) argued that the latter is capable of isolating and monitoring the effects of individuals' various preferences, such as risk aversion, learning abilities, trust, habits, and similar factors, which is nearly impossible in laboratory experiments. Even though the experimenter controls the procedure in laboratory experiments, those who take part in them are aware of the fictitious nature of the circumstances and are likely to adapt their responses accordingly. Such experiments do not necessarily reflect what individuals would do under the same circumstances in reality.

Although, methodologically, interaction-based models reflect the real world more accurately than equilibrium-based models, their efficiency is far from absolute, be they approximations or caricatures. Sometimes we would like to bring the model as close to reality as possible; at other times we would like to apply the “as if” assumption and examine the outcomes in a fictitious ideal world. In either case, when applying interaction-based methods, the complexity of the model's behavior over time is usually not induced by a complex model, but by the interaction of boundedly rational agents who regularly make decisions based on very simple behavioral rules. Of course, this does not prevent us from modeling agents with highly complex selection criteria.

Conclusions

The purpose of this chapter has been to present how interaction-based methods can be used in economics and finance. The interaction-based approach encompasses micro behavior. It is rooted in methodological individualism and subjectivism, which makes it applicable to various areas that involve agents and interaction. One key departure of interaction-based modeling from more standard approaches is that events are driven solely by agent interactions once the initial conditions have been defined and the rules of conduct specified. Interacting economic agents are then able to continually adjust their actions according to the changing environment which their actions produce. New opportunities that emerge over time prevent the system from reaching a global optimum or general equilibrium, although the two are not ruled out a priori.

There is no doubt that games on social networks, or activities on networks more generally, will be an important part of future research in economics and finance, as they represent a potentially highly useful instrument for conducting different kinds of agent-based experiments that are based on interaction. If the purpose of a model is to help us explain the questions which we come across or find intellectually challenging, and we think that this is the case, then interaction-based techniques represent an adequate and highly competitive tool for obtaining some of the answers; they represent at least a complement to the currently mainstream techniques, if not a substitute for them. This is reinforced by the fact that social networks are very robust and can easily incorporate ideas from many different areas.

However, the future of economics and finance will to a great extent depend on how successful researchers are in grounding the two fields on psychological evidence about how people assess uncertainty and how they behave under different circumstances when faced with it. This is one of the major challenges in economic modeling. Simon (1997) has argued that the future challenge for economists relates to the question of how to “receive new kinds of research training, much of it borrowed from cognitive psychology and organization theory,” and that they “must learn how to obtain data about beliefs, attitudes, and expectations.”

With new methods that build on interaction among heterogeneous units, we are able to find better explanations for many problems that were previously considered intractable, computationally too intensive, or poorly calibrated.

On this trail towards better models, the good news is that hardware and software solutions develop very fast and that newly developed simulation techniques could allow for this data translation. The bad news is that no matter how good all these improvements are and will be in the future, given the capacity of people to communicate, think and adapt, human action will always be a couple of steps ahead of the conceivable capabilities of researchers and financial economists to model and understand it. However, a good researcher will try to do his best.