
1 Introduction

While computer simulations are a widely accepted method of research in the natural sciences, they have only begun to gain widespread acceptance in the social sciences. The social science community's initial apprehension toward computer simulations grew out of a long-held belief that the experimental methodology employed by researchers in the natural sciences would not be a suitable mechanism for understanding social phenomena (Roehner 2007). Yet despite having little quantitative knowledge of human social interaction, social scientists are now eager to use computer code to transform their once text-only social theories into virtual realities. Sophisticated computer simulations can serve as virtual laboratories for investigating feedback mechanisms, emergence, and the micro- and macrolevel interactions among agents in artificial societies.

The value of these simulations extends far beyond proof and discovery (Axelrod 1997). Computer simulations of artificial societies can decompose complex inputs and generate predictions ranging from the level of individual agents to the system as a whole. Simulations can mimic human behavior purely for performance reasons, which could lead to more accurate or optimal results in tasks such as medical diagnosis. Simulations can also serve as training mechanisms, helping children deal with bullying or preparing military personnel and business managers, by providing dynamic, responsive, and reasonably accurate representations of their human counterparts (Aylett et al. 2004). Finally, simulation of human social behavior can serve a purely entertainment purpose, as in the case of Will Wright's popular video game The Sims.

While agent-based simulations have been the subject of a great deal of research in recent years, to date there is no framework for describing social agents that captures the uniqueness of human decision-making while remaining applicable across a wide variety of domains. The challenges facing any framework for describing agents embedded in a social environment arise in two respects.

First, such a framework should provide a computational model that takes neither the point of view of the individual agent alone nor that of the entire society, but instead reconciles the needs, commitments, and goals of individual agents with the behavior of the system as a whole (Castelfranchi 2000). Failing to balance the microlevel behaviors of the individual agents against the macrolevel behavior of the overall system results in one of two extremes: oversocialization or undersocialization (Castelfranchi 1997). Oversocialization occurs when a framework takes an entirely macrolevel or organizational approach to constructing the social environment; the system becomes very static, predictability is stunted, and the resemblance to human social systems is tenuous. Undersocialization occurs when a framework focuses entirely on the microbehaviors of the individual agents, recursively modeling the nested beliefs of other agents, which leads to a potential explosion in computational complexity and, again, very little resemblance to human social systems (Kim 1999).

Existing approaches to achieving a balance between under- and oversocialization embed agents with a notion of social awareness through two general mechanisms: external incentives and sanctions that favor group participation, or endowing agents with prosocial attitudes (Conte et al. 1997). Incentives and sanctions reward or punish an agent for complying with or deviating from institutionalized social norms and conventions (Hales and Edmonds 2003; Portes and Sensenbrenner 1993). Prosocial attitudes such as altruism and cooperation can either be acquired at runtime through learning or other socialization behaviors or encoded initially in the design of the model itself (Jiang and Ishida 2007; Parsons and Wooldridge 2002). However, these solutions to the micro–macro problem have major disadvantages:

  • When modeling a complex human-based social system, the incentives and sanctions that lead to the desired behavior may be difficult if not impossible to identify.

  • Furthermore, even if identified for one domain, social norms are not universal across all simulation domains.

  • The degree to which social norms are enforced can greatly affect the overall system behavior—too strong and the system is relatively predictable and nonaccidental, too weak and the system is chaotic and unruly.

  • Learning prosocial attitudes can be computationally expensive for larger multiagent systems.

  • Prosocial attitudes are one-way: they represent the influence an agent has on social structures but not the influence those structures exert back onto the individual agents.

In the second respect, such a framework should incorporate into the individual agent's decision model recent developments in cognitive psychology that substantially modify the classical concept of a rational decision-maker. These developments, drawn from observed human decision-making patterns, shift decision theory away from a worldview in which the decision-maker chooses among a set of fixed and known alternatives with known consequences, toward one in which alternatives are not given and the consequences that follow are unknown (Simon 1949). While it creates heavier, more cognitive agents, this paradigm shift minimizes the work done by individual agents, thereby avoiding the performance penalty normally associated with other cognitive architectures such as COGENT and CODAGE (Das and Grecu 2000; Kant and Thiriot 2006). Aside from performance benefits, this fuller description of decision-making has three distinct theoretical advantages over classical descriptions:

  • The model allows for the perception of incomplete and imperfect information that is subject to biases, omissions, and distortions.

  • Pseudointuitive inference can be carried out on key pieces of information (anchors) that constitute only a small fraction of available information (accessibility) if an agent is constrained by some external resource (Kahneman and Tversky 1979; Kahneman 2002).

  • Deliberative inference utilizes information-gathering mechanisms such as communication to expand an agent's knowledge base, then adopts either a notion of satisficing (Stirling 2003) or maximization to reach a decision.

In this chapter, we will introduce innovative mechanisms that allow agents to exhibit social behaviors by balancing their individual wants and needs with the concerns of the entire society while retaining a high level of cognition.

2 Dr. Tuncer Ören’s Contributions to Human Behavior Simulation

Dr. Tuncer Ören was among the first researchers to think philosophically about the mode, scope, and originality of bridging human decision processes and computer simulation. His research in human behavior simulation has pursued a decision theory–centric focus. Throughout his career, Tuncer has investigated a variety of decision-making techniques to meet a diverse set of needs. For example, advances in game theory have carried over as methods for selecting partners across a variety of agent-to-agent interaction patterns, while more traditional economic notions such as expected value and utility have translated into winning strategies for decision-making agents.

One of Tuncer’s work done in early 2000 is multimodels and multisimulation (Yilmaz et al. 2006), which is an advanced simulation-based problem-solving environment for social and political scientists to improve their ability to conceive, perceive, and foresee conflicting situations for human behavior simulation. The multimodels and multisimulation theory is based on interpretation of emergent, potentially unforeseen conditions to facilitate dynamic runtime simulation composition and simultaneous experimentation with multiple plausible models. This method explores the problem state space using feasible sequences or stages of models. This enables experimentation with alternative realities, potentially at different levels of resolution. It can also detect relevant and significant situations in a problem domain and therefore lead to interpretation capabilities regarding emergent conditions and causes of observed effects. Finally, observed effects need to be attributed to certain causes within the domain theory of the problem at hand. Such causes need to be appraised against the problem-solving goals and preferences to make recommendations for further, potentially simultaneous exploration of different realities. While this scheme can be characterized as forward multisimulation, this work also nicely examines the possibility of backtracking and replaying situated simulation histories with altered conditions as well as futures generated before exploring alternative realities.

Perceptions, including anticipations, are subjective and prone to biases and influences. Some biases may stem from a lack of relevant knowledge; others may be induced by other agents seeking to influence decisions. Tuncer's group uses fuzzy logic to simulate these biases properly (Ören and Yilmaz 2004; Ghasem-Aghaee and Ören 2004). The problem is hard because persuasion rests on a wide range of bases, such as reciprocation, consistency, social validation, liking, authority, and scarcity. Despite this inherent difficulty, several researchers have pursued a line of research that can be roughly grouped under the title Socially Rational Decision-Making. The primary goal of this research is to develop a fuzzy agent–based decision model that produces decisions that are inherently rational from the individual perspective yet retain that property of rationality upward toward the level of the entire system. Traditionally, this has been achieved by making an individual agent's autonomy subordinate to the needs and desires of the overall system. This kind of top-down approach fails to exploit the inherent bottom-up and emergent properties that characterize any multiagent system. To retain the autonomy of individual agents, this research instead advocates classical decision-theoretic approaches that encode social considerations into the utility functions of individual agents.

Cognitive complexity is an important factor in decision-making and problem solving. Seck et al. (2005) study human cognitive abilities in order to understand and test the mechanisms of several aspects of cognition and to incorporate them in simulation studies. They foresee two types of use: (1) enhancing simulation studies and contributing to the advancement of the methodology and technology of cognitive simulation and (2) using cognitive simulation to test hypotheses about human cognition. Ören elaborated on the importance of increasing the cognitive complexity of an individual to increase his or her effectiveness in coping with complex situations. This work targets cognitive ability under stress and fatigue, both of which can dynamically interfere with performance and decision-making (both variables can change over the course of a task). The authors distinguish, on certain tasks, the performance difference between high–cognitive complexity people and low–cognitive complexity people. A first distinction concerns the time needed to successfully finish a cognitive task; a second concerns decision-making, as high–cognitive complexity people are known to be more fluent in ideas and more creative and thus generally find the best solution. To capture this, each task in the DEVS atomic behavioral model contains a variable representing the task's cognitive complexity. Individuals with different personalities, and different levels of the openness trait in particular, will thus perform differently in terms of both time and decision-making.

The ability to understand the emotions of others is critical for successful interactions among humans. Kazemifard et al. (2011) presented a framework for emotion understanding that enables intelligent agents to improve their emotional intelligence when interacting with other agents. The framework builds on a paradigm of machine understanding and includes (1) a metamodel, (2) an analyzer, (3) an evaluator, and (4) a memory modulator. The metamodel consists of episodic memory and three versions of semantic memory: semantic graphs, a general semantic graph, and a lookup table of general information about emotions. The analyzer is a perceptual categorization mechanism. The evaluator consists of an interpreter that provides an understanding of the perceived agent (the analyzer's output) with respect to the contents of the different kinds of memory (the metamodel). The memory modulator updates episodic memory and the semantic graphs. This work addresses one of the major themes in individual decision-making, bounded rationality (Tisdell 1996; Simon 1957; Kahneman 2003), in an expanded social setting. Agents are bound by the amount of time and resources they can commit toward resolving a balance between their own wants and needs and those of the entire system. The emotional bound in this framework can change dynamically as more resources become available to an agent, allowing it to devote more time and effort toward social considerations. If resources are scarce, an agent may opt to make a socially nonoptimal yet computationally cheap decision over one that is more computationally expensive and more aligned with the prevailing social norms at the time.

The major novel contribution of Tuncer's research to human behavior simulation is the formalization of modeling and simulation from theory to practice. He built the conceptual foundations of a new exploratory multisimulation methodology with dynamic models and simulation. This solution presents an advanced problem-solving environment for social and political scientists to observe and examine the implications and plausible outcomes of decisions in conflict. He also contributes to individual agents' decision-making by quantitatively measuring the effect of an agent's action (based on the agent's personality and emotion-understanding ability) on the needs of other agents relative to its own. Tuncer's model stands out in that it retains an individual's preference or indifference between two alternatives not only from its personal perspective but from its societal standpoint as well. The model falls short of providing a direct mechanism for agents to influence the decisions and subsequent actions of others; rather, agents are left to passively infer new beliefs and desires from their understanding of the needs of other agents.

3 CASE: Cognitive Agents for Social Environments

This section introduces CASE, a multiagent architecture that is efficient and scalable in simulating large-scale social systems.

3.1 System Overview

Social behaviors are behaviors that are solely oriented toward another agent. Such behaviors consider the intention behind another agent’s expression, create expectations about another agent’s actions, and aim to evoke a distinguishable response from another agent (Rummel 1976). Social interaction occurs when the social behaviors of two or more agents are mutually oriented toward one another.

Most social interactions can be differentiated according to Weber (1947) into the following three categories:

  • Accidental. This class of interactions is often not planned by either party in advance and rarely repeated with the same members. However, in rare instances, this initial unplanned contact between agents has the potential to develop into one of the more temporally permanent classes of social interactions. Example: A waiter asking a table of customers for their order.

  • Repeated. Similar to accidental interactions, these are not planned meetings between two agents but are likely to occur frequently because of spatial proximity, shared interests, or similar habits. Example: Coworkers sharing small talk over the water cooler.

  • Regulated. These interactions are planned and tightly controlled by the laws, customs, norms, or other enforcement mechanisms put in place by members of the society. Example: Attendance at an employee staff meeting or visiting a courthouse for jury duty.

In all its forms, social interaction carries with it some degree of influence on the behavior of the agents involved. While sociologists differentiate between several types of social influence, namely, peer pressure, charisma, connections, force, and reputation (Cialdini 2001), CASE agents only concern themselves with the social structure through which the interaction they are currently experiencing occurs.

These structures represent a relatively stable and enduring pattern of shared relationships among agents within the society. Each structure subdivides the entire society of agents into interrelated sets where member agents share a common function, meaning, and/or purpose (Porpora 1989).

The likelihood that an agent will respond to the social influence, or social impact, of another agent is intimately tied to the following dimensions (Tanford and Penrod 1984) of the social structure through which the agents are interacting:

  • Strength. How important are the other agents who are attempting to influence you?

  • Immediacy. How close to you, in either geographic or social space, are these agents?

  • Number. How many agents are exerting this influence upon you?

Agents always respond to the influence of another agent by altering their perception of their relationship to the influencer, to other agents, or to society in general. This alteration in perception ultimately affects that agent's future decisions and behaviors. Latane and Darley (1970) generalized these principles: the more agents interact within a social structure, the more total influence is exerted. However, while the total impact may grow as new agents are added, the rate of growth shrinks inversely with the number of agents. In addition, the amount of influence any individual agent can exert shrinks in inverse proportion to the number of agents.
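A small numeric sketch of this diminishing marginal influence, assuming Latane's psychosocial law (total impact \( I=s{N}^t \) with 0 < t < 1); the exponent and scale below are illustrative assumptions of ours, not values from the chapter:

```python
# Illustrative sketch of diminishing marginal social impact.
# Assumes Latane's psychosocial law I = s * N**t with 0 < t < 1;
# the constants are arbitrary and chosen only for illustration.

def total_impact(n_agents: int, s: float = 1.0, t: float = 0.5) -> float:
    """Total social impact exerted by a structure with n_agents members."""
    return s * n_agents ** t

for n in (1, 4, 9, 16, 25):
    total = total_impact(n)
    marginal = total - total_impact(n - 1)   # shrinks as n grows
    per_agent = total / n                    # also shrinks as n grows
    print(f"N={n:3d}  total={total:5.2f}  marginal={marginal:5.2f}  per-agent={per_agent:5.2f}")
```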

To achieve such ends, many researchers have attempted to grow fundamental social structures and group behaviors in silico. Their primary aim is to identify the local or microlevel interactions among agents that are sufficient to generate the desired macroscopic behaviors and collective patterns (Epstein and Axtell 1996). However, while this generative approach provides a good computational model that takes into consideration both the individual and social behaviors of autonomous agents, it is hardly efficient, scalable, or robust.

The difficulty lies in modeling the system while holding both the societal view and the individual-agent view simultaneously. The societal view involves the careful design of agent-to-agent interactions so that an individual agent's choices influence and are influenced by the choices made by others within the society; the agent view, in stark contrast, involves modeling only the individual decision-making processes. The pure societal view concentrates on a centralist, static approach to organizational design and the specification of social structures and hence limits system dynamics, while the pure agent view focuses solely on modeling the nested beliefs of the other agents and suffers from an explosion in computational complexity as the number of agents in the system grows.

Motivated by these observations, CASE embeds agents in three social structures: the group, which represents social connections; the neighborhood, which represents spatial connections; and the social network, which spans the social and spatial categories. These three structures reproduce the ways information and social strategy are passed around and therefore the ways people influence each other. In our view, social structures are external to an individual agent and independent of its goals. However, they constrain the individual's commitment to goals and choices and contribute to the stability, predictability, and manageability of the system as a whole.

We take up the classification proposed by Ferber (1999) that multiagent systems are an agent/society duality. There are two levels of organization in multiagent systems, which are illustrated in Fig. 14.1:

Fig. 14.1 Social realms for the CASE agent

  • The microagent level, which is in essence represented by the interactions between agents. There are three common types of interaction: cooperation, competition, and negotiation. Agents interact with each other in two ways: through their sphere of influence in the environment and through direct communication with other agents.

  • The macrosociety level is represented by the dynamics of agents together with the general structure of the system and its evolution. Our work focuses on the mesolevel of the agent/society duality. Any society is the result of an interaction between agents, and the behavior of the agents is constrained by the assembly of societal structures. For this reason, a society is not necessarily a static structure, that is, an entity with predefined characteristics and actions.

3.2 Groups

A group is usually defined as a collection of agents who share certain characteristics, interact with one another, accept expectations and obligations as members of the group, and share a common identity (Sherif and Sherif 1948). Interactions within a group fall under Weber’s regulated category as interactions within a group are tightly controlled by a communally established set of social enforcement mechanisms. A group differs from a mere aggregate of agents in that a group exhibits a sense of cohesiveness and stability through time. Groups may be formed on the basis of intimate relationships or more formal and institutional means. All agents maintain the concept of a reference group, i.e., if I am an A, then I am definitely not a B or a C. Indeed, it is by creating these disassociations with others in society that agents categorize, identify, and compare themselves with other agents by joining groups with whom they share commonalities.

CASE agents interact with other agents in their group with respect to the classical definition of the function and formation of a group as defined by Muzafer Sherif (1955):

  • A common set of motives and goals

  • An accepted division of labor, i.e., roles

  • Established status (social rank) relationships

  • An accepted set of social norms and values

  • The development of accepted sanctions if and when social norms are respected or violated

Hence, CASE agents that share a similar preference for a class of decision problems form groups to reinforce their goals and objectives by diffusing their decision-making preferences to other agents. Each group maintains its own separate preference that is formulated based on a composite of its members’ preferences as an analogue to that group’s accepted set of social norms and values.

3.3 Neighborhood

An agent’s neighborhood is a geographically localized community located within the environment and is comprised of all agents whose spatial location falls within some predefined distance of its own. Here, interactions are typically accidental in nature as an agent’s neighborhood is subject to change as that agent moves through the environment. The size neighborhood of a CASE agent is directly related to the observation capabilities of the agent. The more an agent is able to observe, the larger its neighborhood will be. As an agent’s neighborhood grows, so does the number of agents that are likely to influence it; however, in keeping with Latane and Darley’s (1970) findings, the individual impact of each of its neighbors decreases relative to the size the entire neighborhood.

3.4 Social Network

A social network is a social structure made of nodes, here agents, that are tied by one or more specific types of interdependency. The social network that CASE agents utilize ties them together based on their communication patterns. This type of interaction is not regulated the way interactions within a group are, but it is typically repeated on a regular basis among a small subset of agents. An agent's social network serves as a medium through which agents actively disseminate information and influence to other agents through explicit communicative acts.

3.5 Varying the Sociability of Individual Decision-Making

Let a given agent in the population be denoted as a, where A denotes the set of all agents and \( a\in A \). Each agent has a social strategy. This social strategy can be either ordinal or cardinal. We denote the social strategy for agent a by \( S_a \).

Let a given group in the population be denoted as g, where G is the set of all groups and \( g\in G \). Groups are formulated on the basis of a common preference. Each agent identifies itself with any group such that the agent's preference falls within some threshold of the group's preference:

$$ \forall a\in A\ \mathrm{and}\ g\in G,\ a\in g\ \mathrm{if}\ \mathrm{diff}\left({S}_a,\ {S}_g\right)<d $$
(14.1)

where \( \mathrm{diff}(S_a, S_g) \) is the difference between the agent's strategy \( S_a \) and the group's strategy \( S_g \), and d is the threshold. It can be seen that agent a can belong to more than one group at a time and can belong to different groups over time.
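As a minimal illustration, assuming scalar cardinal strategies and the absolute difference as diff (both assumptions of ours; the chapter leaves the strategy representation and diff abstract), the membership test of Eq. (14.1) might look like:

```python
# Hypothetical sketch of the group membership test in Eq. 14.1.
# Assumes strategies are scalars and diff is the absolute difference.

def belongs_to(strategy_a: float, strategy_g: float, d: float) -> bool:
    """Return True if agent a's strategy is within threshold d of group g's."""
    return abs(strategy_a - strategy_g) < d

# An agent may satisfy this test for several groups at once.
groups = {"conservative": 0.2, "aggressive": 0.8}
agent_strategy = 0.3
memberships = [g for g, s_g in groups.items() if belongs_to(agent_strategy, s_g, d=0.15)]
print(memberships)  # ['conservative']
```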

When an agent joins a group, it is given a rank in that group; an agent holds one rank for every group it belongs to. A rank can be evaluated based on the agent's importance, credibility, popularity, etc. It defines how much the agent influences the group as well as how much the group influences the agent. A high-ranking agent influences the group, and therefore its members, more than a low-ranking agent does, and is at the same time influenced more than a low-ranking agent. An agent's rank is domain specific and may change over time. At each time step, every group updates its strategy; the update is determined by the members' strategies and the percentage of the total group rank each member holds:

$$ {S}_g={\displaystyle \sum_{a\in g}}{S}_a\times \frac{R_a^g}{{\displaystyle {\sum}_{b\in g}}{R}_b^g} $$
(14.2)

where \( R_a^g \) denotes agent a's group rank. Groups are therefore completely dynamic, because both their members and their strategy can change at each time step. Just as an agent holds a rank in its groups, it also holds a rank in its neighborhood and its network. Each agent keeps track of the agents in its neighborhood and the agents it communicates with. Every time an agent observes another agent in its neighborhood, that agent's neighborhood rank increases; likewise, each time an agent communicates with another agent, that agent's communication rank increases. Every agent thus maintains a rank value for every agent it interacts with and a separate rank for every agent it communicates with, and it takes these ranks into account when updating its strategy. Agents with a high rank relative to the other agents have a stronger influence: the longer two agents are near each other, the more they influence each other, and the same holds for communication. The update functions for the neighborhood's strategy and the network's strategy are:

$$ {S}_n={\displaystyle \sum_{a\in n}}{S}_a\times \frac{R_a^n}{{\displaystyle {\sum}_{b\in n}}{R}_b^n} $$
(14.3)
$$ {S}_w={\displaystyle \sum_{a\in w}}{S}_a\times \frac{R_a^w}{{\displaystyle {\sum}_{b\in w}}{R}_b^w} $$
(14.4)

where \( S_n \) is the strategy for neighborhood n, \( S_w \) is the strategy for network w, \( R_a^n \) is agent a's neighborhood rank, and \( R_a^w \) is agent a's network rank.
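Equations (14.2), (14.3), and (14.4) share one form: a rank-weighted average of member strategies. A minimal sketch, again assuming scalar strategies, with a data layout of our own choosing:

```python
# Sketch of the rank-weighted strategy update shared by Eqs. 14.2-14.4.
# `members` maps each member agent's id to a (strategy, rank) pair; the
# same function serves groups, neighborhoods, and networks.

def structure_strategy(members: dict[str, tuple[float, float]]) -> float:
    """Rank-weighted average of member strategies."""
    total_rank = sum(rank for _, rank in members.values())
    return sum(strategy * rank / total_rank for strategy, rank in members.values())

group = {"a1": (0.2, 3.0), "a2": (0.9, 1.0)}  # a1 outranks a2
print(structure_strategy(group))  # 0.375: pulled toward the higher-ranked agent
```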

At each time step, every agent also updates its strategy. An agent's update function is defined as

$$ {S}_a^{\prime }=\alpha \times {S}_a+\beta \times {S}_g+\gamma \times {S}_n+\lambda \times {S}_w $$
(14.5)

where α, β, γ, and λ ∈ [0, 1] and α + β + γ + λ = 1. These values represent what percentage of influence the agent takes from itself, its group, its neighborhood, and its network. They allow for multiple agent types. For example, (1, 0, 0, 0) represents a selfish agent because it cares nothing about the whole society, and (0, 0.33, 0.33, 0.34) represents a selfless agent who cares about the three social structures equally.
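A sketch of the full per-agent update of Eq. (14.5), with the weight vector exposed so that different agent types can be configured; the function name and example values are ours:

```python
# Sketch of the agent strategy update in Eq. 14.5. The weights
# (alpha, beta, gamma, lam) must lie in [0, 1] and sum to 1; they encode
# how much influence the agent takes from itself, its group, its
# neighborhood, and its social network, respectively.

def update_strategy(s_a: float, s_g: float, s_n: float, s_w: float,
                    alpha: float, beta: float, gamma: float, lam: float) -> float:
    assert abs(alpha + beta + gamma + lam - 1.0) < 1e-9
    return alpha * s_a + beta * s_g + gamma * s_n + lam * s_w

selfish = update_strategy(0.7, 0.1, 0.2, 0.3, alpha=1.0, beta=0.0, gamma=0.0, lam=0.0)
selfless = update_strategy(0.7, 0.1, 0.2, 0.3, alpha=0.0, beta=0.33, gamma=0.33, lam=0.34)
print(selfish, selfless)  # 0.7 vs. a strategy drawn entirely from the structures
```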

3.6 The Psychophysics of Individual-Agent Decision-Making

Traditionally, the design of intelligent agents has centered on the common abstract notion of an agent execution cycle. This structure serves as a high-level map for the internal components of any agent-based system, relating not only the data structures that comprise an agent's knowledge about the environment but also the algorithms that act on them and control the flow of information between them. In the vast majority of cases, agent architectures differ only in the data structures and algorithms they choose to utilize. Figure 14.2 illustrates this cycle graphically, with details about each of the five major steps listed below; a minimal sketch of the loop itself follows the list.

Fig. 14.2 Traditional agent execution cycle

  • Observation. This step collects information on current environmental conditions and maps those conditions to percepts. It is important to note that this step is entirely domain dependent and limited in scope by its implementation. For example, if this model were implemented within a robotic system that uses a video camera for input, the agent's observation step would be limited in the amount and types of information it could take in as sensory input.

  • Updating KB (Knowledge Base). An agent’s knowledge base will be updated under two cases: (1) when the agent observes the environment, it will assert new percepts to the knowledge base; (2) when the agent performs an action, it will assert the effects of the action to the knowledge base. For both cases, the function update must check the entire knowledge base for inconsistencies.

  • Decision. Here, agents make two separate decisions: (1) what act to perform and (2) what message to communicate and to whom.

  • Communication. In general, intelligent agents working within a multiagent environment cannot force other agents to perform a specific action or directly alter their internal state. However, they can exert influence over other agents through communicative actions. Multiagent researchers have built upon John Searle's speech-act theory to develop formal languages and ontologies, such as FIPA-ACL and KQML (Labrou et al. 1999), so that intelligent agents can understand one another.

  • Action. The functional nature of an agent’s action step is rather intuitive and simple; its purpose is to ensure a successful, coherent, and fault-proof execution of the optimal action that was recommended by the agent’s decision-making mechanism. No real further explanation of act is necessary as this function is highly dependent on the implementation.
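To make the cycle concrete, here is a minimal skeleton of the loop; the class and method names, and the environment interface (sense, deliver, execute), are placeholders of ours, since the chapter prescribes only the five steps:

```python
# Hypothetical skeleton of the traditional agent execution cycle.
# Each method corresponds to one of the five steps above; bodies are
# stubs, since their contents are domain and implementation dependent.

class Agent:
    def __init__(self):
        self.kb = set()  # knowledge base of percepts and action effects

    def observe(self, environment) -> list:
        """Map current environmental conditions to percepts."""
        return environment.sense()

    def update_kb(self, facts: list) -> None:
        """Assert new facts; a full implementation checks for inconsistencies."""
        self.kb.update(facts)

    def decide(self):
        """Choose (1) an act to perform and (2) a message and its recipient."""
        raise NotImplementedError

    def step(self, environment) -> None:
        self.update_kb(self.observe(environment))        # observation + KB update
        action, message = self.decide()                  # decision
        if message is not None:
            environment.deliver(message)                 # communication
        self.update_kb(environment.execute(action))      # action + its effects
```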

3.6.1 CASE Agent Execution Cycle

Kahneman and Tversky (1979) suggest a two-phase decision model for descriptive decision-making (see Fig. 14.3): an early phase of editing and a subsequent phase of evaluation. In the editing phase, the decision-maker constructs a representation of the acts, contingencies, and outcomes that are relevant to the decision. In the evaluation phase, the agent assesses the value of each alternative and chooses the alternative of highest value. Our decision model incorporates their idea and specifies it by the following five mechanisms:

Fig. 14.3 Two-phase decision-making process

3.6.2 Editing

  • Framing: the agent frames an outcome or transaction in its mind, which shapes the utility it expects to receive.

  • Anchoring: the agent’s tendency to overly or heavily rely on one trait or piece of information when making decisions.

  • Accessibility: the importance of a fact within an agent’s selective attention.

3.6.3 Evaluation

  • Two modes of cognitive function: intuition and deliberation.

  • Satisficing theory: the goal is no longer optimality; decisions are accepted when they are good enough.

3.6.4 Editing Phase

One important feature of the descriptive model is that it is reference based. This notion grew out of the central notion of framing, whereby agents subjectively frame an outcome or transaction in their minds, thereby affecting the utility they expect to receive. This closely patterns the manner in which humans make rational decisions under conditions of uncertainty. CASE agents frame their current situational context by forming an attitude, or weight w, toward one class of decisions or outcomes.

Framing can lead to another phenomenon referred to as anchoring. Anchoring or focalism is a psychological term used to describe the human tendency to overly or heavily rely (anchor) on one trait or piece of information when making decisions. A classic example would be a man purchasing a used automobile; he may tend to anchor his decision on the odometer reading and year of the car rather than the condition of the engine or make of the car. CASE agents anchor by building selective attention on relevant information. The salience of information i is determined by

$$ {\Delta}_i=\frac{{\sum}_c\left[i\ \mathrm{is}\ \mathrm{used}\right]}{\mathrm{Card}\left(I\ \mathrm{is}\ \mathrm{used}\right)},\quad i\in I $$
(14.6)

where \( \Delta_i \) is the frequency with which information i was used under the context c.

If the salience of i is higher than the threshold, i becomes the anchored information:

$$ {I}^{*}=\left\{i\ \Big|{\Delta}_i>\mathrm{threshold}\right\} $$
(14.7)

Accessibility is the ease with which particular aspects and elements of a situation, the different objects in a scene, and the different attributes of an object come to mind. As used here, the concept of accessibility subsumes the notions of stimulus salience, selective attention, and response activation or priming. CASE agents determine the similarity between states using only I*, establishing the relation

$$ {S}_t\sim {S}_m\ \mathrm{if}\ {d}_{c,{I}^{*}}\left({S}_t,{S}_m\right)<D $$
(14.8)

where \( S_t \) is the current state, \( S_m \) is a state in the agent's memory, and \( d_{c,I^{*}}(S_t, S_m) \) is the distance between \( S_t \) and \( S_m \) over all anchored information \( I^{*} \) under the context c. Those states that are most similar to the current one are said to be more accessible than others.
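A compact sketch of the editing phase under our own simplifying assumptions: usage counts stand in for the salience of Eq. (14.6), anchored items follow Eq. (14.7), and states are vectors compared by Euclidean distance restricted to anchored components for Eq. (14.8). None of these representational choices are fixed by the chapter.

```python
# Hypothetical sketch of the editing phase: salience (Eq. 14.6),
# anchoring (Eq. 14.7), and accessibility (Eq. 14.8). States are dicts
# keyed by information items; distances use anchored components only.
from collections import Counter
import math

def anchored_information(usage: Counter, total_uses: int, threshold: float) -> set:
    """Items whose usage frequency (salience) exceeds the threshold."""
    return {i for i, n in usage.items() if n / total_uses > threshold}

def accessible(s_t: dict, s_m: dict, anchors: set, D: float) -> bool:
    """True if memory state s_m is similar to s_t over anchored items."""
    d = math.sqrt(sum((s_t[i] - s_m[i]) ** 2 for i in anchors))
    return d < D

# Echoing the used-car example: odometer and year dominate attention.
usage = Counter({"odometer": 8, "year": 6, "engine": 1, "make": 1})
anchors = anchored_information(usage, total_uses=sum(usage.values()), threshold=0.25)
print(anchors)  # {'odometer', 'year'}: the agent anchors on these two cues
```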

3.6.5 Evaluation Phase

In the evaluation phase, there exist two modes of cognitive function: an intuitive mode, in which decisions are made automatically and rapidly, and a deliberative mode, which is effortful and slower. The operations of the intuition function are fast, effortless, associative, and difficult to control or modify, while the operations of the deliberation function are slower, serial, and controlled; they are also relatively flexible and potentially rule governed. Intuitive decisions occupy a position between the automatic operations of perception and the deliberate operations of reasoning. Intuitions are thoughts and preferences that come to mind quickly and without much reflection. In psychology, intuition can encompass the ability to know valid solutions to problems and decision-making.

Our technical solution to achieve this behavior is as follows: if the current state \( S_t \) is close to a state in memory \( S_m \), then the optimal policies \( \pi^{*}(S_t) \) and \( \pi^{*}(S_m) \) should be close as well. Hence, the agent reuses an optimal policy that it has employed before in a similar state and updates its state memory by adding the current state. If the policy the agent employed was successful, the reward associated with that policy and its accessibility are increased. The slower, serial, and controlled process of deliberation determines state similarity across all information available to the agent, not just the anchored information \( I^{*} \). Traversing its memory, an agent attempts to reoptimize a previously used policy stored in memory:

$$ {\pi}^{*}\left({S}_m\right)={\mathrm{argmax}}_{\pi }E\left[{\displaystyle \sum_{i=0}^{\infty }}{\gamma}^iwR\left({S}_i\right)\Big|\pi \right],\ 0<\gamma <1 $$
(14.9)

where γ is the time discount factor, w is the framing weight, and \( R(S_i) \) is the reward an agent receives when it arrives at state \( S_i \).

In keeping with the notion of satisficing, under the intuitive mode CASE agents do not compute a new optimal policy for the current state \( S_t \) if there is a similar state \( S_m \) in memory whose policy can be reused.
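Putting the two modes together, the following hedged sketch of the evaluation phase reuses the accessible helper from the editing-phase sketch above; the memory layout and the reoptimize stub (standing in for Eq. (14.9)) are our own assumptions:

```python
# Hypothetical sketch of the evaluation phase: reuse a remembered policy
# when an accessible similar state exists (intuition); otherwise fall
# back to full, effortful reoptimization (deliberation, Eq. 14.9).

def evaluate(s_t, memory, anchors, D, reoptimize):
    """memory: list of (state, policy) pairs; reoptimize: callable for Eq. 14.9."""
    for s_m, policy in memory:
        if accessible(s_t, s_m, anchors, D):   # fast comparison over anchors only
            return policy                      # intuitive mode: satisfice and reuse
    policy = reoptimize(s_t)                   # deliberative mode: slow, all information
    memory.append((s_t, policy))               # remember for future intuitive reuse
    return policy
```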

3.7 Experiments and Results

We tested the CASE architecture and its new decision-making mechanism in a number of domains, ranging from the classic prisoner's dilemma to an artificial stock market, along with initial work on real-world applications such as the subprime lending crisis.

3.7.1 An Extended Prisoner’s Dilemma: Investigating Intuitive Attitudes Toward Risk

We chose an extension of the classical prisoner's dilemma as the domain for our initial experiment. In the classical prisoner's dilemma, a game comprises two agents, A and B. Each agent is given the option to either cooperate with or defect from its opponent, with various outcomes for each choice.

We extend this classical prisoner's dilemma in two distinct ways. First, outcomes are cumulative in our domain. In the classical dilemma, the outcomes of each game are not cumulative; even in an iterated prisoner's dilemma, the outcomes of a previous game have no effect on an agent's decision in subsequent games, and the only outside factor that influences the decision is the agent's knowledge of what action(s) its opponent has taken in the past. The change to cumulative outcomes allows agents to assign value to gains and losses rather than to final assets. Since the current asset (prison sentence) of an agent serves as a reference point for subsequent decisions, the cumulative value of an agent's assets can have a tremendous effect on that agent's later performance. Second, while in the classical prisoner's dilemma the four outcomes are fixed, here we allow them to be uncertain.

Our experiment involved a total of 2,000 agents within either one or two societies, with each agent playing over 500 iterations. At each iteration, we randomly paired agents to play a prisoner's dilemma game. After each game, the assets of each agent were changed to reflect the outcome of the game (gains or losses), and this outcome carried over to the next iteration. At the start of each experiment, each agent was assigned a small positive number to represent its beginning asset position. In some experiments, we chose this number deliberately to force groups of agents into initially risk-seeking or risk-averse attitudes; in others, we generated it randomly to create a heterogeneous distribution of both risk-seeking and risk-averse agents.

Figure 14.4 shows the average asset position of all the agents in the experiment. The two upwardly curving lines reflect the two possible rewards each agent could receive for either cooperating (left line) or defecting (right line). There is an evident shift in the concentration of agents from one decision choice to the other as time and assets progress, reflecting the fact that as an agent's overall assets increase, its behavior becomes increasingly risk averse. The increased density of points toward the upper end of the line reflects the congregation of agents around a single risk-averse decision. This result demonstrates that a few minor, judicious alterations to individual agent decision processes are sufficient to create the attitudes toward risk that characterize observed human decision patterns.

Fig. 14.4 The performance of agents' assets

3.7.2 An Artificial Stock Market: Evaluating the Performance of Intuitive and Deliberative Decisions

Our initial experiments involving the prisoner's dilemma investigated only a small portion of the entire CASE agent functionality and explored only a distinct subset of prescribed human behavior. Here, we examine in detail the effectiveness and role of the mechanisms underlying an individual agent's two-phase decision process, most notably the two cognitive modes of intuition and deliberation. Twenty thousand agents traded among 30 unique stock indices over a time frame of 25 rounds. Every agent began the simulation with 10,000 in cash, and no limitations were set on the amount of stock it could purchase each round as long as it had cash available to make the desired purchase. Stock prices changed each round based on traditional microeconomic supply and demand curves that accounted for the volume of buying and selling in the previous round: the more shares of a stock were purchased, indicating higher demand and dwindling supply, the higher its price was driven, and vice versa. For simplicity, agents bought and sold stock only to the market and did not engage in interagent purchases, sales, or trades.

Agents used the two-phase decision-making process, first editing the decision space by selecting only 10 of the available 30 stock indices to serve as anchors each round. These anchor stocks could change from round to round and were selected as a basis for predicting overall market behavior: anchors that did not seem to reflect observed market behavior were discarded at the end of each round and new ones added, with each agent keeping exactly 10 anchor stock indices at each time step. The second phase of the decision process utilized the two modes, intuition and deliberation. In the intuitive mode, the 10 anchors were used to predict, by way of a simple polynomial fit, the expected behavior of each anchor stock index and likewise the predicted behavior of the overall market in the next round. A predicted downturn in the overall market would signal the CASE agents to begin selling off their low-performing stocks, while an upturn would signal the need to purchase stocks on the rise. If half or more of the chosen anchors were the same stocks the agent was holding, the holdings matching current anchors were bought and sold, and no action was taken on holdings that did not match a current anchor. Otherwise, more information had to be gathered and the deliberation process started to determine whether to buy new stocks or sell current holdings. This was done by computing the distance between several random points on each anchor's price function and a selected holding's price function; the anchor with the smallest distance was chosen as representative of that particular holding.
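The chapter specifies only that the intuitive mode uses "a simple polynomial fit"; the sketch below assumes a numpy quadratic fit over each anchor's recent price history, and the function names, window, and degree are our own illustration:

```python
# Hypothetical sketch of the intuitive mode's anchor-based forecast.
# Fits a low-degree polynomial to each anchor's recent price history and
# extrapolates one round ahead; the market signal is the mean predicted move.
import numpy as np

def predict_next(prices: np.ndarray, degree: int = 2) -> float:
    """Extrapolate the next price from a history via polynomial fit."""
    t = np.arange(len(prices))
    coeffs = np.polyfit(t, prices, degree)
    return float(np.polyval(coeffs, len(prices)))

def market_signal(anchor_histories: list) -> float:
    """Positive -> predicted upturn (buy rising stocks); negative -> sell."""
    return float(np.mean([predict_next(p) - p[-1] for p in anchor_histories]))

histories = [np.array([10.0, 10.5, 11.2, 12.1]), np.array([8.0, 7.9, 7.5, 7.0])]
print(market_signal(histories))  # net predicted move across the anchors
```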

Figure 14.5 illustrates the CASE agents' choice of cognitive function (i.e., intuitive or deliberative) relative to their stock-holding performance. Here, we see a clear visual correlation between the number of agents utilizing the intuitive mode and positive performance in the CASE agents' stock holdings. This reflects the crucial role information plays in the simulation: the better the information CASE agents have about their environment, reflected in their choice of anchors, the better they are able to predict both positive and negative fluctuations in stock price and to react to those anticipated changes. A rise in the number of deliberative agents and a slump in stock-holding performance can be explained in terms of information as well. Here, the reacting agents have bought or sold a large amount of stock, changing the behavior of the overall system, so the CASE agents' anchors no longer serve as good predictors of overall market performance. This loss of good information results in a temporary downturn in performance until the next round, when new anchors can be chosen that better reflect the newly altered reality.

Fig. 14.5 Intuitive or deliberative decision vs. stock-holding performance

Investigating the performance of the CASE agents alone is certainly not enough to validate the superior performance of the two-phase decision process. Accordingly, Figs. 14.6 and 14.7 draw from a separate experiment run over 50 time steps (twice the length of the original) in which the performance of CASE agents (indicated by the lighter pink line) was compared against a set of agents employing a classical decision-theoretic approach (indicated by the darker blue line).

Fig. 14.6 Performance of two-phase decision process vs. classical decision-theoretic approach: no limits placed on stock volume

Fig. 14.7 Performance of two-phase decision process vs. classical decision-theoretic approach: limits placed on stock volume

In Fig. 14.6, when no limits are placed on how many shares of each stock are available for purchase, agents are guaranteed that they can purchase shares of any stock provided they have sufficient funds to do so. This environmental characteristic essentially devalues the major competitive advantage CASE agents hold: decision-making speed. Even in these shallow decision problems, where complexity and available information are low, Fig. 14.6 shows that our CASE agents remain competitive with the classical decision-theoretic frameworks traditionally employed by agent-based researchers. What is most apparent from Fig. 14.6 is that even where the CASE agents' performance drops below that of the classical decision-theoretic agents, they return to a better-yielding decision strategy remarkably fast, usually within approximately five rounds of the simulation.

When the number of shares of each stock offered at the beginning of each round is limited, as in Fig. 14.7, the performance of the CASE agents is markedly superior and enjoys a slight degree of sustained growth throughout the simulation. Since stock purchased in one round may or may not be available to an agent in future rounds, it is important that agents weigh the cost of purchasing or selling a stock now against the risk of purchasing or selling it later at a higher or lower price. The ability of CASE agents not only to gauge the opportunity cost associated with each decision but to make those decisions rapidly using their intuition is integral to their sustained success.

3.7.3 An Artificial Stock Market: Evaluating Diffusion by Social Structures

To measure the influence of the three social structures on the individual agent decision-making process, a 100 × 100 grid-based environment was created and 1,000 agents were randomly dispersed across it. Sixty percent (600 agents) were assigned at random to a group that employed a conservative decision-making strategy, attempting to minimize risk while maximizing profit, while the remaining 40 % (400 agents) were assigned a more aggressive, risk-seeking strategy.

The social structures were initially given the following attributes: (1) agents could observe only the eight cells immediately surrounding them, (2) they were allowed at most three agents in the social network they communicated with, and (3) they were not allowed to move from their location, meaning their neighborhood remained static throughout the experiment.

Figure 14.8 shows that under these conditions, the neighborhood appeared to be the least effective social structure for rapidly disseminating influence among a large group of agents because of its limited reach and static nature. Adding the group and social network structures tended to increase the rate at which the conservative and successful strategy diffused to the other agents in the experiment.

Fig. 14.8 Social structure diffusion rate: small neighborhood + network size: 3 + no walk

This pattern continues until all three social structures are in use, at which point the combination of the neighborhood and social network significantly outperforms the combination of all three. While initially puzzling, this result is indicative of the very nature of the group social structure. To maintain consistency with conventional sociological conceptions of a group, an agent's group serves as a composite of the influence its members receive through their neighborhood and network. This allows the group to serve as an important medium for broadcasting influence widely and nondiscriminately to agents not bound by any social or spatial context. The group also serves as a mechanism for resisting, or smoothing, rapid and sharp changes occurring in the underlying social structures. In a limited sense, we can say that CASE agents, through their groups, not only maintain a sense of identity or commitment to a certain ideology (here, aggressive or conservative) but actively try to maintain and propagate that sense of connection to other agents in a fashion that mimics observed human social behaviors.

A more thorough examination of the relationship between an agent's social network and neighborhood was carried out by extending the previous experiment along the following lines: (1) the duration was increased to 300 time steps to ensure adequate time for the diffusion rate to stabilize, and (2) the size of both structures was varied along with the ability of the agents to move.

Figures 14.9 and 14.10 indicate a strong relationship between the movement of agents in the environment (no walk/walk) and the rate at which the neighborhood is able to diffuse the conservative strategy to other agents. We identified two primary reasons for the diffusion rate being significantly lower in Fig. 14.9. First, the rate of diffusion through the neighborhood social structure is intimately linked to the spatial density of the agent population, with a higher spatial density yielding rapid, effective diffusion and vice versa. Second, the direction of the influence diffusing out of the neighborhood social structure is tied to the location of an agent's immediate neighbors. The distribution of agents within an individual agent's neighborhood is by no means uniform and may well be heavily skewed in one or several directions, as the cells adjacent to an agent may or may not contain agents. Those adjacent cells containing agents determine the direction of influence for the subsequent time step.

Fig. 14.9 Small, medium, and large neighborhoods + network size: 3 + no walk

Fig. 14.10 Small, medium, and large neighborhoods + network size: 3 + walk

As Fig. 14.10 indicates, allowing agent movement overcomes both these limitations, as spatial density and the location of neighbors are no longer factors when an agent can move. In an abstract sense, the inclusion of agent movement transforms an agent's neighborhood from a static to a dynamic entity. As Figs. 14.9 and 14.10 illustrate, this move to dynamism also dramatically increases the rate of diffusion.

In contrast to the neighborhood, as Figs. 14.11 and 14.12 demonstrate, an agent's social network is seemingly unaffected by the spatial density and movement of the agent population, as it exists outside the boundaries of physical space. However, a direct correlation does exist between the number of agents within an individual agent's network and the rate and degree of influence it can exert on those agents.

Fig. 14.11 Small neighborhood + small, medium, and large network + no walk

Fig. 14.12 Small neighborhood + small, medium, and large network + walk

4 Grand Challenges on Simulating Human Social Behaviors

Human social behaviors are directed toward society and are therefore influenced by interactions with other people in that society. At the same time, human behaviors are also influenced by culture, attitudes, emotions, values, ethics, authority, rapport, hypnosis, persuasion, coercion, etc. Due to space limitations, we focus the grand challenges on how people interact with each other in social networks to maintain their relationships. In the following sections, we discuss challenges in complex social networks, temporal patterns, and network randomness.

4.1 Structural Network Measurement

A complex network is a network with nontrivial topological features, i.e., features that do not occur in simple networks such as lattices or random graphs but often occur in real graphs. Examining the structure of the whole network as well as individual patterns that arise offers valuable insight into many different social applications. These include

  • Studies of communication, which focus on the transfer of information. This can include in-person communication, such as the spread of a rumor, or public-forum communication, such as information conveyed on a blog (Fleming 2011; Minsheng et al. 2013; Zhoua et al. 2013).

  • Community development, including both geographic and online communities. Of particular interest is the development of tools to analyze the growth of social media networks such as Facebook, Twitter, and Wordpress (Lapachelle 2011; Zhoua et al. 2013).

  • Diffusion of innovations, or the spread of ideas throughout a community. This can include finding the "opinion leaders," the individuals who are especially influential in the spread of an idea, as well as modeling the spread of an innovation through an entire organization. Recent studies of diffusion have also looked at how diffusion interacts with network structure (Stattner et al. 2013).

  • Health care analysis, including epidemiological studies and studies of health care organizations and systems (Levy and Pescosolido 2002; Christakis and Fowler 2013).

  • Language and linguistics, including how different languages evolve through social interaction. In an increasingly globalized world, this is of particular interest in studying the decline of native dialects as well as language maintenance and shift in multilingual communities (Milroy 2008).

  • Social capital, or the resources available to individuals through their social interactions. For instance, social capital allows certain people to access opportunities such as job openings. It has also been shown that there is a correlation between measured social capital and reported quality of life (Valenzuela et al. 2009).

As complex social networks can be used to analyze many real-world interaction types, from social networking websites to interactions between animals, being able to effectively study their structures has become increasingly important in recent years (Pinter-Wollman et al. 2013).

Rumors, opinions, behaviors, and diseases spread through the population via social interactions. A blocker is an individual in the network who can most effectively slow the spread of a process through the population. For example, to slow the spread of disease, it would be most efficient and effective to vaccinate one of the key blockers in the network. Habiba et al. (2010) attempt to find structural network measures that indicate the best blockers in dynamic social networks. A dynamic network is a series of static networks, each showing the interactions of individuals at a certain time. An aggregate network shows a group of individuals and their interactions over a period of time: if two nodes interact during the observed period, this is represented by an edge, and multiple interactions between a pair of individuals might be represented as a single edge, multiple edges, or a weighted edge between the two nodes. A dynamic network is generally more useful because it preserves the time and order of interactions.

Structural network measures quantify social properties within a network, betweenness being one example. The method uses several global measures to examine the entire network and more localized measures to examine individual nodes. The global structural properties observed were: density, the proportion of edges in a network to possible edges; dynamic density, the average density at one time; path, a distinct sequence of nodes; temporal path, a time-respecting path in a dynamic network; and diameter, the length of the longest shortest path. The localized properties used were: degree, a node's number of neighbors; dynamic degree; dynamic average degree; nodes in the neighborhood; edges in the neighborhood; betweenness (previously discussed); dynamic betweenness; closeness, the average distance between one individual and the other individuals in the network; dynamic closeness; and the clustering coefficient and dynamic clustering coefficient, the fraction of a node's neighbors that are neighbors of each other in previous time steps. These measures were all compared to determine the blocking ability of individuals.
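Several of these measures are directly computable on a static snapshot with the networkx library; the sketch below uses a toy graph and is our illustration rather than the instrumentation used by Habiba et al.:

```python
# Sketch: computing a few of the structural measures above with networkx
# on a single static snapshot of the network.
import networkx as nx

G = nx.karate_club_graph()  # stand-in social network

density = nx.density(G)                     # edges / possible edges
diameter = nx.diameter(G)                   # longest shortest path
degree = dict(G.degree())                   # neighbors per node
betweenness = nx.betweenness_centrality(G)  # share of shortest paths through a node
closeness = nx.closeness_centrality(G)      # inverse average distance to others
clustering = nx.clustering(G)               # fraction of a node's neighbor pairs linked

# Rank nodes by clustering coefficient, the measure Habiba et al. found
# most indicative of blocking ability (their dynamic variant extends it
# over time steps).
print(sorted(clustering, key=clustering.get, reverse=True)[:5])
```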

Habiba et al. (2010) find that the dynamic clustering coefficient, which measures how many of a node's neighbors are themselves connected to each other, is a good indicator of the node's blocking ability. The other structural measures that best predicted blocking ability were node degree, the number of edges in a node's neighborhood, and dynamic average degree. These measures must still be tested on larger and more complex models to determine whether they are truly useful for realistic models of disease spread. One limitation of this work is its focus on practical applications: the theoretical structure of the problem remains poorly understood. A larger limitation is that the method cannot identify a set of top blockers, because it tests nodes one at a time, removing each and measuring the resulting spread; finding the best set of blockers is computationally hard, and an exhaustive search is infeasible. Interestingly, they also find that in networks where blocking the spread was difficult, all nodes ranked about the same, whereas in networks where the spread could be blocked by removing just a few individuals, the rankings varied much more widely.
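The one-node-at-a-time test itself is straightforward to sketch. The version below scores a candidate blocker by how much its removal shrinks the set reachable from a seed; this crude reachability proxy stands in for the dynamic spread simulations Habiba et al. actually run, and the graph is invented.

```python
# Sketch of single-node blocking evaluation: remove a candidate node
# ("vaccinate" it) and measure how much the potential spread shrinks.
import networkx as nx

def spread_size(G, seed):
    """Number of nodes reachable from the seed (a crude proxy for spread)."""
    return len(nx.node_connected_component(G, seed))

def blocking_ability(G, seed, candidate):
    """Reduction in spread when the candidate node is removed."""
    if candidate == seed:
        return 0
    H = G.copy()
    H.remove_node(candidate)
    return spread_size(G, seed) - spread_size(H, seed)

G = nx.path_graph(["A", "B", "C", "D", "E"])  # B, C, D are cut vertices
scores = {n: blocking_ability(G, "A", n) for n in G}
print(max(scores, key=scores.get))  # "B": removing it cuts off C, D, and E
```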

4.2 Temporal Patterns

A temporal network is a network in which the connections between nodes are not continuous: edges are active only at certain times. Time is the most important aspect of a temporal network, and it is also the most difficult aspect to visualize and analyze, because relationships are constantly changing and it is hard to model those changes in a single image while preserving the order and timing of interactions. Demonstrating a temporal network effectively requires a whole sequence of snapshots of the network at different points in time. In our research, we examined the various methods that have been used to analyze different aspects of temporal networks; after becoming thoroughly acquainted with these methods, we compared them and summarized the pros and cons of each.

We examine several models, each of which focuses on a specific problem in the temporal patterns of social networks. The first method we examined was betweenness preference (Pfitzner et al. 2013). This method focuses on the structural properties of a temporal network and asks how likely particular nodes are to mediate interactions between any two other nodes; it is based on the idea that certain nodes contact other nodes based on previous contact. The problem is that betweenness preference cannot be recovered from the time-aggregated network: given two different temporal networks that aggregate to the same static network, we cannot determine between which time steps a node mediated an interaction between two other nodes, or whether those two nodes could interact at all through their previous contacts. The difficulty lies in the order in which edges occur. In spite of this, betweenness preference is an important aspect of a temporal network: if we can keep it intact when aggregating a network, it helps us see the flow of information through the network, which is typically lost when the network is collapsed.
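This order dependence is easy to demonstrate. In the sketch below, with an invented contact list, node B mediates between A and C only if its contact with A precedes its contact with C; the aggregated network cannot tell the two cases apart.

```python
# Why edge order matters: information travels only along time-respecting
# paths through a mediator. Contacts are invented for illustration.
contacts = [("A", "B", 1), ("B", "C", 2)]  # A meets B, then B meets C

def can_flow(contacts, source, target):
    """True if information can reach target from source along contacts
    whose timestamps are strictly increasing."""
    reached = {source: 0}  # node -> earliest time it could be informed
    for u, v, t in sorted(contacts, key=lambda c: c[2]):
        for a, b in ((u, v), (v, u)):
            if a in reached and reached[a] < t:
                reached[b] = min(reached.get(b, float("inf")), t)
    return target in reached

print(can_flow(contacts, "A", "C"))  # True:  A -> B (t=1), then B -> C (t=2)
print(can_flow(contacts, "C", "A"))  # False: the order cannot be reversed
```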

A related problem with a network of time-stamped pairs recording who spoke to whom, or who tweeted at whom, is that a vital part of the flow of information is lost. For example, if A speaks to B in the morning and B speaks to C in the afternoon, information might flow from A to C but not from C to A. Grindrod et al. (2011) suggest a more natural definition of a walk on an evolving network. Specifically, a node's ability to both broadcast and receive information is calculated through a series of basic linear-algebra operations, and the passage of time is accounted for by exploiting the noncommutativity of matrix multiplication. A walk is a sequence of edges leading from node to node; it is closed if it starts and ends at the same node and open otherwise. The related Katz centrality measures a person's centrality by counting the walks between pairs of people, downweighting longer walks.
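A minimal sketch of this idea follows, on an invented three-node example. Resolvents of the successive adjacency matrices are multiplied in time order; because matrix multiplication does not commute, the product respects the arrow of time. The parameter alpha must be smaller than the reciprocal of the largest spectral radius of any snapshot for the resolvents to exist.

```python
# Dynamic communicability in the spirit of Grindrod et al. (2011):
# multiply resolvents of the per-step adjacency matrices in time order.
import numpy as np

# Morning: A talks to B.  Afternoon: B talks to C.
A1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
A2 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)

alpha = 0.5  # must satisfy alpha < 1 / spectral_radius of each snapshot
I = np.eye(3)
Q = np.linalg.inv(I - alpha * A1) @ np.linalg.inv(I - alpha * A2)

broadcast = Q.sum(axis=1)  # row sums: ability to send information onward
receive = Q.sum(axis=0)    # column sums: ability to receive information

# Q[0, 2] > 0 (A -> B -> C is possible) while Q[2, 0] == 0 (C -> A is not):
# reversing the factor order would credit the opposite direction instead.
print(Q[0, 2], Q[2, 0])
```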

A third issue in temporal patterns is that in some networks the links between nodes are either positive, like a friendship, or negative, like an opposition. The sign of a link in a social network is positive or negative depending on the attitude of the link's creator toward its recipient. Given a network whose edges carry positive or negative values and one link with a missing value, we want to determine, from the other connections in the network, whether this missing link is positive or negative. Leskovec et al. (2010) use the logic that the enemy of my friend is my enemy and the friend of my enemy is my enemy: their method looks at how the sign of a link interacts with the pattern of signs on the links within a certain distance of it, using edge sign prediction to determine the state of a link given an almost complete network of positive and negative links.

While many of these graphs are directed, it is sometimes useful to examine the links regardless of the direction of the connection between two nodes. When predicting the sign of an edge, however, they use directed links. For example, to determine the sign of the edge from node u to node v, they look at the signs of the outgoing edges from u and the incoming edges to v; they also track the number of positive and negative outgoing edges from u and the total number of common neighbors that u and v share. When predicting the sign of a link, it is also helpful to form triads of nodes: they consider the triad containing the edge (u, v) together with a node w that has an edge to or from u and an edge to or from v. The underlying theory, structural balance theory, rests on the ideas stated above that the friend of my friend is my friend and the enemy of my friend is my enemy. Under structural balance, if w forms a triad with the edge (u, v), then (u, v) should take the sign that gives the triangle formed by w, u, and v an odd number of positive signs. On a class-balanced data set, simply guessing the sign of a link yields a 50 % correct prediction rate; using the full data set, accuracy improves to 80 %.
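The triad logic can be sketched as a simple majority vote over common neighbors. Note that Leskovec et al. actually feed triad counts and degree features into a logistic regression rather than voting directly, so this undirected toy version, on an invented signed graph, only illustrates the balance-theory intuition.

```python
# Toy sign prediction via structural balance: each common neighbor w votes
# for the sign that gives triangle u-w-v an odd number of positive edges.
signs = {  # (node, node) -> +1 friend, -1 enemy (undirected for simplicity)
    ("u", "w1"): +1, ("w1", "v"): +1,  # friend of a friend -> friend
    ("u", "w2"): -1, ("w2", "v"): -1,  # enemy of an enemy -> friend
}

def get_sign(a, b):
    return signs.get((a, b)) or signs.get((b, a))

def predict_sign(u, v, nodes):
    """Majority vote over common neighbors of u and v."""
    vote = 0
    for w in nodes:
        s_uw, s_wv = get_sign(u, w), get_sign(w, v)
        if s_uw is not None and s_wv is not None:
            vote += s_uw * s_wv  # the balanced sign for (u, v)
    return +1 if vote >= 0 else -1

print(predict_sign("u", "v", ["w1", "w2"]))  # +1: both triangles vote "friend"
```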

4.3 Network Randomness

Most social networks are dynamic networks in which connections are continuously being made and broken. Being able to accurately predict the future of a dynamic network has many applications, particularly in the business world: large organizations can benefit by suggesting new collaborations and interactions within the organization, and security agencies can use the same techniques to analyze terrorist networks.

Our first problem is to predict which interactions among existing members are likely to occur in the near future. The supervised random walk method (Backstrom and Leskovec 2011) uses only the data on existing relationships to predict the future of the network; this is how social networks like Facebook recommend to users whom they should befriend. Supervised random walks combine the information in the network with node and edge attributes, which are then used to guide a random walk on the graph. In link prediction, we look at a network at a given time t and try to predict the edges that will be added to the network by some future time t′.

Link prediction methods face a few problems. One major problem is that social networks are sparse: Facebook users connect to an average of about 100 nodes out of a 500-million-node network. Consequently, a predictor that simply predicts no new edges at all is extremely accurate, albeit useless. Supervised random walks address this problem. Backstrom and Leskovec (2011) run a random walk on the network that is biased toward visiting certain nodes more often than others: the given node and edge features determine the strength of each edge, so that the walk is more likely to visit positive nodes than negative ones. Positive nodes are nodes to which new edges will be created in the future, while negative nodes are all the others. A function must therefore be learned that assigns a strength to each edge such that, when the random walk is computed on the weighted network, the nodes a given node will connect to in the future score higher than the nodes it will not.
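The scoring step can be sketched as follows. The edge features, the logistic strength function, and the parameter vector theta are all hypothetical placeholders; in the actual method, theta is learned so that known future friends outrank the rest, whereas here it is fixed for illustration.

```python
# Sketch of scoring candidates with a random walk with restart, where edge
# strengths come from a parameterized function of (made-up) edge features.
import numpy as np

def edge_strength(features, theta):
    return 1.0 / (1.0 + np.exp(-features @ theta))  # simple logistic

# 4 nodes; each edge carries an invented 2-dimensional feature vector.
edges = {(0, 1): np.array([1.0, 0.2]), (0, 2): np.array([0.1, 0.9]),
         (1, 3): np.array([0.8, 0.8]), (2, 3): np.array([0.3, 0.1])}
theta = np.array([1.5, -0.5])  # hypothetical learned parameters
n, restart, source = 4, 0.15, 0

# Build a row-stochastic transition matrix from the edge strengths.
W = np.zeros((n, n))
for (u, v), f in edges.items():
    W[u, v] = W[v, u] = edge_strength(f, theta)
P = W / W.sum(axis=1, keepdims=True)

# Random walk with restart from the source: power iteration to convergence.
p = np.full(n, 1.0 / n)
for _ in range(100):
    p = (1 - restart) * p @ P
    p[source] += restart

print(np.argsort(-p))  # candidate ranking; higher score = more likely new friend
```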

Another way to analyze this randomness is with Markov chain Monte Carlo (Clauset et al. 2007), which estimates the probability of nodes making connections by looking at the whole network; these probabilities can then be used to project what the network will look like at any given time. Markov chain Monte Carlo uses Markov chains together with the law of large numbers to estimate the state of the network at any future time. It relies on the chain's stationary distribution, which makes the network easier to work with: the network has a given probability distribution, and the method generates random elements with that same distribution, then uses them to predict the state of the temporal network at any given time, that is, which connections will be made between which nodes at a certain time.

Markov chain Monte Carlo (Tjelmeland 2007) is closely related to supervised random walks: a random walk is performed on the network with a probability attached to each step, and Markov chain Monte Carlo constructs a Markov chain with the same distribution as the network and simulates it with a random walk. The drawback of the random walk approach is that the walk must be computed for each and every node in the network; on a network as large and expansive as Facebook or Twitter, these calculations become prohibitively long. Another problem with Markov chain Monte Carlo is the rate of convergence: it is not well understood how long the chain must run on a network before it converges adequately.
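As a generic illustration of MCMC over networks (not the specific hierarchical-random-graph chain of Clauset et al.), the sketch below runs a Metropolis chain that toggles single edges under an invented target distribution favoring sparser graphs; the burn-in length chosen here is exactly the quantity that is hard to determine in general.

```python
# Generic Metropolis sketch over graphs: toggle one edge per step and
# accept with probability min(1, exp(theta * change_in_edge_count)),
# so sampled graphs follow a toy distribution proportional to
# exp(theta * |E|). Model, theta, and burn-in are invented placeholders.
import itertools
import math
import random

n, theta, burn_in = 5, -0.4, 5000  # negative theta favors sparser graphs
pairs = list(itertools.combinations(range(n), 2))
edges = {(0, 1), (1, 2), (2, 3)}   # the observed starting graph

random.seed(0)
for step in range(burn_in + 1000):
    e = random.choice(pairs)              # propose toggling one edge
    delta = -1 if e in edges else +1      # change in edge count
    if random.random() < min(1.0, math.exp(theta * delta)):
        edges.symmetric_difference_update({e})

# After burn-in the chain's states approximate the target distribution;
# repeated samples would estimate the probability of any particular edge.
print(len(edges))
```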

5 Conclusion

One of the fundamental questions of human social simulation has traditionally been: how should agents make decisions when they inhabit an environment in which their actions may have unforeseen or unpredictable effects on others? This question raises interesting points about the extent to which the individual autonomy of agents should be sacrificed for the global needs and desires of society. As the economic and mathematical sciences have transitioned toward a more socially conscious decision-theoretic framework, they have discovered that human beings operating in real-world environments often rely not only on their own cognitive capabilities but also on those of the people around them, through a network of complex social structures and institutions.

If multiagent systems are to provide a proper computational model of both human decision-making and social interaction, then these structures and institutions, and the cognitive capabilities of the agents that comprise them, must be modeled at a level of detail that does not sacrifice computational tractability for realism. Our CASE model represents an important step in that process, combining recent advances in behavioral economics that point toward a more boundedly rational human mindset with time-honored theories of socialization that cross the disciplinary boundary between sociology and psychology.