1 Introduction

Service systems are defined as dynamic, value co-creation configurations of resources (people, technology, organizations, and shared information) [1]. In the literature, the components of a service system have been expressed in several ways [2]. In this paper we adopt the definition provided by Kim and Nam [2], who describe the components of a service system as a “value activity network, resource integrator network, and capability network”, where:

  • Value Activity Network (VAN) represents the set of customers’ and suppliers’ activities; the goal of their interactions is to provide customers with a set of solutions for solving their problems.

  • Resource Integrator Network (RIN) has as its primary role providing and organizing resources for each participant, in order to support the value creation activities; it represents the several actors and their roles in the value creation process, such as customers, service providers, customer communities, and suppliers.

  • Capability Network (CN) consists of the capabilities and resources (physical and not) that may exist inside or outside the resource integrator network but are needed to enable the value creation process.

On the basis of these assumptions, an online community (the focus of our study) can be seen as a service system in which: (i) the VAN is formed by all the actors’ interactions occurring in the online community; (ii) the RIN is represented by the actors and their roles as customers, suppliers, or both (on the basis of their position in the value chain); and (iii) the CN consists of a variety of resources such as tools (operand resources) and skills and knowledge (operant resources) owned by service providers and/or customers.

The availability of information infrastructures such as the Internet and, more recently, social network sites has provided the ground for the emergence of many forms of online communities [3]. In such environments individuals can interact both within and across organizational boundaries for different purposes, such as working in globally distributed teams [4], developing open source software [5], participating in political decisions [6, 7], crowdsourcing [8], and exchanging resources with community members [9].

The main sources for investigating the governance structures that characterize this virtual world are the work of Shirky [10] and the work of Demil and Lecocq [11]. The latter deals with a specific activity: the development of open source software as a particular form of peer production process whose governance structure has been called the bazaar. Bazaars are characterized by self-organizing, rather than by authority, as in the case of the hierarchy, or by prices, as in the case of the market. According to Shirky, the virtual world is populated not only by collaboration forms (i.e. bazaars), but also by information sharing forms and collective action forms. While trust has been recognized as the key coordinating mechanism in community-based institutions [12], the micro-foundations of trust in online communities still need further investigation for a better understanding of social networks’ behavior [13].

Drawing on complex adaptive systems theory, we recognize that social phenomena emerge from the bottom-up interactions among learning agents in a given environment [14]. This view has recently gained much attention in different fields of management and organization studies [15–18] and integrates contributions from cybernetics, cognitive sciences, and decision and organization sciences [19, 20]. Under these assumptions, information sharing, collaboration, and collective action are seen as new governance forms emerging in online networked environments for handling the complexity of social exchanges. Like other institutions, such as markets, hierarchies, clans, and fiefs, these are transactional structures of data-processing agents that are contingent on the type of complexity that must be handled [21].

In this paper we present an agent-based model for simulating the dynamics of interacting agents whose behavior is determined by their learning capability and by a set of environmental rules. The model is grounded in the theory of trust and dependence networks [22–24] and provides a tool for studying emergent properties and phenomena within social networks. Following previous works that introduce formal models of trust and dependence networks, we propose an architecture of cognitive agents and of the environment in which they act and interact. This architecture will be the basis for implementing a platform for agent-based simulation that serves as a tool for investigating the dynamics of information sharing, collaboration, and collective action within social networks.

The paper is structured as follows. We first introduce the theoretical framework on which the model is grounded. Then we describe the architecture of cognitive agents and the environment. Finally, we discuss implications and research directions.

2 Trust and Dependence Networks

As pointed out by Castelfranchi et al. [22], dependence and trust are concepts strictly related to each other. From a cognitive point of view, trust is a complex object built on tasks, goals, etc., and based on beliefs of different kinds (evaluations, expectations, etc.), including dependence beliefs [25]. Trust networks are therefore closely connected with what is called the dependence network.

The cognitive theory of dependence developed by Conte and Castelfranchi [26] includes two types of dependence: (1) the objective dependence, which says who needs whom for what in a given society, and (2) the believed dependence, which says who is believed to be needed by whom.

The model of dependence networks proposed by Conte and Sichman [27] as a tool for improving coordination in multi-agent systems, and used here to build a simulation, must then be integrated. By assuming that there exists an objective reality not necessarily known by agents as it is, we propose an updated model of dependence networks based on the evolution described in more recent works [24]. The three basic notions of the first model, i.e. external description, dependence relationship, and dependence situation, must therefore be extended. In particular, while we build external descriptions and dependence relationships following the instructions of the original model, we consider some additional features of dependence situations in order to classify them. Together with the nature (given by dependence relations) and the locality (considered by Conte and Sichman [28]), we add the distance between the locally believed dependence and the real dependence. In other terms, we extend the model by explicitly considering subjective and objective points of view, in order to test how their distance influences agents’ behavior in the network. The cognitive model we use as theoretical framework takes into account both an objective dependence network, built on the real dependence relations between agents in the network, and multiple believed dependence networks (as many as the number of agents in the network) [29].

Furthermore, we introduce in the model the concept of trust, as a first step in investigating the dynamics of this complex object. Trust is in fact crucial for exchanges to happen: although agents can depend on each other and thus need each other to reach their own goals, delegation is a choice that can be activated only when there is a trust relationship between agents (who thereby become truster and trustee).

We can consider the model as made up of an exogenously given environment, characterized by different dependence relations, and a set of goal-oriented agents that are autonomous in making decisions but dependent on other agents to reach their own goals. Each agent has a level of trust towards the others that can be updated as interactions go on. On the basis of their locally believed dependence network (BDN), agents proceed by trial and error in order to reach the goal, updating their own beliefs about the network in a process that can reduce the divergence between it and the real environment, and developing trust relationships with each other. For this first attempt, in order to keep it as simple as possible, we tested only a single type of dependence relationship among the set defined in [27], namely the mutual one, which means that agents depend on some of the others in the network to reach a common goal. The idea is to test progressively more complex interactions once the simulator described in the next section is built and operative.
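As a minimal illustration of this double point of view, the sketch below (in Python, with hypothetical agent and resource names of our own choosing) contrasts an objective dependence network with the agents’ believed ones and measures their distance as the number of differing dependence edges.

```python
# A toy contrast (hypothetical agents/resources) between the objective
# dependence network and the agents' believed ones.

objective_dn = {
    "A1": {"A2": ["R1"]},   # in reality, A1 needs A2 for resource R1
    "A2": {"A1": ["R4"]},   # and A2 needs A1 for R4 (mutual dependence)
}

believed_dns = {
    "A1": {"A3": ["R1"]},   # A1 wrongly believes A3 can supply R1
    "A2": {"A1": ["R4"]},   # A2's belief matches reality
}

def divergence(agent: str) -> int:
    """Distance of an agent's believed network from the real one,
    counted as the number of differing outgoing dependence edges."""
    believed, real = believed_dns[agent], objective_dn[agent]
    return sum(believed.get(k) != real.get(k) for k in set(believed) | set(real))

assert divergence("A1") == 2   # both the believed and the real edge differ
assert divergence("A2") == 0   # no distance: belief and reality coincide
```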

3 The Architecture for Simulation

Each agent in the environment has her own representation of reality based on her beliefs. She knows the actions and resources available to her, and she starts pursuing some goals by combining actions and resources and/or requesting some of them from other agents in the environment. In this way, she interacts with other agents either to perform an exchange of resources or to involve other agents in performing a specific action. The interaction is based on the agent’s own BDN, which may or may not match the dependence network that really holds in the social environment the agent acts in. After every interaction the agent can update her believed dependence network on the basis of the information exchanged. The interaction among agents happens in each time slot defined as a rule of the environment (called a round).

In this work, the agent decides whether to take into account the information stored in the working memory for updating her BDN on the basis of the level of trust placed in the respondent agent. The level of trust an agent perceives towards another one is based on the behavior of the latter. In the section on agent behavior (Sect. 3.3) we introduce the mechanism an agent adopts to evaluate the level of trust placed in the others; for now it is only related to the trustworthiness concept.
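An illustrative reading of this mechanism in code follows; the class and attribute names are our own, not the paper’s, and the threshold is the configurable parameter discussed in Sect. 3.3.

```python
# Sketch: information in the working memory updates the BDN only when the
# respondent agent clears a trust threshold.

class Agent:
    def __init__(self, name: str, trust_threshold: float = 0.5):
        self.name = name
        self.bdn = {}          # believed dependence network (in the LTM)
        self.trust = {}        # respondent agent -> trust level in [0, 1]
        self.incoming = []     # (sender, info) pairs stored in the WM
        self.trust_threshold = trust_threshold

    def trust_level(self, other: str) -> float:
        return self.trust.get(other, 0.0)   # trust starts at zero (Sect. 3.3)

    def end_of_round_update(self) -> None:
        """Move sufficiently trusted WM information into the BDN,
        to be used from the next round on."""
        for sender, info in self.incoming:
            if self.trust_level(sender) >= self.trust_threshold:
                self.bdn.update(info)
        self.incoming.clear()
```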

3.1 Agent Mind and Environment Configuration

Each agent has two kinds of memory for storing information, namely the Long Term Memory (LTM) and the Working Memory (WM). The former contains all the information needed by agents to act (goals, actions, plans, resources, etc.), whereas the latter holds what the agent will do in the next round and also stores the information produced by the interactions in the environment. This last kind of information may also be used to update the Believed Dependence Network in the agent’s LTM. In particular, the LTM contains the following (a possible encoding is sketched after Fig. 1):

  • a set of goals G = {g1,…,gn};

  • a set of plans P = {p1,…,pn} where each pi is the collection of actions to reach the goal gi (each plan can be updated at runtime if needed);

  • a set of possible actions Act = {a1,…,am}, where one or more resources are associated with each action;

  • a set of resources R = {r1,…,rn} owned by the agent;

  • a dependence network of the society, built on the basis of the agent’s beliefs;

  • a set of rules (Rules for Updating, RU) establishing when the information present in the Working Memory can be considered reliable and useful for updating the believed dependence network and the RU themselves.

As regards the WM, it contains the step of the plan to execute, the information received during interactions, and other practical information (e.g. the number of rounds spent awaiting a certain response) (Fig. 1).

Fig. 1. The agent’s mind
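A possible encoding of these memory structures is the following; the field names are our assumption, mirroring the LTM and WM contents listed above.

```python
from dataclasses import dataclass, field

@dataclass
class LongTermMemory:
    goals: list         # G = {g1, ..., gn}
    plans: dict         # gi -> ordered actions to reach gi (updatable at runtime)
    actions: list       # Act = {a1, ..., am}, each with associated resources
    resources: list     # R = {r1, ..., rn} owned by the agent
    bdn: dict           # believed dependence network of the society
    update_rules: list  # RU: when WM information may update the BDN (and RU)

@dataclass
class WorkingMemory:
    current_step: object = None                          # plan step for the next round
    received_info: list = field(default_factory=list)    # info produced by interactions
    rounds_waiting: dict = field(default_factory=dict)   # pending request -> rounds waited
```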

The acting of agents is driven not only by the information stored in their own LTM but also by the information and constraints inherited from the environment. The environment settings contain the following (a configuration sketch follows the list):

  • the real dependence network (it can be unknown to all the agents)

  • the set of priority rules for executing some actions (they do not necessarily involve all possible actions, i.e. some actions are executable independently of others)

  • the set of possible resources needed for executing an action (i.e. some actions require using certain resources)

  • the information about the latency between one round and the next (an arbitrary technical requirement, on which agents’ expectations about others’ answers must be calibrated).
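These settings might be configured along the following lines; the keys and values here are illustrative only, not a prescribed format.

```python
# An illustrative environment configuration (all names/values hypothetical).
environment_settings = {
    "real_dependence_network": {},   # filled at setup; may be hidden from every agent
    "priority_rules": [("R1", "R2"), ("R2", "R3"), ("R3", "R4")],  # R1 before R2, etc.
    "action_resources": {"consume": ["resource"], "give": ["resource"], "ask": []},
    "round_latency": 1,              # arbitrary technical requirement (time units)
}
```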

3.2 A Simple Scenario

As a first step of our research, we consider a simplified scenario with the following assumptions:

  • The goal is the same for each agent: consuming resources following a given sequence; each agent can start from a different position in this sequence depending on the resources she has, and she reaches the goal when the sequence is complete. Given the goal G = {R1, R2, R3, R4}, if agent Ax has the set of resources R = {R3, R5, R3, R2, R4}, she combines her resources to cover the longest sequence she can, i.e. R2, R3, and R4; she must then look for R1, and once she obtains R1 from another agent she reaches the goal G (in this example in five rounds, had she received R1 in a single interaction; see the sketch after this list);

  • The actions available to each agent are: Act = {consume a resource; ask a given agent for a resource; give a resource to a given agent; ask for a resource through a broadcast request}; for now, an agent gives a requested resource only when she does not need it herself (otherwise it is pre-allocated to be consumed by her in subsequent rounds);

  • Each agent can execute only one action in a given round;

  • The number of resources owned by each agent is equal to or greater than the number of resources needed to achieve the goal;

  • The unique set of given priority rules concerns the sequence of resources to consume;

  • Each agent has her own believed dependence network (on which she bases her interactions);

  • There is a unique updating rule: the new information collected in a given round can update the believed dependence network on the basis of the subjective level of trust towards the respondent agent involved in the exchange, and it will be used in the next round.
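To make the first assumption concrete, here is a minimal sketch of how an agent could determine which goal resources she still has to request; the constant and function names are our own, not part of the model specification.

```python
from collections import Counter

GOAL = ["R1", "R2", "R3", "R4"]   # fixed consumption sequence, same for all agents

def missing_resources(owned: list) -> list:
    """Return the goal resources the agent must still obtain from others."""
    available = Counter(owned)
    missing = []
    for resource in GOAL:
        if available[resource] > 0:
            available[resource] -= 1   # pre-allocate for own consumption
        else:
            missing.append(resource)
    return missing

# Example from the text: Ax owns {R3, R5, R3, R2, R4} and must ask for R1;
# with one action per round, she needs at least five rounds overall
# (one request plus four consumptions).
assert missing_resources(["R3", "R5", "R3", "R2", "R4"]) == ["R1"]
```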

3.3 The Agent Behavior

In this first work, we assume that agents cannot mislead by giving wrong information (e.g. asking for a resource they do not need, or not answering a request). In future works we plan to consider misleading situations as well, in order to simulate both mutual and reciprocal trust relationships. In our simulation architecture, each agent can answer a request more or less quickly on the basis of two parameters: the dependence level (DL) with the requester, and the number (N) of requests already performed by the same agent for the same resource.

Starting from the definition of dependence relationships in [28], the levels of dependence of an agent Ax on an agent Ay are defined below, from the lowest to the highest.

  • Total Independence (TI): Ax does not depend on Ay, and none of the agents on which Ax depends depends on Ay.

  • Indirect Dependence (ID): Ax does not depend on Ay, but some of the agents on which Ax depends do depend on Ay.

  • Direct Dependence (DD): Ax depends on Ay, but none of the other agents on which Ax depends depends on Ay.

  • Total Dependence (TD): Ax depends on Ay, and some of the other agents on which Ax depends also depend on Ay.

In the environment, a coefficient measuring the degree of dependence is associated with each level. Furthermore, it is possible to introduce a gap between the coefficient of total independence and the others, emphasising the clear difference between the dependence relationships (strong or soft) and complete independence (for example, a possible set of values is: TI = 1, ID = 3, DD = 4, and TD = 5). Finally, the null value (0) is not used as a coefficient, so as to avoid blocking some requests forever.

Every time an agent receives a request (directly or in broadcast), she multiplies the coefficient of the dependence level (related to the requester) by the number of requests made to her by the same agent for the same resource. The result of this multiplication is defined as the “weight of the request” (W). In the environment it is possible to define a minimum level of W that forces the agent’s response.
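As an illustration, the coefficients and the weight of a request can be computed as follows; the coefficient values are the example given above, while the threshold value and all names are our own.

```python
# Coefficients from the example above; 0 is never used, so no request
# is blocked forever.
DEPENDENCE_COEFFICIENT = {"TI": 1, "ID": 3, "DD": 4, "TD": 5}

FORCED_RESPONSE_THRESHOLD = 8   # illustrative minimum W that forces an answer

def request_weight(level: str, n_requests: int) -> int:
    """W = dependence coefficient of the requester x repeated requests (N)."""
    return DEPENDENCE_COEFFICIENT[level] * n_requests

# A directly dependent requester asking for the same resource twice:
# W = 4 * 2 = 8, which reaches the threshold and forces a response.
assert request_weight("DD", 2) >= FORCED_RESPONSE_THRESHOLD
```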

With this architecture, the agent’s answer, and thus the resource exchange, is performed more or less quickly on the basis of the believed dependence network (BDN) configuration. Moreover, during the simulation run, the BDN update can modify the perceived dependence relationships, changing the previous behaviour of the same agent.

After every resource exchange, each agent stores this information in her own working memory and uses it for updating the BDN on the basis of the trust level towards the respondent agent.

In this simulation architecture, the trust level (TL) is strongly related to the number of rounds (RN = Rounds Number) consumed before obtaining a response: given agents Ax and Ay, the faster Ay responds to Ax’s requests, the higher the trust level Ax (truster) places in Ay (trustee). Even though in this manner the trust level is mainly related to the trustworthiness concept (one of the aspects used to build the trust perception [25]), it is close to what happens in real cases, where trust is often related to reliability in performing some tasks [25]. Moreover, if the trustee keeps behaving in the same manner towards a given requester, the requester (truster) also increases her level of trust towards her.

The value of TL lies between 0 and 1, and it is initially set to 0 (zero) by each agent for all the agents in her BDN. The calculation of TL, described below, is performed by an agent Ax for every known agent in her BDN while the simulation runs.

  • When Ax asks another agent Ay for a resource, she counts the number of requests performed before obtaining that resource from Ay, thus calculating RN.

  • When Ax receives the resource from Ay, she sets her value of TL for Ay to 1/(RN-1) if the previous TL is already equal to 1/RN and RN > 1 (with RN = 1 we already have the maximum level of trust); otherwise TL = 1/RN.

For broadcast requests the calculation is quite similar: Ax counts the number of requests before obtaining a certain resource and uses it for assigning a TL to the respondent agent. With this definition, an agent Ax considers as positive both a decrease of RN and a constant value of TL evaluated over two consecutive exchanges.
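The update rule above can be written as a small function; this is a sketch of our reading of the rule, not an implementation taken from the simulator.

```python
def update_trust(previous_tl: float, rn: int) -> float:
    """Ax updates her trust in Ay after receiving a resource in RN requests.

    Repeating the same responsiveness (previous TL already equal to 1/RN)
    is rewarded by moving one step up, to 1/(RN-1); with RN = 1 trust is
    already at its maximum.
    """
    if rn > 1 and previous_tl == 1 / rn:
        return 1 / (rn - 1)
    return 1 / rn

# First exchange answered after 3 requests: TL = 1/3.
# A second, equally fast exchange: TL rises to 1/2.
tl = update_trust(0.0, 3)
assert tl == 1 / 3
tl = update_trust(tl, 3)
assert tl == 1 / 2
```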

Finally, an agent updates her BDN using information arising from resource exchanges in which the response is given by an agent with a given minimum level of trust. This level can be fixed to a value between 0 and 1 as a parameter of the simulation run.

4 Discussion and Conclusion

In this paper we presented a first, simplified version of an agent-based model developed to study the emerging behaviors of social networks, in which actors (agents) can play both the requester (customer) and the provider (supplier) role. Following previous works that introduce formal models of trust and dependence networks, we developed an architecture of cognitive agents and of the environment in which they act and interact. This architecture will be the basis for implementing a platform for agent-based simulation that serves as a tool for investigating the dynamics of information sharing, collaboration, and collective action within online communities, seen as service systems. Thanks to this insight into social network dynamics, we will then be able to reach useful findings both in identifying regularities that characterize different service systems and in defining features of each specific system. Hence, the model could be adopted for designing a digital platform to support a collaborative service environment [30].

Computational simulations are gaining increasing attention for studying the behavior of complex social systems [19, 31–33]. Previous simulation models have mainly addressed emergent states in virtual teams [34, 35]. Further work can extend the applicability of this research approach to community environments governed by trust mechanisms.