
1 Introduction

In today’s world, Collaborative Networks (CNs) are becoming increasingly important and are widely recognized for the support they provide to resource sharing. Meanwhile, global entities (partners/actors) are establishing strong social graphs, which further intensifies how critical CNs are. As a crucial requisite, partner relationships in CNs rest on a foundation of trustworthiness, particularly in contexts where partners interact through socio-technical systems. While socio-technical interaction systems provide the support CNs require, they also complicate trust management. This difficulty stems, among other things, from the lack of physical cues that are abundant in physical interaction settings. As a consequence, uncertainty arises about the extent to which partners can demonstrate the consistency presumed in their behavioral patterns. This consistency relates to the degree of reliability with which partners can confidently rely upon one another. Rendering this confidence through the “security primitives of Confidentiality, Integrity, and Availability (CIA) of data or information” [1] appears hard. This hardness originates from various angles, including the limited ability of CIA protocols to accommodate partner behavior in CNs.

Unlike CIA mechanisms, which protect computer-network resources by warding off malicious agents, CNs emphasize the confident assurance that partners will behave in a prescribed/expected manner. Fulfilling this requirement includes investigating partner behaviors in the operational phase of CNs. In the operational phase, while interacting, partners exhibit a range of behaviors, which in turn can affect the collaboration. To that end, understanding partners’ behavioral uncertainties and their effect on network performance and trust is imperative. While various behavioral uncertainties exist in CNs, the present paper concentrates on behavioral uncertainty resulting from negotiation and decision making. In particular, the paper addresses behavioral discontents, which, according to [2], are partners’ disagreements over a particular set of decisions. Such disagreements can breed conflicts, with a high likelihood of affecting the underlying trustworthiness. They are described in [3] as conflicts concerning partners’ perspectives or beliefs. This paper refers to such disagreements as conflicting preferences.

Conflicting preferences on decision rights can affect network performance and the underlying trustworthiness in different ways. Depending on the resulting outcome, the effect can either strengthen or weaken trustworthiness. In particular, a partner who enacts preferences serving its own interests may end up denying the welfare of others. Such preferences stand a high chance of decreasing trustworthiness. As a result, rather than remaining centered on their own interests, partners are urged to assume a degree of fairness in decision rights. Toward this required fairness, synchronizing partners’ conflicting preferences is necessary. Building on this assumption, unsynchronized decisions, according to [4], appear in the form of behavioral discontents, namely rivalry and compromise. These discontents are further described in [4] as: (1) rivalry, where a partner has high concern for its own interests coupled with low concern for others’ interests (I win, you lose); (2) compromise, where a partner emphasizes give-and-take bargaining (we both win a bit and lose a bit).

Whereas rivalry decisions can largely impede trust, compromised ones can facilitate its growth and sustainability. Therefore, the present paper investigates the effect of decision synchronization on trust. Alongside this objective, a model depicting behavioral trust has to be established as well. To that end, two research questions are specified: (1) How can behavioral trust be modeled in CNs and related domains? (2) How do compromised and uncompromised conflicting preferences affect trust in CNs?

The remainder of the paper is organized as follows. Section 2 discusses trust from computational and behavioral perspectives. While Sect. 3 presents the research methodology applied in this study, Sect. 4 sheds light on agent preferences and confrontation analysis. In Sect. 5, a behavioral trust model applicable to CNs and related domains, comprising trust propagation, measurement, and assessment, is proposed. Section 6 specifies a validation scenario in the domain of logistics collaboration. In Sect. 7, an evaluation consisting of implementation, experiment design and setup, results, and discussion is provided. The paper ends in Sect. 8 with a conclusion and outlook.

2 Trust: Computational and Behavioral Perspectives

Trust is generally understood as the degree of confidence agent X develops in agent Y, believing that Y will behave in ways X expects. This understanding is, however, made context-specific depending on the application domain and discipline. In psychology, trust is conceptualized as a psychological state comprising an intention to accept vulnerabilities [5]. In social relational exchanges, trust takes the form of reputation aimed at countering betrayal aversion. In economics, trust is associated with rational choices under risk aversion. Moreover, in computer science and engineering, trust is categorized into the groups of “system and user” [6]. System trust (hard trust) addresses relationships among machine nodes, protecting computational systems according to the CIA primitives. User trust (soft trust) concerns the level of confidence a trustor agent has in the numerous objects it interacts and collaborates with. These objects include machines, software agents, humans, and organizations. In essence, user trust is conceived on a behavioral foundation of social, economic, and psychological primitives.

Trust requirements in CNs can largely be founded on social and economic behavioral primitives, in an attempt to reduce uncertainties and the resulting vulnerabilities. In addition to these primitives, trust challenges in CNs stem from consortium structure. Structure-wise, CN consortia are largely dynamic, sometimes engaging actors who are strangers, and configured in partially seamless environments. Within these settings, there is a high possibility that partners exhibit behavioral inconsistencies. Such inconsistencies can relate to, but are not limited to, how agents share information; divide gains and costs; synchronize both compatible and incompatible preferences (positions in decision rights); and deviate from goal congruence (opportunism). The present paper, however, concentrates on conflicting (incompatible) preferences.

3 Research Methodology

The investigation of decision synchronization applies Multi-Agent Systems (MAS) as the main method, in preference to surveys, conventional experiments, and case studies. On the one hand, a survey appears inappropriate due to the lack of a process (physical reality) and of longitudinal observations. Conventional experiments deny social-virtual environments in computational settings, while a case study might be expensive in terms of resources. On the other hand, MAS achieves the intended purpose comparatively well: it deploys social structures in computational settings while maximizing process and time under virtual reality.

The applied methodology therefore follows this sequence. First, an approach to model conflicting preferences, especially those preferences resulting in a conflict mode, is discussed (Sect. 4). Secondly, a model of behavioral trust is conceived which integrates the CN life cycle, the definition of trust, and the analogy of human trust propagation (Sect. 5). To empirically evaluate and validate the model, and to forecast the effect of decision synchronization on trust, MAS is used. Through MAS, a logistics collaboration scenario is conceived using the Agent-Based Modeling (ABM) technique. The collaboration scenario is evaluated using the “PlaSMA Platform” [7]. PlaSMA is an event-driven simulation system which has been designed to solve and evaluate scenarios of the logistics domain.

4 Agent Preferences: Confrontation Analysis

It is common that agents engage in numerous relationships and dependencies, some of which end up in dilemmas. Each individual dilemma involves specific character behavior from the participating agents. As highlighted in [8], agents face dilemmas at the point when each agent takes a position that it regards as final, called a moment of truth. The positions can be compatible (collaboration mode) or incompatible (conflict mode) [8, 9]. If positions are compatible, collaboration proceeds. If they are incompatible, agents negotiate by synchronizing (calibrating) the positions in their conflicting preferences. Although agents enter negotiation, they do not always end up with a compromise. Works in [8–11] address six dilemmas, two belonging to the collaboration mode and the rest to the conflict mode. The collaboration mode is constituted by the dilemmas of cooperation and trust, while the conflict mode is constituted by the dilemmas of positioning, persuasion, rejection, and threat.

Whereas CNs are premised on dilemmas of the collaboration mode, in some situations disagreements over the domain of decisions need to be compromised before advancing to the next stage. For example, in a collaboration between a shipper and a receiver, the shipper may prefer producing in fixed quantities while the receiver prefers production consistent with market demand. Such a disagreement represents a conflict mode that must be responded to. According to [9], a character may respond to any occurring dilemma in four ways: (1) by changing its position; (2) by amending its preferences for the possible outcomes; (3) by denying that the dilemma exists, or; (4) by taking irreversible unilateral action. This study builds on the second and third responses and advances the dilemmas of persuasion and rejection respectively.
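To make the confrontation-analysis vocabulary concrete, the following minimal Python sketch models a moment of truth: agents state final positions, and the mode is derived from whether those positions are compatible. The names (Position, mode) and the textual encoding of preferences are illustrative assumptions, not part of the cited works.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Response(Enum):
    """Four possible responses to a dilemma, following [9]."""
    CHANGE_POSITION = auto()
    AMEND_PREFERENCES = auto()   # basis of the persuasion dilemma
    DENY_DILEMMA = auto()        # basis of the rejection dilemma
    UNILATERAL_ACTION = auto()

@dataclass
class Position:
    agent: str
    preference: str  # e.g. "fixed production quantities"

def mode(positions: list[Position]) -> str:
    """Collaboration mode if all stated preferences agree, conflict mode otherwise."""
    prefs = {p.preference for p in positions}
    return "collaboration" if len(prefs) == 1 else "conflict"

# Moment of truth: shipper and receiver state final positions.
positions = [Position("shipper", "fixed quantities"),
             Position("receiver", "demand-driven quantities")]
print(mode(positions))  # -> "conflict": agents must persuade, reject, or compromise
```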

5 Modeling of Behavioral Trust

The proposed behavioral trust model (Fig. 1) is developed by conceiving an analogy of the human trusting process, which propagates in three stages: intention, action, and reality. This propagation conforms to the trust-building framework in [12], the general understanding of trust, and the trust definition in [13]. The human agent’s trusting process begins by initiating an intention to trust, which depends on contextual characteristics. Upon satisfying this intention, the human agent engages in an action to trust. The action to trust involves establishing an expectation, corresponding to the degree of confidence the human agent develops in the object it desires to trust. Reality is the final stage, in which a trust transaction is executed, providing the human agent with feedback by comparing expectation against outcome. In particular, the model is conceived by formalizing a dependency between two agents: X, a trustor agent, and Y, a trustee agent, although this relationship is symmetric. Moreover, as trust can be embedded within the trustor or the trustee, in accordance with [12], the conception draws on trust embedded within the trustee.

Fig. 1. Behavioral trust model

Under the intention to trust, X initializes its propensity to trust Y. Depending on the preferred characteristics and the degree to which they are fulfilled, X can commit its propensity to trust. If, however, conflicting preferences arise, X and Y negotiate seeking a compromise. On passing the intention to trust, X propagates to the action to trust. In this stage, X utilizes existing factual data from Y to develop the degree to which it can trust Y (the expectation). This expectation forms the level of confidence, assurance, or reliability which X develops in Y. Alongside this decision, X can: (1) accept being vulnerable to Y’s actions, or (2) withdraw its action to trust if the payoff is perceived as unsatisfactory. If X accepts the vulnerabilities, it specifies its range of expectation for each underlying performance indicator. Upon proceeding to the reality stage, X loses control into the hands of Y (during execution). Here, the transaction is executed and the resulting outcome is observed. Afterward, X measures trust (the effect) by comparing the resulting outcome to the expectation it developed. In measuring and assessing trust, the specific trust level determined depends on the trust meter. For example, when applying a three-scale meter, the outcome can be below, within, or above expectation.
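As an illustration of the reality stage, the sketch below compares an observed outcome against X’s expectation on a three-scale trust meter. The tolerance band separating “within” from “below” and “above” is an assumption made for illustration; the model itself does not prescribe one.

```python
from enum import Enum

class TrustLevel(Enum):
    BELOW = 1    # less trustworthy
    WITHIN = 2   # trustworthy
    ABOVE = 3    # more trustworthy

def measure_trust(expected: float, observed: float, tolerance: float = 0.05) -> TrustLevel:
    """Reality stage: compare the observed outcome against X's expectation.

    `tolerance` (an assumed parameter) defines how close the outcome must be
    to count as 'within expectation' on the three-scale trust meter."""
    if observed < expected * (1 - tolerance):
        return TrustLevel.BELOW
    if observed > expected * (1 + tolerance):
        return TrustLevel.ABOVE
    return TrustLevel.WITHIN

# X expected a 15% cost saving; Y delivered 15.5% -> within expectation.
print(measure_trust(expected=0.15, observed=0.155))  # TrustLevel.WITHIN
```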

6 Logistics Collaboration: A Validation Scenario

Validation of the behavioral trust model employs a collaboration scenario in logistics. The scenario is preceded by a definition of requirements as well as the conception of the agents’ interactive negotiations.

6.1 Collaboration Requirements

Logistics collaboration, among others, seeks to increase asset utilization, reduce costs, and improve customer service and efficiency. While pursuing these goals, confrontational preferences concerning issues such as the production, distribution, and demand of goods can affect the underlying trustworthiness. As long as such preferences remain uncompromised, they threaten the collaboration’s future, leading to uncertain and vulnerable outcomes. Although many preferences affect trust in logistics collaboration [14], this study concentrates on three preferences (Table 1), where shipper, carrier, and receiver are the collaborating agents. The shipper represents manufacturers, suppliers, sellers, and individuals. The carrier agent represents transporters, moving goods to receivers. The receiver agent represents retailers, distributors, buyers, and end consumers. The three preferences are described as follows: the shipper prefers producing goods in fixed quantities (P1); equally, the carrier prefers delivering goods in fixed quantities (P2) and with a full truck (P3). Each preference has a threatened future (conflict) to be synchronized into a compromise or rejection.

Table 1. A matrix of conflicting preferences and threatened future (adapted from [3])

6.2 Agents’ Interactions and Negotiations

Agent-Based Modeling (ABM) is applied to model, understand, and prognosticate the behavioral effect resulting from agents’ decision synchronization in logistics collaboration. The ABM method is preferred because of its ability to capture emergent phenomena and to provide a natural description of a system [15]. Correspondingly, building on both the weak and strong agent notions discerned in [16], the collaborating agents are conceived as autonomous, reactive, adaptive, and able to act socially. This implies that shipper, carrier, and receiver are limited to information related to the planning and forecasting of production, carriage, and demand. As such, they learn from previous experience, trying to match similar prospects. Moreover, each agent possesses objectives together with possible constraints, among them the conflicting preferences (Table 1). The shipper focuses on minimizing backorders and inventory, the receiver focuses on increasing savings on transportation costs, and the carrier seeks to maximize full-truck loads.
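The following sketch suggests one way such agents could be represented, with an objective, owned preferences (possible constraints), and a naive form of adaptivity (expecting the mean of past outcomes). The class and its learning rule are hypothetical simplifications, not the actual PlaSMA implementation.

```python
from dataclasses import dataclass, field

@dataclass
class CollaboratingAgent:
    """Illustrative agent with the properties assumed in the scenario:
    an objective, owned preferences, and a memory of past outcomes
    it can learn from (adaptivity)."""
    name: str
    objective: str
    preferences: list[str] = field(default_factory=list)
    history: list[float] = field(default_factory=list)

    def learn(self, outcome: float) -> None:
        self.history.append(outcome)

    def expected(self) -> float:
        # Naive adaptivity: expect the mean of previously observed outcomes.
        return sum(self.history) / len(self.history) if self.history else 0.0

shipper = CollaboratingAgent("shipper", "minimize backorders and inventory", ["P1"])
carrier = CollaboratingAgent("carrier", "maximize full-truck loads", ["P2", "P3"])
receiver = CollaboratingAgent("receiver", "increase transport cost savings")
```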

While the behavioral description of the collaboration scenario (Fig. 2) draws on previous work in [14], the respective modeling proceeds as follows. A broker agent invites shippers, carriers, and receivers (the participants) by issuing a Call For Proposals (CFP). In its CFP, the broker agent sets the required specificity, including an estimated number of pallets to be produced and demanded/consumed, and the required carriage capacity. Upon receiving the CFP, each participant performs its own internal assessment of its capability to fulfill the CFP requirements. Afterward, each participant replies to the broker agent by proposing or rejecting the CFP. If rejected, the CFP is brought to an end. If accepted, each participant sends a proposal to the broker agent. In turn, the broker agent assesses the proposal against the established specificity. Depending on the assessment outcome (satisfactory specificity), the broker agent either accepts, adjusts, or rejects the proposal. If found satisfactory, the broker agent engages the participants in forecasting their orders, compromising conflicting preferences, and developing expectations. Finally, during the execution of orders, goods are moved by the carrier from shipper to receiver. Along this execution, the actions of each participant are observed, and the actual score (reality) is recorded. A sketch of the proposal round follows Fig. 2.

Fig. 2. Generic behavioral descriptions of agents in logistics collaboration
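Below is a minimal sketch of the proposal round, assuming a toy capability model in which each participant proposes whenever its capacity meets the broker’s pallet estimate; all names and thresholds are hypothetical, not the FIPA protocol itself.

```python
from enum import Enum, auto

class Reply(Enum):
    PROPOSE = auto()
    REJECT = auto()

def run_cfp(broker_spec: dict, participants: dict) -> list[str]:
    """Hypothetical one-round CFP: the broker announces required pallets;
    each participant proposes if it can fulfill the spec.

    `participants` maps a participant name to its available capacity
    (an assumed, simplified capability model)."""
    accepted = []
    for name, capacity in participants.items():
        reply = Reply.PROPOSE if capacity >= broker_spec["pallets"] else Reply.REJECT
        if reply is Reply.PROPOSE:
            accepted.append(name)  # broker then assesses, adjusts, or rejects
    return accepted

spec = {"pallets": 30, "carriage": 2}
print(run_cfp(spec, {"shipper_1": 40, "shipper_2": 20}))  # ['shipper_1']
```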

Specific to the conflicting preferences, during negotiation each agent takes a position on each preference that may differ from the positions of others, resulting in disagreements. These disagreements lead to a threatened future for the specific preference. To compromise disagreements, three alternatives exist: acceptance, persuasion, and rejection. The acceptance alternative occurs if rival agents find the threatened future attractive. In this case, the preference changes to a collaboration-mode dilemma (cooperation or trust) and is thus disqualified from constituting a disagreement. If, however, the threatened future is not accepted, the respective agent engages in persuading the other agents towards its position or rejects the existence of the threat. Persuasion of the threatened future goes along with the provision of incentives to rivals, together with a promise that the agent whose future is threatened will act trustworthily (Table 2). Finally, if a threat is rejected, the collaboration proceeds on the condition that associated vulnerabilities, should they occur, will bear a penalty for the agent which rejected the existence of that threat. Table 2 summarizes exemplary incentives as well as the award and penalty attached to a persuaded and a rejected preference; a sketch of this synchronization logic follows the table.

Table 2. A matrix of preferences, award and penalty
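The synchronization logic described above can be sketched as follows. The decision rule and the symmetric award/penalty stake are assumptions made for illustration, since Table 2 only exemplifies concrete incentives.

```python
from enum import Enum, auto

class Outcome(Enum):
    ACCEPTED = auto()    # threatened future attractive -> collaboration mode
    PERSUADED = auto()   # incentives offered; owner promises trustworthy action
    REJECTED = auto()    # threat denied; owner bears a penalty if it materializes

def synchronize(threat_attractive: bool, persuasion_succeeds: bool) -> Outcome:
    """Minimal decision rule for synchronizing one conflicting preference."""
    if threat_attractive:
        return Outcome.ACCEPTED
    return Outcome.PERSUADED if persuasion_succeeds else Outcome.REJECTED

def settle(outcome: Outcome, vulnerability_occurred: bool, stake: float) -> float:
    """Award/penalty settlement in the spirit of Table 2 (stake is assumed):
    a persuaded preference earns an award, while a rejected one is penalized
    whenever the ignored vulnerability actually occurs."""
    if outcome is Outcome.PERSUADED:
        return +stake
    if outcome is Outcome.REJECTED and vulnerability_occurred:
        return -stake
    return 0.0
```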

7 Evaluation

This section details an assessment of the applicability, usefulness, and conceptual validity of the behavioral trust model. Secondly, it examines the effect which compromised and uncompromised conflicting preferences generate on trust. Accordingly, details of the implementation (Subsect. 7.1), the experiment design and setup (Subsect. 7.2), the experiment results (Subsect. 7.3), and the discussion (Subsect. 7.4) are presented.

7.1 Implementation

The simulation prototype is set up as a centralized network configured and managed by a neutral trustee (the broker). It comprises 2 carriers, 6 shippers, and 6 receivers. All agents, including the broker, exchange messages structured according to the FIPA standards [17], using a logistics collaboration ontology. The negotiation protocol type is adapted from [18], such that the owner of a preference whose future is threatened has to propose a persuasion which can either be accepted or rejected. Furthermore, all participating agents, upon being initialized, register with a directory facilitator named “collaboration”, which provides yellow-page services. Collaboration begins only when it is activated by the broker agent. The broker agent monitors the collaboration by managing the parameters exchanged, and it measures and evaluates trust levels.

7.2 Experiment Design and Setup

To examine the effect of decision synchronization on trust, the MAS simulation experiment is designed as follows. Three predictor variables (P1, P2, and P3), each with two levels (persuasion, rejection), are designed to affect the response variables. The experiment employs two response variables: transportation cost and full truck. The response variables are used to determine the internal-oriented performance of the logistics collaboration, and subsequently the trust level. Moreover, while each experiment yields 6 observations, the same experiment is replicated 6 times to estimate experimental errors. Thus, 10 experiments are conducted, leading to 360 (6 × 6 × 10) observations.
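For reference, the 2 × 2 × 2 level combinations of the three predictors can be enumerated as below, reproducing the ABC coding used later in Sect. 7.3; the replication arithmetic mirrors the 36 observations per experiment reported in Table 4.

```python
from itertools import product

# Level combinations of the three predictors, each with two levels:
# persuaded ('P') or rejected ('R'). This reproduces the ABC coding of
# Sect. 7.3 (e.g. 'PPR' = P1 persuaded, P2 persuaded, P3 rejected).
combinations = ["".join(levels) for levels in product("PR", repeat=3)]
print(combinations)
# ['PPP', 'PPR', 'PRP', 'PRR', 'RPP', 'RPR', 'RRP', 'RRR']

# With 6 observations per run and 6 replications, every experiment
# yields 36 observations (cf. Table 4).
OBSERVATIONS_PER_RUN, REPLICATIONS = 6, 6
print(OBSERVATIONS_PER_RUN * REPLICATIONS)  # 36
```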

The experiment setup involves three events, three experimental units, the generation of random numbers, as well as distinct approaches to verify the model. The events comprise changes in the collaboration lifecycle: from planning to forecast, and finally to operation. The planning, forecast, and operation stages correspond to periods of one month, one week, and six days respectively. In view of experimental units, the setup employs shipper, carrier, and receiver. Additionally, various techniques, including the use of traces and checking simulation outputs for reasonableness, are applied to verify the behavioral trust model. As the simulation uses randomly generated data, care is taken in seed generation to ensure that the generated numbers do not affect the final conclusion. To fulfill this requirement, linear-congruential generators are used. Furthermore, to verify that the generated numbers are uniformly distributed, they are scrutinized with a chi-square test. Each predictor level is assigned its own seed value to avoid spurious correlation. Finally, a confidence-interval statistical technique is applied to determine the conceptual validity of the behavioral trust model.
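A compact sketch of this random-number verification is given below: a linear-congruential generator (with the widely used Numerical Recipes constants, an assumption since the paper does not state its parameters) and a chi-square test of the resulting samples against a uniform distribution.

```python
def lcg(seed: int, n: int, a: int = 1664525, c: int = 1013904223, m: int = 2**32):
    """Linear-congruential generator yielding n uniform variates in [0, 1).
    Each predictor level would receive its own seed value."""
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)
    return out

def chi_square_uniformity(samples, bins: int = 10) -> float:
    """Chi-square statistic against a uniform distribution over [0, 1)."""
    counts = [0] * bins
    for s in samples:
        counts[min(int(s * bins), bins - 1)] += 1
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

samples = lcg(seed=42, n=1000)
stat = chi_square_uniformity(samples)
# For 10 bins (9 degrees of freedom), a statistic below ~16.9 is consistent
# with uniformity at the 5% significance level.
print(round(stat, 2))
```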

7.3 Experiment Results

The results obtained by conducting the MAS simulation experiments are summarized in Table 3. In these results, preferences were combined in the form of an ABC code using the notations “P” and “R”, which denote a persuaded and a rejected preference respectively. The positions A, B, and C of the combination correspond to preferences P1, P2, and P3 respectively. As an illustration, the combination PPR means the shipper’s preference (P1) was persuaded, the carrier’s preference (P2) was persuaded, while the carrier’s preference (P3) was rejected.

Table 3. A summary of results: the effect of synchronized preferences on performance and trust level

Measurement and assessment of the trust level applied the parameters of cost saving and full truck. In particular, cost saving (reduction in transport costs) was benchmarked at 15 % per pallet as remarked in [19]. Equally, the extent to which a truck is filled was benchmarked at 95 % as remarked in [20]; that is, a truck is considered full if it is loaded to over 95 % of its capacity. Following these benchmarks, transportation cost and full truck were operationalized as follows: an agent develops an expectation of a 15 % cost saving on transportation and a 95 % full truck. Upon comparing these expectations with the corresponding scores, one of the following outcomes occurred: (1) below expectation, which implied “less trustworthy” with a trust level of 1; (2) very close to expectation, which implied “trustworthy” with a trust level of 2, and; (3) above expectation, which implied “more trustworthy” with a trust level of 3. In addition, since there were two performance criteria (cost saving and full truck), the corresponding trust levels were aggregated using the arithmetic mean. Due to this aggregation, some results generated a trust level of 2.5 and are marked with an asterisk (*) in Table 3.
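This operationalization can be summarized in the following sketch. The tolerance delimiting “very close to expectation” is an assumption, as the paper does not state the exact band; the 15 % and 95 % benchmarks are those cited above.

```python
def level(expected: float, score: float, tolerance: float = 0.01) -> int:
    """Map a score against its benchmark to a trust level (1, 2, or 3).
    `tolerance` approximates 'very close to expectation' (an assumption)."""
    if score < expected - tolerance:
        return 1   # less trustworthy
    if score > expected + tolerance:
        return 3   # more trustworthy
    return 2       # trustworthy

def aggregate_trust(cost_saving: float, truck_fill: float) -> float:
    """Aggregate the two criteria (15% cost saving, 95% full truck) by mean."""
    return (level(0.15, cost_saving) + level(0.95, truck_fill)) / 2

print(aggregate_trust(cost_saving=0.20, truck_fill=0.95))  # 2.5 (marked * in Table 3)
```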

Alongside these results, and on top of the verification approaches used, the study achieves 95 % accuracy in determining the validity of the model as well as of the generated results (Table 4). Additionally, an overall mean trust level of 1.890 is observed; a sketch of the interval check follows Table 4.

Table 4. A 95 % CI for 36 observations in each experiment
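The confidence-interval check can be sketched as below, using the normal approximation and hypothetical observations; the actual per-experiment data are in Table 4.

```python
import math

def confidence_interval(samples: list[float], z: float = 1.96) -> tuple[float, float]:
    """95% confidence interval for the mean of one experiment's observations,
    using the normal approximation (z = 1.96)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    half = z * math.sqrt(var / n)
    return (mean - half, mean + half)

# Hypothetical trust-level observations for one experiment (n = 36).
obs = [2, 2, 1, 2, 3, 2, 1, 2, 2, 2, 1, 3] * 3
lo, hi = confidence_interval(obs)
overall_mean = 1.890
print(lo <= overall_mean <= hi)  # validity check: overall mean inside the CI
```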

7.4 Discussions

This research has focused on designing a behavioral trust model and on determining how compromised and uncompromised conflicting preferences affect trust. Concerning the modeling of behavioral trust, the analysis indicates that the confidence intervals overlap considerably, to the extent that the mean of each experiment falls in the intervals of the others. In addition, the confidence intervals contain the overall mean trust level. At an accuracy of 95 %, it is established that there is no significant variation in the results generated by the model. To that effect, the theories and assumptions underlying the behavioral trust model, as well as its reasonableness, are considered valid.

Concerning how decision synchronization affects trust, the following is deduced. On the one hand, whether a preference is persuaded or rejected, its effect on trust also depends on other existing factors. Reflecting on the logistics collaboration scenario, synchronized preferences are dependent on the extent to which the truck is filled. Taking serial numbers 3 versus 11 (Table 3) for example, the same preference combination PRR affects trust with different magnitudes. Similar cases are also noted in serial numbers 7 versus 14; 6 versus 19; 2 versus 10 versus 16; and 8 versus 21. In particular, “more trustworthy” is experienced in cases where both of the carrier’s preferences are persuaded and the truck is filled at least to full capacity. In such cases, shippers and receivers benefit from a transportation cost lowered by 20 % (Table 2), thus realizing a potential relief. On the other hand, if the full-truck condition is held fixed, persuaded preferences generate a better trust level than those in which preferences are rejected. This is observed in serial numbers 14 versus 21 as well as 9 versus 21.

The findings of this research have a number of crucial implications for practice. Foremost, irrespective of the degree to which preferences are synchronized, the magnitude of the generated effect on trust is dependent on other contexts, if any. This dependence, however, is unlikely to be a guarantee for a partner who chooses to reject the existence of the threatened future. It has been demonstrated that other contexts (factors) can be in a position to favor or oppose the ignored threatened future. Furthermore, under similar conditional settings, persuaded preferences are better than rejected ones (serial numbers 7 versus (8 and 21)). Finally, as collaboration seeks to compromise conflicts and build a foundation of trustworthiness, it is more prominent to persuade conflicting preferences than to reject their existence. Remarking on the second research question: overall, compromised preferences in decision rights appear to be a better strategy than uncompromised ones.

8 Conclusion and Outlook

This paper addresses behavioral discontents resulting from the disagreements collaborating partners encounter in decision rights. According to confrontation analysis, such disagreements occur when “each partner takes a position that it regards as final, called a moment of truth” [8]. Such disagreements are referred to as conflicting preferences. Appearing in the form of rivalry decisions, conflicting preferences affect trustworthiness, especially when partners are denied the assumed fairness. This paper concentrates on investigating the extent to which synchronized and unsynchronized conflicting preferences affect trustworthiness. In the first place, the paper contributes by devising a generic model that describes behavioral trust in CNs. Subsequently, the behavioral trust model is evaluated empirically using the MAS method in a logistics collaboration scenario.

In particular, the generic model of behavioral trust can be applied to CNs other than logistics in their operational phase. Concerning the results presented in this paper, two critical insights are revealed. Firstly, synchronized preferences affect trust more positively than unsynchronized ones. Secondly, irrespective of the degree to which conflicting preferences are synchronized, the magnitude of the generated effect on trust also depends on other factors. Future work will involve investigating a larger set of conflicting preferences in combined form, as well as the effect generated by other collaborative processes, like incentive alignment and opportunism.