
1 Introduction

The concepts of trust and reputation are of paramount importance in human societies. Several disciplines use them in different ways, according to their own visions and perspectives. In this paper, we focus on their use in an area that is becoming omnipresent in our lives: computer systems. Over the past few decades, a substantial body of research has been conducted on computational trust and reputation models. One of the first works to consider computers and trust in the same setting was that of Marsh [10]. Numerous practical approaches are still widespread in high-profile applications, for example the reputation systems of websites such as eBay and Amazon, or rating websites such as Tripadvisor and Goodreads. To meet the challenges posed in this field, researchers began to develop theoretical and practical models to better understand it and to improve existing solutions. Since this is an active research area, several researchers are still developing new solutions [1, 2, 6]. However, we notice that there are several definitions related to trust and reputation, which leads to ambiguity in the understanding of these concepts. Our first contribution proposes a unification of the definitions related to trust and reputation: it suggests a single formalization, with graphical and textual representations, of these two concepts. We also found that it is not easy to understand the semantics and intuition behind the computations performed by the various computational models. Our second contribution attempts to open the “black box” of the behavior of each of these models.

The rest of the paper is organized as follows. Section 2 presents definitions and properties related to trust and reputation; this section also introduces our first contribution, the graphical and textual formalization of the principles of trust and reputation. Section 3 describes the different types of trust and reputation computational models, as well as our second contribution regarding the behavior of these models. Some related works are discussed in Sect. 4. Finally, the points to remember and perspectives are presented in Sect. 5.

2 Definitions and Properties

In this section, we revisit the definitions and behaviors of trust, reputation, and other related concepts. For each notion, both a graphical notation and a mathematical formalization are introduced.

2.1 Trust

  • Definition: Trust is a concept that we apply daily. Unfortunately, it suffers from a problem of definition, as no precise definition is commonly accepted in the literature. According to [8], the social perspectives on trust can be divided into three categories: (i) that of personality, (ii) that of sociologists, and (iii) that of psychosociologists. It can generally be said that trust is a relationship between a trustor and a trustee. The trust relationship can thus be modeled as in Fig. 1, where a user \(U_{1}\) (the trustor) trusts another user \(U_{2}\) (the trustee) with a value X:

    The function \(C(U_{1}, U_{2})\) has two arguments, the trustor \(U_{1}\) and the trustee \(U_{2}\); the result of this function is X, the value of the trust.

  • Context: According to [11], the context is an important factor when defining trust. It indicates the situation in which the relationship is established. The graphical notation of the context is shown in Fig. 2. \(U_{1}\)’s trust in \(U_{2}\) in a context c is X, which we note:

    $$\begin{aligned} C_{c}(U_{1}, U_{2}) = X \end{aligned}$$
    (1)
  • Transitivity: A graphical notation of transitivity is given in Fig. 3. If \(U_{1}\) trusts \(U_{2}\) and \(U_{2}\) trusts \(U_{3}\), then \(U_{1}\) trusts \(U_{3}\), we note:

    $$\begin{aligned} C(U_{1}, U_{2}) = x \wedge C(U_{2}, U_{3}) = y \Rightarrow \exists f \mid C(U_{1}, U_{3}) = f(x,y). \end{aligned}$$
    (2)

    As Jøsang and Pope show in [4], transitivity is possible only in some cases. Moreover, as the number of referrals in a transitive chain increases, the level of trust is likely to decrease. For example, suppose Bob asks Alice for a dentist referral, and Alice responds that her sister told her about a dentist who had been recommended to her by a trusted friend. The level of trust Bob will place in this referral is lower than if it were a dentist Alice had seen directly. (A short code sketch after this list illustrates Eqs. (1) and (2).)

  • Transaction: In our proposal, a transaction is an action performed by a user on the network, within a specific context, that plays a role in the computation of trust and reputation. For example, in the context of online sales sites, placing an order or giving an opinion on an item or on the seller are considered transactions.
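To make the notation above concrete, here is a minimal Python sketch of Eqs. (1) and (2). The class and function names, the [0, 1] value range, and the choice of multiplication as the composition function f are illustrative assumptions for this example, not part of any cited model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustStatement:
    """One trust relation C_c(trustor, trustee) = X, as in Eq. (1)."""
    trustor: str
    trustee: str
    context: str
    value: float  # the trust value X, taken here in [0, 1]

def compose(x: float, y: float) -> float:
    """Illustrative choice of the composition function f in Eq. (2):
    multiplying the two values keeps the derived trust no higher than
    either direct value, matching the intuition that trust decreases
    along a referral chain."""
    return x * y

# U1 trusts U2, and U2 trusts U3, in the context "dentist referral".
c12 = TrustStatement("U1", "U2", "dentist referral", 0.9)
c23 = TrustStatement("U2", "U3", "dentist referral", 0.8)

# Transitive trust of U1 in U3: f(0.9, 0.8) ~= 0.72 < min(0.9, 0.8).
c13 = TrustStatement("U1", "U3", "dentist referral", compose(c12.value, c23.value))
print(c13)
```

Multiplication is only one possible choice of f; the property it is meant to illustrate is that trust derived through a referral chain does not exceed the direct trust values it is built from.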

Fig. 1. Graphical notation of trust

Fig. 2. Graphic notation of the context

Fig. 3. Transitivity of trust

2.2 Reputation

The second key concept considered in this paper is reputation. Several definitions of this concept can be found in different areas of the literature. The first is a rather general one: reputation can be seen as the feeling that one user has towards another [7], which is used to decide whether to cooperate with him [6]. Jøsang et al. [3] consider that “reputation is what is said or believed about a person or the properties of an object”. In a community, someone can be trusted if (s)he has a good reputation. According to Abdul-Rahman and Hailes [1], reputation is an estimate of an entity’s behavior in the community, based on its past behaviors. Reputation is in fact an intangible asset (an opinion, a feeling) over which an individual does not have total control, since it emanates from the community. Thus, by instantiation in the field of computer science, reputation is the opinion of a system towards a user. The concept of reputation can thus be represented as in Fig. 4, where the system assigns a reputation rating to a user based on community opinions and behavior in the system: the reputation of \(U_{1}\) in the system S is X.

Fig. 4. Graphical notation of reputation
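As an illustration of the idea that the system derives a reputation value from community opinions, here is a minimal sketch. Treating reputation as a plain average of ratings is an assumption made only for this example; real models use more elaborate aggregations, as discussed in Sect. 3.

```python
from statistics import mean

def reputation(community_ratings: dict[str, list[float]], user: str) -> float:
    """Reputation of `user` as seen by the system: here, simply the mean
    of the ratings left by the community about that user. Any other
    aggregation (weighted, time-decayed, ...) could replace the mean."""
    ratings = community_ratings.get(user, [])
    return mean(ratings) if ratings else 0.0

# Opinions collected by the system about U1, on a [0, 1] scale.
ratings = {"U1": [0.9, 0.7, 1.0, 0.8]}
print(reputation(ratings, "U1"))  # ~0.85
```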

Above all, before talking about a computational model, we must define the measures taken into account in this process.

2.3 Measurement

In order to quantify trust and reputation, an appropriate measure is needed. Four types of values are generally used for such a measure (a short sketch after the list illustrates one possible encoding of each type):

  • Unique value: this is the kind of measure used to ensure the quality of products on a production line. For example, if an item does not conform to manufacturing requirements, it is withdrawn from the line; otherwise, nothing is reported.

  • Binary values: binary values are used to distinguish between a trusted and untrusted entity. For example, if we trust a user, we assign him a rating of 1, and 0 otherwise.

  • Multiple values: these make it possible to take into account the history of cooperation between two entities. For example, possible values are the “very low, low, medium, high, and very high” levels of trust.

  • Continuous values: these give a wider range of possible values of the trust level. Typically, the value varies over the range [0, 1]; it measures trust in the form of a probability.
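The sketch below shows one possible encoding of the four kinds of measures as Python types. The type names and the five-level scale are illustrative assumptions, not taken from any cited model.

```python
from enum import Enum
from typing import Optional

# Unique value: only a non-conformity is reported; otherwise nothing at all.
DefectReport = Optional[str]   # e.g. "out of tolerance", or None if conforming

# Binary values: an entity is either trusted (1 / True) or untrusted (0 / False).
BinaryTrust = bool

# Multiple values: a discrete scale that keeps track of cooperation history.
class TrustLevel(Enum):
    VERY_LOW = 1
    LOW = 2
    MEDIUM = 3
    HIGH = 4
    VERY_HIGH = 5

# Continuous values: a probability-like degree of trust in [0, 1].
ContinuousTrust = float

print(TrustLevel.HIGH, ContinuousTrust(0.87))
```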

To measure the degree of trust and reputation with any of these kinds of values, computer systems rely on models that use different processes depending on the need, as can be seen in the rest of this paper.

It is important to understand the functioning and mathematical process of each model. This will facilitate their classification according to their behavior.

3 Computational Models Choice

Systems based on trust and reputation must, as noted above, have a computational model. Indeed, the computation process must make it possible to take a decision. For example, a high degree of trust in an entity allows one to judge that entity reliable and to take the decision to trust it. The same applies to reputation. Different models have been proposed to represent and compute trust and reputation in systems. These models can be classified into Bayesian models, belief-based models, discrete value models, and flow models. In this section, we specify the semantics and intuition of each computational model. This allows the models to be differentiated on the basis of their behavior, which in turn can be used to determine the model to adopt according to the needs of each application.

3.1 Bayesian Model

Bayesian models use probability distribution functions to estimate trust and reputation values. The distribution function of a real random variable X is the function \(F_{X}\) that associates with any real number x the probability:

$$\begin{aligned} {\displaystyle F_{X}(x)=\mathbb {P} (X\le x)}. \end{aligned}$$
(3)

The function \(F_{X}\) depends on the probability law used by the computational model. In probability theory, the random experiment must be repeated many times before a final decision can be made, so the estimated values shift slowly. Therefore, to obtain good trust and reputation scores, it is important to carry out many transactions. This strategy is beneficial in the sense that it allows a user who made a bad transaction to regain his credibility, although repeated bad behavior will eventually be penalized. The downside is that the model cannot explicitly detect malicious use of the system.
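As a concrete and deliberately simple instance of this family, the following sketch uses the expected value of a beta distribution built from counts of positive and negative transactions, a common Bayesian construction; the specific formula is an illustrative choice, not necessarily the one intended by any particular model discussed here.

```python
def beta_reputation(positive: int, negative: int) -> float:
    """Expected value of Beta(positive + 1, negative + 1): a common
    Bayesian estimate of trust/reputation from past transaction outcomes."""
    return (positive + 1) / (positive + negative + 2)

print(beta_reputation(0, 0))  # 0.5  : neutral prior for a newcomer
print(beta_reputation(9, 1))  # ~0.83: after ten mostly good transactions
print(beta_reputation(9, 2))  # ~0.77: one bad transaction only shifts the value slowly
```

The printed values illustrate the slow shift described above: a newcomer starts at the neutral value 0.5, and a single bad transaction only moves the estimate slightly.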

3.2 Belief-Based Model

Like Bayesian models, belief-based models are related to probability theory; the difference is that the sum of the probabilities over all possible outcomes is not necessarily equal to 1, the remaining mass being interpreted as uncertainty. This category of models behaves in much the same way as the Bayesian one.
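A minimal sketch of this idea, assuming a subjective-logic-style opinion in which the mass not assigned to belief or disbelief is explicit uncertainty; the class and field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Belief-theoretic opinion: belief + disbelief + uncertainty = 1.
    The mass not assigned to belief or disbelief is explicit uncertainty,
    which is what distinguishes belief models from plain probabilities."""
    belief: float
    disbelief: float

    @property
    def uncertainty(self) -> float:
        return 1.0 - self.belief - self.disbelief

op = Opinion(belief=0.6, disbelief=0.1)
print(round(op.uncertainty, 2))  # 0.3: the remaining mass is uncertainty
```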

3.3 Discrete Value Model

In discrete value models, the trust value of a newcomer is equal to zero. Since this category of models does not use a specific probability function, the choice of the appropriate function depends on the system’s needs.
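One possible instantiation of a discrete value model is sketched below, assuming a simple +1/−1 update per transaction with newcomers starting at zero; the update rule is an illustrative assumption, since, as noted above, the choice of function depends on the system’s needs.

```python
def update_discrete_trust(current: int, positive_outcome: bool) -> int:
    """One possible update rule for a discrete value model:
    +1 for a good transaction, -1 for a bad one; a newcomer starts at 0."""
    return current + 1 if positive_outcome else current - 1

trust = 0  # newcomer
for outcome in (True, True, False, True):
    trust = update_discrete_trust(trust, outcome)
print(trust)  # 2
```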

3.4 Flow Model

Flow models compute trust or reputation values by transitive iteration through looped chains, as in Google’s PageRank system. In such models, the initial value assigned to newcomers has little influence; to build a strong reputation, an entity has to start attracting incoming trust flows. The benefit here is that the methods for estimating flows rapidly update the estimated values each time a new flow is detected, and in the case of cheating, the system explicitly detects a series of malicious acts. The limitation of this technique is that a neutral value of trust or reputation is equal to zero, which can be considered penalizing.
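A minimal PageRank-style sketch of a flow model is given below, where reputation circulates along trust links and is recomputed by transitive iteration; the damping factor, the iteration count, and the example graph are illustrative assumptions.

```python
def flow_reputation(links: dict[str, list[str]], damping: float = 0.85,
                    iterations: int = 50) -> dict[str, float]:
    """PageRank-style flow model: reputation circulates along outgoing
    trust links and is recomputed by transitive iteration."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}  # uniform start
    for _ in range(iterations):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for source, targets in links.items():
            for target in targets:
                new[target] += damping * rank[source] / len(targets)
        rank = new
    return rank

# U3 receives the most incoming trust flow and ends up with the highest score.
graph = {"U1": ["U2", "U3"], "U2": ["U3"], "U3": ["U1"]}
print(flow_reputation(graph))
```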

3.5 Summary

In today’s IT systems, trust and reputation are two principles that have become very important. Since many works in the literature have discussed these concepts, a variety of definitions coexist. On the one hand, we have suggested a global graphical and textual formalization of these definitions. On the other hand, to the best of our knowledge, little work clarifies what sort of trust and reputation modeling a given system uses, which makes the use of these models in applications somewhat complicated. That is why it will be easier to select which model to adopt if the system specifications are set from the beginning.

4 Related Works

In recent years, computational trust and reputation models have become important methods to improve interaction among users and with the system. Since their appearance, several research works have been published to solve the problems linked to these concepts. Other works have added a further aspect to this research, namely the analysis and comparison of reputation and trust models. Among these works, we can cite the work published in [9], whose aim is to present the most popular and widely used computational models of trust and reputation. Then, in 2017, a survey was carried out to classify and compare the main findings that have helped to address trust and reputation issues in the context of web services [12]. Finally, in 2018, Braga et al. conducted a survey that provided additional structure to the research being done on the topics of trust and reputation [5], and proposed a new integrated framework for analyzing reputation and trust models. There are therefore several works and classifications, but they do not help to clarify which model to choose and why. Moreover, the comparison made in these papers is static: it does not explain the dynamic behavior of the models in common community application scenarios. Furthermore, they do not provide formalized and graphically illustrated definitions of the concepts used in these models.

5 Conclusion and Perspectives

In this paper, we have tackled the problem of computational models of trust and reputation. Given the numerous studies of these two concepts in the literature, we have tried to unify the related notions; in particular, we have revised their definitions and restated their basic properties. The analysis carried out shows that our proposal helps to differentiate the various models and to select the most suitable ones according to the users’ needs. This paper constitutes our first step towards a new general model of trust/reputation that can fit each application’s context. As immediate future work, we plan to identify a set of requirements that make the choice of model more practical and intelligent, in the sense that it meets the desired needs.