
1 Introduction

Given the growing number of interactions in online communities and of information exchanges in information management systems, it is becoming increasingly important to have security mechanisms that can prevent fraudulent behaviors. However, implementing hard security mechanisms for every possible interaction, e.g. authentication methods or access controls, can be costly, and a failure of such mechanisms might put the whole system in jeopardy. For this reason, it is necessary to include other, soft security mechanisms in the design of those environments where transactions and exchanges take place [10, 19]. Trust is one form of soft security that can be implemented in a system: trust is a social control mechanism that has an undoubtedly positive impact on cooperative operations, both by increasing the chances of performing an interaction and by decreasing the chances of malevolent behaviors during those interactions [1, 18]. Therefore, trust has both a proactive and a control effect on interactions. Computational trust is the digital counterpart of trust as applied in ordinary social communities, and computational trust models are soft security mechanisms that implement the notion of trust in digital environments to increase the quantity and quality of interactionsFootnote 1. Computational trust models are typically composed of two parts: a trust computing part and a trust manipulation part. While trust manipulation is widely studied, very little attention is paid to the trust computing part. In this paper, we propose a formal language with which it is possible to reason about how knowledgeFootnote 2 and trust interact. Specifically, in this setting it is possible to put possessed knowledge into direct dependence with values estimating trust, distrust, and uncertainty, which can then be used to feed the trust manipulation component of any computational trust model.

The paper proceeds as follows: in Sect. 2, we discuss the distinction between trust computation and trust manipulation by providing some examples of both components as implemented in computational trust models; in Sect. 3, following on the discussion of Sect. 2, we show how one well-known model for trust manipulation, i.e. Jøsang’s Subjective Logic [11], struggles when dealing with the trust computing component; in Sect. 4, we provide the syntax and semantics of what we will call trust logic. In this logic, we show how we can talk about the notion of trust, and we then show how the semantical structure in which the logic is interpreted helps in computing trust values that can be fed into Subjective Logic’s trust manipulation component; finally, in Sect. 5, we conclude the paper with some general remarks.

2 Trust Computing and Trust Manipulation

A good formalization of the notion of trust must accomplish two goals. The first goal is that of explaining how trust is generated: while it is possible to take trust as a primitive and unexplained notion, it is often preferable to provide a reduction of the notion of trust to more basic concepts, aiding our comprehension of the phenomenon of trust in various contexts. The second goal is that of explaining the dynamics of trust, i.e. how trust evolves under different circumstances: this should help in determining how dynamics in an environment (e.g. a group of friends or a multinational company) influence trust. To each goal corresponds a different component of trust models. Specifically, it is possible to identify a trust computing and a trust manipulation component. The former serves the purpose of gathering relevant information, which is considered basic, and then using it to compute trust values; the latter takes trust values as given and manipulates them for specific purposes using different operators.

Even though both components are important, authors often concentrate on one or the other: models that concentrate on the trust computing component rely on the fact that, when an environment changes, it is possible to repeatedly compute new trust values, so that no trust manipulation component is needed; models that concentrate on the trust manipulation component can rely on conceptions of trust that take the concept as a primitive, or depend for their initial values on other models, therefore neglecting the trust computing component. To highlight the distinction between trust computing and trust manipulation, we will now provide three examples of trust models and show how each component behaves in those models. To those three models we will then add a fourth one, which will be used as a target for the reflections of the rest of the paper. Note that the models presented were selected for explanatory purposes (i.e. to make the distinction between the two components clearer). No important or specific facts are derived from the models and, therefore, no results of this paper depend on the choices made.

2.1 Marsh’s Trust Model

In [13], Stephen Marsh presented the first example of a thorough and detailed analysis of the notion of trust in a computational setting. His system was designed for possible implementations in distributed artificial intelligence and multi-agent systems: the main purpose of his thesis was that of using a formal variant of common-sense trust to increase the quality of the evaluation an autonomous agent should perform to decide whether to collaborate or not with other agents. In this model, it is possible to identify three different forms of trust:

  1. Basic trust;

  2. General trust;

  3. Situational trust.

Basic trust represents the general attitude of an agent, when all his experiences in life are considered; general trust is the overall trust a trustor has in a trustee; situational trust is the specific trust a trustor has in a trustee when a specific collaborative task should take place. Basic trust and general trust are taken as primitives, while situational trust is reduced to more basic concepts.

In Marsh’s model, the trust computing component is arrived at through a conceptual analysis of the notion of trust, which either classifies trust as a primitive notion or helps in identifying the basic elements that form specific versions of trust (i.e. situational trust). We will now briefly explain how situational trust is analyzed, since this form of trust is at the core of Marsh’s model. Specifically, situational trust is computed starting from three basic parameters: \( U_{x} \left( \alpha \right) \), the amount of utility agent \( x \) gains if situation \( \alpha \) occurs; \( I_{x} \left( \alpha \right) \), the importance of situation \( \alpha \) for agent \( x \); \( \widehat{{T_{x} \left( y \right)}} \), the general trusting disposition of agent \( x \) towards agent \( y \). To obtain the situational trust, these three components are multiplied together.
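As a minimal sketch of this computation, assuming the three parameters are already available as numbers (the function name and the example values are illustrative, not Marsh's code):

```python
def situational_trust(utility: float, importance: float, general_trust: float) -> float:
    """Situational trust of x in y for situation alpha, obtained by multiplying
    U_x(alpha), I_x(alpha) and the general trusting disposition of x towards y."""
    return utility * importance * general_trust

# A moderately useful, fairly important task delegated to a generally trusted agent.
print(situational_trust(0.5, 0.75, 0.5))  # 0.1875
```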

The trust computing component of Marsh’s model is straightforward. Once some basic information is gathered and expressed with numerical values, those values are multiplied to obtain a specific trust value.

What is limited in Marsh’s model is the trust manipulation component. This is because the model relies on repeated computations of the trust values, rather than on a manipulation of already obtained values. We will show later (Sect. 3) that this is a severe limiting factor for computational trust models, since it makes it hard to provide trust evaluations in different scenarios, e.g. when trust is obtained through referrals from one agent to another.

2.2 Yu and Singh’s Trust Model

In [21], the authors present a formal framework in which to evaluate the reputation of agents based on witnesses. The model also provides tools to avoid deception in rating provision. Yu and Singh’s model is based on the Dempster-Shafer theory of evidence and favors the trust manipulation component over the trust computing one. The trust computing component is based on past interactions between agents, where an agent \( x \) trusts another agent \( y \) if the percentage of positive past experiences over the whole number of recent experiences exceeds a given threshold. The trust computing component of this model is extremely simple and lacks a proper analysis of trust. Moreover, it relies on a large amount of data about past interactions being available, otherwise the values computed might be misleading. This is troublesome, since in digital environments interactions between the same agents are scarce. However, the strength of the model is associated with its trust manipulation component. In this model, it is possible to aggregate ratings from different agents. Given a trust net, which is a directed graph representing the referral chains produced by an agent \( x \)’s query about the trustworthiness of another agent \( y \), \( x \) might obtain a value of the trustworthiness of \( y \) by combining the various ratings given to \( y \) by all close acquaintances of \( x \) in the trust net. The operation is based on Dempster’s rule of combination and the result is a weighted combination of the trust values of the other agents into a single value for the trustor.
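The threshold test performed by the trust computing component can be sketched as follows (a hedged illustration, not the authors' code; the boolean encoding of experiences and the chosen threshold are assumptions):

```python
def trusts(recent_experiences: list[bool], threshold: float) -> bool:
    """x trusts y when the fraction of positive recent experiences exceeds the threshold."""
    if not recent_experiences:
        return False  # no evidence about y is available
    return sum(recent_experiences) / len(recent_experiences) > threshold

print(trusts([True, True, False, True], threshold=0.6))  # True: 0.75 > 0.6
```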

2.3 BDI + Repage

In [17], the authors present a sophisticated model for trust and reputation evaluation and propagation. This model has both a simple trust computing and a good trust manipulation component. First, in BDI + Repage, there is a clear distinction between a trust and a reputation evaluation, where the former (called image) is interpreted as the trustor’s belief in the trustworthiness of the trustee, while the latter is interpreted as the trustor’s meta-belief about other agents’ beliefs about the trustee. The trust computing part is based, as in [21], on past experiences, which can be categorized under five different labels (Very Bad, Bad, Neutral, Good, Very Good). Past experiences influence the weight of each label, providing a specific numerical value for each. Those numerical values are then used to generate an image (what we call a trust evaluation) of the trustee for the given trustor. This trust evaluation, coupled with the desires, the beliefs and the intentions of the trustor, leads to the decision of whether or not to collaborate. As in [21], the trust computing component suffers from the requirement of having many past interactions, and the real strength of the model lies in its trust manipulation component. For this component, the model includes a full logical language that can help in reasoning about trust and reputation. The logical framework is that of a hierarchical multi-sorted first-order language. In such a language it is easy to express formulas that describe, over and above simple properties of objects, the desires, beliefs, intentions and trust evaluations of a given agent; then, using various logical connectives (e.g. conjunction and disjunction), it is possible to specify different conditions for the presence or absence of trust by the trustor.
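As a rough sketch of how past experiences labelled in this way might be turned into an image, one can use the relative frequency of each label as its weight; this normalization is only an illustrative assumption, not the exact Repage algorithm:

```python
from collections import Counter

LABELS = ["Very Bad", "Bad", "Neutral", "Good", "Very Good"]

def image(past_experiences: list[str]) -> dict[str, float]:
    """Weight of each evaluation label, computed as its relative frequency
    among the labelled past experiences."""
    if not past_experiences:
        raise ValueError("no past experiences to evaluate")
    counts = Counter(past_experiences)
    return {label: counts[label] / len(past_experiences) for label in LABELS}

print(image(["Good", "Good", "Very Good", "Neutral"]))
# {'Very Bad': 0.0, 'Bad': 0.0, 'Neutral': 0.25, 'Good': 0.5, 'Very Good': 0.25}
```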

2.4 Summing up

We saw three computational trust models and their respective trust computing and trust manipulation components. It has been shown that none of the models can deal perfectly with both components. We will now present a fourth model, which will be the starting point for the reflections made in this paper. This model, i.e. Audun Jøsang’s Subjective Logic [11], is extremely well-suited to manipulate trust values using different algebraic operators, but suffers from a poor trust computing component. We will show how the logical language we present in this paper can be used to implement a good trust computing component for Subjective Logic.

3 Subjective Logic

We described in Sect. 2 the two main components of computational trust models and provided some examples of the role those two components play in such models. In this section, we introduce a further computational trust model, i.e. Audun Jøsang’s Subjective Logic. We explain why we chose this model as the starting point of our work and why we believe the model requires some improvements.

In Subjective Logic, trust is represented as the opinion of an agent \( x \) about a given proposition \( p \). An opinion has three major components, and a fourth added component which completes the trust evaluation and helps in computing an expected value for trust. The three major components are, respectively, a belief component, a disbelief component and an uncertainty componentFootnote 3, while the fourth added component is labelled the base rate and indicates the prior probability associated with the truth of a proposition when no initial relevant information is available. The belief, disbelief and uncertainty components are additive to one, which makes Subjective Logic effectively an extension of traditional probability logics. The additivity principle of the three major components also allows for a nice visualization of opinions through a triangle, which we will call the opinion triangle (Fig. 1). It is possible to observe in the figure that an opinion \( \omega_{x} \) is identified through the three major components of belief, disbelief and uncertainty and, after the generic opinion is obtained, it is possible to compute the expected trust value \( E\left( {\omega_{x} } \right) \) using the base rate (indicated in the figure with the letter a): the base rate determines the slope of the projection of the opinion onto the base of the triangle and allows one to compute an expected value in which uncertainty is no longer taken into account.

Fig. 1. Opinion triangle where all components used to compute trust are represented.
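As a minimal sketch of the opinion representation, assuming the standard Subjective Logic projection E(ω) = b + a·u for the expected value (the class name and example numbers are illustrative, and the additivity constraint b + d + u = 1 is assumed rather than enforced):

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float       # b
    disbelief: float    # d
    uncertainty: float  # u, with b + d + u assumed equal to 1
    base_rate: float    # a, the prior used in the absence of evidence

    def expected_value(self) -> float:
        """Projection of the opinion onto the base of the opinion triangle."""
        return self.belief + self.base_rate * self.uncertainty

print(Opinion(belief=0.25, disbelief=0.25, uncertainty=0.5, base_rate=0.5).expected_value())  # 0.5
```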

Subjective Logic is a widely employed model for manipulating trust, but it is rather ill-suited when it comes to computing the initial trust values to be used as inputs to the model. The reason is that the only source of information from which Subjective Logic can compute trust values is reputation scores based on past interactions. Once it is noticed that different agents might evaluate interactions differently and that reputation scores in one context are not easily transferable to other contexts, the fact that Subjective Logic has no other means to compute initial trust values becomes a big drawback. This is also noted by Jøsang himself:

“The major difficulty with applying subjective logic is to find a way to consistently determine opinions to be used as input parameters. People may find the opinion model unfamiliar, and different individuals may produce conflicting opinions when faced with the same evidence.” [8].

The aim of this paper is specifically to implement a trust computing component that can produce initial trust values which can then be plugged into Subjective Logic.

4 A Language for Trust

We will now introduce the syntax and semantics of a formal language that will allow us to reason about knowledge and trust and, furthermore, to provide a trust computing mechanism that can produce values to be fed into Subjective Logic’s trust manipulation component. This, we believe, is an improvement to Subjective Logic. The leading idea of the framework comes from an insight given by Jøsang:

“…[T]rust ultimately is a personal and subjective phenomenon that is based on various factors or evidence, and that some of those carry more weight than others. Personal experience typically carries more weight than second hand trust referrals or reputation…” [9].

The idea is therefore that of using the expressive power of a formal language to describe the information possessed by an agent and then transform this knowledge into an opinion value about a given proposition, with all three major components of Subjective Logic made explicit.

4.1 Syntax

In our language \( {\mathcal{L}} \), we start with two sets. The two initial sets are a finite set Ag of agents and a finite set At of atomic propositional constants. Given \( p \in At \), \( i \in Ag \) and b a rational number in the interval [0, 1] (with 0 and 1 included), the grammar for our language is given by the following BNF:

$$ \upvarphi ::= p \mid \neg {\upvarphi } \mid {\upvarphi } \wedge {\upvarphi } \mid K_{i} \left( {\upvarphi } \right) \mid \upomega_{i} \left( {\upvarphi } \right) \ge b $$

All other connectives and operators are defined in the standard way:

  (i) \( {\upvarphi } \vee \uppsi := \neg \left( {\neg {\upvarphi } \wedge \neg \uppsi } \right) \);

  (ii) \( {\upvarphi } \to \uppsi := \neg {\upvarphi } \vee \uppsi \);

  (iii) \( {\upvarphi } \leftrightarrow \uppsi := \left( {{\upvarphi } \to \uppsi } \right) \wedge \left( {\uppsi \to {\upvarphi }} \right) \);

  (iv) \( F_{i} \left( {\upvarphi } \right) := \neg K_{i} \left( {\neg {\upvarphi }} \right) \);

  (v) \( \upomega_{i} \left( {\upvarphi } \right) \le b := -\upomega_{i} \left( {\upvarphi } \right) \ge - b \);

  (vi) \( \upomega_{i} \left( {\upvarphi } \right) < b := \neg \left( {\upomega_{i} \left( {\upvarphi } \right) \ge b} \right) \);

  (vii) \( \upomega_{i} \left( {\upvarphi } \right) = b := \left( {\upomega_{i} \left( {\upvarphi } \right) \ge b} \right) \wedge \left( {\upomega_{i} \left( {\upvarphi } \right) \le b} \right) \).

\( K_{i} ({\upvarphi }) \) should be intuitively read as “agent i knows that \( {\upvarphi } \)”; we will call such formulas knowledge formulas. \( \upomega_{i} ({\upvarphi }) \ge b \) should be intuitively read as “agent i trusts formula \( {\upvarphi } \) to degree at least b”; we will call such formulas trust formulas. The degree to which an agent can trust goes from 0, complete distrust, to 1, complete trust.

4.2 Semantics

The semantics we provide in this paper is given in truth-theoretical form and depends on a structure that combines traditional relational structures for modal logics with added components to interpret trust formulas [4, 7, 12]. We will interpret the language presented above in the following structure \( M = \left( {S,Cntx,\uppi,R_{1} , \ldots ,R_{n} ,T} \right) \), where \( S \) is a finite set of possible states of the system; Cntx is a finite set of contexts, i.e. scenarios in which to evaluate trust; \( \uppi \) is a valuation function over the set At, assigning to each atomic proposition in At a set of possible states, i.e. \( \uppi:At \to \wp \left( S \right) \); \( R_{i} \), one for each agent \( i \in Ag \), is an accessibility relation defined over S, i.e. \( R_{i} \; \subseteq \;S \times S \); finally, \( T = \left( {{\rm X}_{{\left( {i,c} \right)}} ,\upmu_{(i,c,\varphi )} } \right) \) is a trust relevance space which determines, for each formula of the language, which propositional constants are relevant and how relevant they are. T has two distinct components: a qualitative relevance component \( {\rm X}_{{\left( {i,c} \right)}} \) and a quantitative relevance component \( \upmu_{(i,c,\varphi )} \).

The qualitative relevance component \( {\rm X}_{{\left( {i,c} \right)}} \), one for each couple of elements \( i \in Ag \) and \( c \in Cntx \), takes formulas of the language as arguments and returns as values subsets of At, i.e. \( {\rm X}_{{\left( {i,c} \right)}} :{\mathcal{L}} \to \wp \left( {At} \right) \); the quantitative relevance component \( \upmu_{(i,c,\varphi )} \), which is a family of functions, one for each formula of the language and defined over the same couple of \( i \in Ag \) and \( c \in Cntx \) as \( {\rm X}_{{\left( {i,c} \right)}} \), assigns to each relevant atomic proposition p in the subset of At selected by \( {\rm X}_{{\left( {i,c} \right)}} \) a rational number in the range (0, 1], i.e. \( \upmu_{(i,c,\varphi )} :{\rm X}_{{\left( {i,c} \right)}} ({\upvarphi }) \to \left( {0,1} \right] \)Footnote 4. The weights assigned must be additive to 1. Since the set of agents and the set of contexts are finite, there is a finite number of trust relevance spaces.
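A possible encoding of a trust relevance space as plain data (a sketch; the agent, context, formula and atom names are placeholders, and the weights are chosen only to satisfy the additivity condition):

```python
from typing import Dict, FrozenSet, Tuple

Agent, Context, Formula, Atom = str, str, str, str

# Qualitative component X_(i,c): the atoms relevant to each formula.
X: Dict[Tuple[Agent, Context], Dict[Formula, FrozenSet[Atom]]] = {
    ("i", "c"): {"phi": frozenset({"p1", "p2"})},
}

# Quantitative component mu_(i,c,phi): weights over the relevant atoms, summing to 1.
MU: Dict[Tuple[Agent, Context, Formula], Dict[Atom, float]] = {
    ("i", "c", "phi"): {"p1": 0.6, "p2": 0.4},
}

def additive_to_one(weights: Dict[Atom, float]) -> bool:
    """Check the additivity condition imposed on mu_(i,c,phi)."""
    return abs(sum(weights.values()) - 1.0) < 1e-9

assert additive_to_one(MU[("i", "c", "phi")])
assert set(MU[("i", "c", "phi")]) == set(X[("i", "c")]["phi"])  # weights cover exactly the relevant atoms
```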

A possible state \( s \in S \) represents a way in which the system can be specified; two states differ from one another by which propositions hold in them. For example, in one state it might hold that it is sunny in San Francisco, while in another state it might hold that it is raining in San Francisco and therefore it is not sunny. If no non-standard conditionsFootnote 5 are placed on the description of a system (i.e. proscriptions on propositions that can or cannot hold together), there will be exactly \( 2^{|At|} \) states in S, where |At| is the cardinality of the set of propositional constants.

The valuation function \( \uppi \) assigns to each proposition \( p \in At \) a set of states; a state is included in the set if, and only if, the proposition holds in the given state.

The accessibility relations \( R_{i} \) connect possible states according to the epistemic status of the agent to whom the accessibility relation is associated. Two states are therefore connected if they are epistemically indistinguishable for the agent (i.e. according to what the agent knows, he cannot determine which of the two states is the actual one). For example, if an agent only knows the weather in Rome, he cannot determine which is the actual state between the state in which it is sunny in San Francisco and the one in which it is raining in San Francisco; therefore, the two states would be connected by an accessibility relation. For the current paper, we assume that the accessibility relations are equivalence relations, i.e. reflexive, symmetric and transitive relations.

Finally, the trust relevance space T provides both a qualitative and a quantitative evaluation of the information that will help in determining the exact trust value an agent places in a formula. The qualitative relevance function \( {\rm X}_{(i,c)} \) specifies, for an evaluating agent (the trustor) and an evaluation scenario (the context), which information (in the form of propositional constants) is relevant to compute an actual trust value. The relevance weight assigning functions \( \upmu_{(i,c,\varphi )} \) provide a quantitative assessment of the relevance of a given proposition to trust, assigning specific weights to all propositional constants that are selected by applying \( {\rm X}_{{\left( {i,c} \right)}} \) to \( {\upvarphi } \). The additivity condition on \( \upmu_{(i,c,\varphi )} \) is in place to guarantee that in the best-case scenario (the one in which all relevant propositional constants hold in the state and the agent is aware of all of them) there is full trust on the part of the trustor and, moreover, that full trust can never be exceeded in the system.

Once the basic components of the model are given, it is possible to extend the functions \( \uppi \) and \( \upmu_{(i,c,\varphi )} \) to cover formulas over and above the ones on which those functions are defined. This is fundamental to provide proper satisfiability conditions for the language. We proceed to give the extensions explicitly: we label the extension of \( \uppi \) with \( \uppi^{{\text{ext}}} \) and the extension of \( \upmu_{(i,c,\varphi )} \) with \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \). Intuitively, the function \( \uppi^{{\text{ext}}} \) assigns to each formula of the language the set of states in which the formula holds, i.e. \( \uppi^{{\text{ext}}} :{\mathcal{L}} \to \wp \left( S \right) \). Put another way, \( \uppi^{{\text{ext}}} \) returns, for each formula of the language, the states that are compatible with the truth of such formula. This means that if an agent knows a given formula (i.e. he/she thinks that the formula is true), he/she will consider possible only the states contained in the set identified by \( \uppi^{{\text{ext}}} \). For now, it is only possible to extend \( \uppi \) to negations, conjunctions and knowledge formulas; we will later extend it also to trust formulas. The function \( \uppi^{{\text{ext}}} \) is defined recursively as follows:

$$ \uppi^{{\text{ext}}} \left( p \right) =\uppi\left( p \right) $$
(1)
$$ \uppi^{\text{ext}} \left( {\neg {\upvarphi }} \right) = S\backslash\uppi^{\text{ext}} \left( {\upvarphi } \right) $$
(2)

Where \( S\backslash\uppi^{\text{ext}} ({\upvarphi }) \) is the set-theoretic complement of \( \uppi^{\text{ext}} ({\upvarphi }) \) with respect to the whole set of possible states S.

$$ \uppi^{\text{ext}} \left( {{\upvarphi } \wedge\uppsi} \right) =\uppi^{\text{ext}} \left( {\upvarphi } \right) \cap\uppi^{\text{ext}} \left(\uppsi \right) $$
(3)
$$ \uppi^{\text{ext}} \left( {K_{i} \left( {\upvarphi } \right)} \right) = \left\{ {s \in S \,|\,\forall t \in S\,\text{s.t.}\,sR_{i} t,\,t \in\uppi^{\text{ext}} \left( {\upvarphi } \right)} \right\} $$
(4)

The valuation of knowledge formulas should be read intuitively as follows: to check whether a given state s is a member of the valuation set of the formula \( K_{i} ({\upvarphi }) \), take all states t of the system that are accessible from the state s; if all those states t are members of the valuation set of the formula \( {\upvarphi } \), then the state s is a member of the valuation set of the formula \( K_{i} ({\upvarphi }) \); repeat the process for every state of the system (since the set of states S is finite, the process will eventually end). Before extending \( \uppi^{{\text{ext}}} \) also to trust formulas, we are required to extend the family of functions \( \upmu_{(i,c,\varphi )} \). To obtain such an extension, we define another family of functions \( \uptau_{(i,c,\varphi )} \). The \( \uptau_{(i,c,\varphi )} \), one for each formula \( {\upvarphi } \) of the language, are defined over the possible states of the system and associate with every state \( s \in S \) a rational number in the interval [0, 1], i.e. \( \uptau_{(i,c,\varphi )} :S \to \left[ {0,1} \right] \). The family \( \uptau_{(i,c,\varphi )} \) depends on a trust relevance space T, and each member of the family sums up the relevance weights of the propositional constants that hold in the state s taken as argument. This sum represents the upper bound of trust in \( {\upvarphi } \) for each state. Intuitively, a function \( \uptau_{(i,c,\varphi )} \) indicates how much trust an agent has in the formula \( {\upvarphi } \) in each state, if he/she is aware, in that state, of all the relevant information related to \( {\upvarphi } \) (i.e. he/she knows which relevant propositions are true in that state). Another way to put it is the following: if an agent knows exactly which state is the current one, \( \uptau_{(i,c,\varphi )} \) indicates the amount of trust he/she has in \( {\upvarphi } \). Thus, \( \uptau_{(i,c,\varphi )} \) is an ideal measurement of trust. The function is defined explicitly as follows:

$$ \uptau_{(i,c,\varphi )} \left( s \right) = \sum\limits_{{p \in {\rm X}_{(i,c)} ({\upvarphi })\,{\text{s.t.}}\,s \in \uppi^{{\text{ext}}} \left( p \right)}} \upmu_{(i,c,\varphi )} \left( p \right) $$
(5)

Note that since \( \upmu_{(i,c,\varphi )} \) is additive to 1, we are guaranteed that \( \uptau_{(i,c,\varphi )} \) itself never exceeds 1. We assume that if the state to which \( \uptau_{(i,c,\varphi )} \) is applied is not contained in any \( \uppi^{{\text{ext}}} (p) \), then \( \uptau_{(i,c,\varphi )} \) assigns to it the value 0:

$$ {\text{If there is no }}\uppi^{\text{ext}} \left( p \right)\,\text{s.t.}\,s \in\uppi^{\text{ext}} \left( p \right),\,\text{then}\,\uptau_{{\left( {i,c,\varphi } \right)}} \left( s \right) = 0 $$
(6)

We can now define \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \) for all formulas. Intuitively, the function \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \) assigns a trust relevance weight to every formula of the language, i.e. \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} :{\mathcal{L}} \to \left[ {0,1} \right] \). Note that \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \) also depends on all the elements on which \( \upmu_{(i,c,\varphi )} \) depends. This allows for the possibility of taking \( {\upvarphi } \) both as parameter and as argument of the function. \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \) takes the sum of the ideal trust values \( \uptau_{(i,c,\varphi )} \) of all the states in which a formula holds, i.e. the members of the valuation of the formula, and then divides the result by the cardinality of the valuation of the formula. It is extremely important to understand the behavior of \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \), because this function is the one properly defining trust in our formal language. The explicit definition of \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \) is the following:

$$ \upmu_{(i,c,\varphi )}^{{\text{ext}}} \left( \uppsi \right) = \frac{{\sum\nolimits_{{s \in \uppi^{{\text{ext}}} \left( \uppsi \right)}} {\uptau_{(i,c,\varphi )} \left( s \right)} }}{{\left| {\uppi^{{\text{ext}}} \left( \uppsi \right)} \right|}} $$
(7)

Two remarks must be made about (7): first, it should be noted that the \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \) of a formula, again, never exceeds 1; furthermore, the reader should note that \( \uppi^{{\text{ext}}} (\uppsi) \) is yet to be defined for trust formulas and, therefore, the function \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \) is still undefined for this type of formula. The reason is that, in our language, to obtain the valuation of a trust formula we need to be able to apply the function \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \) to knowledge formulas first. This should be expected, since the main intuition behind the language presented in this paper is that trust depends on the knowledge of agents.
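Definitions (5)-(7) can be sketched directly in code; in the sketch below a state is represented simply by the set of relevant atoms that hold in it, and the example weights are illustrative:

```python
from typing import Dict, FrozenSet, Set

State = FrozenSet[str]  # a state, identified by the atoms that hold in it

def tau(state: State, mu: Dict[str, float]) -> float:
    """Ideal trust in a state (definitions (5) and (6)): the sum of the relevance
    weights of the relevant atoms holding in the state, 0 if none of them holds."""
    return sum(w for p, w in mu.items() if p in state)

def mu_ext(extension: Set[State], mu: Dict[str, float]) -> float:
    """Trust relevance weight of a formula psi (definition (7)): the average ideal
    trust over the states in pi_ext(psi)."""
    return sum(tau(s, mu) for s in extension) / len(extension)

mu_icphi = {"p1": 0.6, "p2": 0.4}
extension = {frozenset({"p1", "p2"}), frozenset({"p1"}), frozenset()}
print(mu_ext(extension, mu_icphi))  # (1.0 + 0.6 + 0) / 3 ≈ 0.53
```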

To properly define \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \) for all formulas, we must first return to the function \( \uppi^{{\text{ext}}} \) and see how it can be fully extended to include trust formulas in its domain. To obtain the valuation \( \uppi^{{\text{ext}}} \) of trust formulas, we now present a complete procedure. This procedure indicates which steps are necessary to obtain the set of states in which a given trust formula holds, and a by-product of the procedure is an understanding of how to compute the value of a trust formula in each state.

The procedure has 6 steps:

  1. We start with the trust formula \( \upomega_{i} ({\upvarphi }) \ge b \) that we want to evaluate. We consider the trust relevance space T of the formula \( {\upvarphi } \). Note that the set \( {\rm X}_{(i,c)} \) of any formula will only contain propositional constants and, in particular, it will not contain any occurrence of trust formulasFootnote 6.

  2. Given a state \( s \in S \), we take the formulas \( \uppsi \) s.t. \( s \in \uppi^{{\text{ext}}} (K_{i} (\uppsi)) \) and \( \uppsi \) does not contain occurrences of trust formulas (i.e. no subformula of the formula is a trust formula)Footnote 7.

  3. We take the conjunction of the formulas identified in step 2 and transform it into the equivalent simplified Conjunctive Normal Form (CNF), i.e. a CNF from which redundant formulas are eliminated and to which the annihilation and absorption laws have been appliedFootnote 8.

  4. We compute the \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \) of the formula obtained in step 3. This is a rational number in the range \( \left[ {0, 1} \right] \). Note that, given our conditions during the procedure, we are guaranteed that \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \) is properly defined for such a formula, since there is no occurrence of trust formulas at this point.

  5. We compare the value obtained in step 4 with the value \( b \) that appears in the trust formula we are evaluating. If the value is greater than or equal to b, we say that \( \upomega_{i} ({\upvarphi }) \ge b \) holds in the state \( s \in S \) and we therefore add s to the set \( \uppi^{{\text{ext}}} ({\upomega_{i} ({\upvarphi }) \ge b}) \).

  6. We return to step 2 and repeat the process for another state of the system. Note that, since the set S of states is finite, the procedure will eventually end.

When the procedure ends, the result is the valuation set of the formula \( \upomega_{i} ({\upvarphi }) \ge b \), i.e. \( \uppi^{\text{ext}}({\upomega_{i} ({\upvarphi }) \ge b}) \). Although not strictly necessary for the evaluation of the formulas of our language, this also completes the extension of the function \( \upmu_{(i,c,\varphi )}^{{\text{ext}}} \) to all formulas, since the valuation function of trust formulas is now available.
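A sketch of the procedure is given below. It relies on a semantic shortcut that is available under the assumptions of this paper (equivalence accessibility relations and a state space containing every combination of atoms): the extension of the simplified CNF of everything agent i knows at s coincides with i's equivalence class of s, so the sketch takes that class as input instead of manipulating CNFs syntactically. All names are illustrative.

```python
from typing import Dict, FrozenSet, List, Set

State = FrozenSet[str]  # a state, identified by the atoms that hold in it

def tau(state: State, mu: Dict[str, float]) -> float:
    """Ideal trust in a state: sum of the weights of the relevant atoms it satisfies."""
    return sum(w for p, w in mu.items() if p in state)

def trust_formula_holds(equivalence_class: Set[State], mu: Dict[str, float], b: float) -> bool:
    """Steps 2-5: the trust value at s is the average ideal trust over the states the
    agent cannot distinguish from s; omega_i(phi) >= b holds iff that value reaches b."""
    value = sum(tau(t, mu) for t in equivalence_class) / len(equivalence_class)
    return value >= b

def extension_of_trust_formula(states: List[State],
                               classes: Dict[State, Set[State]],
                               mu: Dict[str, float], b: float) -> Set[State]:
    """Steps 1-6: collect every state in which omega_i(phi) >= b holds."""
    return {s for s in states if trust_formula_holds(classes[s], mu, b)}

# Tiny usage: two states the agent cannot tell apart, one relevant atom.
s1, s2 = frozenset({"p"}), frozenset()
classes = {s1: {s1, s2}, s2: {s1, s2}}
print(extension_of_trust_formula([s1, s2], classes, {"p": 1.0}, 0.5))  # both states
```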

We will now provide truth-theoretical conditions for the satisfiability of a formula. Given a model M, a state \( s \in S \), a context \( c \in Cntx \) and a rational number b in the range [0, 1], the satisfiability conditions are defined as follows:

  • \( \left( {M, \, s, \, c} \right)\,{ \vDash }p\quad \;\;\,\,\text{iff }s \in\uppi^{{\text{ext}}} \left( p \right) \);

  • \( \left( {M,s,c} \right)\,{ \vDash }\neg {\upvarphi }\quad \,\,\text{iff }s \in\uppi^{\text{ext}} ( {\neg {\upvarphi }} ) \);

  • \( \left( {M,s,c} \right)\,{ \vDash} \upvarphi \wedge\uppsi\;\,\text{iff }s \in\uppi^{\text{ext}} ( {{\upvarphi } \wedge\uppsi}) \);

  • \( \left( {M,s,c} \right)\,{ \vDash }K_{i} ( {\upvarphi } )\quad \quad \text{iff }s \in\uppi^{\text{ext}} ( {K_{i} ( {\upvarphi } )}) \);

  • \( \left( {M,s,c} \right)\,{ \vDash}\upomega_{i} ( {\upvarphi } ) \ge b\quad \text{iff } s \in\uppi^{\text{ext}} ( {\upomega_{i} ( {\upvarphi } ) \ge b}) \).

Given the above satisfiability conditions, we define four concepts of validity. A formula is context-valid with respect to a structure if, and only if, it is satisfied in every state of the system, once a specific context is given. A formula is state-valid with respect to a structure if, and only if, it is satisfied in every context, once a specific state is given. A formula is model-valid if, and only if, it is both context-valid and state-valid. A formula is fully-valid if, and only if, it is model-valid for all possible models.

The language presented above is sufficient to reason about knowledge and trust and their interrelationship. It provides a good way to compute pre-trust values and can therefore be employed as the trust computing component of a computational trust model. We will now show how it is possible to use this language to feed the trust manipulation component of Jøsang’s Subjective Logic.

4.3 From Knowledge to Trust: Pre-trust Computations

The aim of our pre-trust computation is to obtain the three distinct components of Subjective Logic’s opinions. Such components are, respectively, belief, disbelief and uncertainty. We will now show that obtaining those three components is straightforward in our system, once our semantical structure is given. We start by specifying the three opinion components: “agent i believes in proposition p” (symbolically \( b_{i} \left( p \right) \)) means that agent i, the trustor, believes, to a given degree, in the truth of proposition p; “agent i disbelieves in proposition p” (symbolically \( d_{i} \left( p \right) \)) means that agent i disbelieves, to a given degree, in the truth of proposition p; finally, “agent i is uncertain about proposition p” (symbolically \( u_{i} \left( p \right) \)) means that agent i does not possess any relevant information on whether or not to trust the proposition p. In our case, it is possible to connect the amount of information with the cardinality of the set attributed by \( \uppi^{{\text{ext}}} \) to the simplified CNF of the known formulas. Specifically, the smaller the cardinality, the higher the amount of information possessed, and vice versa.

The three components are obtained in the following way:

  1. We start by considering the trust relevance space T corresponding to a given context and a given agent for the formula p.

  2. Assuming the actual state is \( s \in S \), we take the formulas \( \uppsi \) s.t. \( s \in \uppi^{{\text{ext}}} (K_{i} (\uppsi)) \) and \( \uppsi \) does not contain occurrences of trust formulas.

  3. We take the conjunction of the formulas identified in step 2 and transform it into the equivalent simplified Conjunctive Normal Form (CNF). We label this formula with \( \Phi \).

  4. We compute \( \uppi^{\text{ext}} (\Phi ) \).

  5. For each state \( s \in\uppi^{\text{ext}} (\Phi) \), we compute \( \uptau_{{\left( {i, \, c, \, p} \right)}} \left( s \right) \).

  6. We identify the maximum and the minimum value among the results of step 5.

  7. \( b_{i} \left( p \right) \) is equal to the minimum value identified; \( d_{i} \left( p \right) \) is equal to 1 minus the maximum value identified; \( u_{i} \left( p \right) \) is equal to the difference between the maximum and the minimum values.

The three components so defined form the basis of Subjective Logic’s opinions and can therefore be manipulated by Jøsang’s model to obtain further trust values. We believe that augmenting Subjective Logic with the trust computing component that results from the interpretational structure of the language we proposed is an improvement over the original model presented by Jøsang.
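A minimal sketch of steps 5-7, again using the semantic shortcut that the extension of \( \Phi \) is the set of states the agent cannot distinguish from the actual one (names and example values are illustrative):

```python
from typing import Dict, FrozenSet, Set, Tuple

State = FrozenSet[str]

def tau(state: State, mu: Dict[str, float]) -> float:
    """Ideal trust in a state: sum of the weights of the relevant atoms it satisfies."""
    return sum(w for p, w in mu.items() if p in state)

def opinion_components(possible_states: Set[State],
                       mu: Dict[str, float]) -> Tuple[float, float, float]:
    """Belief, disbelief and uncertainty for p: the minimum ideal trust, 1 minus the
    maximum ideal trust, and the gap between maximum and minimum, respectively."""
    values = [tau(s, mu) for s in possible_states]
    return min(values), 1 - max(values), max(values) - min(values)

# The agent cannot rule out a state with ideal trust 1.0 and one with 0.4.
print(opinion_components({frozenset({"p2", "p4"}), frozenset({"p4"})},
                         {"p2": 0.6, "p4": 0.4}))  # (0.4, 0.0, 0.6)
```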

4.4 Summing Up

We will now explain the intuitive idea behind the semantical structure we just presented and why we believe this structure is able to capture formally the traditional notion of trust. We will do so by describing how each component of the system contributes to the generation of trust values.

In our language, we take propositions as pieces of information. Each proposition tells us something about the system and, in doing so, helps us in determining the state of the system. The set S of possible states includes, initially, all the possible combinations of propositional constants: this is equivalent to a setting of complete ignorance, i.e. the system can be in every possible state. On top of this initial setting we start constructing ideal settings. To do so, we first identify which information is relevant for trust in a formula in each context (i.e. evaluation scenario): this role is fulfilled by the structure T, which takes formulas of the language and returns the relevant information for such formulas. Obviously, this step is subjective in nature, and this is the reason the X set depends both on agents and contexts: each agent might consider different information as relevant in different contexts, and therefore there is a different X set for each couple (i, c). However, determining what is relevant is not sufficient. We are dealing with a formal version of trust, and this makes it necessary for trust to be somehow measurable. For this reason, we have the \( \upmu \) function, which assigns measures to the relevant information, telling us how much each piece of information contributes to the trust in a given proposition. Again, this measure is subjective in nature and therefore must depend both on agents and contexts.

Once the T structure is obtained, we can compute trust values for ideal situations. This is the role of the \( \uptau \) function. What a \( \uptau \) function does is take a state and determine the trust value of the formula in that state, assuming the agent can determine univocally that that state is the only possible one. To this extent, the value of the \( \uptau \) function and that of the \( \upmu^{{\text{ext}}} \) function are the same, if the formula to which \( \upmu^{{\text{ext}}} \) is applied univocally identifies the state to which \( \uptau \) is applied. As we said, though, this is an ideal situation and often, in the real world, agents do not possess enough information to univocally single out one state over the others. For this reason, we introduce the function \( \upmu^{{\text{ext}}} \), which tells us how much trust must be placed in a given formula when that formula identifies a subset of S. This is the reason \( \upmu^{{\text{ext}}} \) depends on the function \( \uppi^{{\text{ext}}} \) (i.e. the function that identifies the states compatible with the information carried by a given formula).

One possible critique is that we take the average of the ideal trust values of the remaining possible states and that this is not justified, since we might want to take the minimum or the maximum value among the ones available. To justify our choice, we rely on the principle of sufficient reason: if an agent has no sufficient reason for preferring one state to another, he should attribute to each state the same probability. Since we are assuming that agents cannot determine which is the actual state among all the ones compatible with their information (if they could determine the actual state, we would return to the reflections on ideal settings), they have no way of preferring one trust value over the others. Note that this might lead, in some scenarios, to trusting a formula in a situation where there shouldn't be trust and to distrusting a formula in a situation where there should be trust. We believe that this is not fully problematic, since the traditional concept of trust is open to the same issues, and it might therefore be that the problem is inherently connected with trust and not specifically with our formalism. Nonetheless, it is possible to change the computation of \( \upmu^{{\text{ext}}} \) to account for optimistic and pessimistic attitudes on the part of the agent. Note, however, that this would only avoid one part of the problem of misplaced trust by enlarging the other part (e.g. an optimistic approach would avoid the possibility of not trusting when trust should be warranted, at the expense of having many more scenarios in which the agent trusts when trust shouldn't be warranted). For these reasons, at least at the theoretical level of this work, we prefer using the average.
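A small sketch of the three attitudes just discussed, with the averaging choice the paper adopts (the values are illustrative):

```python
def aggregate(ideal_trust_values: list[float], attitude: str = "average") -> float:
    """Trust derived from the ideal trust values of the states still considered
    possible: average (the paper's choice, via the principle of sufficient reason),
    or the optimistic / pessimistic variants mentioned above."""
    if attitude == "optimistic":
        return max(ideal_trust_values)
    if attitude == "pessimistic":
        return min(ideal_trust_values)
    return sum(ideal_trust_values) / len(ideal_trust_values)

values = [1.0, 1.0, 0.5, 0.5]
print(aggregate(values), aggregate(values, "optimistic"), aggregate(values, "pessimistic"))
# 0.75 1.0 0.5
```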

This exhausts the intuitive description of our formal language. We will now work through a concrete example, to show a practical application of our language to a realistic scenario.

4.5 Example

In this section we build a concrete example and show how our formal language (especially the semantical structure) helps in analyzing the example.

Assume we have two agents (Anne and Bob) and two contexts (Fixing_the_car and Preparing_dinner). Imagine we also have a third agent, Charlie, who is Anne’s father and a car mechanic. What we are trying to evaluate in this example is the trust Anne and Bob place in the proposition “Charlie will help me”. We will refer to the agents as A and B, to the contexts as F and P, and to the proposition “Charlie will help me” as \( {\upvarphi } \). Imagine the world is completely described by four propositional constants: p1: Charlie has a master’s degree in engineering; p2: Charlie has taken cooking classes; p3: Charlie offers a guarantee when he repairs a car; p4: Charlie is a meticulous person.

Given the four propositional constants, we have 16 possible states of the world, representing all the possible combinations of the four propositions. We will label the states with the letter s, with subscripts. The initial valuation \( \uppi \) of the propositional constants is the following: \( \uppi\left( {p_{1} } \right) = \left\{ {s_{1} , \, s_{2} , \, s_{3} , \, s_{4} , \, s_{5} , \, s_{6} , \, s_{7} , \, s_{8} } \right\} \); \( \uppi\left( {p_{2} } \right) = \left\{ {s_{1} , \, s_{2} , \, s_{3} , \, s_{4} , \, s_{9} , \, s_{10} , \, s_{11} , \, s_{12} } \right\} \); \( \uppi\left( {p_{3} } \right) = \left\{ {s_{1} , \, s_{2} , \, s_{5} , \, s_{6} , \, s_{9} , \, s_{10} , \, s_{13} , \, s_{14} } \right\} \); \( \uppi\left( {p_{4} } \right) = \left\{ {s_{1} , \, s_{3} , \, s_{5} , \, s_{7} , \, s_{9} , \, s_{11} , \, s_{13} , \, s_{15} } \right\} \).

The accessibility relations for each agent are the following: RA = {(s1, s3), (s3, s5), (s5, s7), (s2, s4), (s4, s6), (s6, s8), (s9, s11), (s11, s13), (s13, s15), (s10, s12), (s12, s14), (s14, s16)}; RB = {(s1, s2), (s2, s5), (s5, s6), (s6, s9), (s9, s10), (s10, s13), (s13, s14), (s3, s4), (s4, s7), (s7, s8), (s8, s11), (s11, s12), (s12, s15), (s15, s16)}Footnote 9.

We will now specify the trust relevance space for the three couples (A, F), (B, F) and (A, P) and the proposition \( {\upvarphi } \). We start with the X sets: \( \text{X}_{{(\text{A,F})}} ({\upvarphi }) = \left\{ {p_{4} } \right\} \); \( \text{X}_{{(\text{B,F})}} ({\upvarphi }) = \left\{ {p_{1}, p_{3}, p_{4} } \right\} \); \( \text{X}_{{(\text{A,P})}} ({\upvarphi }) = \left\{ {p_{2},p_{4} } \right\} \).

We now apply \( \upmu:\upmu_{{( {\text{A,F},{\upvarphi }} )}} ( {p_{4} } ) = 1 \); \( \upmu_{{(\text{B,F},{\upvarphi })}} ( {p_{1} } ) = 0.5 \), \( \upmu_{{({\text{B,F}},{\upvarphi })}} ( {p_{3} } ) = 0.4 \), \( \upmu_{{({\text{B,F}},{\upvarphi })}} ( {p_{4} } ) = 0.1 \); \( \upmu_{{({\text{A,P}},{\upvarphi })}} ({p_{2} } ) = 0.6 \), \( \upmu_{{({\text{A,P}},{\upvarphi })}} ({p_{4} } ) = 0.4 \).

We now have all the elements to assess how much trust an agent has in the proposition \( {\upvarphi } \) in each possible scenario. Let’s imagine we want to evaluate the two formulas \( \upomega_{A} ({\upvarphi }) \ge 0.5 \) and \( \upomega_{B} ({\upvarphi }) \ge 0.8 \). We will evaluate the first formula for two contexts in the same state, i.e. the contexts Fixing_the_car and Preparing_dinner in the state s7. We will evaluate the second formula for the same context in two different states, i.e. the context Fixing_the_car in the states s2 and s8.

Let us first determine whether \( ( {M, s_{7}, F} ){ \vDash \omega }_{A} ({\upvarphi }) \ge 0.5 \). Given the structure we presented above, it is possible to derive that in every state Anne knows whether or not her father has a master’s degree in engineering and whether or not he is meticulous. Therefore, she knows those facts also in s7, meaning that s7 is contained both in the set \( \uppi^{{\text{ext}}} \left( {K_{A} \left( {p_{1} } \right)} \right) \) and in \( \uppi^{{\text{ext}}} \left( {K_{A} \left( {p_{4} } \right)} \right) \)Footnote 10. We have therefore identified the two propositions p1 and p4. The simplified CNF of the conjunction of those two propositions is just their conjunction \( p_{1} \wedge p_{4} \). We now compute the \( \upmu_{(A,F,\varphi )}^{{\text{ext}}} \) function of this proposition:

  • $$ \upmu_{{\left( {A,F,\varphi } \right)}}^{{\text{ext}}} \left( {p_{1} \wedge p_{4} } \right) = \left( {\sum {\uptau_{{\left( {A,F,\varphi } \right)}} \left( s \right),\,{\text{s}} . {\text{t}} .\text{ }s \in \,\uppi^{{\text{ext}}} } \left( {p_{1} \wedge p_{4} } \right)} \right)/\left| {\uppi^{{\text{ext}}} \left( {p_{1} \wedge p_{4} } \right)} \right| $$

Note that \( \uppi^{{\text{ext}}} (p_{1} \wedge p_{4} ) = \left\{ {s_{1} , \, s_{3} , \, s_{5} , \, s_{7} } \right\} \) and therefore \( \left| {\uppi^{{\text{ext}}} (p_{1} \wedge p_{4} )} \right| \) = 4. The \( \uptau_{(A,F,\varphi )} \left( s \right) \) are: \( \uptau_{(A,F,\varphi )} \left( {s_{1} } \right) =\uptau_{(A,F,\varphi )} \left( {s_{3} } \right) =\uptau_{(A,F,\varphi )} \left( {s_{5} } \right) =\uptau_{(A,F,\varphi )} \left( {s_{7} } \right) = 1 \). This means that \( \upmu_{(A,F,\varphi )}^{{\text{ext}}} (p_{1} \wedge p_{4} ) = 4/4 = 1 \). Therefore, \( ( {M,s_{7},F} )\,{ \vDash}\,\upomega_{A} ({\upvarphi }) \ge 0.5 \) holds. This should be expected, because for Anne to trust that Charlie will help in the context of fixing the car the only relevant fact is that Charlie is meticulous. Since she knows in s7 that Charlie is meticulous, she has full trust in the fact that Charlie will help.

Let us now check \( ( {M,s_{7}, P} )\,{ \vDash}\,\upomega_{A} ({\upvarphi }) \ge 0.5 \). Given that the knowledge Anne has in state s7 is not affected by the context, part of the procedure is like the previous one; we therefore must compute:

  • $$ \upmu_{(A,P,\varphi )}^{{\text{ext}}} \left( {p_{1} \wedge p_{4} } \right) = \left( {\sum\uptau_{(A,P,\varphi )} \left( s \right),{\text{ s}} . {\text{t}} .\,s \in\uppi^{{\text{ext}}} \left( {p_{1} \wedge p_{4} } \right)} \right)/\left| {\uppi^{{\text{ext}}} \left( {p_{1} \wedge p_{4} } \right)} \right| $$

The \( \uptau_{(A,P,\varphi )} \left( s \right) \) are: \( \uptau_{(A,P,\varphi )} \left( {s_{1} } \right) =\uptau_{(A,P,\varphi )} \left( {s_{3} } \right) = 1 \) and \( \uptau_{(A,P,\varphi )} \left( {s_{5} } \right) =\uptau_{(A,P,\varphi )} \left( {s_{7} } \right) = 0.4 \). This means that \( \upmu_{(A,P,\varphi )}^{{\text{ext}}} (p_{1} \wedge p_{4} ) = 2.8/4 = 0.7 \). Therefore \( ( {M,s_{7} ,P} )\,{\vDash}\,\upomega_{A} ({\upvarphi }) \ge 0.5 \) holds. This result should be expected, because for Anne to trust that Charlie will help in the context of preparing the dinner the relevant facts are that Charlie took cooking classes and that he is meticulous. Since Anne knows that Charlie is meticulous, but does not know whether he took cooking classes, her trust rises above the indifference threshold without reaching full trust. We now check both \( ( {M,s_{2} ,F} )\,{\vDash}\,\upomega_{B} ({\upvarphi }) \ge 0.8 \) and \( ({M,s_{8}, F})\,{\vDash}\,\upomega_{B} ({\upvarphi }) \ge 0.8 \).

Note that in our structure Bob knows whether p3 holds in each state (i.e. if it holds he knows it, and if it does not hold he knows its negation). This time we will only compute the values, without giving discursive explanations; the reader can check autonomously that the computations are correct. Since p3 holds in s2, Bob knows it. We therefore compute \( \upmu_{(B,F,\varphi )}^{{\text{ext}}} \left( {p_{3} } \right) = \left( {\sum\uptau_{(B,F,\varphi )} \left( s \right)\text{,}\,{\text{s}} . {\text{t}} .\,s \in\uppi^{{\text{ext}}} \left( {p_{3} } \right)} \right)/\left| {\uppi^{{\text{ext}}} \left( {p_{3} } \right)} \right| = 0.7 \). Therefore \( ( {M,s_{2} ,F} )\,{\vDash}\,\upomega_{B} ({\upvarphi }) \ge 0.8 \) does not hold. Finally, in s8, p3 does not hold and therefore Bob knows \( \neg p_{3} \). We then must compute \( \upmu_{(B,F,\varphi )}^{{\text{ext}}} (\neg p_{3} ) = \left( {\sum\uptau_{(B,F,\varphi )} \left( s \right),\;{\text{s}} . {\text{t}} .\;s \in\uppi^{{\text{ext}}} (\neg p_{3} )} \right)/\left| {\uppi^{{\text{ext}}} (\neg p_{3} )} \right| = 0.3 \). Therefore, \( ( {M,s_{8} ,F} )\,{\vDash}\upomega_{B} ({\upvarphi }) \ge 0.8 \) does not hold.

When evaluating the opinion \( \omega_{A} \left( \varphi \right) \) in the context fixing_the_car and assuming the actual state is s7, we first determine \( \Phi \), which is \( p_{1} \wedge p_{4} \). We then compute \( \uppi^{{\text{ext}}} (p_{1} \wedge p_{4} ) \), which is {s1, s3, s5, s7}. The next step is that of computing the various \( \uptau_{(A,F,\varphi )} \left( s \right) \), i.e. \( \uptau_{(A,F,\varphi )} \left( {s_{1} } \right) =\uptau_{(A,F,\varphi )} \left( {s_{3} } \right) =\uptau_{(A,F,\varphi )} \left( {s_{5} } \right) =\uptau_{(A,F,\varphi )} \left( {s_{7} } \right) = 1 \). We can finally determine the various components of the opinion \( \omega_{A} \left( \varphi \right) \): \( b_{A} \left( \varphi \right) \) is equal to 1, i.e. the minimum among the values of the \( \uptau_{(A,F,\varphi )} \left( s \right) \); \( d_{A} \left( \varphi \right) \) is equal to 0, i.e. 1 minus the maximum among the values of the \( \uptau_{(A,F,\varphi )} \left( s \right) \); \( u_{A} \left( \varphi \right) \) is equal to 0, i.e. the maximum minus the minimum. We now give the results for the other three examples we examined, without providing the computations. For the opinion \( \omega_{A} \left( \varphi \right) \) in the context preparing_dinner and assuming the actual state is s7: \( b_{A} \left( \varphi \right) \) is equal to 0.4; \( d_{A} \left( \varphi \right) \) is equal to 0; \( u_{A} \left( \varphi \right) \) is equal to 0.6. For the opinion \( \omega_{B} \left( \varphi \right) \) in the context fixing_the_car and assuming the actual state is s2: \( b_{B} \left( \varphi \right) \) is equal to 0.4; \( d_{B} \left( \varphi \right) \) is equal to 0; \( u_{B} \left( \varphi \right) \) is equal to 0.6. For the opinion \( \omega_{B} \left( \varphi \right) \) in the context fixing_the_car and assuming the actual state is s8: \( b_{B} \left( \varphi \right) \) is equal to 0; \( d_{B} \left( \varphi \right) \) is equal to 0.4; \( u_{B} \left( \varphi \right) \) is equal to 0.6. This last computation exhausts our example.
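The whole example can be reproduced with a short, self-contained sketch, under the simplifying assumptions already used above: a state is identified with the set of atoms holding in it, and an agent's equivalence class consists of the states that agree with the actual one on the atoms the agent always knows (p1 and p4 for Anne, p3 for Bob). Names are illustrative; the printed triples match the opinion components computed in this section.

```python
from itertools import combinations

ATOMS = ["p1", "p2", "p3", "p4"]
# The 16 states of the example: every combination of the four constants.
STATES = [frozenset(c) for r in range(5) for c in combinations(ATOMS, r)]

def tau(state, mu):
    """Ideal trust in a state: sum of the weights of the relevant atoms it satisfies."""
    return sum(w for p, w in mu.items() if p in state)

def opinion(actual_state, known_atoms, mu):
    """Belief, disbelief and uncertainty, computed over the states the agent cannot
    distinguish from the actual one (those agreeing with it on the known atoms)."""
    cls = [s for s in STATES
           if all((p in s) == (p in actual_state) for p in known_atoms)]
    values = [tau(s, mu) for s in cls]
    return min(values), 1 - max(values), max(values) - min(values)

MU = {("A", "F"): {"p4": 1.0},
      ("A", "P"): {"p2": 0.6, "p4": 0.4},
      ("B", "F"): {"p1": 0.5, "p3": 0.4, "p4": 0.1}}
KNOWS = {"A": {"p1", "p4"}, "B": {"p3"}}  # what each agent knows in every state
s7, s2, s8 = frozenset({"p1", "p4"}), frozenset({"p1", "p2", "p3"}), frozenset({"p1"})

print(opinion(s7, KNOWS["A"], MU[("A", "F")]))  # (1.0, 0.0, 0.0)
print(opinion(s7, KNOWS["A"], MU[("A", "P")]))  # (0.4, 0.0, 0.6)
print(opinion(s2, KNOWS["B"], MU[("B", "F")]))  # (0.4, 0.0, 0.6)
print(opinion(s8, KNOWS["B"], MU[("B", "F")]))  # (0, 0.4, 0.6)
```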

5 Conclusion and Future Work

In this paper, we first discussed the distinction between the trust computing component and the trust manipulation component of computational trust models. We then showed that classical computational models are often focused only on one of the two components. Specifically, we showed that Audun Jøsang’s Subjective Logic, one of the best-suited models for trust manipulation, lacks a proper trust computing mechanism. To overcome this downside of Subjective Logic, we proposed a logical language that can be employed to reason about knowledge and trust. Moreover, we showed how to move from our logical language to Subjective Logic. This, we believe, is an improvement for Subjective Logic and, more generally, for the understanding of computational trust. In the future, the aim is to provide an actual implementation of the language and the ideas contained in this paper. Moreover, we believe it is possible to provide a dynamic version of the language, which can take into consideration the flow of information and time-related concerns. A final and interesting research direction is that of comparing the language proposed with other formal structures employed to represent uncertainty.