
1 Introduction

A trust relationship is one of the important reciprocal relationships in our society and in cyberspace. Many reciprocal relationships necessarily concern two parties, e.g., the parent-child relationship, kinship, friendship, cooperative relationships, complementary relationships, trade relationships, buying-and-selling relationships, and so on [6]. In particular, the trust relationship is the basis of communication among agents (human to human, human to system, and system to system) and the basis of the agents' decision-making.

Trust reasoning is an indispensable process for establishing trustworthy and secure communication in open and decentralized multi-agent systems. In such systems it is difficult to know, before communicating with an agent, whether that agent can be trusted; nevertheless, to establish trustworthy and secure communication, e.g., in a public key infrastructure (PKI), we want to know whether the agent can be trusted. We therefore have to calculate the degree of trust of the target agent from already known facts, hypotheses, and observed data. Trust reasoning is the process of calculating the degree of trust of target agents and of the messages that come from other agents.

Although reciprocal logic [6] is a promising candidate for a logic system underlying trust reasoning, the current reciprocal logic cannot deal with several trust properties. Cheng proposed reciprocal logic [6] as a logic system underlying reasoning about such reciprocal relationships and formalized in it trust relationships between agents, and between agents and organizations. On the other hand, there are various trust properties for trust relationships, i.e., “an agent \(\alpha \) trusts another agent \(\beta \) about a message from \(\beta \) in some property,” where the property is sincerity, validity, vigilance, credibility, cooperativity, completeness, and so on [10, 13,14,15,16]. These trust properties concern not only trust relationships between agents but also trust relationships between agents and the messages communicated by other agents. Such trust relationships between agents and messages cannot be described in the current reciprocal logic.

An extension of reciprocal logic is therefore needed to deal with trust relationships between agents and messages that involve trust properties. We previously proposed such an extension [4]. In that extension, messages that come from other agents are regarded as countable objects and are represented as individual constants from the viewpoint of predicate logic. On the other hand, Demolombe [8] defined several trust properties and formalized them; he regarded messages from agents as beliefs of those agents and represented them as propositions (logical formulas) from the viewpoint of predicate logic. From the viewpoint of expressive power, Demolombe's approach is better than the approach of our previous extension, so that extension is not sufficient.

This paper presents a new extension of reciprocal logic that can deal with trust properties and shows a case study of trust reasoning based on the proposed extension in PKI. Following Demolombe's approach, we introduce two modal operators, \( Bel \) and \( Inf \), into reciprocal logic to represent trust relationships between agents and messages from other agents. We also add several axioms to reciprocal logic. One of the reasons why we want to identify trustworthy agents is to reduce the effort of checking whether the messages from the target agents are correct, in other words, to filter messages. Under this consideration, we define axioms describing how an agent \(\alpha \) deals with a message from an agent \(\beta \) when \(\alpha \) trusts \(\beta \) in some trust property. Finally, we conduct a case study of trust reasoning based on the proposed extension in PKI. The case study shows that the proposed extension can deal with several trust properties. Thus, we conclude that the proposed extension is a promising candidate for a logic system underlying trust reasoning. The rest of the paper is organized as follows: Sect. 2 presents a summary of survey results concerning trust relationships and trust properties, the limitations of reciprocal logic, and our previous extension. Section 3 presents the new extension of reciprocal logic. In Sect. 4, we show a case study of trust reasoning based on the new extension in PKI. Some concluding remarks are given in Sect. 5.

2 Related Works

2.1 Trust Relationship and Trust Properties

Trust is a common phenomenon and an essential element in any relationship that concerns two parties. When we consider a trust relationship, these two parties are usually regarded as a trustor and a trustee, i.e., the trustee provides trustworthy messages so that the trustor comes to trust the trustee. Trust relationships between parties become more tractable with the aid of trust properties.

Many previous works focus on trust properties. The authors of [9, 11] target a particular property and focus on one dimension only, whereas the authors of [10, 16] deal with trust in reliability and credibility collectively. The authors of [14, 15] provide a classification of trust properties from the viewpoints of the trustor and the trustee and regard these properties as essential to the establishment of a trust relationship.

In the context of trust, not all information from another agent can be taken as a true message; rather, that an agent \(\alpha \) trusts another agent \(\beta \) with respect to some property means that \(\alpha \) believes that \(\beta \) satisfies this property. Demolombe [8] defined several trust properties as follows.

  • Sincerity: An agent \(\alpha \) trusts in the sincerity of an agent \(\beta \) if, whenever \(\beta \) informs \(\alpha \) about a proposition p, \(\beta \) believes p.

  • Validity: An agent \(\alpha \) trusts in the validity of an agent \(\beta \) if, whenever \(\beta \) informs \(\alpha \) about a proposition p, p is the case.

  • Completeness: An agent \(\alpha \) trusts in the completeness of an agent \(\beta \) if, whenever p is the case, \(\beta \) informs \(\alpha \) about p.

  • Cooperativity: An agent \(\alpha \) trusts in the cooperativity of an agent \(\beta \) if, whenever \(\beta \) believes p, \(\beta \) informs \(\alpha \) about p.

  • Credibility: An agent \(\alpha \) trusts in the credibility of an agent \(\beta \) if, whenever \(\beta \) believes p, p is the case.

  • Vigilance: An agent \(\alpha \) trusts in the vigilance of an agent \(\beta \) if, whenever p is the case, \(\beta \) believes p.

Demolombe also provided a formal definition of the above properties. His formalization is based on classical mathematical logic.
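Read schematically, and written in the notation introduced later in Sect. 3 rather than quoting [8] verbatim, the six definitions amount to the following conditionals, where \( Bel _{\beta }(p)\) means that \(\beta \) believes p and \( Inf _{\beta , \alpha }(p)\) means that \(\beta \) has informed \(\alpha \) about p:

  • Sincerity: \( Inf _{\beta , \alpha }(p) \Rightarrow Bel _{\beta }(p)\)

  • Validity: \( Inf _{\beta , \alpha }(p) \Rightarrow p\)

  • Completeness: \(p \Rightarrow Inf _{\beta , \alpha }(p)\)

  • Cooperativity: \( Bel _{\beta }(p) \Rightarrow Inf _{\beta , \alpha }(p)\)

  • Credibility: \( Bel _{\beta }(p) \Rightarrow p\)

  • Vigilance: \(p \Rightarrow Bel _{\beta }(p)\)

These conditionals reappear as the consequents of the axioms ERcL1 to ERcL6 in Sect. 3, where trusting \(\beta \) in the corresponding property is what licenses the conditional.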

2.2 Reciprocal Logic and Its Extension

Reciprocal logic was proposed by Cheng [6] as a logic system underlying reasoning about reciprocal relationships. Classical mathematical logic and its various conservative extensions are not suitable as logic systems underlying reasoning because they suffer from the paradoxes of implication [2, 3]. Strong relevant logic rejects those paradoxes and is considered the universal basis of various applied logics for knowledge representation and reasoning [5, 7]. Thus, strong relevant logic and its conservative extensions are candidates for logic systems underlying reasoning. Reciprocal logic is one of the conservative extensions of strong relevant logic and deals with various reciprocal relationships. It provides primitive predicates representing trust relationships between one agent and another, and between an agent and an organization, defined predicates based on the primitive predicates, and several axioms involving those predicates [6]. Let \( pe _1\), \( pe _2\), and \( pe _3\) be individual variables representing agents, and let \(o_1\) and \(o_2\) be individual variables representing organizations. The primitive predicates are as follows:

  • \( TR ( pe _{1}, pe _{2})\): \( pe _{1}\) trusts \( pe _{2}\).

  • \(B( pe _1, o_1)\): agent \( pe _{1}\) belongs to organization \(o_1\).

Defined predicates based on the above primitive predicates are as follows.

  • \( NTR ( pe _{1}, pe _{2})=_{df} \lnot ( TR ( pe _{1}, pe _{2}))\) (\( NTR ( pe _{1}, pe _{2})\) means \( pe _{1}\) does not trust \( pe _{2}\)).

  • \( TREO ( pe _{1}, pe _{2}) =_{df} TR ( pe _{1}, pe _{2}) \wedge ( TR ( pe _{2}, pe _{1}))\) (\( TREO ( pe _{1}, pe _{2})\) means \( pe _{1}\) and \( pe _{2}\) trust each other.)

  • \( ITR ( pe _{1}, pe _{2}, pe _{3}) =_{df} \lnot ( TR ( pe _{1}, pe _{2}) \wedge TR ( pe _{1}, pe _{3}))\) (\( ITR ( pe _{1}, pe _{2}, pe _{3})\) means \( pe _{1}\) does not trust both \( pe _{2}\) and \( pe _{3}\) (Incompatibility))

  • \( XTR ( pe _{1}, pe _{2}, pe _{3}) =_{df} ( TR ( pe _{1}, pe _{2}) \vee TR ( pe _{1}, pe _{3}))\wedge ( NTR ( pe _1, pe _2) \vee NTR ( pe _1, pe _3))\) (\( XTR ( pe _{1}, pe _{2}, pe _{3})\) means \( pe _{1}\) trusts either \( pe _{2}\) or \( pe _{3}\) but not both (exclusive disjunction)).

  • \( JTR ( pe _{1}, pe _{2}, pe _{3}) =_{df} \lnot ( TR ( pe _{1}, pe _{2}) \vee TR ( pe _{1}, pe _{3}))\) (\( JTR ( pe _{1}, pe _{2}, pe _{3})\) means \( pe _{1}\) trusts neither \( pe _{2}\) nor \( pe _{3}\) (joint denial)).

  • \( TTR ( pe _{1}, pe _{2}, pe _{3}) =_{df} ( TR ( pe _{1}, pe _{2}) \wedge TR ( pe _{2}, pe _{3})) \Rightarrow TR ( pe _{1}, pe _{3})\)

    (\( TTR ( pe _{1}, pe _{2}, pe _{3})\) means \( pe _{1}\) trusts \( pe _{3}\) if \( pe _{1}\) trusts \( pe _{2}\) and \( pe _{2}\) trusts \( pe _{3}\)).

  • \( CTR ( pe _{1}, pe _{2}, pe _{3}) =_{df} TR ( pe _{1}, pe _{3}) \Rightarrow TR ( pe _{2}, pe _{3})\) (\( CTR ( pe _{1}, pe _{2}, pe _{3})\) means \( pe _{2}\) trusts \( pe _{3}\) if \( pe _{1}\) trusts \( pe _{3}\)).

  • \( NCTR ( pe _{1}, pe _{2}, pe _{3}) =_{df} \lnot TR ( pe _{1}, pe _{3}) \Rightarrow TR ( pe _{2}, pe _{3})\)

    (\( NCTR ( pe _{1}, pe _{2}, pe _{3})\) means \( pe _{2}\) trusts \( pe _{3}\) if \( pe _{1}\) does not trust \( pe _{3}\)).

  • \( CNTR ( pe _{1}, pe _{2}, pe _{3}) =_{df} \lnot TR ( pe _{1}, pe _{3}) \Rightarrow \lnot TR ( pe _{2}, pe _{3})\)

    (\( CNTR ( pe _{1}, pe _{2}, pe _{3})\) means \( pe _{2}\) does not trust \( pe _{3}\) if \( pe _{1}\) does not trust \( pe _{3}\)).

  • \( TRpo ( pe _{1}, o_{1}) =_{df} \forall pe _{2}(B( pe _{2}, o_{1}) \wedge TR ( pe _{1}, pe _{2}))\) (\( TRpo ( pe _{1}, o_{1})\) means \( pe _{1}\) trusts \(o_{1}\)).

  • \( NTRpo ( pe _{1}, o_{1}) =_{df} \forall pe _{2}(B( pe _{2}, o_{1}) \wedge NTR ( pe _{1}, pe _{2}))\) (\( NTRpo ( pe _{1}, o_{1})\) means \( pe _{1}\) does not trust \(o_{1}\)).

  • \( TRop (o_{1}, pe _{1}) =_{df} \forall pe _{2}(B( pe _{2}, o_{1}) \wedge TR ( pe _{2}, pe _{1}))\) (\( TRop (o_{1}, pe _{1})\) means \(o_{1}\) trusts \( pe _{1}\)).

  • \( NTRop (o_{1}, pe _{1}) =_{df} \forall pe _{2}(B( pe _{2}, o_{1}) \wedge NTR ( pe _{2}, pe _{1}))\) (\( NTRop (o_{1}, pe _{1})\) means \(o_{1}\) does not trust \( pe _{1}\)).

  • \( TRoo (o_{1}, o_{2}) =_{df} \forall pe _{1} \forall pe _{2}(B( pe _{1}, o_{1}) \wedge B( pe _{2}, o_{2}) \wedge TR ( pe _{1}, pe _{2}))\) (\( TRoo (o_{1},o_{2})\) means \(o_{1}\) trusts \(o_{2}\)).

  • \( NTRoo (o_{1}, o_{2}) =_{df} \forall pe _{1} \forall pe _{2}(B( pe _{1}, o_{1}) \wedge B( pe _{2}, o_{2}) \wedge NTR ( pe _{1}, pe _{2}))\)

    (\( NTRoo (o_{1}, o_{2})\) means \(o_{1}\) does not trust \(o_{2}\)).

From the above definitions, we can see that reciprocal logic focuses only on trust relationships between one agent and another, and between an agent and an organization.
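As a simple illustration of how these defined predicates are used in reasoning (this instance is our own, not taken from [6]): suppose that, for particular agents a, b, and c, \( TTR (a, b, c)\), \( TR (a, b)\), and \( TR (b, c)\) all hold. Then:

  1. \( TR (a, b) \wedge TR (b, c)\) [from \( TR (a, b)\) and \( TR (b, c)\) by Adjunction]

  2. \( TR (a, c)\) [from 1 and \( TTR (a, b, c)\) by Modus Ponens]

Note that axiom TR5 below states that such transitivity does not hold unconditionally; it is available only when \( TTR (a, b, c)\) itself is among the premises.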

The axioms of reciprocal logic are as follows:

  1. TR1:

    \(\lnot (\forall pe _{1} \forall pe _{2} ( TR ( pe _{1}, pe _{2}) \Rightarrow TR ( pe _{2}, pe _{1})))\)

  2. TR2:

    \(\lnot (\forall pe _{1} \forall o_{1} ( TRpo ( pe _{1},o_{1}) \Rightarrow TRop (o_{1}, pe _{1}))) \)

  3. TR3:

    \(\lnot (\forall o_{1} \forall pe _{1} ( TRop (o_{1}, pe _{1}) \Rightarrow TRpo ( pe _{1},o_{1}))) \)

  4. TR4:

    \(\lnot (\forall o_{1} \forall o_{2} ( TRoo (o_{1},o_{2}) \Rightarrow TRoo (o_{2},o_{1}))) \)

  5. TR5:

    \(\lnot (\forall pe _{1} \forall pe _{2} \forall pe _{3} ( TR ( pe _{1}, pe _{2}) \wedge TR ( pe _{2}, pe _{3}) \Rightarrow TR ( pe _{1}, pe _{3}))) \)

  6. TR6:

    \(\lnot (\forall pe _{1} \forall pe _{2} \forall o_{1} ( TRpo ( pe _{1},o_{1}) \wedge TRop (o_{1}, pe _{2}) \Rightarrow TR ( pe _{1}, pe _{2}))) \)

  7. TR7:

    \(\lnot (\forall pe _{1} \forall pe _{2} \forall o_{1} ( TRop (o_{1}, pe _{1}) \wedge TR ( pe _{1}, pe _{2}) \Rightarrow TRop (o_{1}, pe _{2}))) \)

  8. TR8:

    \(\lnot (\forall o_{1} \forall o_{2} \forall o_{3} ( TRoo (o_{1},o_{2}) \wedge TRoo (o_{2},o_{3}) \Rightarrow TRoo (o_{1},o_{3})))\)

\(TrTcQ =_{df} TcQ + \{\text{ TR1 }, \ldots , \text{ TR8 }\}\), \(TrEcQ =_{df} EcQ + \{\text{ TR1 }, \ldots , \text{ TR8 }\}\), and \(TrRcQ =_{df} RcQ + \{\text{ TR1 }, \ldots , \text{ TR8 }\}\) are the minimal logic systems of reciprocal logic, where TcQ, EcQ, and RcQ are logic systems of first-order predicate strong relevant logic [6].

We proposed an extension of reciprocal logic to deal with trust properties [4]. The current reciprocal logic cannot deal with the trust properties explained in Sect. 2.1 because it provides no way to represent the trust relationship between an agent and a message that comes from another agent. We therefore introduced into reciprocal logic several predicates representing the trust relationship between an agent and a message. In that extension, messages that come from other agents are regarded as countable objects and are represented as individual constants from the viewpoint of predicate logic. However, the extension is not sufficient to represent the trust properties explained in Sect. 2.1.

3 A New Extension of Reciprocal Logic

In the extension of reciprocal logic that we previously proposed for trust reasoning [4], messages that come from agents are regarded as countable objects (individual constants). From the viewpoint of applications of trust reasoning, however, messages from other agents should be regarded as propositions, as in Demolombe's logic system [8]. Thus, in the new extension we replace the representation of trust properties in the first extension with logical formulas in the style of Demolombe's logic system.

First, we add to reciprocal logic a predicate \( TR ( pe _1, pe _2, PROP )\), where \( pe _1\) and \( pe _2\) are agents and \( PROP \) is an individual constant representing a trust property: sincerity, validity, completeness, cooperativity, credibility, or vigilance. For example, “\( TR ( pe _1, pe _2, sincerity )\)” means “\( pe _1\) trusts \( pe _2\) in sincerity”. Note that “\( TR ( pe _1, pe _2, all )\)” means “\( pe _1\) trusts \( pe _2\) in all trust properties”, i.e., “\( TR ( pe _1, pe _2)\)” in reciprocal logic is the same as “\( TR ( pe _1, pe _2, all )\)” in our new extension.

Second, we introduce into reciprocal logic the two modal operators \( Bel _{i}(A)\) and \( Inf _{i, j}(A)\) used in Demolombe's logic system, in order to represent the trust relationship between agents and information that comes from other agents. The two modal operators follow the KD system of modal logic [8].

\( Bel _{i}(A)\)::

an agent i believes that a proposition A is true.

\( Inf _{i, j}(A)\)::

an agent i has informed an agent j about A.
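For reference, a KD modal operator is one that satisfies the K (distribution) schema and the D (consistency) schema. Spelled out for \( Bel \) (our restatement of standard modal logic, not a quotation from [8]), these schemata read:

  • K: \( Bel _{i}(A \Rightarrow B) \Rightarrow ( Bel _{i}(A) \Rightarrow Bel _{i}(B))\)

  • D: \( Bel _{i}(A) \Rightarrow \lnot Bel _{i}(\lnot A)\)

The K schema for \( Bel \) is exactly the axiom BEL added below; the analogous schemata for \( Inf _{i, j}\) are obtained by replacing \( Bel _{i}\) with \( Inf _{i, j}\).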

Finally, we add the following new axioms to reciprocal logic.

  1. ERcL1:

    \(\forall i \forall j ( TR (i, j, sincerity) \Rightarrow ( Inf _{j, i}(A) \Rightarrow Bel _{j}(A)))\)

  2. ERcL2:

    \(\forall i \forall j ( TR (i, j, validity) \Rightarrow ( Inf _{j, i}(A) \Rightarrow A))\)

  3. ERcL3:

    \(\forall i \forall j ( TR (i, j, vigilance) \Rightarrow (A \Rightarrow Bel _{j}(A)))\)

  4. ERcL4:

    \(\forall i \forall j ( TR (i, j, credibility) \Rightarrow ( Bel _{j}(A) \Rightarrow A))\)

  5. ERcL5:

    \(\forall i \forall j ( TR (i, j, cooperativity) \Rightarrow ( Bel _{j}(A) \Rightarrow Inf _{j, i}(A)))\)

  6. ERcL6:

    \(\forall i \forall j ( TR (i, j, completeness) \Rightarrow (A \Rightarrow Inf _{j, i}(A)))\)

  7. BEL:

    \(\forall i ( Bel _{i}(A \Rightarrow B) \Rightarrow ( Bel _{i}(A) \Rightarrow Bel _{i}(B)))\).

To summarize our new extension of reciprocal logic: let RcL be the set of all axioms of reciprocal logic; our new extension is then \(RcL \cup \{\text{ ERcL1 }, \ldots , \text{ ERcL6 }, \text{ BEL }\}\).
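As a small illustration of how the new axioms are used (this instance is our own; it is exactly the pattern applied in steps 1 and 2 of the case study in Sect. 4.3): suppose \( TR (\alpha , \beta , validity )\) holds and \(\beta \) has informed \(\alpha \) about a proposition p, i.e., \( Inf _{\beta , \alpha }(p)\). Then:

  1. \( TR (\alpha , \beta , validity ) \Rightarrow ( Inf _{\beta , \alpha }(p) \Rightarrow p)\) [instance of ERcL2]

  2. \( Inf _{\beta , \alpha }(p) \Rightarrow p\) [from \( TR (\alpha , \beta , validity )\) and 1 by Modus Ponens]

  3. \(p\) [from \( Inf _{\beta , \alpha }(p)\) and 2 by Modus Ponens]

In other words, once \(\alpha \) trusts \(\beta \) in validity, \(\alpha \) can accept any proposition that \(\beta \) informs it about, which is precisely the message-filtering behaviour motivated in Sect. 1.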

4 A Case Study of Trust Reasoning Based on New Extension in PKI

4.1 Scenario

We present a simple PKI scenario inspired by [12]. We formalize the scenario and apply the trust reasoning process based on the new extension.

Suppose that a certificate \(c_2\) is signed by the subject of a certificate \(c_1\) with the private key corresponding to the public key of \(c_1\). Agent \(e_1\) trusts the certificate \(c_1\) because \(c_1\) is informed by its parent agent. In PKI, we consider that every agent trusts its parent agent in its validity, i.e., \(\forall e ( TR (e, parent (e), validity ))\). Moreover, agent \(e_2\) informs agent \(e_1\) about certificate \(c_2\). We assume that agent \(e_1\) trusts agent \(e_2\) in its completeness, i.e., \( TR (e_1, e_2, completeness )\). Agent \(e_1\) does not yet trust certificate \(c_2\) but wishes to use it, so \(e_1\) needs to know whether the certificate \(c_2\) informed by agent \(e_2\) is valid or not. From the two trust relationships \( TR (e_1, parent (e_1), validity )\) and \( TR (e_1, e_2, completeness )\), we can conclude that the certificate \(c_2\) informed by agent \(e_2\) is valid, i.e., we can derive \( Inf _{e_2, e_1}( isValid (c_2))\).

4.2 Formalization

To formalize the above scenario, we define the following individual variables, constants, functions, and predicates.

  • Individual variables:

    • e: an agent

    • c, \(c'\): certificates

  • Individual constants:

    • \(e_1, e_2\): agents

    • \(c_1\), \(c_2\): certificates

    • \( today \): today's date

  • Functions:

    • I(c): Issuer of certificate c.

    • S(c): Subject of certificate c.

    • \( PK (c)\): Public key of c.

    • \( SK (c)\): Secret (private) key of c.

    • \( DS (c)\): Start date of c.

    • \( DE (c)\): End date of c.

    • \( Sig (c)\): Signature of c.

    • \( parent (e)\): The parent of agent e.

  • Predicates:

    • \( inCRL (c)\): c is in the certificate revocation list.

    • \( isValid (x)\): x is valid.

    • \( isSigned (x, k)\): x is message signed by key k.

    • \(x = y\): x is equal to y.

    • \(x \le y\): x is equal to or less than y.

    • \(x < y\): x is less than y.

In PKI, we can assume the following empirical theories.

  1. PKI1:

    \(\forall e ( TR (e, parent (e), validity ))\)

    (Any agent trusts its parent agent in validity).

  2. PKI2:

    \(\forall c (\exists c'( isValid (c') \wedge (I(c) = S(c')) \wedge isSigned (c, PK (c'))) \Rightarrow isValid ( Sig (c)))\)

  3. PKI3:

    \(\forall c (( isValid ( Sig (c)) \wedge ( DS (c) \le today ) \wedge ( today < DE (c)) \wedge \lnot inCRL (c)) \Rightarrow isValid (c))\)

    (PKI2 and PKI3 allow the signature and the certificate itself to be verified on the basis of another certificate whose validity has already been established; a programmatic sketch of this check is given after this list).
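To make the intent of PKI2 and PKI3 concrete, the following is a minimal sketch in Python of the corresponding certificate check. The data fields, function names, and example values are our own illustrative choices; they are not part of the logic or of any particular PKI implementation.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Certificate:
        serial: str        # identifier used in the certificate revocation list
        issuer: str        # I(c)
        subject: str       # S(c)
        public_key: str    # PK(c)
        signed_with: str   # key with which Sig(c) was produced
        start: date        # DS(c)
        end: date          # DE(c)

    def signature_is_valid(c: Certificate, issuer_cert: Certificate) -> bool:
        """PKI2: Sig(c) is valid if there is an already-valid certificate c'
        whose subject is the issuer of c and whose public key signed c.
        Here issuer_cert plays the role of c' and is assumed to be valid."""
        return c.issuer == issuer_cert.subject and c.signed_with == issuer_cert.public_key

    def certificate_is_valid(c: Certificate, issuer_cert: Certificate,
                             today: date, crl: set) -> bool:
        """PKI3: c is valid if Sig(c) is valid, today lies within the validity
        period [DS(c), DE(c)), and c is not in the revocation list."""
        return (signature_is_valid(c, issuer_cert)
                and c.start <= today < c.end
                and c.serial not in crl)

    # Example corresponding to the scenario: c2 is checked against c1, whose
    # validity is assumed to have been established already.
    c1 = Certificate("c1", "root", "ca", "pk1", "pk-root", date(2020, 1, 1), date(2030, 1, 1))
    c2 = Certificate("c2", "ca", "e2", "pk2", "pk1", date(2021, 1, 1), date(2029, 1, 1))
    print(certificate_is_valid(c2, c1, date(2024, 6, 1), crl=set()))  # True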

From the scenario, we can assume the following logical formulas.

  1. P1:

    \(I(c_2) = S(c_1)\)

    (This observed fact is used as a premise in our reasoning process and holds in this scenario only).

  2. P2:

    \( isSigned (c_2, PK (c_1))\)

    (A certificate \(c_2\) is signed by the subject of certificate \(c_1\) with the private key corresponding to the public key of \(c_1\)).

  3. P3:

    \( Inf _{ parent (e_1), e_1}( isValid (c_1))\)

    (The parent agent of \(e_1\) has informed \(e_1\) about “certificate \(c_1\) is valid”).

  4. P4:

    \( TR (e_1, e_2, completeness )\)

    (our assumption)

  5. P5:

    \( DS (c_2) \le today \) (our assumption)

  6. P6:

    \( today < DE (c_2)\) (our assumption)

  7. P7:

    \(\lnot inCRL (c_2)\) (our assumption).

In the next subsection, we use inference rules of reciprocal logic and of our new extension for trust reasoning. The inference rules are as follows.

\(\Rightarrow \)E::

“from A and \(A \Rightarrow B\) to infer B” (Modus Ponens) 

\(\wedge \)I::

“from A and B to infer \(A \wedge B\)” (Adjunction).

4.3 Trust Reasoning Process

According to the above formalization, we can reason out the expected conclusion \( Inf _{e_2, e_1}( isValid (c_2))\). The reasoning process is as follows.

  1. \( Inf _{ parent (e_1), e_1}(A) \Rightarrow A\) [Deduced from PKI1 and ERcL2 with \(\Rightarrow \)E]

  2. \( isValid (c_1)\) [Deduced from P3 and 1 with \(\Rightarrow \)E]

  3. \( isValid (c_1) \wedge (I(c_2) = S(c_1)) \wedge isSigned (c_2, PK (c_1))\) [Deduced from 2, P1, and P2 with \(\wedge \)I]

  4. \(\exists c'( isValid (c') \wedge (I(c_2) = S(c')) \wedge isSigned (c_2, PK (c'))) \Rightarrow isValid ( Sig (c_2))\) [Substitute \(c_2\) for c in PKI2]

  5. \( isValid ( Sig (c_2))\) [Deduced from 3 and 4 with \(\Rightarrow \)E]

  6. \( isValid ( Sig (c_2)) \wedge ( DS (c_2) \le today ) \wedge ( today < DE (c_2)) \wedge \lnot inCRL (c_2)\) [Deduced from 5 and P5 to P7 with \(\wedge \)I]

  7. \(( isValid ( Sig (c_2)) \wedge ( DS (c_2) \le today ) \wedge ( today < DE (c_2)) \wedge \lnot inCRL (c_2)) \Rightarrow isValid (c_2)\) [Substitute \(c_2\) for c in PKI3]

  8. \( isValid (c_2)\) [Deduced from 6 and 7 with \(\Rightarrow \)E]

  9. \(A \Rightarrow Inf _{e_2, e_1}(A)\) [Deduced from P4 and ERcL6 with \(\Rightarrow \)E]

  10. \( Inf _{e_2, e_1}( isValid (c_2))\) [Deduced from 8 and 9 with \(\Rightarrow \)E].

Having completed the trust reasoning process, we have therefore derived \( Inf _{e_2, e_1}( isValid (c_2))\) from the fact \( Inf _{ parent (e_1), e_1}( isValid (c_1))\).
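The derivation above can also be mechanized. The following is a minimal forward-chaining sketch in Python that reproduces steps 1 to 10 for this concrete scenario; the string encoding of the formulas, the pre-instantiated rules, and all names are our own illustrative simplification, not a general reasoning engine for the extended logic.

    # Ground facts: premises P1-P7 together with PKI1 instantiated for e1
    # (all encoded as plain strings for this scenario only).
    facts = {
        "TR(e1, parent(e1), validity)",      # from PKI1
        "Inf(parent(e1), e1, isValid(c1))",  # P3
        "I(c2) = S(c1)",                     # P1
        "isSigned(c2, PK(c1))",              # P2
        "TR(e1, e2, completeness)",          # P4
        "DS(c2) <= today",                   # P5
        "today < DE(c2)",                    # P6
        "not inCRL(c2)",                     # P7
    }

    # Each rule is a pre-instantiated implication: if every premise is already a
    # fact, its conclusion may be added (Modus Ponens; collecting the premises
    # plays the role of Adjunction).
    rules = [
        # ERcL2 with PKI1: what the trusted parent informs e1 about is the case.
        ({"TR(e1, parent(e1), validity)", "Inf(parent(e1), e1, isValid(c1))"},
         "isValid(c1)"),
        # PKI2 instantiated with c = c2 and c' = c1.
        ({"isValid(c1)", "I(c2) = S(c1)", "isSigned(c2, PK(c1))"},
         "isValid(Sig(c2))"),
        # PKI3 instantiated with c = c2.
        ({"isValid(Sig(c2))", "DS(c2) <= today", "today < DE(c2)", "not inCRL(c2)"},
         "isValid(c2)"),
        # ERcL6 with P4: the complete agent e2 informs e1 about what is the case.
        ({"TR(e1, e2, completeness)", "isValid(c2)"},
         "Inf(e2, e1, isValid(c2))"),
    ]

    # Naive forward chaining to a fixed point.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print("Inf(e2, e1, isValid(c2))" in facts)  # True: the expected conclusion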

Intuitively, this derivation represents a trust transfer. Agent \(e_1\)'s trust in the certificate \(c_2\) informed by agent \(e_2\) is transferred from its trust in the validity of its parent entity. In PKI, agents can transfer their trust from where it exists to where it is needed, e.g., if you initially trust the authenticity of a public key and you verify a message signed by the corresponding private key, then you will also trust the authenticity of the message [18]. Our trust reasoning process enables agents to achieve trust transfer correctly because it includes trust relationships with trust properties. Various forms of trust transfer occur in PKI, since certificates and PKIs do not create trust but only propagate it [18]. Therefore, agents must first trust something; we call this initial trust.

One of the advantages of our trust reasoning process based on reciprocal logic is that it provides us with trust relationships and their properties, and these trust relationships can be regarded as initial trust. In our PKI scenario, the trust relationship \( TR (e_1, parent (e_1), validity )\) between agent \(e_1\) and its parent entity is considered the initial trust. Based on this initial trust, agent \(e_1\) believes that the certificate \(c_1\) informed by its parent entity is valid. Moreover, agent \(e_1\) trusts in the completeness of agent \(e_2\), \( TR (e_1, e_2, completeness )\), but at this point only agent \(e_2\) believes that certificate \(c_2\) is valid, and agent \(e_1\) needs to know whether the informed certificate \(c_2\) is valid or not. Therefore, through initial trust and other known trust relationships, agents can reason out the desired beliefs by correctly achieving trust transfer through our trust reasoning process.

4.4 Discussion

In PKI, trust relationships play an important role, especially when an agent wants to know whether a certificate informed by another agent is valid or not. Previous work has usually focused on certification relationships [12, 19] rather than trust relationships. Our trust reasoning process focuses on two trust properties, i.e., validity and completeness. Why are these two essential? If an agent trusts another agent in its validity, the agent believes that the other agent is a valid information source about both p and \(\lnot p\). In PKI, every agent trusts the validity of its parent entity and believes that the information provided by its parent entity, whether p or \(\lnot p\), is valid. Also, if an agent in PKI has complete information, e.g., about a certificate, the agent should inform other agents about that certificate.
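For instance (our own instantiation of ERcL2, with A taken as p and as \(\lnot p\)), if agent \(\alpha \) trusts agent \(\beta \) in validity, then both of the following hold, so \(\beta \) is a valid information source about p as well as about \(\lnot p\):

\( Inf _{\beta , \alpha }(p) \Rightarrow p \quad \text { and } \quad Inf _{\beta , \alpha }(\lnot p) \Rightarrow \lnot p\)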

Such trust properties are essential in a trust relationship, especially when an agent deals with a message from another agent. Traditional reciprocal logic deals only with trust relationships between agents, and trust reasoning without these trust properties leaves no room to deal with messages from other agents. For this purpose, the new extension of reciprocal logic introduces the two modal operators \( Bel \) and \( Inf \) together with new axioms; thus, our new extension is \( RcL \cup \{\text{ ERcL1 }, \ldots , \text{ ERcL6 }, \text{ BEL }\}\).

Moreover, traditional reciprocal logic does not help us deal with complex situations, for example, when agents have trust relationships qualified by trust properties. \( TR ( pe _1, pe _3, validity )\) cannot be concluded from \( TR ( pe _1, pe _2, validity )\) and \( TR ( pe _2, pe _3, validity )\) because trust relationships with trust properties are not transitive; in the style of axiom TR5, this can be written as \(\lnot (\forall pe _{1} \forall pe _{2} \forall pe _{3} ( TR ( pe _{1}, pe _{2}, validity ) \wedge TR ( pe _{2}, pe _{3}, validity ) \Rightarrow TR ( pe _{1}, pe _{3}, validity )))\). Several studies have also discussed why trust is not transitive [17]. However, if we adopt the notion of pseudo-transitivity [11], i.e., if all agents in the chain have the same trust property with respect to the same proposition p, then a trust relationship between \( pe _1\) and \( pe _3\) may still be derived. Consider, for example, three agents \( pe _1\), \( pe _2\), and \( pe _3\), where \( pe _1\) and \( pe _3\) have no direct trust relationship. Based on pseudo-transitivity, such a relationship may be derived if \( pe _1\) and \( pe _2\), as well as \( pe _2\) and \( pe _3\), have trust relationships with the same trust property and hold the same belief.
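One possible way to write down such a pseudo-transitivity rule in the new extension (this formula is our illustrative reading of the informal description above, not a formula taken from [11]) is:

\(\forall pe _{1} \forall pe _{2} \forall pe _{3} (( TR ( pe _{1}, pe _{2}, validity ) \wedge TR ( pe _{2}, pe _{3}, validity ) \wedge Bel _{ pe _{1}}(p) \wedge Bel _{ pe _{2}}(p) \wedge Bel _{ pe _{3}}(p)) \Rightarrow TR ( pe _{1}, pe _{3}, validity ))\)

Whether such a rule should be adopted as an axiom or only as an empirical theory for particular domains is left open here.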

In addition, there are cases where agents have different trust relationships with different trust properties and believe different propositions; this corresponds to trust transfer. We have already discussed in Sect. 4.3 how agents can reason out desired beliefs by correctly achieving trust transfer through our trust reasoning process. Within the scope of the current paper, the trust relationships in the PKI scenario involve only the validity and completeness properties, because in the domain of PKI these are among the essential properties when dealing with messages from other agents. Also, not all of the axioms have been used in this paper; future studies include applying the trust reasoning process based on these axioms to more complex PKI scenarios and to other areas.

We also know that trust relationships change over space and time, e.g., at time t agent \(\alpha \) may believe p coming from \(\beta \), while at time \(t+1\) agent \(\alpha \) may believe \(\lnot p\) coming from \(\beta \). This can make it difficult for agent \(\alpha \) to decide whether to trust agent \(\beta \). The problem suggests maintaining and updating the trust relationships an agent has with other agents as an agent view. Through a trust relationship, one can capture the beliefs of the agent about messages from other agents at a specific time or in a particular space, and these captured beliefs can be added to the view of the agents whom the agent trusts. Such agent views containing trust relationships need to be maintained because two different agents may trust the same received message unequally and may act differently, and the views need to be updated whenever an agent establishes new trust relationships with other agents. Maintaining and updating views will not only help deal with future threats but also aid decision-making. Further research is needed to establish such agent views in new extensions of reciprocal logic.
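As a very rough sketch of what such an agent view could look like (purely illustrative; this data structure is not part of the proposed logic, and all names and fields are our own assumptions), an agent might keep a time-stamped record of its trust relationships and of the beliefs captured through them, updating the record whenever a new trust relationship is established or a conflicting message arrives:

    from dataclasses import dataclass, field

    @dataclass
    class TrustRecord:
        trustee: str            # the trusted agent
        trust_property: str     # e.g. "validity", "completeness"
        established_at: int     # time step at which the relationship was recorded

    @dataclass
    class AgentView:
        owner: str
        trust: list = field(default_factory=list)      # list of TrustRecord
        beliefs: dict = field(default_factory=dict)    # proposition -> time captured

        def add_trust(self, trustee: str, trust_property: str, t: int) -> None:
            """Update the view when a new trust relationship is established."""
            self.trust.append(TrustRecord(trustee, trust_property, t))

        def capture_belief(self, proposition: str, t: int) -> None:
            """Record a belief obtained through trust reasoning at time t; a later,
            conflicting message simply overwrites it, reflecting change over time."""
            self.beliefs[proposition] = t

    # Example: e1 records its initial trust and the belief derived in Sect. 4.3.
    view = AgentView("e1")
    view.add_trust("parent(e1)", "validity", t=0)
    view.add_trust("e2", "completeness", t=0)
    view.capture_belief("isValid(c2)", t=1)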

5 Concluding Remarks

We have proposed a new extension of reciprocal logic that can deal with trust properties. Two modal operators, \( Bel \) and \( Inf \), have been introduced to represent trust relationships between agents and messages from other agents, and a case study in PKI has been presented. The modal operators and new axioms aid in reasoning out new trust relationships in PKI, which we believe is an improvement brought by our new extension. One of the advantages of our approach is generality: trust reasoning based on the new extension of reciprocal logic is general in the sense that not only trust relationships in PKI but also trust relationships in various complex scenarios can be described as empirical theories.

In the future, we aim to provide a trust reasoning framework based on the new extension of reciprocal logic, together with its implementation in various areas, building on the ideas contained in this paper. Moreover, dealing with trust relationships under time-related constraints is also part of future work.