
1 Introduction

A piece of information can generally be associated with a source or an agent. In multiple-agent logic, a logical formula is associated with the group of agents that hold it to be true. One can then reason both on the information contents of a multiple-agent logic base and on the attitudes of groups of agents with respect to different sets of beliefs, and consider queries of the type “who believes what?”.

A multiple-agent logic was initially proposed in [10, 11] and developed in [1]. In this logic, formulas are pairs of the form (a, A), made of a proposition a and a subset of agents A. The formula (a, A) is intended to mean “at least all agents in A believe that a is true”. The semantics of a set of multiple-agent logic formulas is expressed by a mapping which associates a subset of agents with each interpretation. In the graded extension of multiple-agent logic, propositions are associated with both a set of agents and a certainty level. A formula \((a,\alpha /A)\) expresses that “at least all agents in set A believe at least at some level \(\alpha \) (in the sense of a necessity measure) that a is true”. The semantics is given in terms of fuzzy sets of agents. When all the logical formulas are associated with the same set of agents (e.g., a singleton), one retrieves possibilistic logic [9]. This paper investigates the reasoning machinery of the proposed logic, based on the refutation method with a linear strategy. Namely, we propose an extension of the classical refutation method that adapts the A\(^*\) search algorithm.

The paper is organized in the following way. The next section provides a refresher on multiple-agent logic and its possibilistic extension, and establishes soundness and completeness of multiple-agent possibilistic logic. Section 3 presents the refutation method, based on a generalized resolution principle using a linear strategy, and its generalization to multiple-agent possibilistic logic. Section 4 discusses the experimental study of the refutation method applied to both investigated logics. The concluding section briefly mentions potential applications. Preliminary versions of Sect. 3 appeared in French [3, 4], while Sect. 4 is entirely new.

2 Multiple-Agent Logic and Its Possibilistic Extension

We present a background on multiple-agent logic by describing its syntax and its semantics in terms of generalized possibility distributions and then the syntax and the semantics of its extension with graded certainty levels.

2.1 A Multiple-Agent Logic

Let \(\mathcal {L}\) denote a propositional logical language. The set of all agents is denoted by All. A subset of agents is denoted by capital letters A, B, or by indexed letters \(A_i\). The set of subsets of agents is equipped with the usual set operations, i.e., \((2^{All},\cap , \cup , \overline{\ \cdot \ },\subseteq )\) is a Boolean algebra. Thus, only a partial order exists between subsets of agents.

Syntax. A multiple-agent propositional formula is a pair (a, A), where a is a classical propositional formula of \(\mathcal {L}\) and A is a nonempty subset of All, i.e., \(\emptyset \ne A \subseteq All\). The formula (a, A) represents the piece of information: at least all agents in A believe that a is true. The subset A may be given in extension or in intension.

A multiple-agent knowledge base is a finite set \(\varGamma =\{(a_i,A_i),i=1,\dots ,n\}\), viewed as the conjunction of multiple-agent propositional formulas. Multiple-agent logic has two inference rules:

  • if \(B\subseteq A\) then \((a,A) \vdash (a, B)\) (subset weakening)

  • \((\lnot a \vee b, A), (a, A) \vdash (b, A)\), \(\forall A\in 2^{All}\setminus \{\emptyset \}\) (subset modus ponens)

The axioms of multiple-agent logic [1] are those of propositional logic where each axiom schema is associated with subset All.

Using subset weakening, the following inference rule is valid:

$$(\lnot a \vee b, A), (a \vee c, B) \vdash (b \vee c, A\cap B)\quad (A\text{-}B\text{-resolution})$$

The subset of inconsistent agents for \(\varGamma \) can be defined as:

$$\textit{inc-s}(\varGamma ) = \bigcup \{A\subseteq All \mid \varGamma \vdash (\bot , A)\}, \text{ and } \textit{inc-s}(\varGamma ) = \emptyset \text{ if } \not \exists A \text{ s.t. } \varGamma \vdash (\bot , A).$$

Let \(\varGamma ^{\circ }\) denote the set of classical formulas obtained from \(\varGamma \) by ignoring the sets of agents: \(\varGamma ^{\circ }= \{a_i \ |\ (a_i,A_i)\in \varGamma , i=1,\dots ,n\}\). The consistency of \(\varGamma \) does not necessarily imply that \(\varGamma ^{\circ }\) is consistent too. Indeed, if we take for example \(\varGamma =\{(a,A), (\lnot a, \overline{A}) \}\), then \(\textit{inc-s}(\varGamma ) = A\cap \overline{A} =\emptyset \) whereas \(\varGamma ^{\circ }\) is inconsistent. This is because there is nothing anomalous about agents contradicting each other.

Semantics. A multiple-agent possibility distribution is a function \( \pi \) from a set of interpretations \(\varOmega \) to \(2^{All}\). \( \pi (\omega )\) represents the subset of agents in All who find \(\omega \) possible. A multiple-agent possibility distribution is said to be multiple-agent normalized if \(\exists \omega \in \varOmega , \pi (\omega )= All\). This means that there is at least one interpretation that all agents find possible.

From \( \pi \), a function from \(\mathcal {L}\) to \(2^{All}\), called the multiple-agent possibility measure, is defined by:

$$\mathbf{\Pi }(a)= \bigcup \{ \pi (\omega ) \mid \omega \models a\}$$

It is the set of agents for whom a is possibly true.

By duality, a multiple-agent necessity measure \(\mathbf{N}\), from \(\mathcal {L}\) to \(2^{All}\), is defined by:

$$\mathbf{N}(a)= \overline{\mathbf{\Pi }(\lnot a)}=\bigcap \{ \overline{ \pi (\omega )} \mid \omega \models \lnot a\}$$

\(\mathbf{N}(a)\) represents the subset of agents who are sure that a is true (it is the complement of the subset of agents who find \(\lnot a\) possible).

Since the multiple-agent propositional formula (a, A) represents the piece of information “at least all agents in A believe a”, the agents in A find all interpretations of \(\lnot a\) impossible. This means that the maximal set of agents who think that \(\lnot a\) is possible is \(\overline{A}\). Besides, the agents in \(\overline{A}\) remain free to find the interpretations of a possible or not. Thus the maximal set of agents who may think that the interpretations that make a true are possible is All itself. This leads to the following semantic representation of the formula (a, A) by the multiple-agent possibility distribution \( \pi _{\{(a,A)\}}\):

$$\forall \omega \in \varOmega ,\quad \pi _{\{(a,A)\}}(\omega )= \begin{cases} All & \text{if } \omega \models a\\ \overline{A} & \text{if } \omega \models \lnot a \end{cases}$$

where \(\varOmega \) is the set of interpretations associated with \(\mathcal {L}\).

More generally, the multiple-agent possibility distribution \( \pi _\varGamma \) semantically associated with a set of multiple agent formulas \(\varGamma = \{(a_i,A_i),i=1,\dots ,n\}\) is given by:

$$\pi _\varGamma (\omega )=\begin{cases} All & \text{if } \forall (a_i,A_i) \in \varGamma ,\ \omega \models a_i \\ \bigcap \{\overline{A_i} \mid (a_i,A_i) \in \varGamma ,\ \omega \models \lnot a_i\} & \text{otherwise} \end{cases}$$

Thus, the “value” \( \pi _\varGamma (\omega )\) of the multiple agent possibility distribution for \(\omega \) is obtained as the intersection of the different subsets \(\overline{A_i}\) of agents that still find \(\omega \) possible according to the different formulas \((a_i,A_i)\) violated by this interpretation.
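
To make these definitions concrete, here is a minimal Python sketch of \(\pi _\varGamma \) and \(\mathbf{N}\), under simple assumptions that are not part of the paper: agents are strings, subsets of agents are frozensets, interpretations are dictionaries mapping variable names to truth values, and all identifiers (pi_gamma, necessity, ...) are hypothetical.

```python
from itertools import product

ALL = frozenset({"k1", "k2", "k3", "k4"})  # the set All of agents (illustrative)

def interpretations(variables):
    """Enumerate all interpretations over the given propositional variables."""
    for values in product([True, False], repeat=len(variables)):
        yield dict(zip(variables, values))

def pi_gamma(gamma, omega):
    """pi_Gamma(omega): All if omega satisfies every formula of gamma, otherwise the
    intersection of the complements of the agent sets of the violated formulas."""
    result = ALL
    for (formula, agents) in gamma:
        if not formula(omega):
            result &= (ALL - agents)
    return result

def necessity(gamma, formula, variables):
    """N(a): agents who are sure that `formula` holds, i.e. the intersection of the
    complements of pi_Gamma(omega) over the countermodels omega of `formula`."""
    result = ALL
    for omega in interpretations(variables):
        if not formula(omega):
            result &= (ALL - pi_gamma(gamma, omega))
    return result

# The example from the text: Gamma = {(a, A), (not a, complement of A)}
A = frozenset({"k1", "k2"})
gamma = [(lambda w: w["a"], A), (lambda w: not w["a"], ALL - A)]
print(necessity(gamma, lambda w: w["a"], ["a"]))   # frozenset({'k1', 'k2'}) = A
```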

2.2 A Multiple-Agent Possibilistic Logic

A natural generalization of multiple-agent logic stems from extending multiple-agent possibility distributions from \(2^{All}\) to \([0,1]^{All}\).

Syntax. In the following, the distributive lattice \(L=[0,1]^{All}\) is considered. This lattice is equipped with fuzzy set intersection \(\cap \), fuzzy set union \(\cup \) and fuzzy set complementation \(\overline{\ \cdot \ }\), defined by means of the operators min, max, and \(1-(\cdot )\) respectively. The order then becomes fuzzy set inclusion, defined by the pointwise inequality between membership functions.

A multiple-agent possibilistic formula (a, F) is built by attaching to a classical propositional formula a a nonempty fuzzy subset F of All. The membership grade \(\mu _F(k)\) is understood as a lower bound on the degree of certainty (in the sense of a necessity measure) of a for agent k. In the following, the fuzzy set \(F=\alpha / A\) is defined by: \(\mu _{\alpha / A}(k)= \alpha \) if \(k \in A\), and \(\mu _{\alpha / A}(k)= 0\) if \(k \in \overline{A}\). Given that any fuzzy set F of agents can be written as a disjunction \(\bigcup _{i = 1}^\ell \alpha _i/A_i\) where \(A_i\) is the \(\alpha _i\)-cut of F, the formula (a, F) can be assumed to encode the set of formulas \(\{(a,\alpha _i/A_i)\ | \ i = 1, \cdots , \ell \}\).

Henceforth, the language is limited to formulas of the form \((a,\alpha /A)\), which express the information that at least all agents in A believe at least at level \(\alpha \) that a is true. Indeed, the possibilistic multiple-agent formula \((a, \alpha /A)\) is the syntactic expression of the semantic constraint \(N(a)\supseteq \alpha / A\), where N is a graded multiple-agent necessity measure, defined later on. Formulas of the form \((a, 0/A)\) or \((a,\alpha /\emptyset )\) are trivial since they do not provide any information, and thus they do not belong to the language (as \(\forall a,~ N(a)\supseteq 0/A\) with \(A\ne \emptyset \), and \(N(a)\supseteq \alpha /\emptyset \)). A multiple-agent possibilistic knowledge base may be viewed as the conjunction of multiple-agent possibilistic formulas.

Let \(\varSigma =\{(a_1, \alpha _1/A_1), ..., (a_n, \alpha _n/ A_n)\}\) be a multiple-agent possibilistic knowledge base. It can be viewed as a stratified set of multiple-agent knowledge bases:

$$\varSigma _\alpha =\{(a_i, A_i)| (a_i, \alpha _i/A_i) \in \varSigma ~ \textit{and}~ \alpha _i \ge \alpha \}$$

In the same way, a possibilistic knowledge base \(\varSigma _A\) can be defined for every nonempty set \(A\subseteq All \) of agents:

$$\varSigma _A=\{(a_i, \alpha _i)| (a_i, \alpha _i/A_i) \in \varSigma ~ \textit{and}~ A_i \supseteq A\}$$

and if the \(A_i\)’s are given in extension, the projection of \(\varSigma \) on each agent k of All is defined by:

$$\varSigma _k=\{(a_i, \alpha _i)| (a_i, \alpha _i/A_i) \in \varSigma ~ \textit{and}~ k \in A_i\}$$

Furthermore, if the subsets of agents in \(\varSigma \) are ignored, the possibilistic knowledge base \(\varSigma ^{All}=\{(a_i, \alpha _i), i=1,...,n\}\) is obtained. This possibilistic knowledge base represents the beliefs of the agents in All. Symmetrically, \(\varSigma ^{(0,1]}=\{(a_i, A_i), i=1,...,n\}\) is the multiple-agent knowledge base where groups of agents are somewhat certain of the propositions of \(\varSigma \) (since for all i such that \((a_i, \alpha _i/A_i)\in \varSigma \), \(\alpha _i> 0\)). Finally, by ignoring the fuzzy sets of agents associated with the formulas of \(\varSigma \), a propositional knowledge base \(\varSigma ^{\circ }\) is obtained: \(\varSigma ^{\circ }=\{a_i, i=1,...,n\}\). It expresses the set of all beliefs \(a_i\) held by some group of agents in All at some degree.

Fuzzy sets of agents are only partially ordered. Thus, a restriction of \(\varSigma \) by a fuzzy subset of agents \(\alpha /A\) can be defined as:

$$\varSigma ^{\alpha /A}=\{(a_i, \alpha _i/A_i)| A_i\cap A \ne \emptyset ~\textit{and}~\alpha _i\ge \alpha ~\textit{and}~(a_i, \alpha _i/A_i) \in \varSigma \}$$

\(\varSigma ^{\alpha /A}\) contains all the formulas believed at least at level \(\alpha \) by at least some agents in A.
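
These projections are plain filters over \(\varSigma \). The following sketch assumes a base is stored as a list of triples (formula, level, agents), with agents given in extension as a frozenset; the function names are hypothetical and merely mirror the notations above.

```python
def sigma_alpha(sigma, alpha):
    """Sigma_alpha: multiple-agent base of the formulas believed at least at level alpha."""
    return [(a, A) for (a, lvl, A) in sigma if lvl >= alpha]

def sigma_group(sigma, group):
    """Sigma_A: possibilistic base of the formulas believed by every agent of `group`."""
    return [(a, lvl) for (a, lvl, A) in sigma if A >= group]

def sigma_agent(sigma, k):
    """Sigma_k: possibilistic projection on a single agent k (A_i given in extension)."""
    return [(a, lvl) for (a, lvl, A) in sigma if k in A]

def sigma_restriction(sigma, alpha, group):
    """Sigma^{alpha/A}: formulas with level >= alpha believed by at least some agent of `group`."""
    return [(a, lvl, A) for (a, lvl, A) in sigma if lvl >= alpha and A & group]
```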

Multiple-agent possibilistic logic has the following inference rules:

  • If \(A \cap B \ne \emptyset \) then \((c, \alpha /A),(c^\prime , \beta /B) \vdash (c^{\prime \prime },\min (\alpha , \beta )/(A\cap B))\) (gradual subset resolution), where \(c^{\prime \prime }\) is the resolvent of \(c, c^\prime \).

  • If \(\beta /B \subseteq \alpha /A\) then \((c, \alpha /A)\vdash (c, \beta /B)\) (gradual subset weakening),

  • \((c, \alpha /A), (c, \beta /B)\vdash (c, \alpha /A \cup \beta /B)\) (fusion).

Moreover, the axioms of multiple-agent possibilistic logic are those of propositional logic weighted by \((1/All)\).
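
To make the first rule concrete, here is a minimal sketch of gradual subset resolution, where a fuzzy subset \(\alpha /A\) is represented as a pair (level, agents) and the propositional resolution step itself is left abstract; all names are hypothetical.

```python
def meet(fa, fb):
    """Intersection of two fuzzy subsets alpha/A and beta/B: min(alpha, beta)/(A & B)."""
    (alpha, A), (beta, B) = fa, fb
    return (min(alpha, beta), A & B)

def gradual_subset_resolution(clause1, clause2, resolvent):
    """From (c, alpha/A) and (c', beta/B), derive (c'', min(alpha, beta)/(A & B)),
    provided A and B intersect; `resolvent` is the classical resolvent c'' of c and c'."""
    (_, fa), (_, fb) = clause1, clause2
    weight = meet(fa, fb)
    if not weight[1]:            # A and B do not intersect: the rule does not apply
        return None
    return (resolvent, weight)

# (not a or b, 0.8/All) and (a or d, 0.7/All) resolve into (b or d, 0.7/All)
ALL = frozenset({"k1", "k2", "k3"})
print(gradual_subset_resolution(("~a|b", (0.8, ALL)), ("a|d", (0.7, ALL)), "b|d"))
```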

The fuzzy subset of individually inconsistent agents of \(\varSigma \) is defined by:

$$\textit{inc-}\varSigma =\bigcup \{\alpha /A \mid \varSigma \vdash (\bot , \alpha /A)\}$$

It should be noted that the consistency of the multiple-agent possibilistic knowledge base \(\varSigma \) does not entail necessarily the consistency of its classical projection \(\varSigma ^{\circ }\). Again, agents may contradict each other.

Semantics. A graded multiple-agent possibility distribution is a function \(\pi \) from a set of interpretations \(\varOmega \) to \([0,1]^{All}\), the set of all fuzzy subsets of agents. The fuzzy subset \(\pi (\omega )\) collects the agents k in All who find \(\omega \) possible at degree \(\mu _{\pi (\omega )}(k)\). In the following, \(\alpha /A\) denotes the fuzzy subset of agents \(k \in All \) such that \(\mu _{\alpha /A}(k)=\alpha \) if \(k \in A\) and 0 otherwise. By convention, \(\pi (\omega )= 1/All\) means that all agents find \(\omega \) completely possible, while \(\pi (\omega )= 0/All\) means that all agents find \(\omega \) impossible. If \(\exists \omega \) such that \(\pi (\omega )= 1/All\), then the graded multiple-agent possibility distribution \(\pi \) is again said to be multiple-agent normalized. This property reflects collective consistency, since there exists at least one interpretation that all agents find completely possible. With the graded multiple-agent possibility distribution \(\pi \), a function from \(\mathcal {L}\) to \([0,1]^{All}\), called the graded multiple-agent possibility measure, is associated:

$$\Pi (a)= \bigcup _{\omega \models a} \pi (\omega )$$

\(\Pi (a)\) is the fuzzy set of agents who think that it is possible to some extent that a is true.

In a dual manner, N(a) is the fuzzy set of agents who are certain to some extent that a is true. It defines the graded multiple-agent necessity measure N:

$$N(a)=\overline{\Pi (\lnot a)}= \bigcap _{\omega \models \lnot a } \overline{\pi (\omega )}$$

In multiple-agent possibilistic logic, the satisfiability of a formula is defined in terms of graded multiple-agent possibility distributions. The formula \((a, \alpha /A)\) expresses the piece of information “at least all agents in A believe at least at level \(\alpha \) that a is true”. Agents in A may thus find any interpretation of a completely possible, and the other agents in \(\overline{A}\) remain free to find the interpretations of a completely possible or not; so the maximal set of agents who find any interpretation of a completely possible is again \(A \cup \overline{A}= All\). Besides, agents in A find the interpretations of \(\lnot a\) possible at most at level \(1-\alpha \), while agents in \(\overline{A}\) may find them possible at level 1. So, the semantic representation of the formula \((a, \alpha /A)\) is as follows:

$$\pi _{\{(a, \alpha /A)\}}(\omega )=\begin{cases} 1/All & \text{if } \omega \models a\\ (1-\alpha )/A \cup 1/\overline{A} & \text{if } \omega \models \lnot a \end{cases}$$

More generally, the graded multiple-agent possibility distribution \(\pi \) semantically associated with a set \(\varSigma =\{(a_1,\alpha _1/A_1),..., (a_n,\alpha _n/A_n)\}\) of multiple-agent possibilistic formulas is defined by:

$$\pi _\varSigma (\omega )=\begin{cases} 1/All & \text{if } \forall (a_i, \alpha _i/A_i) \in \varSigma ,\ \omega \models a_i\\ \bigcap _{(a_i, \alpha _i/A_i) \in \varSigma ,\ \omega \models \lnot a_i}\big ((1-\alpha _i)/A_i \cup 1/\overline{A_i}\big ) & \text{otherwise.} \end{cases}$$

Since \(N(a \wedge b) = N(a) \cap N(b)\), \(\{(a \wedge b,\alpha /A)\}\) is equivalent to \(\{(a,\alpha /A), (b,\alpha /A)\}\), and a possibilistic multiple-agent formula can always be put in clausal form. The knowledge base \(\varSigma \) can be interpreted as a set of constraints of the form:

$$N_\varSigma (a_i)\supseteq \alpha _i/A_i ~~\text{ for }~~ i=1,...,n.$$

For any graded multiple-agent possibility distribution \(\pi \), \(\pi \) satisfies \(\varSigma \) (denoted by \(\pi \models \varSigma \)) if and only if \( \pi \subseteq \pi _\varSigma \) (namely \(\forall \omega , \pi (\omega ) \subseteq \pi _\varSigma (\omega )\)). Thus, \((b, \beta /B)\) is a logical consequence of \(\varSigma \) if and only if \(\pi _\varSigma (\omega )\) is included in \(\pi _{\{(b, \beta /B)\}}(\omega )\) for every interpretation \(\omega \). Formally:

$$\varSigma \models (b, \beta /B) \Leftrightarrow \forall \omega , \pi _\varSigma (\omega ) \subseteq \pi _{\{(b, \beta /B)\}}(\omega ).$$
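
A brute-force semantic check of this entailment can simply enumerate the interpretations and compare the two distributions pointwise, as fuzzy sets of agents. The sketch below uses the same hypothetical representation as before (agents as strings, fuzzy sets as agent-to-degree dictionaries) and only illustrates the definitions, not the proof procedure of Sect. 3.

```python
from itertools import product

ALL = ["k1", "k2", "k3"]

def fuzzy_intersection(f, g):
    return {k: min(f[k], g[k]) for k in ALL}

def included(f, g):
    """Fuzzy inclusion: pointwise inequality between membership functions."""
    return all(f[k] <= g[k] for k in ALL)

def pi_formula(formula, alpha, group, omega):
    """Distribution of a single formula (a, alpha/A): 1/All on the models of a,
    (1 - alpha)/A union 1/complement(A) on its countermodels."""
    if formula(omega):
        return {k: 1.0 for k in ALL}
    return {k: (1.0 - alpha if k in group else 1.0) for k in ALL}

def pi_sigma(sigma, omega):
    """Distribution of a base: pointwise intersection of its formulas' distributions."""
    result = {k: 1.0 for k in ALL}
    for (formula, alpha, group) in sigma:
        result = fuzzy_intersection(result, pi_formula(formula, alpha, group, omega))
    return result

def entails(sigma, formula, beta, group, variables):
    """Sigma |= (b, beta/B) iff pi_Sigma(omega) is included in pi_{(b, beta/B)}(omega) for all omega."""
    for values in product([True, False], repeat=len(variables)):
        omega = dict(zip(variables, values))
        if not included(pi_sigma(sigma, omega), pi_formula(formula, beta, group, omega)):
            return False
    return True

# (a, 0.8/A) entails (a, 0.5/A) but not (a, 0.9/A)
A = frozenset({"k1", "k2"})
sigma = [(lambda w: w["a"], 0.8, A)]
print(entails(sigma, lambda w: w["a"], 0.5, A, ["a"]),
      entails(sigma, lambda w: w["a"], 0.9, A, ["a"]))   # True False
```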

2.3 Soundness/Completeness of Multiple-Agent Possibilistic Logic

In [8], soundness and completeness of possibilistic logic have been established in the following way:

$$ \varSigma =\{(a_i, \alpha _i)|i=1,...,n\} \vdash (a, \alpha ) \Leftrightarrow \varSigma \models (a, \alpha ) \Leftrightarrow \forall \omega , \pi _\varSigma (\omega )\le \pi _{(a, \alpha )}(\omega ). $$

In a similar manner, the authors of [1] have proved the soundness and completeness of multiple-agent logic as follows:

$$ \varSigma =\{(a_i, A_i)|i=1,...,n\}\vdash (a, A) \Leftrightarrow \varSigma \models (a, A) \Leftrightarrow \forall \omega , \pi _\varSigma (\omega )\subseteq \pi _{(a, A)}(\omega )$$

The multiple-agent possibilistic logic is also sound and complete. Indeed, using the previous results and the notations \(\varSigma _k\) and \(\varSigma ^{\alpha /A}\) introduced in Sect. 2, we have:

$$\begin{array}{lll} \varSigma \vdash (a, \alpha /A) & \Leftrightarrow \forall k \in A,\ \varSigma _k\vdash (a, \alpha ) & \text{(by definition)} \\ & \Leftrightarrow \forall k \in A,\ \varSigma _k \models (a, \alpha ) & \text{(completeness of possibilistic logic)}\\ & \Leftrightarrow \varSigma ^{\alpha /A} \models (a, \alpha /A) & \text{(by definition, keeping only the formulas of } \varSigma \\ & & \text{ that may play a role in the inference of } (a, \alpha /A)\text{)}\\ & \Leftrightarrow \varSigma \models (a, \alpha /A) & \text{(monotonicity of inference)} \end{array}$$

3 A Refutation Method by Linear Multiple Agent Resolution

In possibilistic logic, the linear resolution strategy for the procedure of refutation by resolution, defined in [7], works in the same way as in classical logic, and, thanks to an A\(^*\)-like search method (changing the sum of the costs into their minimum), one can obtain the refutation having the strongest weight first, this weight being the one of the formula we want to prove. Here, the (fuzzy) subsets of agents play the role of weights, but they are only partially ordered, while the weights in possibilistic logic are totally ordered; this makes the problem trickier (since the costs in the A\(^*\)-like algorithm are computed from these weights). However, the procedure can be adapted to multiple-agent logic.

3.1 Refutation by Linear Multiple Agent Resolution

Let \(\varGamma \) be a knowledge base composed of multiple-agent formulas. Proving (a, A) from \(\varGamma \) comes down to adding \((\lnot a, All)\), in clausal form, to \(\varGamma \) and applying the resolution rule repeatedly until \((\bot , A)\) is produced. Clearly, it comes down to deriving the empty clause with the greatest subset of agents \(set(a, \varGamma )\). Formally:

$$set(a, \varGamma )=\bigcup \{A \mid \varGamma \models (a, A)\}$$

Refutation by resolution using a linear strategy can be expressed in terms of tree search in a state space. A state \((C_0, C_1,\dots ,C_i)\) is defined by a central clause \(C_i\) and the sequence \((C_0, C_1,\dots ,C_{i-1})\) of central-clause ancestors of \(C_i\). A subset of agents, playing the role of a cost, is associated with each state of the search tree: it is the subset of agents of the latest generated central clause, i.e., \(set(C_i)\) (short for \(set(C_i, \varGamma )\)) is associated with the state \((C_0, C_1,\dots ,C_i)\). The goal is to find the states ending with the empty clause associated with the greatest subsets of agents. An analogy with state-space search with costs is established in the following way:

  • The initial state \(S_0\) is defined by the initial central clause \(C_0\), with a cost equal to \(set(C_0)\),

  • The cost associated with the arc \((C_0, C_1,\dots ,C_i)\rightarrow (C_0, C_1,\dots ,C_i,C_{i+1})\) is the set associated with \(C_{i+1}\),

  • The global cost of the path \(C_0\rightarrow C_1\rightarrow \dots \rightarrow C_i\) is the intersection of the (set-valued) costs of the elementary arcs,

  • The objective states are the states \((C_0, C_1,\dots ,C_i)\) such that \(C_i=(\bot , A_i)\) with \(A_i \ne \emptyset \),

  • The state \((C_0, C_1,\dots ,C_n)\) is expanded by generating all resolvents of \(C_n\) authorized by the linear strategy.

Searching for a refutation with the greatest subsets of agents is then equivalent to searching for a path with maximal cost from the initial state to the objective states. However, several differences with classical state-space search exist:

  • costs here are to be maximized, not minimized: the goal is to find the greatest subset of agents who believe a formula;

  • costs are not additive, but are combined using the intersection operator;

  • since only a partial order exists between subsets, several objective states may exist; their costs are then combined by the union operator;

  • when an inclusion holds between the subsets associated with two candidate paths, only the path with the greater subset is kept and the other is never explored, unlike in classical state-space search.

As in heuristic state-space search, the ordered search is guided by an evaluation function f computed as follows: for each state S of the search tree, \(f(S)= g(S)\cap h(S)\), where g(S) is the cost of the path from the initial state to S, and h(S) an estimate of the cost from S to an objective state.

The different steps of the refutation by resolution using a linear strategy, presented by Algorithm 1, can be summarized in the following way:

Let \(R(\varGamma )\) be the set of clauses that can be produced (using resolution) from \(\varGamma \). For each refutation using the clause C and for each literal l of C, obtaining \(\bot \) requires the use of a clause \(C^\prime \) containing the literal \(\lnot l\). A refutation expanded from C will thus have a cost less than or equal to:

$$H(l)= \bigcup \{set(C^\prime ), C^\prime \in R(\varGamma ), \lnot l \in C^\prime \}$$

The cost of the path until the contradiction developed from the clause C is then:

$$ h_1(C)=\bigcap \{H(l), l\in C\}= \bigcap _{l \in C}~\bigcup \{set(C^\prime ), C^\prime \in R(\varGamma ), \lnot l \in C^\prime \}$$

with \(S=(C_0,\dots ,C)\). An admissible evaluation function is thus obtained: \(f_1(S)= set(C) \cap h_1(C)\), where \(h_1\) depends only on the last central clause C. A sequence of evaluation functions can be defined as follows:

$$h_0(C)=All;$$
$$f_p(C)= set(C)\cap h_p(C); p\ge 0$$
$$h_{p+1}(C)= \bigcap _{l \in C}~\bigcup \{f_p(C^\prime ), C^\prime \in R(\varGamma ), \lnot l \in C^\prime \}; p\ge 0$$
Fig. 1. Refutation tree of Example 1

Algorithm 1. Refutation by resolution using a linear strategy

Example 1

Let \(\varGamma \) be a multiple-agent clausal knowledge base:

\(C_1:(\lnot a \vee b, All ); \quad C_2:(a \vee d, All );\)

\(C_3:(a \vee \lnot c, A ); \quad C_4:(\lnot d, A);\)

\(C_5:(\lnot d, B).\)

Let us consider the search for the greatest subset of agents who believe b. Let \(\varGamma ^\prime =\varGamma \cup \{(\lnot b, All)\}\). We take \(C_0=(\lnot b, All)\) as the initial central clause, since \(\varGamma ^\prime \setminus \{C_0\}\) is consistent. The only clause which contains the literal b is \(C_1\) (see Fig. 1). The next state is then \(S_1=(C_0, C_6)\), with \(C_6 : (\lnot a, All )\) and cost equal to \(set(C_0)\cap set(C_1)= set(C_6)=All\). Different paths, through \(C_2\) and \(C_3\), exist from this state, and the evaluation function is then computed. The greatest set maximizing the evaluation function is All, because \(A \subset All\); taking this inclusion into account, the path through \(C_3\) is not explored. The next state is then \(S_2=(C_0, C_6, C_7)\), with cost \(set(C_6)\cap set(C_2)=set(C_7)= All\), where \(C_7 : (d, All )\).

Several paths exist from this state; they will all be explored because their evaluation functions are incomparable, due to the partial order between subsets. Let \(S_3=(C_0, C_6, C_7, C_8)\) be the next state. Its associated cost is \(set(C_7) \cap set(C_4)=set(C_8)=A\). The clause \(C_8\) is a contradiction, so the first objective state is reached.

When the clause \(C_5\) is used, the next state is \(S_4=(C_0, C_6, C_7, C_9)\), with cost \(set(C_7) \cap set(C_5)=set(C_9)= B\). The clause \(C_9\) is a contradiction, and the last objective state is reached. Thus \(\varGamma \models (b, A \cup B)\).
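
For illustration, the following Python sketch implements a simplified version of this search: clauses are frozensets of literals paired with a frozenset of agents, the linear strategy is restricted to resolving the current central clause against input clauses, and the A\(^*\)-like ordering by the evaluation functions \(f_p\) is replaced by a depth-bounded exploration with subsumption pruning. All identifiers are hypothetical; on the base of Example 1 it returns \(A \cup B\) for the literal b.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(central, side):
    """All resolvents of the central clause with a side clause; the cost of a resolvent
    is the intersection of the two agent sets (A-B-resolution)."""
    (c_lits, c_agents), (s_lits, s_agents) = central, side
    agents = c_agents & s_agents
    if not agents:                                   # empty subset of agents: nothing useful
        return
    for lit in c_lits:
        if negate(lit) in s_lits:
            yield frozenset((c_lits | s_lits) - {lit, negate(lit)}), agents

def refute(base, goal_literal, all_agents, max_depth=10):
    """Union of the agent sets of all refutations obtained by linear (input) resolution
    starting from the negated goal; approximates set(goal, base)."""
    start = (frozenset({negate(goal_literal)}), all_agents)     # C_0 = (not goal, All)
    found = frozenset()

    def expand(central, depth):
        nonlocal found
        if depth > max_depth:
            return
        for clause in base:
            for lits, agents in resolvents(central, clause):
                if not lits:                      # empty clause: an objective state is reached
                    found |= agents               # objective states are combined by union
                elif not agents <= found:         # prune paths that cannot enlarge the result
                    expand((lits, agents), depth + 1)

    expand(start, 0)
    return found

# Example 1: the search yields A union B as the greatest subset of agents who believe b.
ALL, A, B = frozenset(range(1, 21)), frozenset(range(1, 6)), frozenset(range(6, 11))
gamma = [(frozenset({"~a", "b"}), ALL), (frozenset({"a", "d"}), ALL),
         (frozenset({"a", "~c"}), A), (frozenset({"~d"}), A), (frozenset({"~d"}), B)]
print(refute(gamma, "b", ALL) == A | B)           # True
```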

3.2 Refutation by Linear Possibilistic Multiple Agent Resolution

In multiple-agent possibilistic logic, the gradual subset weakening states that if \(\beta /B \subseteq \alpha /A\) then \((c, \alpha /A)\vdash (c, \beta /B)\). The inclusion \(F\subseteq G\) between two fuzzy subsets F and G of a referential U is classically defined by \(\forall u \in U, F(u)\le G(u)\). In particular, if \(U=All\), then \(\alpha /A\supseteq \beta /B\) if and only if \(A\supseteq B\) and \(\alpha \ge \beta \).

The goal is then to derive a given formula with the greatest subset of agents and the greatest certainty degree. Obviously, the union of two partial results \((\bot , \alpha / A)\) and \((\bot , \beta / B)\) should be kept when the two fuzzy sets are incomparable, e.g., if \(\alpha > \beta \) and \(A\subset B\). These observations are used to directly extend the procedure of the previous section.
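
Concretely, the only change with respect to Sect. 3.1 is that costs become fuzzy subsets \(\alpha /A\), compared by fuzzy inclusion, and that the costs of incomparable objective states are accumulated by union. A minimal sketch of these operations, with fuzzy subsets represented as (level, agents) pairs and hypothetical names:

```python
def f_meet(fa, fb):
    """Intersection of alpha/A and beta/B: min(alpha, beta)/(A & B)."""
    return (min(fa[0], fb[0]), fa[1] & fb[1])

def f_leq(fa, fb):
    """Fuzzy inclusion alpha/A <= beta/B: A subset of B and alpha <= beta."""
    return fa[1] <= fb[1] and fa[0] <= fb[0]

def add_result(results, new):
    """Union of objective-state costs: keep only the maximal (incomparable) elements."""
    if any(f_leq(new, r) for r in results):
        return results                              # new result is subsumed, nothing to add
    return [r for r in results if not f_leq(r, new)] + [new]

# In Example 2, the two objective states give 0.4/A and 0.3/B, which are incomparable,
# so the final result is their union 0.4/A and 0.3/B kept side by side.
A, B = frozenset({"k1", "k2"}), frozenset({"k3"})
results = []
results = add_result(results, (0.4, A))
results = add_result(results, (0.3, B))
print(results)   # [(0.4, A), (0.3, B)]
```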

Example 2

Let \(\varSigma \) be a multiple-agent possibilistic knowledge base composed of the following clauses:

\(C_1:(\lnot a \vee b, 0.8/All )\)

\(C_2:(a \vee d, 0.7/All )\)

\(C_3:(a \vee \lnot c, 0.9/A )\)

\(C_4:(\lnot d, 0.4/A)\)

\(C_5:(\lnot d, 0.3/B)\)

Note that the propositional knowledge base \(\varSigma ^{\circ }\) coincides with \(\varGamma ^{\circ }\) of the example in Sect. 3.1. The problem is to find the greatest subset of agents who believe b with the greatest certainty degree.

Fig. 2. Refutation tree of Example 2

Let \(\varSigma ^\prime =\varSigma \cup \{(\lnot b, 1/All)\}\). As depicted in Fig. 2, we take \(C_0=(\lnot b, 1/All)\), since \(\varSigma ^\prime \setminus \{C_0\}\) is consistent. As the classical projection of \(\varSigma \) is the same as that of \(\varGamma \), the next state is \(S_1=(C_0, C_6)\), with associated cost \(fset(C_0)\cap fset(C_1)= fset(C_6)=0.8/All\). Different paths starting with \(C_2\) and \(C_3\) exist from this state. However, unlike in the previous example, both paths will be explored because the fuzzy sets \(0.9/A\) and \(0.7/All\) are incomparable (neither is included in the other). Using \(C_2\), let \(S_2=(C_0, C_6, C_7)\) be the next state, with cost \(fset(C_6)\cap fset(C_2)=fset(C_7)=0.7/All\).

Several paths exist from this state, using \(C_4\) or \(C_5\). Let \(S_3=(C_0, C_6, C_7, C_8)\) be the next state using \(C_4\). Its associated cost is \(fset(C_7) \cap fset(C_4)=fset(C_8)=0.4/A\). The clause \(C_8\) is a contradiction; the first objective state is reached. With the path using the clause \(C_5\), the next state is \(S_4=(C_0, C_6, C_7, C_9)\), with cost \(fset(C_7) \cap fset(C_5)=fset(C_9)= 0.3/B\). The clause \(C_9\) is a contradiction, and another objective state is reached.

The development of the path with the clause \(C_3\) induces the next state \(S_5=(C_0, C_6, C_{10})\), with cost \(fset(C_6) \cap fset(C_3)=fset(C_{10})= 0.8/A\). The clause \(C_{10}\) is not a contradiction, and since no clause contains the literal c, no objective state can be reached from it. Thus \(\varSigma \models (b, 0.4/A \cup 0.3/B)\).

4 Experimental Study

In order to analyse the behaviour of the proposed approach, the algorithms were implemented in Java and intensive experiments were performed. For this purpose, several consistent knowledge bases, both multiple-agent and possibilistic multiple-agent ones, were generated by varying the number of clauses. For each of the following experiments, the execution time of the algorithm is measured in seconds. The number of Boolean variables is set to 30, and the number of groups of agents is set to 5, 10 and 15 respectively, the total number of agents being set to 20.

  1. Results with multiple-agent knowledge bases: Figure 3 shows the behaviour of the refutation algorithm when the number of clauses varies from 5000 to 50000. According to the obtained results, the execution time increases proportionally to the number of clauses.

  2. Results with multiple-agent possibilistic knowledge bases: Figure 4 shows the behaviour of the refutation algorithm when the number of clauses varies from 5000 to 50000. According to Fig. 4, the execution time again increases with the number of clauses.

  3. Comparison between refutation by linear multiple-agent resolution and refutation by linear possibilistic multiple-agent resolution: In order to compare both approaches, further experiments have been carried out, using large bases containing 50000 clauses, 30 variables and 15 groups of agents. Varying the number of agents from 25 to 200, Fig. 5 shows that the execution time of refutation by linear possibilistic multiple-agent resolution is only slightly greater than that of refutation by linear multiple-agent resolution.

Fig. 3. Execution time of the refutation algorithm for large multiple-agent bases.

Fig. 4. Execution time of the algorithm for large possibilistic multiple-agent bases

Fig. 5. Comparison between multiple-agent logic and possibilistic multiple-agent logic in terms of computational time

Discussion. The obtained results allow us to assess the performance of the proposed approach, which depends on the number of groups of agents. Indeed, the execution time increases linearly with the number of clauses, but exponentially with the number of variables. Likewise, when the number of groups of agents increases, the execution time increases exponentially (although it increases only linearly with the number of agents if their subsets are given in extension). This can be explained by the way the refutation tree is constructed, which is based on the suitable clauses: each branch of the tree corresponds to one suitable clause for the literal to be resolved. The results also confirm that the execution time of the refutation algorithm for possibilistic multiple-agent knowledge bases is slightly greater than the one obtained for multiple-agent knowledge bases. This is due to the fact that constructing the refutation tree with fuzzy sets of agents takes more time than constructing it with crisp groups of agents.

5 Conclusion

This paper has investigated a multiple-agent logic. From a representation point of view, this multiple-agent logic allows us to represent the beliefs of groups of agents, and its possibilistic extension handles fuzzy subsets of agents, thus integrating certainty levels associated with agent beliefs. From a reasoning point of view, we proposed a refutation method by resolution based on a linear strategy for multiple-agent logic and its possibilistic extension. An experimental study was conducted to evaluate the proposed algorithms. It shows the tractability of the approach.

One may think of several extensions. On the one hand, the multiple-agent extension of the Boolean generalized possibilistic logic [5] would allow us to consider the disjunction and the negation of formulas like (p, A), and to express quantifiers in propositions such as “at most the agents in subset A believe p”. On the other hand, one might also take into account trust data about information transmitted between agents [6, 12]. For instance, assume agent a trusts agent b at level \(\theta \), which might be written \((b, \theta /a)\), assimilating a, b to propositions. Then, together with \((p, \alpha /b)\) (agent b is certain at level \(\alpha \) that p is true), it would enable us to infer \((p, \min (\alpha ,\theta )/a)\) [2].