1 Introduction

Graphical decision models provide efficient decision tools: they allow a compact and simple representation of decision problems under uncertainty. Most graphical decision models are based on Influence Diagrams (IDs) [17], which represent the decision maker's beliefs and preferences about sequences of decisions to be made under uncertainty. The evaluation of an Influence Diagram yields optimal decisions that maximize the decision maker's expected utility [6, 16, 17]. Min-based possibilistic Influence Diagrams (PIDs) [10] allow a gradual expression of both the agent's preferences and knowledge. The graphical part of a possibilistic Influence Diagram is exactly the same as that of a standard Influence Diagram; uncertainty is expressed by possibility degrees and preferences are considered as satisfaction degrees.

Unlike probabilistic decision theory, which relies on a single expected-utility criterion to evaluate optimal decisions, qualitative possibilistic decision theory [5, 8] provides several qualitative utility criteria for decision making under uncertainty. Among these criteria, one can mention the pessimistic and optimistic utilities proposed in [7] and the binary utility proposed in [12]. As for standard Influence Diagrams, direct [10] and indirect [10, 13] methods have been proposed to evaluate a min-based PID. Influence Diagrams represent the agent's beliefs and preferences within the same structure, and they operate on three types of nodes: chance, decision and utility nodes. In practice, it is often easier for an agent to express its knowledge and preferences separately, and it is simpler to treat all nodes in the same way. In [3], the authors proposed a new compact graphical model for decision making under uncertainty based on possibilistic networks, in which the agent's knowledge and preferences are expressed qualitatively by two distinct qualitative possibilistic networks. This representation benefits from the simplicity of possibilistic networks.

In this chapter, we first show how to decompose an initial min-based Influence Diagram into two min-based possibilistic networks: one representing the agent's beliefs and one encoding its preferences. We define the steps required for splitting a qualitative Influence Diagram into two min-based possibilistic networks that preserve the same possibility distribution and the same qualitative utility. This decomposition also provides the opportunity to exploit the inference algorithms [1, 2] developed for min-based possibilistic networks to solve qualitative Influence Diagrams, and it yields a more compact qualitative possibilistic network for computing the optimal strategy, based on the fusion of possibilistic networks. In this context, we present an efficient and unified way of computing the optimal optimistic strategy using an inference process based on the junction tree associated with the fusion of the agent's beliefs and preferences networks.

This chapter is organized as follows. The next section briefly recalls the basic concepts of graphical frameworks for possibilistic qualitative decision: min-based possibilistic Influence Diagrams and min-based possibilistic networks. Section 3 describes how the decomposition process can be used for encoding an Influence Diagram into two possibilistic networks. Section 4 describes how the propagation process can be used for computing the optimal optimistic strategy. Section 5 presents an experimental study. Section 6 concludes the chapter.

2 Graphical Frameworks for Possibilistic Qualitative Decision

2.1 Min-Based Possibilistic Influence Diagrams

A min-based possibilistic Influence Diagram, denoted by \(\varPi ID_{min}(G_{ID}, \pi _{min}^{ID}, \mu _{min}^{ID})\), has two components: a graphical part, which is the same as that of a standard Influence Diagram, and a numerical part, which consists in evaluating the different links in the graph. Uncertainty is expressed by possibility degrees and preferences are considered as satisfaction degrees.

  1.

    A Graphical Component: a DAG, denoted by \(G_{ID}=(\mathcal {X}, \mathcal {A})\), where \(\mathcal {X}=\mathcal {C}\cup \mathcal {D} \cup \mathcal {V}\) represents a set of variables containing three different kinds of nodes, and \(\mathcal {A}\) is a set of arcs representing either causal influences or informational influences between variables.

    • Chance Nodes: represented by circles. They represent state variables \(X_i \in \mathcal {C}=\{X_1,...,X_n\} \) and reflect the uncertain factors of a decision problem. A combination \(x=\{x_{1i},...,x_{nj}\}\) of state variable values represents a state.

    • Decision Nodes: represented by rectangles. They represent decision variables \(D_j \in \mathcal {D}=\{D_1,...,D_p\}\), which depict decision options. A combination \(d=\{d_{1i},...,d_{pj}\}\) of values represents a decision.

    • Utility Nodes: \(V_{k} \in \mathcal {V}=\{V_1,...,V_q\}\) are represented by diamonds. They represent local utility functions (local satisfaction degrees) \(\mu _k \in \{\mu _1,...,\mu _q\}\).

    A conventional assumption that an Influence Diagram must respect is that utility nodes have no children.

  2.

    Numerical Components: Uncertainty is described by means of a priori and conditional possibility distributions relative to chance nodes. More precisely:

    • For every chance node \(X \in \mathcal {C}\), uncertainty is represented by:

      • If X is a root node, an a priori possibility degree \(\pi _{ID}(x)\) is associated with each instance \(x \in \mathbb {D}_{X}\), such that \(\max \limits _{x \in \mathbb {D}_{X}} \pi _{ID}(x)=1\).

      • If X has parents, a conditional possibility degree \(\pi _{ID}(x \mid u_X)\) is associated with each instance \(x \in \mathbb {D}_X\) and \(u_X \in \mathbb {D}_{Par(X)}=\times _{X_j\in Par(X)} \mathbb {D}_{X_j}\), such that \(\max \limits _{x \in \mathbb {D}_{X}} \pi _{ID}(x \mid u_X)=1\), for any \(u_X\).

    • Decision nodes are not quantified. Indeed, the value of a decision node \(D_j\) is deterministic; it is fixed by the decision maker.

    Once a decision \(d=\{d_{1i},...,d_{pj}\}\in \mathcal {D}\) is fixed, the chance nodes of the min-based ID form a qualitative possibilistic network that induces a unique joint conditional possibility distribution over chance node interpretations \(x=\{x_{1i},...,x_{nj}\}\), in the context of d:

    $$\begin{aligned} \pi _{min}^{ID}(x \mid d)= \min \limits _{i=1..n}\pi _{ID}(x_{il} \mid u_{X_i}). \end{aligned}$$
    (1)

    where \(x_{il}\in \mathbb {D}_{X_i}\) and \(u_{X_i}\in \mathbb {D}_{Par(X_i)} =\times _{X_m \in Par(X_i), D_j\in Par(X_i)}\mathbb {D}_{X_m}\cup \mathbb {D}_{D_j}\).

    • For each utility node \(V_{k=1..q} \in \mathcal {V}\), ordinal values \(\mu _k(u_{V_k})\) are assigned to every possible instantiation \(u_{V_k}\) of the parent variables \(Par(V_k)\). The ordinal values \(\mu _k\) represent satisfaction degrees associated with local instantiations of the parent variables.

    The global satisfaction degree \(\mu _{min}^{ID}(x,d)\) relative to a global instantiation (x, d) of all variables can be computed as the minimum of the local satisfaction degrees:

    $$\begin{aligned} \mu _{min}^{ID}(x,d)= \min \limits _{k=1..q}\mu _k(u_{V_k}). \end{aligned}$$
    (2)

    where \(u_{V_k} \in \mathbb {D}_{Par(V_k)}=\times _{X_i\in Par(V_k), D_j\in Par(V_k)}\mathbb {D}_{X_i}\cup \mathbb {D}_{D_j}\).
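To make Eqs. 1 and 2 concrete, here is a minimal sketch that evaluates both degrees on a toy PID. The structure, variable names and degrees (X1, X2, D1, V1) are illustrative assumptions, not the chapter's running example.

```python
CPTS = {   # chance node -> (parents, conditional possibility table)
    "X1": ((), {("x1",): 1.0, ("nx1",): 0.4}),
    "X2": (("X1", "D1"), {("x2", "x1", "d1"): 1.0,
                          ("nx2", "x1", "d1"): 0.7,
                          ("x2", "nx1", "d1"): 0.2,
                          ("nx2", "nx1", "d1"): 1.0}),
}
UTILS = {  # utility node -> (parents, local satisfaction table)
    "V1": (("X2", "D1"), {("x2", "d1"): 1.0, ("nx2", "d1"): 0.3}),
}

def pi_min(world):
    """Eq. 1: joint possibility = min of the local conditional degrees."""
    return min(table[tuple(world[v] for v in (x,) + parents)]
               for x, (parents, table) in CPTS.items())

def mu_min(world):
    """Eq. 2: global satisfaction = min of the local satisfaction degrees."""
    return min(table[tuple(world[p] for p in parents)]
               for parents, table in UTILS.values())

w = {"X1": "x1", "D1": "d1", "X2": "nx2"}
print(pi_min(w), mu_min(w))   # -> 0.7 0.3
```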

2.2 Min-Based Possibilistic Networks

A min-based possibilistic network [12, 13] over a set of variables V, denoted by \(\varPi _{min} = (G ; \pi )\), is characterized by:

  • A Graphical Component: a DAG whose nodes correspond to variables and whose arcs represent dependence relations between variables.

  • Numerical Components: they quantify the different links in the DAG by using local possibility distributions for each node A in the context of its parents, denoted by \(U_A\). More precisely:

    • For every root node A, an a priori possibility degree \(\pi (a)\) is associated with each \(a \in D_A\), such that \(\max \limits _{a \in D_A} \pi (a)=1\).

    • For the remaining nodes (\(U_A \ne {\varnothing }\)), a conditional possibility degree \(\pi (a \mid u_A)\) is associated with each \(a \in D_A \) and each \(u_A \in D_{U_A}\), such that \(\max \limits _{a \in D_A} \pi (a \mid u_A)=1\), for any \(u_A\).

The a priori and the conditional possibility degrees induce a unique joint possibility distribution defined by:

$$\begin{aligned} \pi _{G} (A_1,...,A_n)=\min \limits _{i=1..n}\pi (A_i \mid U_{A_i}) \end{aligned}$$
(3)

Given two min-based possibilistic networks \(\varPi G = (G ; \pi _{G})\) and \(\varPi G^{'} = (G^{'} ; \pi _{G^{'}})\), the result of merging \(\varPi G\) and \(\varPi G^{'}\) is the possibilistic network \(\varPi G_{\oplus } = (G_{\oplus } ; \pi _{\oplus })\) [9], such that:

$$\begin{aligned} \forall \omega , \pi _{G_{\oplus }}(\omega )=\min (\pi _{G}(\omega ),\pi _{G^{'}}(\omega )) \end{aligned}$$
(4)

The syntactic counterpart of the fusion of two possibility distributions associated with two possibilistic networks, using the min operator, is a new min-based possibilistic network. In [9], the authors propose two principal classes of methods for merging min-based possibilistic networks:

  • Fusion of two possibilistic networks \(\varPi G\) and \(\varPi G^{'}\) having the same network structure.

  • Fusion of two possibilistic networks \(\varPi G\) and \(\varPi G^{'}\) with different structures.

For more details on the fusion of possibilistic networks see [9].
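The semantic effect of Eq. 4 can be sketched directly on joint distributions; the two distributions below are illustrative. (Syntactically, [9] operates on the networks themselves rather than on the exponentially large joint distributions.)

```python
# Two joint possibility distributions over the same worlds (illustrative).
pi_G1 = {("a", "b"): 1.0, ("a", "nb"): 0.5, ("na", "b"): 0.8, ("na", "nb"): 1.0}
pi_G2 = {("a", "b"): 0.6, ("a", "nb"): 1.0, ("na", "b"): 1.0, ("na", "nb"): 0.2}

# Eq. 4: the merged network assigns to each world the min of the two degrees.
pi_fused = {w: min(pi_G1[w], pi_G2[w]) for w in pi_G1}
print(pi_fused)
# {('a','b'): 0.6, ('a','nb'): 0.5, ('na','b'): 0.8, ('na','nb'): 0.2}
```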

3 Decomposition of Min-Based Possibilistic Influence Diagram

This section discusses how a qualitative PID can be modeled by two possibility distributions, one representing the agent's beliefs and the other representing the qualitative utility. We therefore propose a decomposition process of a min-based PID \(\varPi ID_{min}(G_{ID}, \pi _{min}^{ID}, \mu _{min}^{ID})\) into two min-based possibilistic networks:

  1.

    Agent’s knowledge \(\varPi K_{min}=(G_K, \pi )\). This qualitative possibilistic network should codify the same joint conditional possibility distribution \(\pi _{min}^{ID}\) induced by the PID.

  2.

    Agent’s preferences \(\varPi P_{min}=(G_P, \mu )\). Again, this preference-based possibilistic network must codify the same qualitative utility \(\mu _{min}^{ID}\) induced by the PID.

3.1 The Construction of a Knowledge-Based Qualitative Possibilistic Network

The knowledge-based qualitative possibilistic network \(\varPi K_{min}=(G_K, \pi )\) encodes the agent's beliefs. It induces a unique possibility distribution \(\pi _K\) using Eq. 3. The graphical component \(G_K\) of the new qualitative possibilistic network \(\varPi K_{min}\) is defined on the set of variables \(\mathcal {Y}=\mathcal {C} \cup \mathcal {D}=\{Y_1,...,Y_{n+p}\}\) of chance and decision nodes (where \(n=|\mathcal {C}|\) and \(p=|\mathcal {D}|\)). The construction of the knowledge-based possibilistic network \(\varPi K_{min}\) is summarized by Algorithm 1.

[Algorithm 1: construction of the knowledge-based possibilistic network \(\varPi K_{min}\)]

The new min-based possibilistic network \(\varPi K_{min}=(G_K, \pi )\) induces a unique joint possibility distribution \(\pi _K\) defined by the min-based chain rule.

The following proposition ensures that the joint possibility distribution induced by the new possibilistic network \(\varPi K_{min}\) encodes the same states represented by the Influence Diagram \(\varPi ID_{min}\).

Proposition 1

Let \(\varPi K_{min}=(G_K, \pi )\) be a min-based possibilistic network obtained using Algorithm 1. The joint possibility distribution \(\pi _K\) induced by \(\varPi K_{min}\) is equal to the one induced by the Influence Diagram \(\varPi ID_{min}\). Namely,

$$\begin{aligned} \pi _K (Y_1,...,Y_{n+p})=\pi _{min}^{ID}(X_1,...,X_n\mid D_1,...,D_p) = \min \limits _{X_i \in \mathcal {C}} \pi _{ID}(X_i\mid U_i) \end{aligned}$$
(5)
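Since Algorithm 1 itself is not reproduced here, the following is only a plausible sketch of its effect, consistent with Proposition 1: chance nodes keep their parents and local tables from the PID, while decision nodes are inserted as root nodes quantified by a total-ignorance distribution (degree 1 everywhere), so that they contribute nothing to the min in Eq. 5. How Algorithm 1 handles the informational parents of decision nodes is an assumption of this sketch.

```python
def build_knowledge_network(chance_cpts, decision_nodes, domains):
    """chance_cpts: {X: (parents, table)} taken from the PID.
    Returns {Y: (parents, table)} over chance and decision nodes."""
    g_k = dict(chance_cpts)                        # chance nodes unchanged
    for d in decision_nodes:                       # decision nodes: pi = 1
        g_k[d] = ((), {(v,): 1.0 for v in domains[d]})
    return g_k
```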

3.2 Building Preference-Based Qualitative Possibilistic Network

The second qualitative possibilistic network \(\varPi P_{min}=(G_P, \mu )\) represents the agent's preferences associated with the qualitative utility. \(\varPi P_{min}\) induces a unique qualitative utility \(\mu _P\) using Eq. 3. This section shows that this qualitative utility is equal to the qualitative utility \(\mu _{min}^{ID}\) (Eq. 2) encoded by the Influence Diagram \(\varPi ID_{min}\). The graphical component \(G_P\) of the new qualitative possibilistic network \(\varPi P_{min}\) is defined on the set of variables \(\mathcal {Z}=\{Z_1,...,Z_m\} \subset \mathcal {C} \cup \mathcal {D}\) of chance and decision nodes. The set \(\mathcal {Z}\) is the union of the parent variables of all utility nodes \(\{V_1,...,V_q\}\) in the Influence Diagram. Namely, \(\mathcal {Z}=\{Z_1,...,Z_m\}= Par(V_1)\cup ...\cup Par(V_q)\), where \(m=|Par(V_1)\cup ...\cup Par(V_q)|\) is the total number of parent variables of the utility nodes in \(\varPi ID_{min}\). During the construction of the graph \(G_P\), we need to make sure that the generated graph is a DAG. We should also avoid the creation of loops at the merging step of the evaluation process [3]. Before detailing the decomposition process of an Influence Diagram \(\varPi ID_{min}\), we recall the notion of topological ordering of a DAG:

Definition 1

A topological ordering of a Directed Acyclic Graph is a linear ordering of its nodes such that for every arc from node \(X_i\) to node \(X_j\), \(X_i\) comes before \(X_j\) in the ordering. Every DAG has at least one topological ordering.

A topological ordering of any DAG can be constructed in linear time. The usual algorithm consists in finding a "start node" with no incoming edges, removing the edges outgoing from this node, and repeating this process until all nodes have been visited; a minimal sketch is given below.
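A minimal sketch of this procedure (Kahn's algorithm), assuming the DAG is given as a dict mapping each node to the set of its parents:

```python
from collections import deque

def topological_order(parents):
    """parents: {node: set of parent nodes} describing a DAG."""
    indeg = {n: len(ps) for n, ps in parents.items()}
    children = {n: [] for n in parents}
    for n, ps in parents.items():
        for p in ps:
            children[p].append(n)
    queue = deque(n for n, d in indeg.items() if d == 0)   # "start nodes"
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for c in children[n]:          # remove the edges outgoing from n
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return order

print(topological_order({"X1": set(), "D1": {"X1"}, "X2": {"X1", "D1"}}))
# -> ['X1', 'D1', 'X2']
```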

We first propose a naive solution that requires a preliminary step consisting in reducing all utility nodes into a single one, which inherits the parents of all value nodes. A more advanced solution preserving the initial structure is then proposed; operating on the initial structure of the Influence Diagram induces a more compact representation.

Decomposition Process with a Single Utility Node:

The first solution consists in reducing all utility nodes into a single one. Hence, it amounts to performing a preliminary step on the initial Influence Diagram before its decomposition. Formally, this step reduces the number of value nodes to one, denoted \(V_r\), which inherits the parents of all value nodes (\(Par(V_1),...,Par(V_q)\)), i.e. \(Par(V_r)=Par(V_1)\cup ... \cup Par(V_q)\). The utility value associated with the new utility node \(V_r\) corresponds to the minimum of the local utilities, which is equivalent to the global satisfaction degree. Namely:

$$\begin{aligned} \mu _r(u_{V_r})=\mu _{min}^{ID}(x,d)= \min \limits _{k=1..q}\mu _k(u_{V_k}). \end{aligned}$$
(6)

where \(u_{V_r} \in \mathbb {D}_{Par(V_r)}\) and \(u_{V_k} \in \mathbb {D}_{Par(V_k)}\). The construction of the preference-based possibilistic network \(\varPi P_{min}\) is summarized by Algorithm 2, sketched below.

[Algorithm 2: construction of the preference-based possibilistic network \(\varPi P_{min}\) with utility node reduction]
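Algorithm 2 itself is not reproduced here; the following sketch shows only the reduction step it relies on (Eq. 6), with utility tables represented as dicts keyed by parent instantiations:

```python
from itertools import product

def reduce_utilities(utils, domains):
    """utils: {V_k: (parents, table)}.  Returns (Par(V_r), table of V_r)."""
    par_r = sorted(set().union(*(set(ps) for ps, _ in utils.values())))
    table_r = {}
    for combo in product(*(domains[p] for p in par_r)):
        world = dict(zip(par_r, combo))
        # Eq. 6: mu_r is the minimum of the local satisfaction degrees.
        table_r[combo] = min(table[tuple(world[p] for p in ps)]
                             for ps, table in utils.values())
    return par_r, table_r
```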

The following proposition indicates that the min-based possibilistic network \(\varPi P_{min} = (G_P,\mu )\) constructed from the previous steps codifies the same qualitative utility encoded by the qualitative Influence Diagram \(\varPi ID_{min}(G_{ID}, \pi _{min}^{ID}, \mu _{min}^{ID})\).

Proposition 2

Let \(\varPi ID_{min}(G_{ID}, \pi _{min}^{ID}, \mu _{min}^{ID})\) be a min-based PID. Let \(\varPi P_{min}=(G_P, \mu )\) be a min-based possibilistic network obtained using Algorithm 2. The joint qualitative utility \(\mu _P\) induced by \(\varPi P_{min}\) is equivalent to the one induced by the Influence Diagram \(\varPi ID_{min}\). Namely,

$$\begin{aligned} \mu _P (Z_1,...,Z_m)=\mu _{min}^{ID}(X_1,...,X_n,D_1,...,D_p). \end{aligned}$$
(7)

Decomposition Process Based on the Initial Influence Diagram:

The solution proposed in this section keeps the structure of the preference-based network as close as possible to the initial structure of the Influence Diagram. As we will see, operating on the initial structure of the ID allows a more compact representation than using the reduced ID. The construction of the preference-based possibilistic network \(\varPi P_{min}\) is summarized by Algorithm 3.

[Algorithm 3: construction of the preference-based possibilistic network \(\varPi P_{min}\) without utility node reduction]

The proposed algorithm generates the qualitative min-based possibilistic network \(\varPi P_{min}=(G_P, \mu )\) step by step. For each utility node, the algorithm selects, among the candidate parents, the one that can become a child of the remaining parents in the DAG \(G_P\) under construction. These candidate nodes appear in the last rank of the topological ordering generated by the ID. Among the candidates, if there exists a node that has not yet been introduced in \(G_P\), or that is a root node, it is selected as the child of the remaining parent variables in the DAG \(G_P\) under construction. If no such node exists, then all candidate nodes are already integrated in the DAG \(G_P\) and already have parents. Depending on the status of the selected node, a utility is associated with it, and a total-ignorance possibility distribution is associated with the remaining parent variables. A hedged sketch of this selection rule is given below.
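The following sketch is our hedged reading of the selection rule just described; since the chapter's Algorithm 3 is not reproduced here, the handling of ties and the fallback case where every candidate already has parents are assumptions.

```python
def attach_utility(g_p, util_parents, topo_rank):
    """g_p: {node: list of parents} for the DAG G_P under construction.
    util_parents: parents of one utility node in the ID.
    topo_rank: position of each node in a topological ordering of the ID."""
    # Candidates are scanned from the last rank of the topological ordering.
    candidates = sorted(util_parents, key=lambda n: topo_rank[n], reverse=True)
    # Prefer a node not yet in G_P, or still a root of G_P (no parents yet).
    child = next((n for n in candidates if not g_p.get(n)), candidates[0])
    g_p[child] = [p for p in util_parents if p != child]   # carries the utility
    for p in g_p[child]:
        g_p.setdefault(p, [])   # remaining parents: total ignorance (pi = 1)
    return child
```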

The latter solution, which operates on the initial ID structure, allows a more compact representation of the qualitative utility.

It should be noted that in the case of an ID with multiple utility nodes having no common parents, the preference-based qualitative possibilistic network will in fact be disconnected: each component of the graph encodes the local satisfaction degrees associated with one utility node.

The following proposition shows that the qualitative possibilistic network \(\varPi P_{min} = (G_P,\mu )\), built following the previous steps, encodes the same qualitative utility encoded by the qualitative Influence Diagram \(\varPi ID_{min}(G_{ID}, \pi _{min}^{ID}, \mu _{min}^{ID})\).

Proposition 3

Let \(\varPi ID_{min}(G_{ID}, \pi _{min}^{ID}, \mu _{min}^{ID})\) be a min-based PID. Let \(\varPi P_{min}=(G_P, \mu )\) be a preferences-based possibilistic network obtained using Algorithm 3. The joint qualitative utility \(\mu _P\) induced by \(\varPi P_{min}\) is equal to the one induced by \(\varPi ID_{min}\). Namely,

$$\begin{aligned} \small \mu _P (Z_1,...,Z_m)=\mu _{min}^{ID}(X_1,...,X_n,D_1,...,D_p). \end{aligned}$$
(8)

4 On the Computation of Optimal Optimistic Strategy

4.1 Qualitative Possibilistic Decision

A sequential decision problem under uncertainty [15] is modeled by a finite set of possible states of the world \(S = \{s_1, s_2,..., s_n\}\), a set of decisions \(D = \{D_1, D_2,..., D_m\}\) and a set of preferences among the consequences. A decision D is represented by a combination of values of decision variables \(D_i = \{d_1, d_2,..., d_p\}\) chosen by the decision maker.

The problem of finding a strategy \(\delta \) in a sequential decision problem under uncertainty turns out to be intractable in systems with a large state space over a long planning horizon. The high dimensionality of the state space is partly due to the fact that states are generally defined as a conjunction of all the variables describing the different aspects of the state. Formally, this amounts to defining a strategy \(\delta : S \rightarrow D \) which assigns a decision instantiation d to any global instantiation s of the state variables: \(d=\delta (s)\). A strategy expresses the way in which the values of the decision variables are chosen, depending on the values of the state variables observed at the time of the choice.

In the qualitative possibilistic framework, the uncertainty of the decision maker about the effect of a strategy \(\delta \) is represented by a normalized possibility distribution \(\pi _\delta \), a function from states to a totally ordered plausibility scale L: for a world \(\omega \), \(\pi _\delta (\omega ) \in L\) represents the degree of likelihood that \(\omega \) is the real state of the world. The preferences of the agent are expressed by a qualitative utility function \(\mu : S \times D \longrightarrow U \) taking its values in a totally ordered scale in [0, 1]. As in Savage's theory, an action is represented by a function d that associates with each world an element of S [14]. The utility of an action (decision) d in a state \(\omega \), whose consequence is \(d(\omega )\in S\), can be evaluated by combining the possibility degrees \(\pi _\delta (\omega ) \) and the utilities \(\mu (d(\omega ))\) in an appropriate manner over all the possible states of the world.

In order to evaluate a strategy \(\delta \), two qualitative decision criteria have been proposed in [11]. In this chapter, we only deal with optimistic decision making, where the qualitative utility function associated with a strategy \(\delta \), denoted by \(U^{*}\), is defined by:

$$\begin{aligned} U^{*}(\delta ) = \max _{\omega \in \varOmega } \min (\pi _{\delta }(\omega ) , \mu (\omega )) \end{aligned}$$
(9)

A strategy \(\delta \) specifies a value d of all the decision variables D according to the value s of all state variables. The possibility distribution \(\pi _{\delta }(\omega )\) is computed as follows:

$$\begin{aligned} \pi _{\delta }(\omega ) = \min (\pi _{K}(\omega ) , \pi _{d \wedge \varepsilon }(\omega )) \end{aligned}$$
(10)

where \(\varepsilon \) is the set of evidence, updated at every step \(i-1\) of the computation of the optimal optimistic utility of decision \(d_i\), and:

$$\begin{aligned} \pi _{d \wedge \varepsilon }(\omega ) = \left\{ \begin{array}{ll} 1 &{} \text {if }\omega \models d \wedge \varepsilon \\ \\ 0 &{} \text {otherwise } \end{array} \right. \end{aligned}$$
(11)

Using Eq. 11, the optimistic utility \( U^{*}(\delta )\) of a decision becomes:

$$\begin{aligned} U^{*}(\delta ) = \max _{\omega \in \varOmega } \min (\min (\pi _{K}(\omega ) , \mu (\omega )),\pi _{d \wedge \varepsilon }(\omega )). \end{aligned}$$
(12)

Using the merging of two min-based possibilistic networks, Eq. 12 becomes:

$$\begin{aligned} U^{*}(\delta ) = \max _{\omega \in \varOmega } \min (\pi _{\oplus }(\omega ),\pi _{d \wedge \varepsilon }(\omega )). \end{aligned}$$
(13)

where \( \pi _{\oplus }(\omega ) = \min (\pi _{K}(\omega ) , \mu (\omega ))\).
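A brute-force sketch of Eqs. 11–13 over an explicitly enumerated set of worlds follows; the distributions are illustrative, not those of Example 1. In practice this maximum is computed with the junction-tree machinery of Sect. 4.2 rather than by enumeration.

```python
from itertools import product

def u_star(pi_k, mu, worlds, fact):
    """Eq. 13, with worlds violating d ^ eps forced to possibility 0 (Eq. 11)."""
    return max(min(pi_k(w), mu(w)) if all(w[v] == x for v, x in fact.items())
               else 0.0
               for w in worlds)

worlds = [dict(zip(("D", "X"), v)) for v in product(("d", "nd"), ("x", "nx"))]
pi_k = lambda w: {("d", "x"): 1.0, ("d", "nx"): 0.5,
                  ("nd", "x"): 0.3, ("nd", "nx"): 1.0}[(w["D"], w["X"])]
mu = lambda w: 1.0 if w["X"] == "x" else 0.2

print(u_star(pi_k, mu, worlds, {"D": "d"}))    # U*(d)  -> 1.0
print(u_star(pi_k, mu, worlds, {"D": "nd"}))   # U*(nd) -> 0.3
```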

Example 1

Let us consider a simple decision problem represented by a min-based Influence Diagram \(\varPi ID_{min}(G_{ID}, \pi _{min}^{ID}, \mu _{min}^{ID})\). The graphical component \(G_{ID}\) is given by Fig. 1. We suppose that all variables are binary. The numerical components are represented in Tables 1 and 2.

Fig. 1. An example of influence diagram.

Table 1. Initial possibility distributions associated with the PID of Fig. 1.
Table 2. Initial qualitative utilities associated with the PID of Fig. 1.

This Influence Diagram is decomposed into two qualitative possibilistic networks: the first network \(\varPi K_{min}=(G_K, \pi _K)\) describes the agent's knowledge and the second one \(\varPi P_{min}=(G_P, \mu )\) expresses its preferences. The graphical component \(G_K\) of \(\varPi K_{min}\) is given by Fig. 2. The initial possibility distributions \(\pi _K\) are given by Tables 3 and 4.

Fig. 2. Knowledge-based possibilistic network associated with the PID of Fig. 1.

Table 3. Initial possibility distributions associated with the network of Fig. 2.
Table 4. Initial possibility distributions associated with the network of Fig. 2.

The graphical component \(G_P\) of \(\varPi P_{min}\) is given by Fig. 3. The associated initial possibility distributions are given by Tables 5 and 6.

Fig. 3. Preference-based possibilistic network.

Table 5. Initial possibility distributions associated with the network of Fig. 3.
Table 6. Initial possibility distributions associated with the network of Fig. 3.

Since the union of the two graphs is free of cycles, the result of merging \(\varPi K_{min}\) and \(\varPi P_{min}\) is the min-based possibilistic network \(\varPi G_{\oplus }=(G_\oplus , \pi _\oplus )\), where \(G_\oplus \) is given in Fig. 4.

Fig. 4. Possibilistic network \(G_\oplus \).

The initial possibility distributions, given by Tables 7, 8 and 9, are obtained using the minimum of the local distributions of \(\varPi K_{min}\) and \(\varPi P_{min}\).

Table 7. The possibility distributions associated with the network of Fig. 4.
Table 8. The possibility distributions associated with the network of Fig. 4.
Table 9. The possibility distributions associated with the network of Fig. 4.

4.2 Computing Sequential Decisions Using Junction Trees

Computing the optimistic sequential decisions amounts to finding the normalization degree of the junction tree resulting from the merging of the two possibilistic networks codifying the agent's knowledge and preferences, respectively. Note that the construction of the junction tree is done only once, whereas the initialization and the propagation (which are both polynomial) are repeated for each decision \(d^{*}_i\).

Building the Junction Tree \(\mathcal {JT}\). Min-based propagation algorithms begin by transforming the initial graph \(G_{\oplus }\) into a junction tree in three steps [4]:

  • Moralization of the initial graph \(G_{\oplus }\): create an undirected graph from the initial one by adding links between the parents of each variable (a minimal sketch is given after this list).

  • Triangulation of the moral graph: identify sets of variables that can be gathered into clusters or cliques, denoted by \(C_i\).

  • Construction of a junction tree \(\mathcal {JT}\): the junction tree is built by connecting the clusters identified in the previous step. Once adjacent clusters have been identified, a separator \(S_{ij}\) containing their common variables is inserted between each pair of adjacent clusters \(C_i\) and \(C_j\).
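A minimal sketch of the moralization step, assuming the DAG is given as a {node: parents} dict; triangulation and cluster identification are more involved and are left to [4]:

```python
from itertools import combinations

def moralize(parents):
    """Return the edge set of the moral graph: original arcs made
    undirected, plus a link between every pair of parents of a node."""
    edges = set()
    for n, ps in parents.items():
        edges.update(frozenset((n, p)) for p in ps)
        edges.update(frozenset(pair) for pair in combinations(ps, 2))
    return edges

print(moralize({"X1": [], "D1": [], "X2": ["X1", "D1"]}))
# three undirected edges: X1-X2, D1-X2 and the "marriage" X1-D1
```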

Initialization for a Conjunction of Decision \(d_i\) and Evidence \(\varepsilon _{i-1}\). Once the junction tree is built, we proceed to its quantification, taking into account the decision \(d_i\) and the evidence \(\varepsilon _{i-1}\), as follows:

  • for each cluster \(C_i\) (resp. separator \(S_{ij}\)), \( \pi ^{I}_{C_{i}} \leftarrow 1\) (resp. \(\pi ^{I}_{S_{ij}} \leftarrow 1\));

  • for each variable \(A_i\), choose a cluster \(C_i\) containing \(A_i \cup U_{A_i}\): \(\pi _{C_i} \leftarrow \min (\pi _{C_i}, \pi _\oplus (A_i \mid U_{A_i}))\);

  • encode the fact \(d_{i} \wedge \varepsilon _{i-1}\) as a likelihood \(\varLambda _{E}(d_{i} \wedge \varepsilon _{i-1})\) (a sketch is given after this list):

    $$\begin{aligned} \varLambda _{E}(d_{i} \wedge \varepsilon _{i-1})= \left\{ \begin{array}{ll} 1 &{} \text {if } E \text { is instantiated as } d_{i} \wedge \varepsilon _{i-1}\\ 0 &{} \text {otherwise } \end{array} \right. \end{aligned}$$
    (14)
  • identify a cluster \(C_i\) containing the variables of \(d_{i} \wedge \varepsilon _{i-1}\):

    $$\begin{aligned} \pi ^{t}_{C_{i}} \leftarrow min(\pi ^{t}_{C_{i}}, \varLambda _{E}). \end{aligned}$$
    (15)
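A minimal sketch of the likelihood encoding of Eq. 14, treating the fact \(d_i \wedge \varepsilon _{i-1}\) as a partial instantiation:

```python
def likelihood(config, fact):
    """Eq. 14: 1 if the cluster configuration agrees with the fact
    d_i ^ eps_{i-1} on every instantiated variable, 0 otherwise."""
    return 1.0 if all(config.get(v) == x for v, x in fact.items()) else 0.0

print(likelihood({"D1": "d1", "X2": "x2"}, {"D1": "d1"}))    # -> 1.0
print(likelihood({"D1": "nd1", "X2": "x2"}, {"D1": "d1"}))   # -> 0.0
```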

Note that Eq. 14 does not appear in the standard initialization of junction trees associated with standard min-based possibilistic networks; it is proper to our framework. By entering the fact \( d_{i} \wedge \varepsilon _{i-1} \), the junction tree \(\mathcal {JT}\) encodes \(\pi _{ \textit{JT}}(\omega ) = \min (\pi _{G_{\oplus }} (\omega ), \pi _{d_{i} \wedge \varepsilon _{i-1}}(\omega ))\). The qualitative utility associated with a decision \(d_i\), taking into account the instantiation of the decisions selected in previous steps, is then given by the following proposition:

Proposition 4

Let \(\varPi K_{\min } = (G_K , \pi _K )\) be a min-based possibilistic network representing the agent's beliefs and \(\varPi P_{\min } = (G_P , \mu )\) be a min-based possibilistic network representing the agent's preferences. Let \(\varPi G_\oplus = (G_\oplus , \pi _\oplus )\) be the result of merging \(\varPi K_{\min }\) and \(\varPi P_{\min }\) using the min operator. Let \(\mathcal {JT}\) be the junction tree corresponding to \(\varPi G_\oplus \), generated using the above initialization procedure. Then,

$$\begin{aligned} U^{*}(\delta ) = \max _{\omega \in \varOmega } \pi _{\textit{JT} }(\omega ). \end{aligned}$$
(16)

Global Propagation. The global propagation is performed in order to make the junction tree globally consistent. Namely: \(\max \limits _{C_i \backslash S_{ij}} \pi _{C_i}^t =\pi _{S_{ij}}^t= \max \limits _{C_j \backslash S_{ij}} \pi _{C_j}^t.\)

The global propagation is ensured via a message-passing mechanism between clusters, which starts by choosing an arbitrary cluster as a pivot node and then follows two main phases, collecting and distributing the evidence:

  • collect-evidence: each cluster passes a message to its adjacent cluster in the pivot direction.

  • distribute-evidence: each cluster passes a message to its adjacent clusters away from the pivot, beginning with the pivot itself, until the leaves of the graph are reached.

If a cluster \(C_i\) sends a message to its adjacent cluster \(C_j\), then the potentials of \(C_j\) and of their separator \(S_{ij}\) are updated as follows (a sketch is given after this list):

  • (a) Update the potential of \(S_{ij}:\) \(\pi ^{t+1}_{S_{ij}}\leftarrow \max \limits _{C_i \backslash S_{ij}} \pi ^t_{C_i}\).

  • (b) Update the potential of \(C_j:\) \(\pi ^{t+1}_{C_j}\leftarrow \min (\pi ^t_{C_j},\pi ^{t+1}_{S_{ij}})\).

Once stability is reached, the computation of the qualitative utility relative to a decision d can be achieved.
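A minimal sketch of one message pass implementing steps (a) and (b), assuming each cluster potential is a dict from full configurations (tuples aligned with a variable-name list) to degrees, and that both potentials enumerate all configurations of their variables:

```python
def send_message(pot_ci, vars_ci, pot_cj, vars_cj, sep_vars):
    """Update pot_cj with the message from C_i through separator S_ij."""
    sep_vars = sorted(sep_vars)                    # canonical key order
    def proj(cfg, names):
        return tuple(cfg[names.index(s)] for s in sep_vars)
    # (a) separator update: max-marginalize C_i onto S_ij.
    sep = {}
    for cfg, deg in pot_ci.items():
        k = proj(cfg, vars_ci)
        sep[k] = max(sep.get(k, 0.0), deg)
    # (b) receiver update: min-combine C_j with the separator potential.
    return {cfg: min(deg, sep[proj(cfg, vars_cj)])
            for cfg, deg in pot_cj.items()}
```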

Proposition 5

Let \(\varPi K_{min}=(G_K, \pi )\) and \(\varPi P_{min}=(G_P, \mu )\) be the min-based networks representing the agent's beliefs and preferences. Let \(\varPi G_{\oplus }=(G_{\oplus }, \pi _{\oplus })\) be the result of merging \(\varPi K_{min}\) and \(\varPi P_{min}\) using the min operator. Let \(\mathcal {JT}\) be the junction tree associated with \(\varPi G_{\oplus }\) after the above global propagation procedure. Then, the computation of optimistic decisions amounts to computing the normalization degree of \(\mathcal {JT}\):

$$\begin{aligned} U^*(d)=h(\pi _{\mathcal {JT}})=\max \limits _{C_i} \pi _{C_i}. \end{aligned}$$
(17)

The optimal optimistic decisions are those maximizing the qualitative utility. They are computed using Algorithm 4, sketched below.

[Algorithm 4: computation of the optimal optimistic strategy]
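Algorithm 4 itself is not reproduced here; the following is a hedged sketch of the decision loop it implements, as exercised in Example 2: decisions are treated in reverse temporal order, each candidate value triggers one initialization-plus-propagation of the junction tree, and the value with the largest normalization degree (Eq. 17) joins the evidence for the next step. The `evaluate` callback is a placeholder standing in for that propagation.

```python
def optimistic_strategy(decisions, domains, evaluate):
    """decisions: temporally ordered decision names (D_1 before D_2, ...).
    evaluate(fact) -> U*: one junction-tree initialization + propagation."""
    evidence, strategy = {}, {}
    for d in reversed(decisions):       # last decision first, as in Example 2
        best = max(domains[d], key=lambda v: evaluate({**evidence, d: v}))
        strategy[d] = best
        evidence[d] = best              # update the evidence set eps
    return strategy
```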

Example 2

Let us continue Example 1, knowing that the temporal order associated with the decisions \(\{D_1,D_2\}\) is \(D_1 \prec D_2\). To compute a strategy \(\delta \), we first construct the junction tree associated with the graph \(G_\oplus \) representing the fusion of \(\varPi K_{min}\) and \(\varPi P_{min}\), as depicted in Fig. 5.

Fig. 5. The junction tree associated with \(G_\oplus \).

For each decision value \(D_2 \in \{d_2, \lnot d_2\}\), the propagation algorithm is run in order to compute the normalization degree associated with the junction tree.

  • Step 1: \(D_2 = d_2\). The fact \(D_2 = d_2\) is encoded as a likelihood using Eq. 14. From the initialization procedure, we get:

    \(\pi _{C_1} = \min (1, \pi _{G_\oplus }(D_1), \pi _{G_\oplus }(D_2 \mid D_1 X_1), \pi _{G_\oplus }(X_1 \mid D_1))\).

    \(\pi _{C_2} = \min (1, \pi _{G_\oplus }(X_2 \mid D_1 D_2), \varLambda _{D_2})\).

    Once the junction tree is quantified, the global propagation computes the normalization degree of the junction tree, which corresponds to the normalization degree of any cluster. Using this procedure, we obtain: \(U^{*}(d_2) = \max _{C_1} \pi _{C_1}=\max _{C_2} \pi _{C_2}=0.3\).

  • Step 2: \(D_2 = \lnot d_2\). We repeat the procedure described in the previous step with \(D_2 = \lnot d_2\) and get: \(U^{*}(\lnot d_2) = \max _{C_1} \pi _{C_1}=\max _{C_2} \pi _{C_2}= 0.4\).

Thus, we can conclude that the optimal optimistic decision is \(D_2 = \lnot d_2\) with the maximal qualitative utility equal to 0.4.

The choice of \(D_2\) is then fixed, so the set of evidence must be updated: \(E_1 = \{D_2 = \lnot d_2\}\). In the same way, the optimal optimistic utility of decision \(D_1\) is computed, taking into account the value of decision \(D_2\).

  • Step 1: \(D_1 = d_1\)

    In this case, the fact \(d_1 \wedge \lnot d_2\) is encoded as a likelihood using Eq. 14. By applying the propagation process, we obtain: \(U^{*}(d_1 \wedge \lnot d_2) = 0.4\).

  • Step 2: \(D_1 = \lnot d_1\)

    We repeat the same procedure with the fact \(\lnot d_1 \wedge \lnot d_2\) using Eq. 14. By applying the propagation process, we get: \(U^{*}(\lnot d_1 \wedge \lnot d_2) = 0.4\).

Thus, we can conclude that the optimal optimistic strategy \(\delta \) is given by \(\delta = (D_1 = d_1, D_2 = \lnot d_2)\) or \((D_1 = \lnot d_1, D_2 = \lnot d_2) \), with a maximal qualitative utility equal to 0.4.

5 Experimental Studies

In order to evaluate the performance of the proposed algorithms for the decomposition of an Influence Diagram, we conducted a set of experiments. Each experiment consists in generating four models representing qualitative possibilistic Influence Diagrams containing respectively:

  • a single utility node with a single decision node,

  • a single utility node with multiple decision nodes,

  • multiple utility nodes with multiple decision nodes,

  • a very complex PID.

For each model, the following steps are applied:

  • generating a set of samples by varying the number of nodes according to the corresponding description;

  • decomposing the PID into two min-based possibilistic networks corresponding to the knowledge and the preferences, using:

    1.

      Algorithm 2 (with reduction of the utility nodes),

    2.

      Algorithm 3 (without reduction of the utility nodes).

  • computing the optimistic strategy using Algorithm 4.

Figure 6 illustrates the behavior of the two algorithms used to compute the optimistic strategies.

Fig. 6. Comparison of execution time for computing optimistic strategies using Algorithms 2 and 3.

The results, depicted in Fig. 6, indicate that the computation of the optimistic strategies is faster when the decomposition process is achieved without reducing the utility nodes (Algorithm 3) than when the reduction process is used (Algorithm 2). Indeed, reducing the utility nodes yields a less compact representation of the preferences; it only increases the complexity of the graph.

6 Conclusions

This chapter first proposed a decomposition of a possibilistic Influence Diagram into two possibilistic networks: the first expresses the agent's knowledge and the second encodes its preferences. This procedure allows a simple representation of decision problems under uncertainty. Indeed, the decomposition process described in this chapter offers a natural way to express the knowledge and preferences of an agent separately, in a unified way, using only one type of node. This chapter also addressed a new approach for computing optimal optimistic sequential decisions (strategies) in a possibilistic graphical framework. Our approach first merges the possibilistic network associated with the available uncertain knowledge and the possibilistic network associated with the agent's preferences. We then showed that computing optimistic sequential decisions comes down to computing a normalization degree of the junction tree associated with the graph resulting from merging the agent's beliefs and preferences networks. This allows an efficient computation of optimal decisions.