
1 Introduction

Decision Making (DM) can be defined as the process of selecting among alternatives according to a set of criteria, e.g. when a person wakes up in the morning and decides to bike to work instead of driving. Decisions can be made in the short, medium and long term. Every firm needs to discern which decisions are really important from a particular perspective, and correct decision-making can become a competitive advantage for any firm.

In a DM scenario there is an event that is either desired or not, which entails choosing a path among the different alternatives that lets the firm reach its objective [1]. Generally the optimal situation is required, i.e. the one that provides the best results from the point of view of profit, cost, safety, etc. [2]. This is possible by employing a correct criterion for choosing the best scenario [3].

DM is defined by Harris as [4]:

The study of identifying and choosing alternatives based on the values and preferences of the decision maker. Making a decision implies that there are several alternatives to be considered; the aim is not only to identify the alternatives but also to choose the one that best fits the objectives, restrictions, etc.

DM consists of the process of transforming data into actions [5]. Data collection thus becomes a strategic task for DM: the data yield the actions, and these in turn help to keep improving the DM.

There are several ways to classify the decisions made in a business. Figure 1 shows a classification of the decisions that can be made in a business environment.

Fig. 1
figure 1

Decision’s classification

Operational decisions aim to implement the strategic decisions. Wrong operational decisions do not have far-reaching implications for the future and can usually be fixed easily.

Tactical decisions occur more frequently than operational decisions. They are controlled by procedures and routines, and wrong tactical decisions may bring trouble to the business.

Strategic decisions are made over the long term, where there is not enough data available; they are critical for the future of the business, and a wrong strategic decision has fatal consequences.

According to the time period, programmed decisions are those repeated frequently in a business, for which a procedure is developed and carried out every time they occur. On the other hand, non-programmed decisions are those that emerge unexpectedly. They usually involve a high degree of difficulty and must be tackled by experts in the field.

Advances in information technology help firms to develop their own software in order to support good DM. Nevertheless, it is recommended to consider algorithms in order to find the optimal DM.

2 DM Process

A DM process can be diagrammed by decision trees, giving a graphical representation of the situations that need to be improved. Although measures of importance can be applied to decision trees, the use of Binary Decision Diagrams (BDDs) involves a reduction in the computational cost of the quantitative resolution, among other improvements. In addition, the cut sets obtained from the BDD will be the basis for the construction of the heuristic method developed for the analysis proposed in this chapter.

The following main scenarios can be distinguished according to the information available in the DM process:

  • DM under certainty: The problem is entirely known, i.e. all possible states of each basic cause (BC) are known and the consequences of each decision can be completely determined.

  • DM under risk: Implies partial information, where some variables are stochastic. This will be the scenario considered in this chapter.

  • DM under uncertainty: Information about the main problem (MP) and the BCs is incomplete, and part of the information is missing.

Figure 2 shows a flow chart of the main steps followed by a decision maker employing rational and logical arguments to support their decision.

Fig. 2
figure 2

DM process

2.1 The Decision Maker

The decision maker is the person, system or organization that makes a decision. All decisions and assessments are influenced by the decision maker, who should therefore have certain skills that can be summarized as experience, good judgment, creativity and quantitative skills. The first three are personal skills; the last is supported by existing methods and support systems for DM, which help the decision maker to choose among different scenarios and to decide on the best course of action. In this context, this chapter presents and describes a quantitative method to support the DM process.

2.2 Constraints and Requirements

In any DM process there are constraints or requirements to consider, e.g. existing resources, available budget, environmental precautions, social issues, legal provisions, etc. Generally the constraints are exogenous and cannot be directly incorporated into mathematical or empirical models, i.e. an endogenization of the constraints must be carried out in order to take them into account. This endogenization involves conceptualizing the constraints as goals of the decision maker, i.e. reformulating a constraint as the main objective [6].

This chapter focuses on expected-utility DM under constraints, understood as a process that provides a solution for a decision maker with different objectives: to satisfy the constraints and rule out unfeasible solutions, and to maximize the utility function among the surviving options.

2.3 The Utility Function

For a quantitative stochastic DM case, the utility function provides a value that determines the quality of the solution being considered. It is formulated as an analytical expression derived from the LDT to BDD conversion. Thresholds can be established in order to determine the solutions that best fit the objectives of the decision maker.

2.4 Results

Once a decision is made, it is necessary to choose the best alternative or scenario, considering variables such as feasibility and the costs of implementing the decision. No DM process is completely reliable, owing to the impossibility of taking into account the total range of events involved in the solution, including an a posteriori evaluation of consequences. In particular, the evaluation of consequences is essential for improving DM processes with data from forecasting studies. The results derived from a decision can affect the complex structure of the problem, or modify some features of the constraints and requirements. Feedback is necessary in order to determine the quality of the decision, because the decision maker needs to know whether the system responds as expected. Moreover, some decisions need to be made periodically, and feedback is needed in order to improve the quality of each new decision with respect to the previous one.

3 Logical Decision Trees

The DM process is carried out when a certain problem occurs, with the objective of discerning whether there is a real problem [7]. Logical Decision Trees (LDTs) describe graphically the roots and causes of a certain problem and their interrelations. The logical operators ‘AND’ and ‘OR’ are employed to connect the events considered [8].

Figure 3 shows an LDT composed of seven non-basic causes and nine basic causes. BCs are those causes that cannot be broken down into simpler causes. All these causes are linked by logical ‘OR’ and ‘AND’ gates. The LDT provides information about the critical states of the BCs and how the MP is usually generated. Figure 3 shows that BC7 is one of the most important causes, i.e. if BC7 occurs then the MP will occur.

Fig. 3
figure 3

Logical decision tree

When the LDT is larger, e.g. composed of hundreds or thousands of BCs, a direct decision tree analysis becomes intractable. The analysis can instead be carried out through an LDT to BDD conversion. An “if-then-else” conversion approach is described in reference [9] in order to carry out this conversion.
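As an illustration of the AND/OR structure described above, the logical evaluation of an LDT can be sketched in a few lines of Python. The tree below is a toy example with a hypothetical gate layout, not the tree of Fig. 3:

```python
# Minimal sketch: an LDT as nested tuples; the gate layout is a toy example,
# not the exact tree of Fig. 3.
def evaluate(node, states):
    """Return True if the node's event occurs, given the truth state of each BC."""
    if isinstance(node, str):            # a basic cause, e.g. "BC1"
        return states[node]
    op, children = node                  # ("AND" | "OR", [sub-trees])
    results = [evaluate(child, states) for child in children]
    return all(results) if op == "AND" else any(results)

# MP = BC1 OR (BC2 AND BC3)
ldt = ("OR", ["BC1", ("AND", ["BC2", "BC3"])])
print(evaluate(ldt, {"BC1": False, "BC2": True, "BC3": True}))   # True
print(evaluate(ldt, {"BC1": False, "BC2": True, "BC3": False}))  # False
```

The same recursive walk scales to any tree depth, which is why the graphical LDT and its Boolean function are interchangeable.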

4 Binary Decision Diagrams

BDDs arose from the constant search for an efficient way to simulate LDTs. They were introduced by Lee [10], and further popularized by Akers [11], Moret [12] and Bryant [13]. The BDD is used to analyze the LDT. It is a data structure that represents a Boolean function and provides a mathematical approach to the problem via Boolean algebra, like Karnaugh maps or truth tables, while being less complex than the equivalent truth table.

A BDD is a directed acyclic graph representation of a Boolean function in which equivalent Boolean sub-expressions are uniquely represented [14]. A directed acyclic graph is a directed graph where, for each vertex v, there is no directed path that starts and finishes in v. It is composed of interconnected nodes, each with two outgoing branches leading to terminal or non-terminal vertices. The BDD is a graph-based data structure from which the occurrence probability of a certain problem in a DM process can be obtained. Each variable has two branches: the 0-branch corresponds to the cases where the variable takes the value 0 and is graphically represented by a dashed line (Fig. 7); the 1-branch corresponds to the cases where the event occurs, i.e. the variable takes the value 1, and is represented by a solid line (Fig. 7).

The BDD allows an analytical expression to be obtained that depends on the occurrence probability of every single basic cause and the logical structure of the tree. Each path from the root vertex to a terminal vertex provides a state in which the MP will occur. These paths are named cut sets (CSs).

4.1 Ranking of the Basic Causes

The size of the BDD, as well as the CPU runtime, depends strongly on the variable ordering. Different ranking methods can be used to reduce the number of cut sets and, consequently, the CPU runtime. It must be emphasized that there is no method that provides the minimum BDD size in all cases [22]. The main methods are described in this section.

4.1.1 Top-Down-Left-Right (TDLR)

This method generates a ranking of the events by ordering them from the original FT structure in a top-down and then left-right manner [15]. At each level, the events are listed following a left to right path, adding the basic events found to the ordering list. If an event has already been considered at a higher level, it is ignored (Fig. 4).

Fig. 4
figure 4

TDLR ranking method

The ranking for the example shown in Fig. 3 using the TDLR method is:

$$ B{C}_1>B{C}_2>B{C}_3>B{C}_4>B{C}_7>B{C}_8>B{C}_9>B{C}_5>B{C}_6 $$

4.1.2 Depth First Search (DFS)

This approach traverses the tree from the top down, visiting each sub-tree from left to right (see Fig. 5). The procedure is a non-recursive implementation in which all freshly expanded nodes are processed following a last-in first-out discipline [16].

Fig. 5
figure 5

DFS ranking method

The ranking for the example presented in Fig. 3 is:

$$ B{C}_1>B{C}_2>B{C}_3>B{C}_4>B{C}_7>B{C}_5>B{C}_6>B{C}_8>B{C}_9 $$

4.1.3 Breadth First Search (BFS)

This algorithm orders the basic events obtained by expanding from the starting point following a first-in first-out procedure (Fig. 6). The events not yet considered are added to a queue named the “open” list, which is renamed the “closed” list once all the events have been studied [17].

Fig. 6
figure 6

BFS ranking method

The ranking for the example of Fig. 3 is:

$$ B{C}_1>B{C}_2>B{C}_3>B{C}_4>B{C}_5>B{C}_6>B{C}_7>B{C}_8>B{C}_9 $$
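The two traversal disciplines above (DFS with a last-in first-out stack, BFS with a first-in first-out queue) can be sketched as follows. The tree is an illustrative example, not the LDT of Fig. 3, chosen so that the two orderings differ:

```python
from collections import deque

# Illustrative tree (not the LDT of Fig. 3): (name, [children]); leaves are BCs.
tree = ("TOP", [("G1", [("BC1", []), ("G3", [("BC4", []), ("BC5", [])])]),
                ("G2", [("BC2", []), ("BC3", [])])])

def dfs_order(node):
    """Depth-first search: descend each sub-tree fully, left to right."""
    name, children = node
    if not children:                     # a basic cause
        return [name]
    order = []
    for child in children:
        order += dfs_order(child)
    return order

def bfs_order(root):
    """Breadth-first search: visit level by level, left to right (FIFO queue)."""
    order, queue = [], deque([root])
    while queue:
        name, children = queue.popleft()
        if not children:
            order.append(name)
        queue.extend(children)
    return order

print(dfs_order(tree))  # ['BC1', 'BC4', 'BC5', 'BC2', 'BC3']
print(bfs_order(tree))  # ['BC1', 'BC2', 'BC3', 'BC4', 'BC5']
```

DFS finishes the deep left sub-tree before visiting BC2 and BC3, whereas BFS lists all level-2 basic causes before descending to level 3.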

4.1.4 Level Method

The level of an event is understood as the number of gates found above it in the tree, up to the top event. The Level method ranks the events according to their level. If two or more events have the same level, the event that appears earlier in the tree has higher priority [15]. Table 1 shows the levels of the events of the LDT shown in Fig. 3.

Table 1 Level method

Therefore, according to Table 1 and the Level method, the ranking obtained is:

$$ B{C}_1>B{C}_2>B{C}_3>B{C}_4>B{C}_7>B{C}_8>B{C}_9>B{C}_5>B{C}_6 $$

It can be observed that in this case the ranking proposed by this method is the same as that of the TDLR method; therefore, the CSs obtained will also be the same.

4.1.5 AND Method

Xie et al. [18] suggest with the AND criterion that the importance of a basic event is based on the number of “and” gates between the event and the top event, because in FTA the “and” gates imply that there are redundancies in the system. Consequently, basic events under an “and” gate can be considered less important, because the intermediate event requires other basic events to occur simultaneously [18]. Furthermore:

  • Basic events with the highest number of “and” gates will be ranked last.

  • In case of duplicated basic events, the event with fewer “and” gates has preference.

  • Basic events with the same number of “and” gates are ranked following the TDLR method.

The ranking for the example shown in Fig. 3 using the AND method is:

$$ B{C}_7>B{C}_1>B{C}_2>B{C}_3>B{C}_4>B{C}_8>B{C}_9>B{C}_5>B{C}_6 $$
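The AND-criterion counting can be sketched as below, on a hypothetical three-BC tree (not the LDT of Fig. 3):

```python
def and_depths(node, n_and=0, acc=None):
    """Count the 'and' gates on the path from the top event down to each BC."""
    if acc is None:
        acc = {}
    name, op, children = node            # op: "AND", "OR" or None for a BC
    if not children:
        acc[name] = n_and
        return acc
    for child in children:
        and_depths(child, n_and + (op == "AND"), acc)
    return acc

# Hypothetical tree: MP = (BC1 AND BC2) OR BC3
tree = ("TOP", "OR", [("G1", "AND", [("BC1", None, []), ("BC2", None, [])]),
                      ("BC3", None, [])])
counts = and_depths(tree)                 # {'BC1': 1, 'BC2': 1, 'BC3': 0}
ranking = sorted(counts, key=counts.get)  # fewest 'and' gates first, ties kept in tree order
print(ranking)                            # ['BC3', 'BC1', 'BC2']
```

BC3 has no “and” gate above it, so it is ranked first; BC1 and BC2 tie and keep their left-to-right (TDLR-style) order, since Python's sort is stable.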

4.2 BDD Conversions

For small BDDs, the MP occurrence probability can be calculated manually, but when larger LDTs have to be converted this becomes almost impossible.

The ite (If-Then-Else) conditional expression is employed in this work as the cornerstone of the BDD construction, based on the approach presented in [19]. Figure 7 shows an example of an ite structure in a BDD.

Fig. 7
figure 7

ite applied to BDD

This can be read as: “If variable A occurs, Then f1, Else f2” [20]. As explained above, the solid line always corresponds to the ones and the dashed line to the zeros.

Applying Shannon’s theorem, the following expression is obtained from Fig. 7:

$$ f={b}_i\cdot {f}_1+{\overline{b}}_i\cdot {f}_2 $$

where

$$ f={b}_i\cdot {f}_1+{\overline{b}}_i\cdot {f}_2=ite\left({b}_i,{f}_1,{f}_2\right) $$
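The ite construct and the Shannon expansion above translate directly into a recursive probability evaluation. The two-variable BDD below is illustrative:

```python
# A BDD node is ("ite", var, f1, f2); the terminals are 1 and 0.
# Shannon: f = b_i*f1 + (1 - b_i)*f2, evaluated here on probabilities.
def prob(node, p):
    """Occurrence probability of the Boolean function rooted at `node`."""
    if node in (0, 1):
        return float(node)
    _, var, f1, f2 = node
    return p[var] * prob(f1, p) + (1 - p[var]) * prob(f2, p)

# f = A OR B encoded as ite(A, 1, ite(B, 1, 0))
bdd = ("ite", "A", 1, ("ite", "B", 1, 0))
print(prob(bdd, {"A": 0.2, "B": 0.5}))  # ≈ 0.6 (= 0.2 + 0.8·0.5)
```

Because every variable appears at most once on each root-to-terminal path, the recursion visits each node once and the expansion is exact, not an approximation.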

Table 2 shows the different CSs obtained using the abovementioned ranking methods. A comparison between the CSs is done in order to analyze the efficiency of the methods in ordering the basic causes shown in Fig. 3.

Table 2 Cut sets from ranking methods for the LDT shown in Fig. 3

There is no significant difference between these methods because the sizes of all the CS lists are similar (Table 2). The main reason is that the LDT being analyzed does not have a large number of events. It can be observed in Table 2 that the method generating the lowest number of CSs is the AND method; therefore it is chosen in this work.

Figure 8 shows the BDD obtained from the LDT (Fig. 3) to BDD conversion using the AND method.

Fig. 8
figure 8

BDD for the LDT shown in Fig. 3

4.3 Analytical Expression

A probability of occurrence must be assigned to each BC. \( P\left(B{C}_i\right) \) is the probability of occurrence of the ith BC, and \( P\left(\overline{B{C}_i}\right) \) the probability of its non-occurrence. Therefore:

$$ P\left(\overline{B{C}_i}\right)=1-P\left(B{C}_i\right) $$

The probability of occurrence of the jth CS, \( P\left(C{S}_j\right) \), can be calculated as the product of the \( P\left(B{C}_i\right) \) and \( P\left(\overline{B{C}_i}\right) \) terms that compose the CS. The probability of occurrence of the MP (QMP) is given by:

$$ {Q}_{MP}={\displaystyle \sum}_{j=1}^nP(CSj) $$

where n is the total number of CSs.
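The formula above can be sketched for a two-BC example; the cut sets of Fig. 8 are not reproduced in this chunk, so the cut-set lists below are hypothetical:

```python
def cs_prob(cut_set, p):
    """P(CS_j): product over the CS literals; "~BCi" denotes non-occurrence."""
    result = 1.0
    for lit in cut_set:
        result *= (1 - p[lit[1:]]) if lit.startswith("~") else p[lit]
    return result

def q_mp(cut_sets, p):
    """Q_MP as the sum of the CS probabilities (BDD paths are disjoint)."""
    return sum(cs_prob(cs, p) for cs in cut_sets)

# Two-BC example, MP = BC1 OR BC2; the BDD yields the disjoint paths
# {BC1} and {~BC1, BC2}:
p = {"BC1": 0.4, "BC2": 0.3}
print(q_mp([["BC1"], ["~BC1", "BC2"]], p))  # ≈ 0.58 (= 0.4 + 0.6·0.3)
```

The plain sum of CS probabilities is exact here precisely because the root-to-terminal paths of a BDD are mutually disjoint events.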

5 Optimization Approach in DM Process

Once the probability of occurrence of the MP has been obtained, the new objective is to minimize it [21]. It is assumed that the LDT is fixed, and therefore the reduction of QMP will be achieved through the different BCs. Given a BC, the objective is to determine the investment in it needed to reduce its probability of occurrence, considering the probabilities of the remaining BCs and the total investment, the objective function being the minimization of the probability of occurrence of the top event, QMP. A new variable that considers this reduction is defined as:

$$ \mathbf{Imp}\left(\mathbf{BC}\right)=\left[ Imp\left(B{C}_1\right), Imp\left(B{C}_2\right),\dots, Imp\left(B{C}_i\right),\dots, Imp\left(B{C}_n\right)\right] $$

The ith component of Imp(BC) provides the reduction of the probability of occurrence when some resources are assigned to the ith BC. In addition, a probability vector is defined as:

$$ \mathbf{P}\left(\mathbf{B}\mathbf{C}\right)=\left[P\left(B{C}_1\right),P\left(B{C}_2\right),\dots P\left(B{C}_i\right)\dots P\left(B{C}_n\right)\right] $$

The ith component of P(BC) provides the probability of occurrence of the ith BC. Once the BCs have been improved, the new probability assignment is calculated as the difference between each probability of occurrence and the corresponding Imp:

$$ {\mathbf{P}}^{*}\left(\mathbf{BC}\right)=\left[{P}^{*}\left(B{C}_1\right),{P}^{*}\left(B{C}_2\right),\dots, {P}^{*}\left(B{C}_n\right)\right]=\left[P\left(B{C}_1\right)- Imp\left(B{C}_1\right),P\left(B{C}_2\right)- Imp\left(B{C}_2\right),\dots, P\left(B{C}_n\right)- Imp\left(B{C}_n\right)\right] $$

The BDD evaluated using P(BC) provides the value of \( {Q}_{MP} \). If it is evaluated using P*(BC), the value obtained is termed \( {Q}_{MP}^{*} \). It is desired that \( {Q}_{MP}\ge {Q}_{MP}^{*} \); otherwise the optimization procedure is producing wrong outcomes.

The analytic expression provided by the BDD becomes the optimization function when evaluated employing P*(BC). The optimization function is denoted \( {Q}_{MP}(Imp) \).

BCs are not necessarily fully correctable, but they are almost always improvable; therefore Imp(BC) will range between 0 and a certain threshold. The first constraint is defined as:

\( 0\le Imp\left(B{C}_i\right)\le {a}_i \), where \( {a}_i \) indicates the maximum improvement that can be achieved for the ith BC. The \( {a}_i \) values satisfy:

$$ 0\le {a}_i\le P\left(B{C}_i\right) $$

\( {a}_i=P\left(B{C}_i\right) \) means that the ith BC can be totally corrected, i.e. its probability of occurrence can be brought to zero (in this case BCi no longer contributes to the MP occurrence). If \( {a}_i=0 \), improvements in the ith BC are not possible.

An improvement cost (IC) is defined for each BC in order to adapt the quantitative analysis to the nature of the BCs, where a high IC for a BC means that a large amount of resources must be invested in order to reduce its probability of occurrence.

\( \mathbf{IC}\left(\mathbf{BC}\right)=\left[ IC\left(B{C}_1\right), IC\left(B{C}_2\right),\dots, IC\left(B{C}_i\right),\dots, IC\left(B{C}_n\right)\right] \), where \( IC\left(B{C}_i\right) \) indicates the amount of resources that must be invested in BCi to reduce its probability of occurrence from 1 to 0.

A new variable Bg is defined as the total amount of resources available at the time of the investment operation, where:

$$ {\displaystyle \sum}_{i=1}^{n}IC\left(B{C}_i\right)\cdot Imp\left(B{C}_i\right)\le Bg $$

The optimization problem is defined as:

$$ \begin{array}{l} minimize\kern0.5em {Q}_{MP}(Imp)\\ {} subject\ to\kern1em {\displaystyle \sum}_{i=1}^{n}IC\left(B{C}_i\right)\cdot Imp\left(B{C}_i\right)\le Bg\\ {}\kern3em Imp\left(B{C}_i\right)-{a}_i\le 0\\ {}\kern3em - Imp\left(B{C}_i\right)\le 0\end{array} $$

This is a Non-Linear Programming Problem (NLPP) and is NP-hard. The necessary conditions of optimality are given by the Karush-Kuhn-Tucker (KKT) conditions [4].
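The NLPP above is normally solved through the KKT conditions. As an illustrative stand-in, the sketch below uses a simple greedy heuristic (not the KKT solution) that repeatedly invests a small step of improvement where the ratio of QMP reduction to cost is largest, while respecting the budget Bg and the limits a_i; the two-BC utility function is a toy example:

```python
def greedy_allocate(p, ic, a, budget, q_mp, step=0.01):
    """Greedy heuristic (not the KKT solution): repeatedly invest one `step`
    of improvement in the BC giving the largest Q_MP reduction per unit cost."""
    imp = {k: 0.0 for k in p}
    while True:
        best, best_gain = None, 0.0
        for k in p:
            if imp[k] + step > a[k] or ic[k] * step > budget:
                continue  # improvement limit a_i or budget Bg would be violated
            trial = dict(imp)
            trial[k] += step
            gain = (q_mp(p, imp) - q_mp(p, trial)) / (ic[k] * step)
            if gain > best_gain:
                best, best_gain = k, gain
        if best is None:
            return imp
        imp[best] += step
        budget -= ic[best] * step

# Toy utility: Q_MP for MP = BC1 OR BC2, with the improvements subtracted.
def q(p, imp):
    p1 = p["BC1"] - imp["BC1"]
    p2 = p["BC2"] - imp["BC2"]
    return p1 + p2 - p1 * p2

alloc = greedy_allocate(p={"BC1": 0.4, "BC2": 0.3},
                        ic={"BC1": 100, "BC2": 400},
                        a={"BC1": 0.36, "BC2": 0.15},
                        budget=20, q_mp=q)
print(alloc)
```

With these toy numbers the whole budget flows into BC1, the cheaper and more effective cause; a KKT-based solver would treat the same objective and constraints exactly rather than in fixed steps.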

6 Case Study

A case study is presented in this section. The LDT shown in Fig. 3 corresponds to a real case study of a firm whose identity is confidential. Table 3 details the initial conditions.

Table 3 Initial conditions

The probability of occurrence of the MP is 0.4825, calculated with the analytic expression obtained from the BDD in Fig. 8 and the probabilities of occurrence shown in Table 3. The maximum investment (MI) is:

$$ MI=0.4*0.9*500+0.4*0.5*150+0.2*0.4*400+0.3*0.9*400+0.1*0.8*150+0.2*0.5*200+0.25*0.6*500+0.2*0.3*300+0.1*0.95*900=560.5 $$
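The MI figure can be checked numerically. The vectors below are read back from the terms of the MI expression above (Table 3 itself is not reproduced in this chunk): P(BCi), the maximum improvable fraction of each probability, and IC(BCi):

```python
# Values read back from the MI expression: each term is P(BC_i) * fraction * IC(BC_i).
p    = [0.4, 0.4, 0.2, 0.3, 0.1, 0.2, 0.25, 0.2, 0.1]
frac = [0.9, 0.5, 0.4, 0.9, 0.8, 0.5, 0.6, 0.3, 0.95]
ic   = [500, 150, 400, 400, 150, 200, 500, 300, 900]

mi = sum(pi * fi * ci for pi, fi, ci in zip(p, frac, ic))
print(round(mi, 1))  # 560.5
```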

Figure 9 shows the maximum investment for each BC.

Fig. 9
figure 9

Maximum investment allowed

If this maximum budget were available, then the minimum probabilities of occurrence \( {P}_{min} \) of the BCs would be:

$$ {P}_{min}=\left[0.04\kern1em 0.20\kern0.75em 0.12\kern1em 0.03\kern1em 0.020\kern0.75em 0.10\kern0.5em 0.10\kern0.5em 0.14\kern0.5em 0.005\right] $$

Consequently, the minimum QMP is:

$$ {Q}_{MP}=0.1329 $$

Therefore, when the available budget is less than 560.5 monetary units (μm), it is necessary to choose which BCs should be improved in order to minimize QMP.

The budget considered in the following example is 350 μm. The objective function, subject to the constraints defined by the budget and the improvement limits, is:

$$ \begin{array}{l} minimize\kern0.5em {Q}_{MP}\left( Imp\left(\mathbf{BC}\right)\right)\\ {}\mathrm{subject}\ \mathrm{to}\kern1em 500\cdot Imp\left(B{C}_1\right)+150\cdot Imp\left(B{C}_2\right)+400\cdot Imp\left(B{C}_3\right)+400\cdot Imp\left(B{C}_4\right)+150\cdot Imp\left(B{C}_5\right)+200\cdot Imp\left(B{C}_6\right)+500\cdot Imp\left(B{C}_7\right)+300\cdot Imp\left(B{C}_8\right)+900\cdot Imp\left(B{C}_9\right)\le 350\\ {}\kern3em Imp\left(B{C}_1\right)-0.36\le 0;\kern1em - Imp\left(B{C}_1\right)\le 0\\ {}\kern3em Imp\left(B{C}_2\right)-0.2\le 0;\kern1.25em - Imp\left(B{C}_2\right)\le 0\\ {}\kern3em Imp\left(B{C}_3\right)-0.08\le 0;\kern1em - Imp\left(B{C}_3\right)\le 0\\ {}\kern3em Imp\left(B{C}_4\right)-0.27\le 0;\kern1em - Imp\left(B{C}_4\right)\le 0\\ {}\kern3em Imp\left(B{C}_5\right)-0.08\le 0;\kern1em - Imp\left(B{C}_5\right)\le 0\\ {}\kern3em Imp\left(B{C}_6\right)-0.1\le 0;\kern1em - Imp\left(B{C}_6\right)\le 0\\ {}\kern3em Imp\left(B{C}_7\right)-0.15\le 0;\kern1em - Imp\left(B{C}_7\right)\le 0\\ {}\kern3em Imp\left(B{C}_8\right)-0.06\le 0;\kern1em - Imp\left(B{C}_8\right)\le 0\\ {}\kern3em Imp\left(B{C}_9\right)-0.095\le 0;\kern1em - Imp\left(B{C}_9\right)\le 0\end{array} $$

Figure 10 shows the optimal investment allocation subject to a budget of 350 μm in order to minimize QMP.

Fig. 10
figure 10

Optimal investment

There are two BCs (Fig. 10) that are not improved because the budget is limited, and one BC where the maximum investment could not be completed due to lack of budget. The missing amounts to invest are:

$$ {\mathrm{BC}}_1=105\;\upmu \mathrm{m},\ {\mathrm{BC}}_6=20\;\upmu \mathrm{m},\ {\mathrm{BC}}_9=85\;\upmu \mathrm{m} $$

Adding these quantities gives approximately the difference (560.5 − 350 = 210.5 μm) between the maximum budget and the available budget. Figure 11 shows the behavior of the QMP reduction according to the available budget. The QMP presents a non-linear behavior.

Fig. 11
figure 11

Optimal investment allocation

The slope is steeper at the beginning (Fig. 11), i.e. the investments are more effective until a certain budget is reached. Figure 12 shows the improvement of each investment compared to the previous one, with the investment rising in steps of 50 monetary units.

Fig. 12
figure 12

Percentage improvement

The first 250 μm (Fig. 12) are more useful than the rest of the investment. This is useful information when the available budget is limited.

7 Conclusions

Decision Making (DM) is a criteria-based selection method for choosing a good alternative. A DM problem can be represented by logical decision trees (LDTs), giving a graphical representation of the situations that need to be improved. Employing binary decision diagrams (BDDs) reduces the computational cost of the quantitative analysis. The LDT describes graphically the roots of a certain problem and their interrelations.

The size of the BDD depends on the variable ordering, and so does the computational cost of the quantitative analysis. The TDLR, DFS, BFS, Level and AND methods have been employed in order to find the optimal variable ordering.

An NP-hard non-linear programming problem (NLPP) has been considered in a real case study. The necessary conditions of optimality are given by the Karush-Kuhn-Tucker (KKT) conditions. The optimal allocation of resources when they are limited has been found.