1 Paraconsistent Decision Method (PDM)

1.1 General Considerations

During the years of study for a Master's degree (devoted to logic) [17] and a PhD (devoted to decision making) [23, 24], a great deal of theory on decision making was reviewed, particularly the decision processes used in management [34, 41], in organizations [6, 40] and in production engineering [28, 29, 39, 45].

During this period, it was noticed that practically all such processes are concerned with objective data, gathered over time and catalogued in some manner. It was also noticed that decision making can draw on more intangible information stemming from the knowledge, expertise, sensitivity, etc. of experts (specialists) in different subjects.

Such information—usually not catalogued—is of great relevance in corporate decision making. So much so that, in a good number of cases, the chairman of a company makes decisions relying almost exclusively on the knowledge and intuition acquired over past years [8]. When needed, the chairman taps into this internally stored information, which is not catalogued anywhere else.

The intention was then set of finding a way to use this information (knowledge, expertise, experience and expert sensitivity) for decision making, but in such a way that it can be used without the expert's direct participation, interference or presence. The idea was to use information resulting from the knowledge and intuition of experts with experience in a specific area in order to help others make decisions.

However, how does one store a person’s knowledge, experience, sensitivity and intuition so that others may use them as ingredients in their own decision-making? After all, these are rather intangible values.

After much thought and research, the possibility of using paraconsistent annotated evidential logic Eτ was contemplated. It allows valuable evidence (opinions, diagnoses, etc.) from experts to be stored in the form of numbers. Once stored in this form, such evidence may be used by non-expert decision makers.

Some aspects of this idea have proven to be relevant. Paraconsistent annotated evidential logic Eτ enables the translation of an expert’s background by means of degrees of favorable evidence (or belief) and degrees of contrary evidence (or disbelief) all done by way of numbers; it also permits the manipulation of such data even if inconsistent, contradictory or para-complete. Another relevant aspect is that once the experts’ opinions are gathered through degrees of evidence, such data then becomes available to other people for a considerable amount of time, with no need for intervention of an expert, thus saving them the trouble and inconvenience of being asked to intervene at any time. It is practically a perpetuation of these opinions, which may help in decision-making for a long time [22].

Without going into too much detail, this chapter presents what has become known as the Paraconsistent Decision Method (PDM).

1.2 Notions of Paraconsistent Annotated Evidential Logic Eτ

Paraconsistent logic, whose discovery in recent times is attributed to the Brazilian logician Newton C.A. da Costa, rivals classical logic [30, 32, 35, 36] in that it derogates from the principle of non-contradiction: it accepts contradictions (propositions of the form A ∧ ¬A) in its structure without trivializing itself, the exact opposite of classical logic [11, 12, 13, 15].

Within this type of logic, paraconsistent annotated logic arose from the research of Da Costa et al. [9], who developed its first syntax and semantics, later completed by Abe [1, 2]. The latter, along with his research team, made significant advances that resulted in the introduction of paraconsistent annotated evidential logic Eτ.

In this logic, a proposition p is represented by p(a; b), with a and b varying in the closed interval [0, 1] of real numbers; thus the pair (a; b) belongs to the Cartesian product [0, 1] × [0, 1]. The real number a translates the degree of favorable evidence in p, and b the degree of contrary evidence in p (a and b are also called the degree of belief and the degree of disbelief in p, respectively). The pair µ = (a; b) is referred to as the annotation constant [3, 4, 14].

So, we have as extreme values: the pair (1; 0), which will translate the logical state known as Truth (V); (0; 1) doing the same for falsity (F); (1; 1) for inconsistency (T), and the pair (0; 0) for the logical state known as para-completeness (⊥).

The set |τ| = [0, 1] × [0, 1], with the order relation ≤* defined by ((a1; b1), (a2; b2)) ∈ ≤* ⇔ a1 ≤ a2 and b1 ≤ b2 (where ≤ is the usual order on the real numbers), is the annotation lattice.

The annotation lattice defines the unit square represented in the Cartesian plane (Fig. 1).

Fig. 1 Cartesian Unit Square (CUS)

For a given annotation constant µ = (a; b), the following are defined: G(a; b) = a + b − 1, known as the degree of uncertainty, and H(a; b) = a − b, known as the degree of certainty. Notice that −1 ≤ G ≤ 1 and −1 ≤ H ≤ 1.

The segment CD, on which G = 0, is known as the perfectly defined line (PDL); AB, on which H = 0, is known as the perfectly undefined line (PIL). Other notable lines are defined as follows:

Para-completeness borderline: straight line MN such that G = −k1, with 0 < k1 < 1;

Inconsistency borderline: straight line RS such that G = +k1, with 0 < k1 < 1;

Falsity borderline: straight line TU such that H = −k2, with 0 < k2 < 1;

Truth borderline: straight line PQ such that H = +k2, with 0 < k2 < 1.

Usually, k1 = k2 = k is adopted to give symmetry to the diagram, as in Fig. 2, where k1 = k2 = k = 0.60.

Fig. 2 Cartesian Unit Square (CUS) divided into twelve regions

The unit square of the Cartesian plane can be divided into regions translating logical states with different characteristics. A division that gives the lattice an interesting and convenient characterization is the one obtained through the PDL, the PIL and the borderlines (Fig. 2), partitioning it into twelve regions.

From these twelve regions, four extreme regions are featured: region of truth (CPQ), region of falsity (DTU), region of para-completeness (AMN) and region of inconsistency (BRS).

The k2 value will be called the level of requirement, because it represents the minimum value of |H| for the point X = (a; b) to belong to either the region of truth or the region of falsity. In Fig. 2 there are four extreme regions and one central region.

1.3 Operators of Paraconsistent Annotated Evidential Logic Eτ: NOT, MAX and MIN

The NOT operator is defined by NOT(a; b) = (b; a). For example, NOT(0.8; 0.3) = (0.3; 0.8), so that ¬p(0.8; 0.3) = p(0.3; 0.8) = p[~(0.8; 0.3)]. Notice that NOT(T) = T; NOT(⊥) = ⊥; NOT(V) = F and NOT(F) = V.

Operator MAX (henceforth called the maximization operator) is defined on a group of n annotations (n ≥ 1); it maximizes the degree of certainty (H = a − b) in this group by selecting the best favorable evidence (the largest a) and the best contrary evidence (the smallest b). It is defined as follows [27]:

$$\mathbf{MAX}\{(a_1; b_1), (a_2; b_2), \ldots, (a_n; b_n)\} = (\max\{a_1, a_2, \ldots, a_n\};\ \min\{b_1, b_2, \ldots, b_n\})$$

Operator MIN (henceforth called the minimization operator) is likewise defined on a group of n annotations (n ≥ 1); it minimizes the degree of certainty (H = a − b) in this group by selecting the worst favorable evidence (the smallest a) and the worst contrary evidence (the largest b). It is defined as follows [27]:

$$\begin{aligned} & \mathbf{MIN}\{(a_1; b_1), (a_2; b_2), \ldots, (a_n; b_n)\} = (\min\{a_1, a_2, \ldots, a_n\};\ \max\{b_1, b_2, \ldots, b_n\}) \\ & \text{If } \mu_1 = (a_1; b_1),\ \mu_2 = (a_2; b_2),\ a_1 \le a_2 \text{ and } b_1 \le b_2, \text{ it follows that} \\ & \mathbf{MAX}\{\mu_1, \mu_2\} = \mathbf{MAX}\{(a_1; b_1), (a_2; b_2)\} = (a_2; b_1)\ \text{and} \\ & \mathbf{MIN}\{\mu_1, \mu_2\} = \mathbf{MIN}\{(a_1; b_1), (a_2; b_2)\} = (a_1; b_2). \end{aligned}$$

The operator MAX must be applied in situations where the items considered are not all determinant, and it is sufficient that one of them presents a favorable condition.

Operator MIN has the purpose of minimizing the degree of certainty to a set of annotations. Therefore, it must be applied in situations where the considered items are all determinant.

Once the experts are separated into groups, operator MAX must be applied inside each group (intragroup) and then, operator MIN among the results obtained from the groups (between groups).

For instance, consider a set of four specialists distributed in two groups: A, formed by specialists E1 and E2, and B, formed by specialists E3 and E4.

The application of the rules in this case is as follows:

$$\mathbf{MIN}\{\mathbf{MAX}[(E_1), (E_2)];\ \mathbf{MAX}[(E_3), (E_4)]\}\quad\text{or}\quad\mathbf{MIN}\{[G_A];\ [G_B]\}$$
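As an illustration, the two operators and this intragroup/intergroup composition can be written in a few lines of code. The sketch below is a minimal illustration in Python, assuming annotations are ordinary (a, b) pairs; the function names are ours, not part of the method's terminology.

```python
# Eτ MAX and MIN operators on annotations (a; b), with a, b in [0, 1].

def max_op(*annotations):
    """MAX: best favorable evidence (largest a), best contrary evidence (smallest b)."""
    return (max(a for a, _ in annotations), min(b for _, b in annotations))

def min_op(*annotations):
    """MIN: worst favorable evidence (smallest a), worst contrary evidence (largest b)."""
    return (min(a for a, _ in annotations), max(b for _, b in annotations))

# Four experts split into groups A = {E1, E2} and B = {E3, E4}:
e1, e2, e3, e4 = (0.8, 0.3), (0.6, 0.2), (0.7, 0.4), (0.9, 0.5)
group_a = max_op(e1, e2)            # intragroup maximization -> (0.8, 0.2)
group_b = max_op(e3, e4)            # intragroup maximization -> (0.9, 0.4)
print(min_op(group_a, group_b))     # intergroup minimization -> (0.8, 0.4)
```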

This way of applying the maximization and minimization rules in decision making is known as the min/max principle, or optimistic decision, because it takes the minimum of the maximized degrees of certainty [40].

1.4 Decision Regions and Decision Rule [21, 37]

Figure 2 shows a Cartesian plane unit square divided into twelve regions. Among them, four external regions stand out.

In the AMN and BRS regions, the modulus of the degree of uncertainty is high (close to 1) and the modulus of the degree of certainty is low (close to zero).

In Fig. 2, these regions satisfy |G| ≥ 0.6 and |H| < 0.6. Therefore, the points X = (a; b) of these two regions translate logical states of high uncertainty (inconsistency/contradiction or para-completeness) and of little certainty. They do not support decision making, since they only acknowledge that the data leading to the pair (a; b) carry high uncertainty.

Regions CPQ and DTU are the exact opposite: the modulus of the degree of uncertainty is low (close to zero) and the modulus of the degree of certainty is high (close to 1); in Fig. 2, |G| < 0.6 and |H| ≥ 0.6. Therefore, the points X = (a; b) of these two regions translate logical states of low uncertainty but of high certainty (truth or falsity). They can be used in decision making, since they translate a high level of certainty about the enterprise being analyzed.

The region CPQ, in which the degree of certainty is close to 1, is called the region of truth, while the other, DTU, in which the degree of certainty is close to −1, is called the region of falsity.

There is a value of the modulus of the degree of certainty (|H|) defining the regions of truth and falsity. In the case of Fig. 2, that value is 0.6. Therefore, if the degree of certainty is greater than or equal to 0.6 (H ≥ 0.6), the resulting logical state X = (a; b) will be close to point C, yielding a favorable decision (the enterprise is feasible/viable).

Otherwise, if the degree of certainty is less than or equal to −0.6 (H ≤ −0.6), the resulting logical state X = (a; b) will be close to point D, yielding an unfavorable decision (the enterprise is not feasible/viable).

The modulus of the degree of certainty that defines the decision regions (|H|) is called the level of requirement (LR).

Therefore, the decision rule can be expressed as follows:

$$\begin{aligned} & H \ge LR \Rightarrow \text{favorable decision (enterprise is feasible)}; \\ & H \le -LR \Rightarrow \text{unfavorable decision (enterprise is not feasible)}; \\ & -LR < H < LR \Rightarrow \text{inconclusive analysis}. \end{aligned}$$

The unit square of the Cartesian plane divided into regions as in Fig. 2 is known as the para-analyzer algorithm [16]. Each region in Fig. 2 translates a set of logical states that determines the tendency of the analyzed situation, as summarized in Table 1.
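The decision rule and the extreme regions of the para-analyzer algorithm can likewise be sketched in code. The function below is a simplified illustration, not the full twelve-region classification of Table 1: it computes G and H from an annotation and distinguishes only the four extreme regions, lumping the remaining eight into "inconclusive".

```python
# Simplified para-analyzer: degrees G, H and the four extreme regions only.

def degrees(a, b):
    """Degree of uncertainty G and degree of certainty H of an annotation (a; b)."""
    return a + b - 1, a - b

def decide(a, b, lr=0.60):
    g, h = degrees(a, b)
    if h >= lr:
        return "favorable decision (region of truth)"
    if h <= -lr:
        return "unfavorable decision (region of falsity)"
    if g >= lr:
        return "inconclusive (region of inconsistency)"
    if g <= -lr:
        return "inconclusive (region of para-completeness)"
    return "inconclusive (intermediate region)"

print(decide(0.9, 0.1))   # H = 0.8 >= 0.6 -> favorable
print(decide(0.9, 0.8))   # G = 0.7 >= 0.6 -> inconsistency
```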

Table 1 Summary of the analysis of the twelve regions of the Cartesian Unit Square

1.5 The Paraconsistent Decision Method (PDM)

Every reasonable decision must be based on a variety of factors that may influence the enterprise being analyzed. Each of these factors influences the enterprise in its own way, indicating feasibility (favorable decision) or non-feasibility (unfavorable decision) of the enterprise, or it may be inconclusive, indicating neither. This can be clearly seen when the para-analyzer algorithm is used, that is, when the degrees of favorable evidence (a i,R) and of contrary evidence (b i,R) obtained for each factor are plotted so that each factor is represented by a point X = (a; b) of the lattice τ.

However, it is not practical to work with a great number of factors, because the method would become exhausting and expensive. The proposal, then, is to narrow the list down to the most important factors, those with the most influence on the decision, within the limits of rationality professed by Simon, who "works with a simplified real-life model, taking into consideration the fact that many aspects of reality are substantially irrelevant at any given moment; he chooses based on the rhythm of the actual situation, considering only a few more relevant and crucial factors" [41].

Usually, it is not necessary to examine the influence of each factor separately. What really matters in the viability analysis of an enterprise is the combined influence of all selected factors, which is translated into a final logical state known as the barycenter (W). It is represented by a point W of the lattice τ, whose coordinates (a W and b W) are determined by the weighted average of the coordinates of the points Xi = (a i,R; b i,R) of τ that translate the resulting influence of each factor separately.

1.5.1 Steps of the Paraconsistent Decision Method

The Paraconsistent Decision Method (PDM) consists of eight steps, outlined briefly below and detailed in the remainder of the chapter (a toy end-to-end sketch in code follows the list).

(1) Set the level of requirement (LR) of the decision to be made.

(2) Select the most important factors (Fi), that is, those that most influence the decision.

(3) Define sections (Sj) for each factor (three, four, five or more sections can be set, depending on the case and on the level of detail desired).

(4) Build the database, composed of the weights (Pi) assigned to the factors (for instance, to distinguish them by importance) and of the degrees of favorable evidence (or belief) (a) and contrary evidence (or disbelief) (b) assigned to each factor in each of the sections; the weights and degrees of evidence are assigned by conveniently selected experts (the database can also be built from stored statistical data obtained in previous, similar enterprises).

(5) Perform the field survey (or research) to find out in which section (condition) each factor lies.

(6) Obtain the degree of favorable evidence (a i,R) and the degree of contrary evidence (b i,R) for each chosen factor (Fi), with 1 ≤ i ≤ n, in the sections found in the survey (Si,jp), by applying the maximization (MAX operator) and minimization (MIN operator) techniques of logic Eτ.

(7) Obtain the degree of favorable evidence (a W) and the degree of contrary evidence (b W) of the barycenter of the points representing the selected factors in the lattice τ.

(8) Make the decision by applying the decision rule or the para-analyzer algorithm.
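To make the flow of the eight steps concrete, here is the toy end-to-end sketch referred to above. All data, names and the grouping choice in it are hypothetical; it merely mirrors the sequence LR → factors → sections → database → survey → MAX/MIN → barycenter → decision.

```python
# Toy end-to-end run of the PDM (hypothetical data; each expert is its own group,
# so intragroup MAX is the identity and MIN is taken across the experts).
LR = 0.60                              # step 1: level of requirement
factors = ["F1", "F2"]                 # step 2: factors of influence
weights = {"F1": 1.0, "F2": 1.0}       # step 4: weights Pi (all equal here)

# Steps 3-4: sections are implicit in the database keys; per (factor, section),
# the database holds one (a; b) annotation per expert.
database = {
    ("F1", "S1"): [(0.9, 0.1), (0.8, 0.2)],
    ("F2", "S2"): [(0.5, 0.5), (0.6, 0.4)],
}

survey = {"F1": "S1", "F2": "S2"}      # step 5: sections found in the field

# Step 6: resulting annotation per factor (MIN across one-expert groups).
resulting = {}
for f in factors:
    anns = database[(f, survey[f])]
    resulting[f] = (min(a for a, _ in anns), max(b for _, b in anns))

# Step 7: barycenter W as weighted averages of the resulting annotations.
tot = sum(weights.values())
a_w = sum(weights[f] * resulting[f][0] for f in factors) / tot
b_w = sum(weights[f] * resulting[f][1] for f in factors) / tot

# Step 8: decision rule on H_W = a_W - b_W.
h_w = a_w - b_w
verdict = "feasible" if h_w >= LR else "not feasible" if h_w <= -LR else "inconclusive"
print(a_w, b_w, h_w, verdict)          # ~0.65, 0.35, 0.30 -> inconclusive
```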

1.5.2 Detailed Steps of the Paraconsistent Decision Method (PDM)

To analyze the feasibility of a project for a decision, the planning should be assigned to a particular person (the business owner, an engineer, a consultant, etc.). This individual handles the data so as to translate it into the language of Eτ, enabling proper plotting for the analysis tools of this logic.

1.5.2.1 Setting Up the Level of Requirement

Firstly, one should set the level of requirement (LR) for the decision to be made. It depends on the degree of safety desired for the decision, the responsibility the decision entails, the size of the investment at stake, and the risks to human lives or to the environment, etc.

When the level of requirement (LR) is set, the decision regions are automatically defined and, consequently, so are the decision rule and the para-analyzer algorithm. Take, for instance, a situation where the level of requirement is set at 0.70. The decision rule is:

  • H ≥ 0.70 ⇒ favorable decision (enterprise is feasible);

  • H ≤ −0.70 ⇒ unfavorable decision (enterprise is not feasible);

  • −0.70 < H < 0.70 ⇒ inconclusive analysis.

See Fig. 3 for the para-analyzer algorithm.

Fig. 3 Decision rule and para-analyzer algorithm for a level of requirement equal to 0.70

1.5.2.2 Choice of the Factors of Influence

Secondly, one should find the factors that may influence the success (or failure) of the enterprise. This is done by consulting people who work in similar organizations, or experts on the subject matter or on projects of the same nature, or by reading specialized literature, etc.

Once the factors that may influence the success (or failure) of the enterprise are found, one should choose the n factors Fi (1 ≤ i ≤ n) that are most important and most influential, that is, those whose conditions most affect the feasibility of the enterprise. If the chosen factors affect the decision in different ways or differ in importance, such differences may be compensated for by assigning different weights to each factor.

1.5.2.3 Setting Up Sections for Each Factor

The next step is to set up the sections Si,j (1 ≤ j ≤ s), that translate the conditions in which each factor can be found. Then, depending on the level of refinement intended for the analysis, more (or fewer) sections can be assigned.

Should one choose to assign three sections, they would be:

S1: factor is in a condition favorable to the enterprise;

S2: factor is in a condition neutral to the enterprise;

S3: factor is in a condition unfavorable to the enterprise.

Should one choose to assign five sections, they would be:

S1: factor is in a condition very favorable to the enterprise;

S2: factor is in a condition favorable to the enterprise;

S3: factor is in a condition neutral to the enterprise;

S4: factor is in a condition unfavorable to the enterprise;

S5: factor is in a condition very unfavorable to the enterprise.

1.5.2.4 Building Up the Database

Constructing the database is a very important task; to do so, m experts Ek (1 ≤ k ≤ m) in the area (or a related area) must be selected. The selection should seek people with different backgrounds, so that the assigned values do not reflect a single line of thought.

Notice that the process displays great versatility, since it allows the choice of more (or fewer) factors of influence, the assignment of three or more sections to each factor, the use of a larger (or smaller) number of experts, and the use of weights to differentiate factors and/or experts. Although the process allows it, it is not advisable to use fewer than four experts, lest the outcome become too subjective.

Firstly, the experts should indicate whether the chosen factors differ in importance. If they do not, a weight equal to 1 (one) is assigned to all of them; if they do, each expert assigns to each factor a weight (qi,k) that reflects, in their view, the importance of that factor relative to the others for the decision to be made.

$${\text{q}}_{{{\text{i}},{\text{k}}}} \; = \; {\text{weight assigned by expert k to factor i}}.$$

In assigning the weights, some conditions may be imposed, for example that the weights be integers in the interval [1, 10]. Once all invited experts have assigned weights to all factors, the final weight Pi of each factor is adopted as the arithmetic mean of the weights assigned by the experts.

$$P_{i} = \frac{{\sum\limits_{k = 1}^{m} {q_{i,k} } }}{m}$$
(1)

Note that the experts themselves may be distinguished according to their background (practice, experience, knowledge) by assigning different weights (rk) to them. In this case, the final weight Pi of each factor is not an arithmetic mean but a weighted average.

$$P_{i} = \frac{{\sum\limits_{k = 1}^{m} {r_{k} q_{i,k} } }}{{\sum\limits_{k = 1}^{m} {r_{k} } }}$$
(2)
$${\text{r}}_{\text{k}} \; = \; {\text{weight assigned by the knowledge engineer to expert k}}.$$
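A small sketch of Eqs. (1) and (2), assuming the weights qi,k given by the experts to one factor are held in a list and the expert weights rk in a parallel list (the names are ours, chosen only for illustration):

```python
# Final weight Pi of a factor from the experts' weights q_{i,k} (Eqs. 1 and 2).

def factor_weight(q, r=None):
    """Arithmetic mean (Eq. 1) when r is None; weighted mean (Eq. 2) otherwise."""
    if r is None:
        return sum(q) / len(q)
    return sum(rk * qk for rk, qk in zip(r, q)) / sum(r)

q_i = [7, 9, 8, 6]                          # weights from four experts for factor i
print(factor_weight(q_i))                   # Eq. (1): 7.5
print(factor_weight(q_i, r=[2, 1, 1, 1]))   # Eq. (2): (14 + 9 + 8 + 6) / 5 = 7.4
```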

This is just one of the features showing the method's versatility and the variety of options it offers users.

The next step in building the database is to ask the experts to assign the degree of favorable evidence (a) and the degree of contrary evidence (b) to each factor in each of the conditions in which it may be found, conditions characterized by the defined sections.

Each ordered pair (a i,j,k; b i,j,k) formed by the values of the degrees of favorable and contrary evidence, assigned by an expert Ek to the factor Fi according to the condition defined by a section Sj, constitutes an annotation symbolized by μ i,j,k.

The database consists of the matrix of weights [Pi], a column matrix of n rows formed by the weights Pi of the factors, and of the matrix of annotations MA = [μ i,j,k] (bivalued annotations) with n × s rows and m columns, that is, a total of n × s × m elements. This last matrix is composed of all annotations that the m experts assigned to each of the n factors under the conditions defined by the s sections.

The matrix MA = [μ i,j,k] may be represented by [(a i,j,k; b i,j,k)], since each annotation μ i,j,k is an ordered pair of the form (a i,j,k; b i,j,k).

For example, in a situation with four experts (m = 4), five factors (n = 5) and three sections per factor (s = 3), the matrix of weights, MP, will be a column matrix of 5 rows (n = 5) and the matrix of annotations, MA, will be a matrix of 15 rows and 4 columns (n × s = 5 × 3 = 15 and m = 4), as indicated in Tables 2 and 3.

Table 2 Calculation table with indication of the bivalued annotations
Table 3 Calculation table with indication of the values of favorable (a) and contrary (b) evidence
1.5.2.5 Field Survey

Now that the decision-making device is complete, one can apply the method and reach the final decision, using information collected through research on the condition (defined by section) of each influence factor. The next step is thus to perform the field survey and find the actual condition of each influence factor, that is, to discover in which section Si,j each factor Fi lies.

Upon completion of the survey, one obtains a set of n resulting sections Si,jp, with 1 ≤ i ≤ n, one for each factor, translating the actual conditions of the factors (jp denotes the particular value of j, 1 ≤ jp ≤ s, obtained in the survey for factor Fi). These n resulting sections constitute a column matrix of n rows (Mpq). With this result it is possible to look up in the database the experts' opinions on the feasibility of the enterprise under those conditions.

Therefore, from the database one can extract another matrix, a subset of MA, named the matrix of surveyed data MDpq = [λ i,k], with n rows and m columns, formed from rows of MA.
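In code, the database and the survey lookup might be sketched as below; the indexing scheme and all names are illustrative assumptions, not the chapter's notation. MA holds one annotation per (factor, section, expert), and the survey selects one section row per factor, yielding MDpq.

```python
# Sketch: annotation matrix MA (n x s rows, m columns) and surveyed data MD_pq.
n, s, m = 5, 3, 4                     # factors, sections per factor, experts

# MA[(i, j)][k] = annotation (a_{i,j,k}; b_{i,j,k}) of expert k, factor i, section j.
MA = {(i, j): [(0.5, 0.5)] * m for i in range(n) for j in range(s)}

survey = [0, 2, 1, 0, 2]              # section j_p found in the field for each factor

MD_pq = [MA[(i, survey[i])] for i in range(n)]   # n rows, m columns
print(len(MD_pq), len(MD_pq[0]))                 # 5 4
```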

1.5.2.6 Calculation of the Resulting Annotations

At this point, a task needs to be carried out: divide the experts into groups according to the criteria of the engineer who directs the decision-making process.

When forming the groups of experts to apply the MAX and MIN operators in the study of real cases to assist decision making, some details must be observed.

The operator MAX should be applied in situations where the favorable opinion of just one expert is enough for the group result to be considered satisfactory. The operator MIN should be applied in situations where the opinions of two or more experts (or surveyed items) are all determinant, so that all of them must be favorable for the result of the analysis to be considered satisfactory.

The following example may clarify how the groups are formed. Imagine the four sectors of a soccer team: the goalkeeper (the player with number 1), the defense (four players, numbered 2–5), the mid-field (three players, numbered 6–8) and the offense (three players, numbered 9–11). This is what a soccer connoisseur would call the 4-3-3 formation.

Every coach knows that in order to build an excellent team he must have a great player in each sector, that is, a formidable goalkeeper, a great defender, a terrific mid-fielder and a tremendous attacker. Each sector (group) is therefore judged by its best player, suggesting that maximization be applied within each group.

Therefore, in the team's viability analysis, the groups are naturally formed. The goalkeeper, the only one in his sector, makes up one group (A); the four defenders make up another group (B), bearing in mind that one great player is enough to meet the requirements of the sector. Similarly, the three mid-fielders constitute the third group (C) and the three attackers the fourth group (D).

On the other hand, if all sectors are excellent, the team will be "excellent"; if one sector is not excellent but merely good, that sector defines the team status as "good", despite the three excellent sectors; if one is medium, the team will be "medium", and so on, suggesting the application of the minimization rule among the groups (sectors).

Based on the above, the distribution of the groups and the application of the MAX and MIN operators are defined as follows:

$$\begin{aligned} & \mathbf{MIN}\{[\text{Group A}], [\text{Group B}], [\text{Group C}], [\text{Group D}]\}\ \text{or} \\ & \mathbf{MIN}\{[1], \mathbf{MAX}[2,3,4,5], \mathbf{MAX}[6,7,8], \mathbf{MAX}[9,10,11]\}\ \text{or} \\ & \mathbf{MIN}\{[(a_A; b_A)], [(a_B; b_B)], [(a_C; b_C)], [(a_D; b_D)]\}, \end{aligned}$$

represented by the schematic in Fig. 4.

Fig. 4 Scheme of application of the MAX and MIN operators

It should be noted that the goalkeeper’s influence is very high because he is the only one responsible for the result in group A.

The application of these operators provides a way to determine the values of favorable evidence (a i,R) and contrary evidence (b i,R) resulting for each factor Fi (1 ≤ i ≤ n) in the section Si,jp found in the survey.

Suppose that the m experts are distributed among p groups Gh, with 1 ≤ h ≤ p, each with gh experts, so that g1 + g2 + ⋯ + gp = m.

Thus, the group Gh will be composed of the following gh experts: E1h, E2h,…, Eghh. Then, the application of the rule of maximizing within the group Gh (intra-group) can be summarized as follows:

$$\begin{aligned} & \mathbf{MAX}[(E_{1h}), (E_{2h}), \ldots, (E_{g_h h})]\ \text{or} \\ & \mathbf{MAX}[(a_{i,1h}; b_{i,1h}), (a_{i,2h}; b_{i,2h}), \ldots, (a_{i,g_h h}; b_{i,g_h h})] \end{aligned}$$

The result of the maximization is the ordered pair (a i,h; b i,h), in which

$$a_{i,h} = \max\{a_{i,1h}, a_{i,2h}, \ldots, a_{i,g_h h}\}\quad\text{and}\quad b_{i,h} = \min\{b_{i,1h}, b_{i,2h}, \ldots, b_{i,g_h h}\}$$

Since there are n factors, n such ordered pairs are obtained, resulting, for the group Gh, in the matrix MGh = [(a i,h; b i,h)] with n rows (1 ≤ i ≤ n) and one column. Since there are p groups, p similar column matrices are obtained.

Returning to the example of n = 5 factors, s = 3 sections and m = 4 specialists, and assuming the four experts are distributed in two groups (p = 2), the first, G1, formed by specialists E1 and E4 and the second, G2, by specialists E2 and E3, the application of the maximization rule would be as follows:

$$\begin{aligned} & \text{Within group } G_1{:}\ \mathbf{MAX}[(E_1), (E_4)]; \\ & \text{Within group } G_2{:}\ \mathbf{MAX}[(E_2), (E_3)]\ \text{or} \\ & \mathbf{MAX}[(a_{i,1}; b_{i,1}), (a_{i,4}; b_{i,4})],\ \text{giving } (a_{i,g1}; b_{i,g1}) \text{ for group } G_1,\ \text{and} \\ & \mathbf{MAX}[(a_{i,2}; b_{i,2}), (a_{i,3}; b_{i,3})],\ \text{giving } (a_{i,g2}; b_{i,g2}) \text{ for group } G_2,\ \text{such that} \\ & a_{i,g1} = \max\{a_{i,1}, a_{i,4}\};\quad b_{i,g1} = \min\{b_{i,1}, b_{i,4}\}\ \text{and} \\ & a_{i,g2} = \max\{a_{i,2}, a_{i,3}\};\quad b_{i,g2} = \min\{b_{i,2}, b_{i,3}\}. \end{aligned}$$

Therefore, p = 2 column matrices are obtained with n = 5 rows as a result of the application of the maximization rule within groups G1 and G2 (intra-groups). They are:

$$M_{G1} = [(a_{i,g1}; b_{i,g1})] = [\rho_{i,g1}]\quad\text{and}\quad M_{G2} = [(a_{i,g2}; b_{i,g2})] = [\rho_{i,g2}],$$

and can be represented in another way as in Tables 2 and 3.

Once the maximization rule (MAX operator) has been applied within the groups (intra-group), the next step is the application of the minimization rule (MIN operator) between the groups (inter-group), as follows:

$$\begin{aligned} & \mathbf{MIN}\{[G_1], [G_2], \ldots, [G_h], \ldots, [G_p]\}\ \text{or} \\ & \mathbf{MIN}\{(a_{i,g1}; b_{i,g1}), (a_{i,g2}; b_{i,g2}), \ldots, (a_{i,gh}; b_{i,gh}), \ldots, (a_{i,gp}; b_{i,gp})\}, \end{aligned}$$

hence obtaining for each factor Fi the resulting annotation (a i,R; b i,R), in which

$$\begin{aligned} a_{i,R} & = \min\{a_{i,g1}, a_{i,g2}, \ldots, a_{i,gh}, \ldots, a_{i,gp}\}\ \text{and} \\ b_{i,R} & = \max\{b_{i,g1}, b_{i,g2}, \ldots, b_{i,gh}, \ldots, b_{i,gp}\}. \end{aligned}$$

Since there are n factors, these results constitute a column matrix with n rows, called the resulting matrix MR = [(a i,R; b i,R)] = [ω i,R].

Going back to the example of n = 5 factors, s = 3 sections and m = 4 experts, the application of the minimization rule would be reduced to MIN{[G1], [G2]}.

$$\begin{aligned} & a_{1,R} = \min\{a_{1,g1}, a_{1,g2}\}\quad\text{and}\quad b_{1,R} = \max\{b_{1,g1}, b_{1,g2}\}; \\ & a_{2,R} = \min\{a_{2,g1}, a_{2,g2}\}\quad\text{and}\quad b_{2,R} = \max\{b_{2,g1}, b_{2,g2}\}; \\ & a_{3,R} = \min\{a_{3,g1}, a_{3,g2}\}\quad\text{and}\quad b_{3,R} = \max\{b_{3,g1}, b_{3,g2}\}; \\ & a_{4,R} = \min\{a_{4,g1}, a_{4,g2}\}\quad\text{and}\quad b_{4,R} = \max\{b_{4,g1}, b_{4,g2}\}; \\ & a_{5,R} = \min\{a_{5,g1}, a_{5,g2}\}\quad\text{and}\quad b_{5,R} = \max\{b_{5,g1}, b_{5,g2}\}. \end{aligned}$$

The resulting column matrix (MR) of 5 rows, along with the previous ones, is represented in Tables 2 and 3.

The application of the rules of maximization (MAX) and minimization (MIN) to the example in analysis can be summarized as follows:

$$\mathbf{MIN}\{\mathbf{MAX}[(E_1), (E_4)],\ \mathbf{MAX}[(E_2), (E_3)]\}\quad\text{or}\quad\mathbf{MIN}\{[G_1], [G_2]\}.$$
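The intragroup/intergroup computation just described can be condensed into one function. The sketch below follows the running example (m = 4 experts, groups G1 = {E1, E4} and G2 = {E2, E3}); the row of opinions is made-up placeholder data.

```python
# Resulting annotation (a_{i,R}; b_{i,R}): MAX within groups, MIN between groups.
groups = [[0, 3], [1, 2]]   # G1 = {E1, E4}, G2 = {E2, E3}, as 0-based expert indices

def resulting_row(row):
    """row = [(a_{i,k}, b_{i,k}) for each expert k] -> (a_{i,R}, b_{i,R})."""
    per_group = [
        (max(row[k][0] for k in g), min(row[k][1] for k in g))   # intragroup MAX
        for g in groups
    ]
    return (min(a for a, _ in per_group), max(b for _, b in per_group))  # intergroup MIN

row_i = [(0.7, 0.3), (0.9, 0.2), (0.4, 0.6), (0.8, 0.1)]  # opinions of E1..E4 on factor i
print(resulting_row(row_i))   # (0.8, 0.2)
```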

In applications, some of the matrices above (the matrix of weights [Pi], of surveyed sections Mpq, of surveyed data MDpq, the group matrices MGh, and the resulting matrix MR) are displayed as columns of the calculation table, in the format of Tables 2 and 3, which consider a situation with four experts (m = 4), five factors (n = 5) and three sections per factor (s = 3).

The values of resulting favorable evidence (a i,R) and contrary evidence (b i,R) obtained for all factors help determine the influence of each factor on the feasibility of the enterprise.

1.5.2.7 Determining the Barycenter

Usually, there is not much interest in the influence of each factor separately. It is crucial, however, to know the combined influence of all factors on the feasibility of the enterprise, since it leads to the final decision.

The combined influence of the factors is determined by analyzing the barycenter (W) of the points representing them in the Cartesian plane (in the lattice τ). To determine the barycenter, one calculates its coordinates, which are the degrees of favorable (a W) and contrary (b W) evidence. The degree of favorable evidence of the barycenter (a W) is the weighted average of the resulting degrees of favorable evidence (a i,R) of all factors, taking as coefficients the weights (Pi) assigned by the experts to the factors. The degree of contrary evidence of the barycenter (b W) is calculated in like manner.

$$a_{W} = \frac{{\sum\limits_{i = 1}^{n} {P_{i} a_{i,R} } }}{{\sum\limits_{i = 1}^{n} {P_{i} } }}\quad \quad \quad \quad b_{W} = \frac{{\sum\limits_{i = 1}^{n} {P_{i} b_{i,R} } }}{{\sum\limits_{i = 1}^{n} {P_{i} } }}$$
(3)

If all factors have equal weights (Pi), the weighted averages above become arithmetic means, and the barycenter of the points representing the factors becomes their geometric center. In this case Eq. (3) becomes:

$$a_{W} = \frac{{\sum\limits_{i = 1}^{n} {a_{i,R} } }}{n}\quad \quad \quad \quad b_{W} = \frac{{\sum\limits_{i = 1}^{n} {b_{i,R}^{{}} } }}{n}$$
(4)
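Equations (3) and (4) are plain weighted (or arithmetic) averages of the resulting annotations; here is a minimal sketch, with hypothetical names:

```python
# Barycenter W of the resulting annotations (Eq. 3; Eq. 4 when weights are equal).

def barycenter(annotations, weights=None):
    w = weights if weights is not None else [1.0] * len(annotations)
    tot = sum(w)
    a_w = sum(wi * a for wi, (a, _) in zip(w, annotations)) / tot
    b_w = sum(wi * b for wi, (_, b) in zip(w, annotations)) / tot
    return a_w, b_w

pts = [(0.8, 0.2), (0.6, 0.3), (0.9, 0.1)]
print(barycenter(pts))                      # Eq. (4): arithmetic means
print(barycenter(pts, weights=[2, 1, 1]))   # Eq. (3): weighted means
```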
1.5.2.8 Decision-Making

Once the favorable (a W) and contrary (b W) evidence values of the barycenter are determined, the final decision can be made using the para-analyzer algorithm.

To do so, just plot the ordered pair (a W; b W) in the Cartesian plane and find in which region of the lattice τ the barycenter W lies. If it lies in the region of truth, the decision is favorable, i.e., the analysis implies that the enterprise is feasible. If it lies in the region of falsity, the decision is unfavorable, i.e., the analysis implies that the enterprise is not feasible. If it lies in any other region of the lattice τ, the analysis is inconclusive, and nothing is stated about the feasibility of the enterprise.

Another way of reaching a final decision is the application of the decision rule. In this case, just calculate the degree of certainty of the barycenter (HW = a W – b W) and apply the decision rule. If HW ≥ LR, then the decision is favorable and the implementation of the enterprise is recommended (feasible); if HW ≤ –LR, then the decision is unfavorable and the implementation of the enterprise is not recommended (not feasible) and, if –LR < HW < LR, the analysis is inconclusive.

It is important to note, therefore, that the degree of certainty of the barycenter (HW) is the well-determined final number that enables the decision making; the entire process leads to this very important number.

All the operations described above can be carried out with the aid of a computer program, such as a Microsoft Excel spreadsheet. For simplicity, this program will be referred to as the Calculation Program (CP).

To illustrate the application of the PDM, an example is presented in the next section.

2 PDM in Analysis of Viability

As an example, we apply the PDM to a problem that marketing professionals often face: the study involved in launching a new product [20]. A great number of factors influence such a decision.

Basically, the idea is to isolate the factors of major influence on these decisions, establish five sections for each one and, with the assistance of specialists, obtain annotations for each factor in each section, attributing a degree of favorable evidence (a) and a degree of contrary evidence (b) to all of them [18, 19].

After that, applying the MAX and MIN operators, one obtains the resulting degrees of favorable evidence (a i,R) and contrary evidence (b i,R) for each factor. Plotted on the Cartesian Unit Square (CUS), these show how each factor influences viability. This is the para-analyzer algorithm.

For the final decision making, it is necessary to know the combined influence of all analyzed factors. This may be determined by the barycenter W of the points that represent each factor separately.

The degree of favorable evidence (a W) of W is the arithmetic mean of the resulting degrees of favorable evidence for all the factors, and the degree of contrary evidence (b W) is the arithmetic mean of the resulting degrees of contrary evidence for all the factors. With such values one can calculate the degree of certainty of W and apply the rule of decision.

2.1 Choosing Factors of Influence and Establishing Sections

We have come up with ten factors (F01 to F10) that may influence the viability of launching a new product.

For each of these factors five sections were established (S1 to S5), so that S1 represents a very favorable situation, S2 represents a favorable situation, S3 represents a neutral situation, S4 represents an unfavorable situation and S5 is a very unfavorable situation in terms of launching a new product. After that, specialists (E1 to E4) will be required to attribute the degree of favorable evidence (a) and the degree of contrary evidence (b) in relation to the viability of the product in each of the sections for all of the factors. Their results will constitute the database.

The chosen factors and the established sections are:

F01: necessity and utility of the product—translated by the percentage of the population that uses the product—S1: more than 90 %; S2: between 70 and 90 %; S3: between 30 and 70 %; S4: between 10 and 30 %; S5: less than 10 %.

F02: number of features or functions of the product—measured by comparison with the average number M of features or functions of similar market products—S1: more than 1.5 M; S2: between 1.2 and 1.5 M; S3: between 0.8 and 1.2 M; S4: between 0.5 and 0.8 M; S5: less than 0.5 M.

F03: competition—translated by the quality and quantity of competitors in the same region—S1: very little; S2: little; S3: average; S4: strong; S5: very strong.

F04: client potential—translated by the size and purchasing power of the region's population—S1: very big; S2: big; S3: average; S4: small; S5: very small.

F05: acceptance of product or similar product existing in the market—translated by the percentage of the population using the product—S1: more than 90 %; S2: between 70 and 90 %; S3: between 30 and 70 %; S4: between 10 and 30 %; S5: less than 10 %.

F06: product price in the market—translated in relation to the average market price P of the product (or a similar product)—S1: less than 70 %P; S2: between 70 and 90 %P; S3: between 90 and 110 %P; S4: between 110 and 130 %P; S5: more than 130 %P.

F07: product estimated cost—translated in relation to the market average price P (or that of a similar product)—S1: less than 20 %P; S2: between 20 and 40 %P; S3: between 40 and 60 %P; S4: between 60 and 80 %P; S5: more than 80 %P.

F08: product life cycle (C)—measured in a time unit T—S1: more than 10 T; S2: between 8 and 10 T; S3: between 4 and 8 T; S4: between 2 and 4 T; S5: less than 2 T.

F09: Deadline for project development and product implementation—measured in terms of the life cycle (C)—S1: less than 10 %C; S2: between 10 and 30 %C; S3: between 30 and 70 %C; S4: between 70 and 90 %C; S5: more than 90 %C.

F10: Investment for project development and product implementation—Measured in terms of net result (RES) expected in the product life cycle—S1: less than 20 %RES; S2: between 20 and 40 %RES; S3: between 40 and 60 %RES; S4: between 60 and 80 %RES; S5: more than 80 %RES.

2.2 Database Construction

Below is a hypothetical set of opinions obtained from four specialists (E1: marketing professional; E2: economist; E3: production engineer; E4: business manager). They are given in Table 4.

Table 4 Database (degrees of favorable and contrary evidence attributed by the specialists in each section for all factors)

2.3 Working Out the PDM

Once the database is built, we will proceed to analyze the viability of product X in Region Y. To do so, we must conduct a survey in Region Y with respect to product X, so as to determine in which section each factor is encountered. The result of this survey can be summarized in columns 1 and 2 of Table 5.

Table 5 Surveyed sections, degrees of favorable and contrary evidence, application of operators MAX and MIN, calculation and analysis of results

This means that researchers must determine, for each factor Fi (1 ≤ i ≤ 10), in which section Sj (1 ≤ j ≤ 5) product X is found in Region Y. Column 2 of Table 5 is filled in with these values Sj. With these results we can extract from the database (Table 4) the specialists' opinions on the conditions of product X in Region Y, summarized in columns 3–10 of Table 5.

After that, we can apply the MAX and MIN operators of paraconsistent annotated evidential logic. For this application it is necessary to form the groups of specialists according to the judgment of the engineer. For example, with the given panel of specialists it is reasonable to have, in Group A, the marketing professional (E1) along with the economist (E2), and in Group B, the production engineer (E3) with the business manager (E4). Therefore, to apply the maximization (MAX) and minimization (MIN) rules to the specialists' opinions, we do the following:

$$\begin{aligned} & [(E_1)\ \mathbf{MAX}\ (E_2)]\ \mathbf{MIN}\ [(E_3)\ \mathbf{MAX}\ (E_4)]\ \text{or} \\ & \mathbf{MIN}\{\mathbf{MAX}[(E_1), (E_2)],\ \mathbf{MAX}[(E_3), (E_4)]\} \end{aligned}$$

In Table 5, the results of applying the MAX operator to groups A and B (intra-group) are in columns 11–14. The result of applying the MIN operator between groups A and B (inter-group) is shown in columns 15 and 16.

We now analyze the final results with the para-analyzer algorithm. To do so, we plot them together in the Cartesian plane (Fig. 5), taking as truth and falsity boundary lines the straight lines determined by |H| = 0.60, and as inconsistency and indetermination boundaries the straight lines determined by |G| = 0.60. This means we adopt 0.60 as the level of requirement for decision making, that is, we make decisions with at least 0.60, or 60 %, of certainty. The decision rule with this value is as follows:

Fig. 5 Analysis of the results by the para-analyzer algorithm

$$\begin{aligned} & H \ge 0.60 \Rightarrow \text{viable}; \\ & H \le -0.60 \Rightarrow \text{unviable; and} \\ & -0.60 < H < 0.60 \Rightarrow \text{inconclusive}. \end{aligned}$$

This analysis determines the influence of each factor (F01 to F10) on the viability of launching product X in Region Y, as well as the combined influence of all factors through the barycenter W.

In the present case study, the viability analysis of product X in Region Y, the analysis of the points obtained in the CUS shows that four factors (F02, F03, F05 and F09) recommend launching the product at the level of requirement 0.60, since they belong to the truth region (viability); two factors (F01 and F06) advise against launching it, since they belong to the falsity region (unviability).

The other factors fell in inconclusive regions, indicating neither viability nor unviability. F04 fell in the semi-truth region tending to inconsistency; F10 fell in the semi-truth region tending to para-completeness (indetermination); and F07 and F08 are in the semi-falsity region tending to para-completeness (indetermination).

The combined influence of all factors on the viability of launching product X in Region Y is summarized by the point W, the barycenter of the ten points, which translates the combined influence of the ten analyzed factors. Since W lies in the semi-truth region tending to inconsistency, the analysis result is inconclusive: it does not recommend launching product X in Region Y, but it does not advise against it either. It simply suggests that new surveys be conducted in an attempt to strengthen the evidence.

The analysis of each factor's influence on the product's viability, performed with the para-analyzer algorithm, can also be done numerically by calculating the resulting degree of certainty Hi = a i,R − b i,R for each factor and applying the decision rule (columns 17–19 of Table 5). The combined influence of all factors can be analyzed likewise; one only needs to calculate the degree of certainty of the barycenter W, HW = a W − b W.

Since HW = a W − b W = 0.211 and −0.60 < 0.211 < 0.60, the result is inconclusive, that is, it is not possible to affirm the viability of launching product X in Region Y, nor its unviability.

It is important to notice that once the survey is conducted, i.e., since column 2 of Table 5 has been filled out, all other operations translated by columns 3–19 can be automatically performed by a small computer program, based on Excel.

To perform a fidelity test of the method and exercise its application, we suggest that the reader conduct a viability analysis for launching a product X′ in a region Y′, assuming that in the field research all factors fell into section S1, in other words, that all factors are highly favorable to the launch. In this case a highly favorable viability analysis is evidently expected.

In fact, applying the PDM to this case (and this is the suggested exercise) gives a W = 0.93 and b W = 0.09, so that HW = a W − b W = 0.93 − 0.09 = 0.84. Since 0.84 ≥ 0.60, the decision rule affirms the viability of launching product X′ in region Y′ (Fig. 6).

Fig. 6 All factors are highly favorable

Conversely, if all factors are in section S5, the PDM gives a W = 0.15 and b W = 0.90 (please verify this result as an exercise), leading to HW = a W − b W = 0.15 − 0.90 = −0.75. Since −0.75 ≤ −0.60, the decision rule affirms the unviability of launching product X″ in region Y″ (Fig. 7).

Fig. 7 All factors are highly unfavorable

3 Paraconsistent Decision Method (PDM) and Statistical Decision Method (SDM): A Comparison [25, 26]

3.1 An Application of the Rule of Decision to Enable the Comparison

In order to do the comparison, the rule of decision (or para-analyzer algorithm) will be applied in a hypothetical case. To make it simple, we have picked enterprise Ω in which only ten factors (F01 to F10) have significant influence [20]. We will assume that the opinions of four specialists (Ek) have been collected and that in order to apply the rules of maximization (MAX) and minimization (MIN), they have been grouped as: Group A: (E1 + E2) and Group B: (E3 + E4).

Therefore, the application scheme of Operators MAX and MIN is [27]:

$$\begin{aligned} & [(E_1)\ \mathbf{MAX}\ (E_2)]\ \mathbf{MIN}\ [(E_3)\ \mathbf{MAX}\ (E_4)]\ \text{or} \\ & \mathbf{MIN}\{\mathbf{MAX}[(E_1), (E_2)],\ \mathbf{MAX}[(E_3), (E_4)]\} \end{aligned}$$

For decision making, a level of requirement equal to 0.70 was chosen. So the decision rule is:

$$\begin{aligned} & H \ge 0.70 \Rightarrow \text{favorable decision (viable enterprise)}; \\ & H \le -0.70 \Rightarrow \text{unfavorable decision (unviable enterprise)}; \\ & -0.70 < H < 0.70 \Rightarrow \text{inconclusive analysis}. \end{aligned}$$

Table 6 shows, in columns 2–9, the degrees of favorable and contrary evidence the specialists attributed to the factors; in columns 10–13, the results of applying the maximization rule (MAX) within the groups; in columns 14 and 15, the degrees of favorable evidence (a R) and contrary evidence (b R) resulting from applying the minimization rule (MIN) between the groups; and in columns 16–18, the analysis of the results.

Table 6 PDM calculations table

3.2 Analysis of Results

There are eight factors in the region of truth and two in the region of semi-truth, as can be seen in Table 6 and in Fig. 8.

Fig. 8 Analysis of the results by the para-analyzer algorithm

Since (HW = a Wb W = 0.775) and (0.775 ≥ 0.70), the result is favorable, that is, it is possible to affirm the viability of the enterprise.

3.3 A Short Revision of the Statistical Decision Method (SDM)

Statistical decisions are decisions made about a specific population on the basis of data gathered from its sample(s): for example, determining whether a coin is fair, or comparing the efficiency of one drug with another in curing an illness.

Statistical hypotheses about the population in question are formulated in an attempt to arrive at a decision. They are statements about the probability distributions of the population. Usually, a statistical hypothesis is formulated with the intention that it be rejected [42].

So, to discover whether a coin is faulty, one formulates the hypothesis that it is not faulty, i.e., that the probability of obtaining one of the faces (heads, for example) is p = 0.5. This is called the null hypothesis (H0: the coin is fair). Any hypothesis different from the null is called an alternative hypothesis (H1: p ≠ 0.5, the coin is not fair) [7].

In practice, H0 is provisionally accepted and, based on a random sample together with probability theory, one determines whether the sampled results differ greatly from those expected, that is, whether the observed difference is significant enough to reject H0 and thereby accept H1.

For instance, in tossing a coin 50 times, about 25 heads are expected; if, however, 40 heads are obtained, there is an inclination to reject the hypothesis H0 that the coin is fair (and to accept the alternative hypothesis H1). The process that allows one to decide on rejecting a hypothesis by determining whether the sampled data differ significantly from the expected data is called hypothesis testing or a test of significance [5].
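As a numerical illustration of the coin example, a brief sketch using the normal approximation to the binomial:

```python
import math

n, p = 50, 0.5                     # tosses; probability of heads under H0
observed = 40                      # heads actually obtained

mean = n * p                               # expected heads: 25
sigma = math.sqrt(n * p * (1 - p))         # binomial std deviation: ~3.54
z0 = (observed - mean) / sigma             # standard score: ~4.24

z_c = 1.96                                 # two-tailed critical value at 5 %
print(f"z0 = {z0:.2f}:", "reject H0" if abs(z0) >= z_c else "accept H0")
# z0 = 4.24: reject H0 (the coin is very likely not fair)
```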

If H0 is rejected when it should be accepted, a type I error has occurred; if it is accepted when it should have been rejected, the error is of type II [42]. In both situations there is an error of decision. The use of larger samples, which is not always possible, helps reduce the chance of these errors occurring.

In testing an established hypothesis H0, the maximum probability of committing a type I error is called the level of significance, often represented by α, whose most common values are 0.05 (or 5 %) and 0.01 (or 1 %).

So, if α is set at 5 % when planning the hypothesis test, there is a 5 in 100 chance that H0 will be rejected when in fact it should be accepted; that is, there is 95 % confidence of making the right decision, and one says that H0 is rejected at the 0.05 (or 5 %) level of significance. In the example of the coin, one would say that there is evidence that the coin is not fair at the 0.05 (or 5 %) level of significance.

If a variable X has a normal distribution with mean μX and standard deviation σX, then the reduced variable (or standard score) z = (X − μX)/σX has a normal distribution with mean 0 and standard deviation 1 [31, 33, 42].

For the 5 % level of significance, the critical values of z (z_c), which separate the region of acceptance of H0 from the region of rejection of H0, are −1.96 and +1.96 (Fig. 9). So, if the value X_0 of the variable X observed in the sample leads to a score z_0 less than or equal to −1.96, or greater than or equal to +1.96, then H0 is rejected at the 5 % level of significance. In this case, one says that z_0 differs significantly enough from 0 (the mean of z) to allow rejection of H0 at the 5 % level of significance. Therefore, for this level of significance, the statistical rule of decision is:

Fig. 9 Regions of acceptance and rejection in a normal curve

To accept H0: if −1.96 < z_0 < +1.96 or, more generically,

$$\text{if}\; -\mathbf{z_c} < \mathbf{z_0} < +\mathbf{z_c};$$

To reject H0: if z_0 ≤ −1.96 or z_0 ≥ +1.96 or, more generically,

$$\text{if}\;\mathbf{z_0} \le -\mathbf{z_c} \;\;\mathbf{or}\;\;\mathbf{z_0} \ge +\mathbf{z_c}.$$

At the 1 % level of significance, the critical values of z are −2.58 and +2.58 (for two-tailed tests).
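Both pairs of critical values can be recovered from the inverse of the standard normal distribution, for instance with scipy (a brief sketch):

```python
from scipy.stats import norm

for alpha in (0.05, 0.01):
    z_c = norm.ppf(1 - alpha / 2)          # two-tailed critical value
    print(f"alpha = {alpha:.2f} -> z_c = {z_c:.2f}")
# alpha = 0.05 -> z_c = 1.96
# alpha = 0.01 -> z_c = 2.58
```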

3.4 The PDM and Normal Distributions

In order to compare the Paraconsistent Decision Method (PDM) with the Statistical Decision Method (SDM), a few considerations concerning the PDM are made below.

  (a)

    The variation interval of the degree of certainty (−1 ≤ H ≤ 1) has been divided into classes of amplitude a = 0.1, with extremes at whole decimal values of H (0.0 × 10⁻¹, ±1.0 × 10⁻¹, ±2.0 × 10⁻¹, …) (column 2, Table 7). Therefore, the midpoints of the classes are ±0.5 × 10⁻¹ = ±0.05, ±1.5 × 10⁻¹ = ±0.15, ±2.5 × 10⁻¹ = ±0.25, …, ±9.5 × 10⁻¹ = ±0.95 (column 3, Table 7). To each class a value of the level of requirement (K) is associated (column 1, Table 7).

    Table 7 Classes, observed (PDM) and expected (normal) frequencies, χ² (chi-square) calculation, and accumulated areas under the PDM and NAC curves, with standard deviation = 0.444
  (b)

    If H = M is the midpoint of a class, then its extremes are M − 0.05 and M + 0.05 (column 2, Table 7). This class is thus defined by the interval K = M − 0.05 ≤ H < M + 0.05, for H ≥ 0, or by M − 0.05 < H ≤ M + 0.05 = K, for H < 0, where K is the corresponding level of requirement (Fig. 10).

    Fig. 10 Classes of the degree of certainty, emphasizing two of them: levels of requirement 0.5 (PQRS) and 0.4 (EFGH)

  (c)

    For each class, the area of the demarcated region of the CUS was calculated (Fig. 10). It was called the class area, and its value A_M = 0.1 × (1 − |M|) was obtained.

  (d)

    Since the CUS area is equal to 1, the frequency of the class defined by the value H = M (center of the class) is equal to the class area (A_M) divided by its amplitude (a).

Therefore: f(H = M) = A_M/a = 0.1 × (1 − |M|)/0.1 = 1 − |M|.

  (e)

    It is thus possible to calculate the areas (A_M) and frequencies (f_M) of all classes (columns 4 and 5, Table 7) and to produce the corresponding frequency diagram (Fig. 11).

    Fig. 11 Distribution of frequencies obtained by the PDM

  (f)

    A level of requirement LR = K is adopted for decision making by the PDM. This implies that the decision will be favorable if H_W ≥ K and unfavorable if H_W ≤ −K, H_W being the degree of certainty of the barycenter.

The decision will be favorable if the barycenter W belongs to the CUS region defined by the condition H ≥ K, that is, if it belongs to the right tail of the curve formed by the classes with midpoints M such that M ≥ K + 0.05 (i.e., |M| ≥ K + 0.05).

The decision will be unfavorable if the barycenter W belongs to the CUS region defined by the condition H ≤ −K, that is, if it belongs to the left tail of the curve formed by the classes with midpoints M such that M ≤ −K − 0.05 (again |M| ≥ K + 0.05).

Therefore, if the barycenter W belongs to one of the tails (right or left) of the distribution of H frequencies defined by the level of requirement LR = K, this means that the degree of certainty of the barycenter differs from zero significantly enough for a decision (favorable or unfavorable) to be made; a numerical sketch is given below.
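Items (a)–(f) say, in effect, that H follows the triangular distribution f(H) = 1 − |H| on [−1, 1]. The sketch below reproduces the class areas and computes the tail area beyond a level of requirement K, which for this triangle reduces to (1 − K)²/2:

```python
# The degree of certainty H as a triangular distribution f(H) = 1 - |H|.

def class_area(m, a=0.1):
    """Area A_M of the class centered at H = M with amplitude a (item (c))."""
    return a * (1 - abs(m))

def tail_area(k):
    """Area of the CUS triangle beyond H = K: integral of (1 - h) from K to 1."""
    return (1 - k) ** 2 / 2

midpoints = [round(-0.95 + 0.1 * i, 2) for i in range(20)]   # -0.95 .. +0.95
print(f"sum of class areas = {sum(class_area(m) for m in midpoints):.3f}")
# sum of class areas = 1.000, as used in item (d)

for k in (0.60, 0.70, 0.80):
    print(f"K = {k:.2f} -> tail area = {tail_area(k):.3f}")
# K = 0.60 -> 0.080;  K = 0.70 -> 0.045;  K = 0.80 -> 0.020  (cf. Table 8)
```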

In order to perform the comparison with the statistical decision method (SDM), we looked for the normal distribution with mean equal to zero (since the distribution of H also has mean zero) that best adheres to the frequency distribution of H (of the PDM).

To measure this adherence, a χ² (chi-square) test was applied. The frequency of each class of the degree of certainty (f_O = f_H) (column 5, Table 7, Fig. 11) was taken as the observed frequency, while the frequency of the same class obtained from the normal curve (f_E = f_N) (column 6, Table 7, Fig. 12) was taken as the expected frequency. The latter was obtained with the help of an Excel spreadsheet, using the function NORM.DIST(x; mean; standard_dev; FALSE).

Fig. 12 Distribution of frequencies obtained by the NAC curve (mean zero and standard deviation 0.444)

We found that the best adherence of a normal distribution of mean zero to the distribution of the degree of certainty of the PDM occurs for a standard deviation equal to 0.444, for which the chi-square is minimal and equal to χ² = 0.07412 (Table 8, result from column 7, Table 7, Figs. 13 and 14). This distribution was called the normal adherent curve (NAC).

Table 8 Comparison between the tail areas of the PDM and NAC distributions and variation of the χ² value for some values of the standard deviation
Fig. 13 PDM and NAC distribution curves

Fig. 14 Accumulated areas of the PDM and NAC distribution curves
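The adherence search can be reproduced in outline as follows (a sketch; the exact χ² value and the location of the minimum depend on the class grid and the conventions used in Table 7):

```python
import numpy as np
from scipy.stats import norm

midpoints = np.round(np.arange(-0.95, 1.0, 0.1), 2)   # class centers
f_obs = 1 - np.abs(midpoints)                         # PDM (observed) frequencies

def chi2(sigma):
    """Chi-square distance between the PDM frequencies and a N(0, sigma) density."""
    f_exp = norm.pdf(midpoints, loc=0.0, scale=sigma)  # expected frequencies
    return np.sum((f_obs - f_exp) ** 2 / f_exp)

sigmas = np.arange(0.40, 0.50, 0.001)                  # candidate deviations
best = min(sigmas, key=chi2)
print(f"best sigma = {best:.3f}, chi2 = {chi2(best):.5f}")
# expected to land near sigma = 0.444, the NAC of Table 8
```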

Under these conditions, a decision by the PDM with a level of requirement equal to K (favorable if H_W ≥ K, or unfavorable if H_W ≤ −K) corresponds to a decision by the SDM with a level of significance equal to the area under the NAC above K (favorable decision) or below −K (unfavorable decision) (see Table 8, Fig. 15).

Fig. 15 Tail of the normal curve with the best adherence to the PDM curve (NAC)

3.5 Comparison of PDM with SDM

For the normal curve, the area of each class has been calculated as the product of its frequency (column 6, Table 7) and the class amplitude (a = 0.1). The accumulated distribution areas of the PDM and normal curves (columns 8 and 9, Table 7) were obtained by the accumulated sum of the class areas. In this calculation, a correction was made for the normal curve corresponding to the area under the curve up to the value −1.0.

The tail area (α) of the normal curve is called the level of significance and represents the uncertainty with which one can accept that the result obtained (H_W) is sufficiently different from zero (the mean of H) to say that the enterprise is viable (favorable decision) or unviable (unfavorable decision).

Similarly, the tail area of the PDM curve, which will be called the level of uncertainty (β), represents the area of the CUS region (here a triangle) for which H ≥ K or H ≤ −K; for the triangular distribution of H, this area equals (1 − K)²/2. So, when we state that a decision has been made by the PDM with a level of requirement K, this means that the degree of certainty of the barycenter is, in absolute value, greater than or equal to the level of requirement, or that the decision carries a level of uncertainty β.

As seen before, in order to make a decision with the PDM, it is necessary to calculate the degree of certainty of the barycenter (H_W) and compare it with the level of requirement. In the example, H_W = 0.775 is compared with the level of requirement LR = 0.70. Since H_W ≥ LR, the decision is favorable (the enterprise is viable) at the level of requirement 0.70; that is, it is possible to say that the enterprise is viable with a maximum level of uncertainty β = 4.50 % (see Table 8).

In order to make a decision using the statistical process, it is necessary to calculate:

  (a)

    the critical value of the standard variable of the normal adherent curve NAC (*z_c) that corresponds to the chosen level of requirement (0.70, in the example). To do so, one checks how many standard deviations of the NAC (0.444) the level of requirement lies above the mean (zero), as follows:

    $${}^{*}z_{c} = (0.70 - 0)/0.444 = 1.58;$$
  (b)

    the observed value of the standard variable of the NAC (*z_o) that corresponds to the degree of certainty of the barycenter (0.775, in the example). To do so, one checks how many standard deviations of the NAC (0.444) the degree of certainty of the barycenter lies above the mean (zero), as follows:

    $${}^{*}z_{o} = (0.775 - 0)/0.444 = 1.75;$$
  (c)

    Since *z_o ≥ *z_c, the value H_W is significantly larger than the mean zero, leading to the conclusion that the analysis is favorable (the enterprise is viable) at a level of significance of 5.71 % (see Table 8).

Note: Table 8 shows that if the level of requirement adopted by the PDM is 0.60, then the level of uncertainty of the PDM is 8.00 % and the level of significance of the SDM is 8.78 %; similarly, if it is 0.80, these values are 2.00 % and 3.55 %, respectively.
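The correspondence stated in the note can be checked numerically. The sketch below contrasts the PDM tail (1 − K)²/2 with the NAC tail obtained from the normal survival function; the small differences from Table 8 presumably come from the discrete class sums used there:

```python
from scipy.stats import norm

SIGMA = 0.444                          # standard deviation of the NAC

def pdm_uncertainty(k):
    """PDM level of uncertainty (beta): tail of the triangular distribution."""
    return (1 - k) ** 2 / 2

def sdm_significance(k):
    """SDM level of significance (alpha): NAC tail area above the requirement K."""
    return norm.sf(k / SIGMA)          # survival function of the standard normal

for k in (0.60, 0.70, 0.80):
    print(f"K = {k:.2f}: beta = {pdm_uncertainty(k):.2%}, "
          f"alpha = {sdm_significance(k):.2%}")
# K = 0.60: beta = 8.00%, alpha ~ 8.8%
# K = 0.70: beta = 4.50%, alpha ~ 5.7%  (cf. 5.71 % in the example)
# K = 0.80: beta = 2.00%, alpha ~ 3.6%
```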

3.6 Conclusions

During the development of the PDM and its comparison with the SDM, it was observed that they are similar in many respects. In some respects the PDM seems more advantageous, while in others the SDM is.

Since it uses techniques of paraconsistent annotated evidential logic Eτ, the PDM is a valuable and original tool in the decision-making process, capable of dealing with uncertain and contradictory data without trivializing and without collapsing. This feature is usually not present in classical decision processes, such as the SDM, which are based on classical logic.

The PDM yields results that indicate whether the survey shows the viability (truth) or unviability (falsity) of the analyzed enterprise, or even whether the result is inconclusive, thus recommending a further, more accurate analysis. The SDM serves this purpose with the same efficiency.

Furthermore, by judging the position of the points representing the factors of influence and of the barycenter in the lattice τ, the PDM indicates the level of contradiction displayed by the data for each factor, or for all of them taken together.

Therefore, going beyond the SDM, the PDM can state, for example, whether there is any contradiction among the data used and whether such contradiction is pronounced or not. It also indicates whether the contradiction constitutes inconsistency or para-completeness (lack of data). Hence, not only does it accept contradictory data, but it also points out the degree of contradiction of these data and, more importantly, allows such data to be manipulated and utilized despite being contradictory.

The PDM offers the important possibility of transforming qualitative analyses of balance sheets, investments, etc. into quantitative analyses, which are more accurate, more useful for professionals in those areas, and easier to handle in computational processes.

This is achievable because the PDM deals with degrees of evidence which, despite being objective numbers, translate subjective features: experts' opinions resulting from the experience, knowledge and sensitivity accumulated over the years. Such subjectivity, although offering the PDM more opportunities, can be seen as a sore point in comparison with the SDM, which uses purely objective data.

In the PDM, the opinions of the experts are collected once and then stored in a database that may be used in many decision-making instances. In this way, and without any additional cost, it is possible to draw on high-level experts and make their opinions last indefinitely.

Another advantage common to the PDM and the SDM is versatility. The PDM can be made more accurate and reliable in many ways, such as using a larger number of factors of influence, establishing more than three sections for each factor, increasing the level of requirement, or collecting the opinions of a larger number of experts to build the database.

One of the greatest advantages of the PDM over the SDM is that the former only compares degrees of evidence, without having to operate on them. This is crucial considering that the degrees of evidence are variables that reach only the ordinal level of measurement and therefore could not be used directly in the SDM, which requires variables at the interval or ratio level. However, as has been shown in item 13, applying the SDM to the PDM database can lead to significant and coherent results. A fuzzy decision method and its comparison with the statistical method can be found in [26, 38, 43, 44, 46].