
1 Introduction

Group Decision Making (GDM) is a process in which multiple decision makers (or experts) act collectively: they analyze a problem, evaluate options according to a set of criteria, and select a solution from a collection of alternatives [3]. Solving such problems is of paramount importance, as applications can be found in management science, operations research, and industrial and chemical engineering, among others (cf. [4, 8, 9, 10, 11, 18]).

As noted in [7], when organizations gather specialized or larger groups and the number of alternatives increases, unanimity may be difficult to attain, particularly in diversified groups. For this reason, flexible or milder benchmarks (definitions) of consensus and consistency have been employed. Consensus concerns group cooperation, since the alternative, option, or course of action to be attained should best represent the entire group. Consistency, on the other hand, is obtained when each expert's information, and consequently her/his judgments, is free of contradictions. Since inconsistencies may lead to incoherent results, individual consistency should be sought in order to make rational choices [5, 15]; it is therefore related to the management of human subjectivity, imprecision, hesitation, and uncertainty along the decision-making process.

For instance, in the Analytic Hierarchy Process (AHP), an MPR (Multiplicative Preference Relation), or pairwise comparison matrix, is composed of judgments between any two criteria or components, declared within a crisp range called Saaty's scale (\(SS\in [\frac{1}{9}, 9]\)). An MPR is also called a subjective judgment matrix and is adopted to express the preferences of the decision maker(s) (DM).

Thus, the AHP is a very common method for multi-criteria decision making and remains an easy and reliable approach, as many real applications have demonstrated. Nevertheless, dealing with experts' uncertainties is still an open problem.

In this paper, these problems are addressed and solved through the Hadamard product, used to measure the dissimilarity of two matrices, and an algorithm based on a Nonlinear Optimization Approach (NLOA). The main contribution is a pair of algorithms that provide reliable intervals for a group of DM who have proposed a set of possibly inconsistent MPRs. After some iterations, these algorithms return a set of reliable interval MPRs which are consistent within an arbitrary threshold. Thus, each DM can confidently pick an MPR, or the entire interval MPR, from her/his corresponding reliable interval MPR in order to re-express her/his final judgment. The main goal is to develop a system that, once a set of DM has proposed a set of MPRs, generates a set of reliable intervals. In this manner, DM can be more confident when re-expressing their judgments within these reliable intervals, despite the inherent uncertainty and imprecision due to incomplete information or evolving problem complexity.

The paper is organized as follows. In Sect. 2, some preliminaries are given to support the main methodologies and techniques described above. In Sect. 3, the methodology for obtaining reliable intervals from MPRs is introduced, and its main role in the GDM framework is highlighted. In Sect. 4, the GDM implementation is described in detail via some numerical examples. Finally, in Sect. 5, some concluding remarks and discussions about the main advantages of the methodology and future research recommendations are provided.

2 Preliminaries

In the following, some necessary concepts and properties are introduced to support our contribution. Further details can be found in [6, 12, 17].

Consider a GDM problem, and let \(D =\{d_1, d_2, \cdots , d_m\}\) be the set of DM and \(C=\{c_1,c_2,\cdots ,c_n\}\) be a finite set of alternatives, where \(c_i\) denotes the ith alternative. With an MPR, a DM provides judgments for every pair of alternatives which reflect her/his degree of preference of the first alternative over the second. Thus, an MPR, for instance \(A = (a_{ij})_{n\times n}\), is a positive reciprocal \(n\times n\) matrix, \(a_{ij} > 0\), such that \(a_{ji} = 1/a_{ij}\) for all \(i, j \in N=\{1,2,\cdots ,n\}\), and consequently \(a_{ii} = 1\) for all \(i\in N\). Note that \(a_{ij}\) belongs to Saaty's scale and is interpreted as the ratio of the preference intensity of alternative \(c_i\) to that of \(c_j\).

An MPR \(n\times n\) matrix is called a completely consistent matrix (cf. [12]) if

$$\begin{aligned} a_{ij} = a_{il}a_{lj}, \ \forall \ i, j, l \in N. \end{aligned}$$
(1)

Thus, a completely consistent matrix \(K=(k_{ij})_{n\times n}\) can be constructed from (1) as follows,

$$\begin{aligned} k_{ij} = \prod _{r=1}^{n} \left( a_{ir}a_{rj}\right) ^{1/n}. \end{aligned}$$
(2)
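For illustration, Eq. (2) can be implemented in a few lines. The sketch below is ours (the function name is not from the original); for an already consistent MPR it simply reproduces the matrix itself.

```python
import math

def consistent_matrix(a):
    """Completely consistent matrix K from an MPR A via Eq. (2):
    k_ij = prod_r (a_ir * a_rj) ** (1/n)."""
    n = len(a)
    return [[math.prod(a[i][r] * a[r][j] for r in range(n)) ** (1.0 / n)
             for j in range(n)]
            for i in range(n)]
```

By construction, K satisfies the transitivity condition \(k_{ij} = k_{il}k_{lj}\) of Eq. (1) for all i, j, l.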

Some Lemmata are given below in order to obtain our main results.

Lemma 1

Suppose \(a>0\), \(\epsilon >1\), then

$$\begin{aligned} 1.0 < a+\frac{1}{a} \le a^\epsilon +\left( \frac{1}{a}\right) ^\epsilon \end{aligned}$$
(3)

where equality holds if and only if \(a=1\).

Proof:

It follows directly from the addition of inequalities for reciprocal values (cf. [2], pp. 29–31).

Lemma 2

Suppose \(a>0\), \(\epsilon <1\), then

$$\begin{aligned} 1.0< a^\epsilon +\left( \frac{1}{a}\right) ^\epsilon \le a + \frac{1}{a} \end{aligned}$$
(4)

where equality holds if and only if \(a=1\).

Proof:

Analogous to the proof of Lemma 1 (cf. [2], pp. 29–31).

Lemma 3

Let us suppose \(a>1\) and \(\overline{\epsilon }\le 1\); then, without loss of generality, any given interval in \(\left[ \frac{1}{9}, 9\right] \) can be obtained by

where and equality holds if and only if

Proof:

It is straightforward (Cf. [2], pp. 657–658).

2.1 Measuring the Consistency of an MPR

The Hadamard product is a useful operator to measure the degree of deviation between two MPRs. Given \(A=(a_{ij})_{n\times n}\) and \(B=(b_{ij})_{n\times n}\), it is defined by

$$\begin{aligned} C=(c_{ij})_{n\times n}=A\circ B, \quad c_{ij}=a_{ij}b_{ij}. \end{aligned}$$
(5)

The degree of dissimilarity of A and B is given by \(d(A,B)=\frac{1}{n^2}e^T\left( A\circ B^T\right) e\). I.e.,

$$\begin{aligned} d(A,B)=\frac{1}{n^2} \varSigma _{i=1}^{n}\varSigma _{j=1}^{n}a_{ij}b_{ji}= \frac{1}{n}\left[ \frac{1}{n}\varSigma _{i=1}^{n-1}\varSigma _{j=i+1}^{n}\left( a_{ij}b_{ji}+a_{ji}b_{ij}\right) +1\right] . \end{aligned}$$
(6)

where \(e=(1,1,\cdots ,1)^T_{n\times 1}\).

Note that \(d(A,B)\ge 1\), with \(d(A,B)=1\) if and only if \(A=B\), and that \(d(A,B)=d(B,A)\).

Thus, through the Hadamard product, the consistency index of A is defined as \(CI_{K}(A)=d(A,K)\), where K is the corresponding completely consistent matrix obtained from A.
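A minimal sketch of Eqs. (5)–(6) and the resulting index (helper names are ours):

```python
import math

def hadamard_distance(a, b):
    """d(A, B) = (1/n^2) * sum_ij a_ij * b_ji, cf. Eq. (6)."""
    n = len(a)
    return sum(a[i][j] * b[j][i] for i in range(n) for j in range(n)) / n ** 2

def consistency_index(a):
    """CI_K(A) = d(A, K), with K the consistent completion from Eq. (2)."""
    n = len(a)
    k = [[math.prod(a[i][r] * a[r][j] for r in range(n)) ** (1.0 / n)
          for j in range(n)] for i in range(n)]
    return hadamard_distance(a, k)
```

Note that \(d(A,A)=1\) for any reciprocal MPR, since each product \(a_{ij}a_{ji}\) equals 1.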

Then if

$$\begin{aligned} d(A,K)=CI_{K}(A)\le \overline{CI}, \end{aligned}$$
(7)

where \(\overline{CI}\) is an acceptable threshold value, then we call matrix A as an MPR with an acceptable consistency.

An MPR A is completely consistent if and only if \(CI_K(A)=1\). Thus, a threshold for testing the compatibility of two MPRs up to an acceptable level of consistency was suggested in [13, 16] as \(\overline{CI}=1.1\).

On the other hand, through the EM, the consistency ratio (CR) was defined in [12] as \(CR = CI/RI\), where the Random Index (RI) is the average CI value of random matrices generated using Saaty's scale. Moreover, an MPR is accepted as a consistent matrix if and only if \(CR \le \overline{CR} = 0.1\).

When it comes to measuring the degree of dissimilarity between two matrices, both methods yield comparable results. Nevertheless, as previously stated in [13], the Hadamard product is more reliable for measuring the dissimilarity of two matrices constructed from ratio scales.

2.2 Prioritization Method

The process of deriving a priority vector \(w=(w_1,w_2,\cdots ,w_n)^T\) from an MPR, is called a prioritization method, where \(w_l\ge 0\) and \(\varSigma _{l=1}^nw_l=1\). Two prioritization methods are commonly used:

  1. (1)

    The eigenvalue method (EM) (proposed in [12] and [13]), where the desired priority vector w is the principal right eigenvector of A, associated with the maximal eigenvalue \(\lambda _{max}\); w can be obtained by solving

    $$\begin{aligned} Aw=\lambda _{max} w, \quad e^Tw=1. \end{aligned}$$
    (8)
  2. (2)

    Row geometric mean method (RGMM) or logarithmic least square method: The RGMM uses the \(L^2\) metric by defining an objective function of the following optimization problem:

    $$\begin{aligned} \left\{ \begin{array}{*{20}l} \min \varSigma _{i=1}^n \varSigma _{j>i} \left[ \ln (a_{ij}) -(\ln (w_i)-\ln (w_j)) \right] ^2 \\ s.t.\ w_i\ge 0, \, \, \varSigma _{i=1}^n \ w_i=1 \end{array} \right. , \end{aligned}$$
    (9)

where a unique solution exists, given by the normalized geometric means of the rows of matrix A:

$$\begin{aligned} w_{i}= \frac{(\prod _{j=1}^n a_{ij})^{1/n}}{\varSigma _{l=1}^n(\prod _{j=1}^n a_{lj})^{1/n}}. \end{aligned}$$
(10)

As both methods (EM and RGMM) generate similar results (cf. [14]), and since the group of DM is assumed to act together as a unit, Aggregation of Individual Judgments (AIJ) and RGMM are appropriate methods for deriving reliable intervals in the assessment-of-consistency model in GDM. Thus, the AIJ and RGMM methods are used in the remainder of this paper.
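Both prioritization methods can be sketched in a few lines of Python (helper names are ours; the EM eigenvector is obtained here by plain power iteration rather than a dedicated eigensolver):

```python
import math

def rgmm_weights(a):
    """Row geometric mean prioritization, Eq. (10)."""
    n = len(a)
    g = [math.prod(row) ** (1.0 / n) for row in a]
    s = sum(g)
    return [x / s for x in g]

def em_weights(a, iters=200):
    """Eigenvector method, Eq. (8), via power iteration on A."""
    n = len(a)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w
```

For a completely consistent MPR, both methods return exactly the same priority vector.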

2.3 Individual Consistency Improving Algorithm

Algorithm 1

Input: the individual multiplicative preference relations \(A_l=(a_{ij}^{l})_{n\times n}\), \(l=1,2,\cdots ,m\); the consistency parameter \(\theta \in (0,1)\); the maximum number of iterations \(h_{\max }\ge 1\); and the consistency threshold \(\overline{CI}\).

Output: the adjusted multiplicative preference relation \(\overline{A}_l\) and the Consistency index \(CI_H(\overline{A_l})\).

Step 1. Set \(A_{l,0}=(a_{ij, 0}^{l})_{n \times n}=A_l=(a_{ij}^{l})_{n\times n}\) and \(h=0\).

Step 2. Compute \(K_{l,h}\) by (2) and the consistency index \(CI_H(A_{l,h})\), where

$$\begin{aligned} CI_H(A_{l,h}) = d(A_{l,h},K_{l,h}). \end{aligned}$$
(11)

Step 3. If \(CI_H(A_{l,h})\le \overline{CI}\) or \(h\ge h_{\max }\), then go to Step 5; otherwise, go to the next step.

Step 4. Apply the following strategy to update the current matrix \(A_{l,h}=(a_{ij,h}^{l})_{n\times n}\):

$$\begin{aligned} A_{l,h+1} = (A_{l,h})^\theta \circ (K_{l,h})^{1-\theta }. \end{aligned}$$
(12)

where \(\theta \in (0, 1)\). Let \(h=h+1\) and return to Step 2.

Step 5. Let \(\overline{A}_l = A_{l,h}\). Output \(\overline{A}_l\) and \(CI_H (\overline{A}_l)\).

Step 6. End.
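A compact Python sketch of Algorithm 1 follows (function names are ours; the iteration counter h is kept implicit in the loop):

```python
import math

def ci_h(a):
    """CI_H(A) = d(A, K), Eqs. (2), (6) and (11)."""
    n = len(a)
    k = [[math.prod(a[i][r] * a[r][j] for r in range(n)) ** (1.0 / n)
          for j in range(n)] for i in range(n)]
    return sum(a[i][j] * k[j][i] for i in range(n) for j in range(n)) / n ** 2

def improve_consistency(a, theta=0.5, ci_bar=1.1, h_max=50):
    """Algorithm 1: blend A with its consistent completion K,
    A <- A^theta o K^(1-theta), until CI_H <= ci_bar or h_max is hit."""
    a = [row[:] for row in a]
    n = len(a)
    for _ in range(h_max):
        if ci_h(a) <= ci_bar:
            break
        k = [[math.prod(a[i][r] * a[r][j] for r in range(n)) ** (1.0 / n)
              for j in range(n)] for i in range(n)]
        a = [[a[i][j] ** theta * k[i][j] ** (1 - theta) for j in range(n)]
             for i in range(n)]
    return a, ci_h(a)
```

Applied to the matrix \(A_4\) of Example 1, whose initial index is 1.1194, the returned index drops below the 1.1 threshold after a few iterations.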

Based on Algorithm 1, Eq. (2), and the definition given by Eq. (7), the following result holds:

Theorem 1

For each iteration r, the consistency of the MPR under analysis is improved.

I.e., \(CI_H(A^{(r+1)})< CI_H(A^{(r)})\) and \(\lim _{r\rightarrow \infty } CI_H(A^{(r)})< \beta \), \(\forall \ \beta > 1\).

Proof:

From Step 4 of Algorithm 1 and Eq. (2), it follows that:

$$\begin{aligned} \begin{array}{rll} a_{ij,r+1}&{}=&{}(a_{ij,r})^{\theta } \cdot (k_{ij,r})^{1-\theta },\\ k_{ij,r+1}&{}=&{}\mathop {\prod }\nolimits _{l=1}^{n}(a_{il,r+1}\cdot a_{lj,r+1})^{\frac{1}{n}}=\mathop {\prod }\nolimits _{l=1}^{n}\left[ (a_{il,r}\cdot a_{lj,r})^{\theta }\cdot (k_{il,r}\cdot k_{lj,r})^{1-\theta }\right] ^{\frac{1}{n}}\\ &{}=&{}k_{ij,r}. \end{array} \end{aligned}$$
(13)

Then,

$$\begin{aligned} \begin{array}{lll} a_{ij,r+1}k_{ji,r+1}= & {} (a_{ij,r})^{\theta } \cdot (k_{ij,r})^{1-\theta }k_{ji,r}=(a_{ij,r})^{\theta } \cdot (k_{ji,r})^{\theta -1}k_{ji,r}=(a_{ij,r}k_{ji,r})^{\theta }. \end{array} \end{aligned}$$
(14)

By using Lemma 2, one obtains:

$$\begin{aligned} \begin{array}{lll} a_{ij,r+1}k_{ji,r+1}+a_{ji,r+1}k_{ij,r+1}= & {} (a_{ij,r} k_{ji,r})^{\theta }+ (a_{ji,r} k_{ij,r})^{\theta }\le a_{ij,r}k_{ji,r}+a_{ji,r}k_{ij,r}. \end{array} \end{aligned}$$
(15)

Let a pair (i, j) be chosen such that the inequality holds strictly; i.e., since \(A^{(r)}\ne K^{(r)}\), there exists at least one pair such that:

$$\begin{aligned} \begin{array}{lll} \frac{1}{n^2}\mathop {\sum }\nolimits _{i=1}^{n-1} \mathop {\sum }\nolimits _{j=i+1}^{n} &{}&{}( a_{ij,r+1}k_{ji,r+1}+a_{ji,r+1}k_{ij,r+1})+\frac{1}{n} <\\ &{}&{} \frac{1}{n^2}\mathop {\sum }\nolimits _{i=1}^{n-1} \mathop {\sum }\nolimits _{j=i+1}^{n} ( a_{ij,r}k_{ji,r}+a_{ji,r}k_{ij,r})+\frac{1}{n} \end{array}. \end{aligned}$$
(16)

It implies \(CI_H(A^{(r+1)}) < CI_H(A^{(r)})\). Thus, we have \(CI_H(A^{(r)})\ge 1\) for every r, and consequently, the sequence \(\{CI_H(A^{(r)})\}\) is monotone decreasing and has a lower bound. By the limit existence theorem for monotone bounded sequences, \(\lim _{r\rightarrow \infty }CI_H(A^{(r)})\) exists; thus \(\lim _{r\rightarrow \infty }CI_H(A^{(r)})\ge 1\). Assume that \(\lim _{r\rightarrow \infty }CI_H(A^{(r)})> 1\).

By contradiction, it can be shown that \(\lim _{r\rightarrow \infty }CI_H(A^{(r)})=1\).

Since \(CI_H(A^{(r)})\) is calculated iteratively in Algorithm 1, it decreases as the number of iterations increases.   \(\square \)

3 Reliable Intervals Programming Method

In order to have an assessment of individual consistency (\(CI_H\)), one can measure the compatibility of \(A_l\) with respect to (w.r.t.) its own completely consistent matrix K given by Eq. (2). Thus,

$$\begin{aligned} CI_H(A_l)=d(A_l,K)\le \overline{CI}, \end{aligned}$$
(17)

where \(\overline{CI}=1.1\), \(A_l\), \(l=1,2,\cdots ,m\) is an individual MPR.

From Eqs. (17), (2) and (6), for \(CI_H\) it follows \(d(A,K)\le \overline{CI} \Rightarrow \)

$$\begin{aligned} \begin{array}{lll} 1.0\le \frac{1}{n}\left[ \frac{1}{n}\varSigma _{i=1}^{n-1}\varSigma _{j=i+1}^{n}\left( a_{ij} \mathop {\prod }\nolimits _{r=1}^n(a_{jr}a_{ri})^{1/n}+a_{ji}\mathop {\prod }\nolimits _{r=1}^n(a_{ir}a_{rj})^{1/n}\right) +1\right] \le \overline{CI}.\\ \end{array} \end{aligned}$$
(18)
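As a sanity check, the middle expression of Eq. (18) can be evaluated directly and compared against \(d(A,K)\) computed from Eqs. (2) and (6); the two agree term by term. A short sketch (helper names are ours):

```python
import math

def d_a_k(a):
    """d(A, K) computed as (1/n^2) * sum_ij a_ij * k_ji, Eqs. (2), (6)."""
    n = len(a)
    k = [[math.prod(a[i][r] * a[r][j] for r in range(n)) ** (1.0 / n)
          for j in range(n)] for i in range(n)]
    return sum(a[i][j] * k[j][i] for i in range(n) for j in range(n)) / n ** 2

def eq18_expression(a):
    """Middle expression of Eq. (18), written out over the upper triangle."""
    n = len(a)
    s = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            kji = math.prod(a[j][r] * a[r][i] for r in range(n)) ** (1.0 / n)
            kij = math.prod(a[i][r] * a[r][j] for r in range(n)) ** (1.0 / n)
            s += a[i][j] * kji + a[j][i] * kij
    return (s / n + 1.0) / n
```

The equality holds because the diagonal of \(A\circ K^T\) contributes exactly n, which accounts for the \(+1\) term inside the brackets of Eq. (18).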

3.1 Reliable Intervals for Individual Consistency

By applying Algorithm 1 to a matrix \(A_l\) with the modifier parameter \(\theta \) close to 1 (i.e., \(0\ll \theta <1\)), a new matrix, denoted \(A^O_{max}\), is obtained. This matrix involves only slight modifications of the original \(A_l\) under analysis. Naturally, \(A^O_{max}\) satisfies individual consistency by Eq. (17). On the other hand, when \(\theta \) is close to zero (i.e., \(0<\theta \ll 1\)), a new matrix, denoted \(A^O_{min}\), is obtained.

Based on this fact, an interval matrix \(A_M^O\) can be defined as follows,

$$\begin{aligned} A_M^{O} \equiv \left[ a_{ij,\min }^O \ - \ a_{ij,\max }^O \right] _{n\times n}, \quad i,j\in N, \ M=1,2,\cdots ,m. \end{aligned}$$
(19)

Naturally, the \(A^O_M\) interval matrices synthesized from \(A_l\), \(l=1,2,\cdots ,m\), need not satisfy the inequality in Eq. (18) for every combination of values chosen within the intervals. A Nonlinear Optimization Approach (NLOA) can then solve the inequalities from Eq. (18), for consistency evaluation, subject to the bounds imposed by Eq. (19).

From Eq. (19), the following inequalities can be stated in terms of the optimization variables \(x_k\), \(k=1,2,\cdots ,\frac{n^2-n}{2}\):

$$\begin{aligned} \begin{array}{ccccc} a_{12,min}^{O} \le x_1 \le a_{12,max}^{O},\cdots , a_{(n-1)n,min}^{O} \le x_{\frac{n^2-n}{2}} \le a_{(n-1)n,max}^{O}, \end{array} \end{aligned}$$
(20)

and the inequalities given by Eq. (18) become \(1.0\le CI \le \overline{CI}=1.1\).

Finally, the initialization point \(x_0\) used in the NLOA can be set at any point within the corresponding interval given by Eq. (20).

In the following, an algorithm based on an NLOA is used to obtain reliable intervals for the assessment of consistency-based decision models, such as those given by Algorithm 1 for the index \(\overline{CI}\).

The following method obtains reliable and acceptably consistent interval matrices; i.e., we can find an interval matrix that verifies individual consistency over the whole interval.

To implement the NLOA, a Sequential Quadratic Programming (SQP) algorithm can be found in [1]. In the following, our algorithm is described in detail.

Algorithm 2

Input: \(A_M^O=(a_{ij,min}^O - a_{ij,max}^O)_{n\times n}\), the initial interval matrix; \(x_0\), the initial value for the nonlinear optimization, to be chosen within the corresponding interval; and \(\overline{CI}\), for the individual consistency assessment.

Output: \(\overline{A}_M^O\), the computed consistent interval matrix verifying the interval conditions given by Eq. (18).

Step 1: Set up the function for the assessment of individual consistency given by Eq. (18).

Step 2: Define the bound constraints for the nonlinear optimization algorithm:

$$\begin{aligned} a_{ij,min}^{O} \le x_k \le a_{ij,max}^{O}; \quad j>i , \ i,j=1,2,\cdots ,n, \ k=1,2,\cdots ,\tfrac{n^2-n}{2}, \end{aligned}$$
(21)

where k indexes the upper-triangular pairs (i, j) in the order of Eq. (20).

Thus, assign the linear inequality constraints (in the standard form \(c(\cdot )\le 0\)) as follows:

$$\begin{aligned} \begin{array}{lll} c(2k-1)&{}=&{}x_k- a_{ij,max}^{O}; \ c(2k)=a_{ij,min}^{O}-x_k;\\ j>i , \ i,j&{}=&{}1,2,\cdots ,n, \ k=1,2, \cdots ,\frac{n^2-n}{2}. \end{array} \end{aligned}$$
(22)

Step 3: Obtain the acceptable index of individual consistency, \(1.0 \le CI \le \overline{CI}\).

Thus, based on matrix K given by Eq. (2), the nonlinear inequality constraint is given by Eq. (18).

Step 4: Solve the nonlinear optimization problem, minimizing each entry, to obtain the matrix \(\overline{A}_{Mmin}^O=(\overline{a}_{Mij,min}^{O})_{n\times n}\). Then solve the same problem again, this time maximizing each entry, to obtain \(\overline{A}_{Mmax}^O=(\overline{a}_{Mij,max}^{O})_{n\times n}\).

Step 5: Compose the Consistency Interval Matrix \(\overline{A}_M^O\) as follows:

$$\begin{aligned} \overline{A}_M^O = (\overline{a}_{Mij,min}^{O} - \overline{a}_{Mij,max}^{O})_{n\times n} \end{aligned}$$
(23)

where \((\overline{a}_{Mij,min}^{O} - \overline{a}_{Mij,max}^{O})\) stands for the obtained interval.

Step 6: end.
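Algorithm 2 relies on a general SQP solver. As a library-free illustration of the same idea, the sketch below (our own simplification, not the SQP routine of [1]) first clips the interval bounds to Saaty's scale and then geometrically shrinks both bounds toward the interval's geometric midpoint until the endpoint MPRs satisfy \(CI_H\le \overline{CI}\); it assumes the midpoint matrix itself is acceptably consistent.

```python
import math

SAATY_MIN, SAATY_MAX = 1.0 / 9.0, 9.0

def ci_h(a):
    """CI_H(A) = d(A, K), Eqs. (2) and (6)."""
    n = len(a)
    k = [[math.prod(a[i][r] * a[r][j] for r in range(n)) ** (1.0 / n)
          for j in range(n)] for i in range(n)]
    return sum(a[i][j] * k[j][i] for i in range(n) for j in range(n)) / n ** 2

def mpr_from_upper(upper, n):
    """Build a reciprocal MPR from its upper-triangular entries (row order)."""
    a = [[1.0] * n for _ in range(n)]
    it = iter(upper)
    for i in range(n):
        for j in range(i + 1, n):
            a[i][j] = next(it)
            a[j][i] = 1.0 / a[i][j]
    return a

def reliable_interval(lo, hi, n, ci_bar=1.1, steps=100):
    """Simplified stand-in for Algorithm 2 (not SQP): clip bounds to
    Saaty's scale, then blend both bounds toward the geometric midpoint
    until the endpoint MPRs are acceptably consistent."""
    lo = [max(x, SAATY_MIN) for x in lo]
    hi = [min(x, SAATY_MAX) for x in hi]
    mid = [math.sqrt(l * h) for l, h in zip(lo, hi)]
    for s in range(steps + 1):
        t = s / steps
        lo_t = [l ** (1 - t) * m ** t for l, m in zip(lo, mid)]
        hi_t = [h ** (1 - t) * m ** t for h, m in zip(hi, mid)]
        if (ci_h(mpr_from_upper(lo_t, n)) <= ci_bar and
                ci_h(mpr_from_upper(hi_t, n)) <= ci_bar):
            break
    return lo_t, hi_t
```

Fed with a raw interval such as the one in Eq. (27) below, this returns bounds that stay inside \([1/9, 9]\) and whose endpoint matrices are acceptably consistent, which is the role the SQP-based NLOA plays in Algorithm 2.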

A scheme of the Algorithm implementation is depicted in Fig. 1.

Fig. 1. Process flowchart for improving individual consistency.

4 The Complete Process of Improving Consistency for an MPR

Fig. 1 shows a complete support model for a decision-making problem based on reliable intervals.

4.1 Numerical Examples

In the following, Algorithms 1 and 2 are applied to numerical examples built and used by several authors [6, 17], in order to illustrate their performance.

Example 1:

Let us suppose a set of five DM providing the following judgment matrices \(\{A_{1},\cdots ,A_{5}\}\) on a set of four alternatives \(C_1\), \(C_2\), \(C_3\), and \(C_4\), which need to be ranked from best to worst. Let \(w^{(k)}=(w_1^{(k)},\cdots ,w_4^{(k)})^T\) be the individual priority vector derived from judgment matrix \(A_{k}\) using the RGMM or the eigenvector method. \(A_{k}\) and \(w^{(k)}\), \(k=1,2,\cdots ,5\), are

$$\begin{aligned} \begin{array}{lll} A_1= \left( \begin{array}{cccc} 1 &{} 4 &{} 6 &{} 7 \\ &{} 1 &{} 3 &{} 4 \\ &{} * &{} 1 &{} 2 \\ &{} * &{} * &{} 1 \\ \end{array} \right) ,\ A_2= \left( \begin{array}{cccc} 1 &{} 5 &{} 7 &{} 9 \\ &{} 1 &{} 4 &{} 6 \\ &{} * &{} 1 &{} 2 \\ &{} * &{} * &{} 1 \\ \end{array} \right) ,\ A_3= \left( \begin{array}{cccc} 1 &{} 3 &{} 5 &{} 8 \\ &{} 1 &{} 4 &{} 5 \\ &{} * &{} 1 &{} 2 \\ &{} * &{} * &{} 1 \\ \end{array} \right) ,\\ A_4= \left( \begin{array}{cccc} 1 &{} 6 &{} 7 &{} 8 \\ &{} 1 &{} 5 &{} 5 \\ &{} * &{} 1 &{} 4 \\ &{} * &{} * &{} 1 \\ \end{array} \right) ,\ A_5= \left( \begin{array}{cccc} 1 &{} 1/2 &{} 1 &{} 2 \\ &{} 1 &{} 2 &{} 3 \\ &{} * &{} 1 &{} 4 \\ &{} * &{} * &{} 1 \\ \end{array} \right) , \end{array} \end{aligned}$$
(24)
$$\begin{aligned} \begin{array}{l} w^{(1)}=(0.6145, 0.2246, 0.0985, 0.0624),\ w^{(2)}=(0.6461,0.2270, 0.0793, 0.0476),\\ w^{(3)}=(0.5393, 0.2764, 0.0967, 0.0575),\ w^{(4)}=(0.6514, 0.2174, 0.0885, 0.0428),\\ w^{(5)}=(0.2221, 0.4134, 0.2641, 0.1004). \end{array} \end{aligned}$$
(25)

where \(*\) stands for the corresponding inverted terms of symmetric entries.

Thus, the corresponding collective priority vector \(w^{c}=(w_1^{c}, w_2^{c}, w_3^{c}, w_4^{c})^T\) is calculated accordingly.

By applying Algorithm 1, the consistency assessment is performed with \(\overline{CI}=1.1\) for each \(A_i\), \(i=1,2,\cdots ,5\), and one obtains:

$$\begin{aligned} \begin{array}{lcl} CI_H(A_1)=1.0255, CI_H(A_2)=1.0450, CI_H(A_3)=1.0226\\ CI_H(A_4)=1.1194, CI_H(A_5)=1.0241, CI_H(A^{c})=1.0324.\\ \end{array} \end{aligned}$$
(26)

MPRs \(A_1\), \(A_2\), \(A_3\), and \(A_5\) are of acceptable consistency; however, \(A_4\) is not. In the following, we therefore apply Algorithm 1 in order to obtain reliable intervals where consistency holds.
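The consistency figures in Eq. (26) for the individual MPRs can be reproduced directly from Eqs. (2) and (6); a short check in Python (helper names are ours):

```python
import math

def ci_h(a):
    """CI_H(A) = d(A, K), Eqs. (2) and (6)."""
    n = len(a)
    k = [[math.prod(a[i][r] * a[r][j] for r in range(n)) ** (1.0 / n)
          for j in range(n)] for i in range(n)]
    return sum(a[i][j] * k[j][i] for i in range(n) for j in range(n)) / n ** 2

def full(upper):
    """Expand upper-triangular judgments (row order) into a 4x4 MPR."""
    n = 4
    a = [[1.0] * n for _ in range(n)]
    it = iter(upper)
    for i in range(n):
        for j in range(i + 1, n):
            a[i][j] = next(it)
            a[j][i] = 1.0 / a[i][j]
    return a

# Upper triangles of A_1 ... A_5 from Eq. (24).
matrices = {
    "A1": full([4, 6, 7, 3, 4, 2]),
    "A2": full([5, 7, 9, 4, 6, 2]),
    "A3": full([3, 5, 8, 4, 5, 2]),
    "A4": full([6, 7, 8, 5, 5, 4]),
    "A5": full([1 / 2, 1, 2, 2, 3, 4]),
}
indices = {name: ci_h(a) for name, a in matrices.items()}
```

Only \(A_4\) exceeds the threshold \(\overline{CI}=1.1\), in agreement with Eq. (26).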

Algorithm 1: Processing Consistency in the intervals

Step 1: Take \(0<\theta \ll 1\), for instance \(\theta =0.01\), and apply Algorithm 1. Then, re-execute Algorithm 1 for \(0\ll \theta <1\), for instance \(\theta =0.99\). After 1 step for \(\theta =0.01\) and 9 steps for \(\theta =0.99\), a consistency interval MPR, as defined by Eq. (19), is obtained:

$$\begin{aligned} {A_{4}}= \left( \begin{array}{cccc} (1,1) &{} (3.0175 , 5.6504 ) &{} (7.0307 , 7.3598) &{} (8.4579, 15.1293) \\ &{} (1,1) &{} (2.4748 , 4.7021) &{} (5.0070 , 5.0805 ) \\ &{} * &{} (1,1) &{} (2.0816 , 3.7782) \\ &{} * &{} * &{} (1,1) \\ \end{array} \right) , \end{aligned}$$
(27)

Note: the interval matrix in Eq. (27) is not yet reliable, since it does not satisfy the conditions imposed by Saaty's scale (e.g., the upper bound 15.1293 exceeds 9).

By applying Algorithm 2, a reliable consistency interval \(\overline{A}_4^O\) is obtained. If the fourth DM decides, for example, that the midpoints are the best evaluation, the following final MPR results:

$$\begin{aligned} \overline{A}_{4}= \left( \begin{array}{cccc} 1 &{} 4.33395 &{} 7.19525 &{} 8.72895 \\ &{} 1 &{} 3.58845 &{} 5.04375 \\ &{} * &{} 1 &{} 2.9299 \\ &{} * &{} * &{} 1 \\ \end{array} \right) , \end{aligned}$$
(28)

The final ranking of the alternatives is \(C_1>C_2>C_3>C_4\), which coincides with [6, 17], where MPR \(A_4\) is slightly different but evaluated at similar ratios. This result indicates that \(C_1\) is the best option; nevertheless, the strategy of picking a suitable point in consistent reliable intervals proves useful when, in a particular situation, the DM have to observe constraints imposed by their framework or express their uncertainties.

5 Concluding Remarks and Future Work

In order to provide a flexible tool for DM who are required to derive a suitable multiplicative preference relation, this paper presents a methodology to synthesize reliable intervals where consistency constraints hold. Once decision makers have proposed their MPRs, our algorithm can solve for intervals derived from well-known decision support models. One advantage of our algorithm is that DM can re-express their preferences within an interval, where they usually have to observe constraints based on decision targets, framework rules, and advice. Depending on the analyzed problem, a certain level of flexibility can be attained.

Another advantage of our approach is that, when DM pick the complete interval multiplicative preference relation, they gain a degree of certainty while still observing the constraints imposed by their framework. In our approach, reliable interval MPRs provide a distinct advantage in the interpretation of hesitancy and uncertainty about the final consistency.

Our approach is based on numerical algorithms in which a nonlinear optimization algorithm is concurrently applied.