1 Introduction

Tropical (idempotent) mathematics, which deals with the theory and applications of semirings with idempotent addition [4, 6], finds use in operations research, computer science and other fields. Optimization problems that are formulated and solved in the framework of tropical mathematics constitute an important research domain, which offers new solutions to old and novel problems in various applied areas, including project scheduling [7, 10], location analysis [9] and decision making [8, 11]. The problems are usually defined to minimize or maximize functions on vectors over idempotent semifields (semirings with multiplicative inverses).

In this paper, we apply methods of tropical optimization to handle problems of rating alternatives on the basis of the log-Chebyshev approximation of pairwise comparison matrices. We derive a direct solution in closed form, and investigate the solution when it is not unique. When the approximation problem yields a set of score vectors, rather than a unique (up to a constant factor) one, we find those vectors in the set that least and most differentiate between the alternatives with the highest and lowest scores, and can thus be representative of the entire solution.

2 Rating Alternatives via Pairwise Comparisons

The method of rating alternatives from pairwise comparisons finds use in decision making when a direct evaluation of the ratings is unacceptable or infeasible (see, e.g., [12] for further details). The outcome of the comparisons is described by a square symmetrically reciprocal matrix \(\mathbf A =(a_{ij})\), where \(a_{ij}\) shows the relative preference of alternative \(i\) over \(j\), and satisfies the condition \(a_{ij}=1/a_{ji}>0\) for all \(i,j\).

To provide consistency of the data given by pairwise comparison matrices, the entries of the matrices must be transitive so as to satisfy the equality \(a_{ij}=a_{ik}a_{kj}\) for all \(i,j,k\). A pairwise comparison matrix with only transitive entries is called consistent.

For each consistent matrix \(\mathbf A =(a_{ij})\), there is a positive vector \(\mathbf x =(x_{i})\) whose elements completely determine the entries of \(\mathbf A \) by the relation \(a_{ij}=x_{i}/x_{j}\). Provided that a matrix \(\mathbf A \) is consistent, its corresponding vector \(\mathbf x \) is considered to represent directly, up to a positive factor, the individual scores of the alternatives in question.

The pairwise comparison matrices encountered in practice are generally inconsistent, which leads to the problem of approximating these matrices by consistent ones. To solve this problem, approximation via the principal eigenvector [12, 13], least-squares approximation [2, 13] and other techniques [1, 3, 5] are used.

Another approach involves the approximation of a reciprocal matrix \(\mathbf A =(a_{ij})\) by a consistent matrix \(\mathbf X =(x_{ij})\) in the log-Chebyshev sense, where the approximation error is measured with the Chebyshev metric on the logarithmic scale. Since both matrices \(\mathbf A \) and \(\mathbf X \) have positive entries, and the logarithm is monotone increasing, the error can be written as \(\max _{i,j}|\log a_{ij}-\log x_{ij}|=\log \max _{i,j}\max \{a_{ij}/x_{ij},x_{ij}/a_{ij}\}\).

Considering that the minimization of the logarithm is equivalent to minimizing its argument, and that the matrix \(\mathbf X \) can be defined through a positive vector \(\mathbf x =(x_{i})\) by the equality \(x_{ij}=x_{i}/x_{j}\) for all \(i,j\), the error function to minimize is replaced by \(\max _{i,j}\max \{a_{ij}/x_{ij},x_{ij}/a_{ij}\}=\max _{i,j}\max \{a_{ij}x_{j}/x_{i},a_{ji}x_{i}/x_{j}\}\). The application of the condition \(a_{ij}=1/a_{ji}\) yields \(\max _{i,j}\max \{a_{ij}x_{j}/x_{i},a_{ji}x_{i}/x_{j}\}=\max _{i,j}a_{ij}x_{j}/x_{i}\), which finally reduces the approximation problem to finding positive vectors \(\mathbf x \) to

$$\begin{aligned} \begin{aligned}&\text {minimize}&\displaystyle {\max _{i,j}a_{ij}x_{j}/x_{i}}. \end{aligned} \end{aligned}$$
(1)
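
For illustration, the objective of (1) is easy to evaluate numerically. The following is a minimal Python sketch (the function name and the use of NumPy are ours, for illustration only):

```python
import numpy as np

def objective(A, x):
    """Objective of problem (1): max over i, j of a_ij * x_j / x_i.

    Assumes A is a positive reciprocal matrix (a_ij = 1/a_ji) and x is a
    positive vector; the logarithm of the result is the log-Chebyshev error.
    """
    A, x = np.asarray(A, float), np.asarray(x, float)
    return np.max(A * np.outer(1.0 / x, x))  # entry (i, j) equals a_ij * x_j / x_i
```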

Assume that the approximation results in a set \(\mathscr {S}\) of score vectors \(\mathbf x \), rather than a unique (up to a constant factor) one. Then, further analysis is needed to reduce the set to a few representative solutions, such as some “worst” and “best” solutions.

As the purpose of calculating the scores is to differentiate alternatives, one can concentrate on two vectors \(\mathbf x =(x_{i})\) from \(\mathscr {S}\) that least and most differentiate between the alternatives with the highest and lowest scores, by minimizing and maximizing the contrast ratio \(\max _{i}x_{i}/\min _{i}x_{i}=\max _{i}x_{i}\cdot \max _{i}x_{i}^{-1}\). Then, the problem of calculating the least (the most) differentiating solution is to find vectors \(\mathbf x \in \mathscr {S}\) that

$$\begin{aligned} \begin{aligned}&\text {minimize (maximize)}&\displaystyle {\max _{i}x_{i}\cdot \max _{i}x_{i}^{-1}}. \end{aligned} \end{aligned}$$
(2)
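
The objective of (2) is simply the ratio of the largest to the smallest score; a one-line sketch under the same conventions as above:

```python
import numpy as np

def contrast_ratio(x):
    # Objective of problem (2): max_i x_i * max_i x_i^{-1} = max(x) / min(x).
    x = np.asarray(x, float)
    return np.max(x) * np.max(1.0 / x)
```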

Below, we reformulate problems (1) and (2) in terms of tropical mathematics, and then apply recent results in tropical optimization to offer complete, direct solutions.

3 Preliminary Definitions, Notation and Results

We start with a brief overview of the basic definitions and notation of tropical algebra. For further details on tropical mathematics, see, e.g., recent publications [4, 6].

Consider the set of nonnegative reals \(\mathbb {R}_{+}\), equipped with two operations: addition \(\oplus \), defined as maximum, and multiplication \(\otimes \), defined as usual, with 0 and 1 as the respective neutral elements. Addition is idempotent, since \(x\oplus x=\max (x,x)=x\) for all \(x\in \mathbb {R}_{+}\). Multiplication is distributive over addition and invertible, which gives each \(x\ne 0\) an inverse \(x^{-1}\) such that \(x\otimes x^{-1}=xx^{-1}=1\). The system \((\mathbb {R}_{+},\oplus ,\otimes ,0,1)\) is called the idempotent semifield or max-algebra, and is denoted by \(\mathbb {R}_{\max }\). In the sequel, the sign \(\otimes \) is omitted for brevity. The power notation has the standard meaning.

The set of matrices over \(\mathbb {R}_{+}\) with \(m\) rows and \(n\) columns is denoted by \(\mathbb {R}_{+}^{m\times n}\). A matrix with all zero entries is the zero matrix. Matrices without zero rows are called row-regular. Matrix operations employ the conventional entry-wise formulae, where the scalar operations \(\oplus \) and \(\otimes \) play the role of the usual addition and multiplication.

The multiplicative conjugate transpose of a nonzero matrix \(\mathbf A =(a_{ij})\) is the matrix \(\mathbf A ^{-}=(a_{ij}^{-})\) with the entries \(a_{ij}^{-}=a_{ji}^{-1}\) if \(a_{ji}\ne 0\), and \(a_{ij}^{-}=0\) otherwise.
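
A minimal Python sketch of these matrix operations and the conjugate transpose over \(\mathbb {R}_{\max }\) (the helper names are ours; later sketches repeat them to stay self-contained):

```python
import numpy as np

def mt_add(A, B):
    # Idempotent addition: (A ⊕ B)_ij = max(a_ij, b_ij), applied entry-wise.
    return np.maximum(A, B)

def mt_mul(A, B):
    # Max-times matrix product: (A ⊗ B)_ij = max_k (a_ik * b_kj).
    A, B = np.asarray(A, float), np.asarray(B, float)
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def conj(A):
    # Multiplicative conjugate transpose: entry (i, j) is 1/a_ji if a_ji != 0, else 0.
    At = np.asarray(A, float).T
    out = np.zeros_like(At)
    np.divide(1.0, At, out=out, where=(At != 0))
    return out
```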

Consider the square matrices in the set \(\mathbb {R}_{+}^{n\times n}\). A matrix with 1 along the diagonal and 0 elsewhere is the identity matrix denoted \(\mathbf I \). The power notation specifies iterated products as \(\mathbf A ^{0}=\mathbf I \) and \(\mathbf A ^{p}=\mathbf A ^{p-1}\mathbf A \) for any matrix \(\mathbf A \) and integer \(p>0\).

The tropical spectral radius of a matrix \(\mathbf A =(a_{ij})\in \mathbb {R}_{+}^{n\times n}\) is the scalar given by

$$\begin{aligned} \lambda = \displaystyle {\bigoplus _{1\le k\le n}\bigoplus _{1\le i_{1},\ldots ,i_{k}\le n}(a_{i_{1}i_{2}}a_{i_{2}i_{3}}\cdots a_{i_{k}i_{1}})^{1/k}}. \end{aligned}$$
(3)

The asterate operator (the Kleene star) maps the matrix \(\mathbf A \) onto the matrix

$$\begin{aligned} \mathbf A ^{*} = \mathbf I \oplus \mathbf A \oplus \cdots \oplus \mathbf A ^{n-1}. \end{aligned}$$
(4)
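
For small matrices, both (3) and (4) can be computed by brute force with max-times matrix powers, using the fact that the diagonal entries of \(\mathbf A ^{k}\) collect exactly the cycle products appearing in (3). A sketch (with the max-times product repeated for self-containment):

```python
import numpy as np

def mt_mul(A, B):
    # Max-times matrix product: (A ⊗ B)_ij = max_k (a_ik * b_kj).
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def spectral_radius(A):
    # Formula (3): the diagonal of A^k collects the k-cycle products, so
    # lambda = max over k = 1..n of (max_i (A^k)_ii)^(1/k).
    n = A.shape[0]
    lam, P = 0.0, np.eye(n)  # np.eye(n) is the max-times identity I
    for k in range(1, n + 1):
        P = mt_mul(P, A)     # P is now A^k
        lam = max(lam, np.max(np.diag(P)) ** (1.0 / k))
    return lam

def kleene_star(A):
    # Formula (4): A* = I ⊕ A ⊕ ... ⊕ A^(n-1).
    n = A.shape[0]
    S, P = np.eye(n), np.eye(n)
    for _ in range(n - 1):
        P = mt_mul(P, A)
        S = np.maximum(S, P)
    return S
```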

The column vectors with \(n\) elements form the set \(\mathbb {R}_{+}^{n}\). The vectors with all elements equal to 0 and to 1 are denoted by \(\mathbf 0 \) and \(\mathbf 1 \). A vector is regular if it has no zero elements. For any nonzero column vector \(\mathbf x =(x_{i})\), its conjugate transpose is the row vector \(\mathbf x ^{-}=(x_{i}^{-})\), where \(x_{i}^{-}=x_{i}^{-1}\) if \(x_{i}\ne 0\), and \(x_{i}^{-}=0\) otherwise.

We conclude the overview with examples of tropical optimization problems. Suppose that, given a matrix \(\mathbf A =(a_{ij})\in \mathbb {R}_{+}^{n\times n}\), we need to find vectors \(\mathbf x \in \mathbb {R}_{+}^{n}\) that

$$\begin{aligned} \begin{aligned}&\text {minimize}&\mathbf x ^{-}\mathbf A \mathbf x . \end{aligned} \end{aligned}$$
(5)

The following complete, direct solution to the problem was obtained in [7].

Lemma 1

Let \(\mathbf {A}\) be a matrix with spectral radius \(\lambda >0\). Then, the minimum value in (5) is equal to \(\lambda \), and all regular solutions are given by \(\mathbf {x}=(\lambda ^{-1}\mathbf {A})^{*}\mathbf {u}\), \(\mathbf {u}\ne \mathbf {0}\).

Given a matrix \(\mathbf A \in \mathbb {R}_{+}^{m\times n}\) and vectors \(\mathbf p \in \mathbb {R}_{+}^{m}\), \(\mathbf q \in \mathbb {R}_{+}^{n}\), we now find \(\mathbf x \in \mathbb {R}_{+}^{n}\) that

$$\begin{aligned} \begin{aligned}&\text {minimize}&\mathbf q ^{-}\mathbf x (\mathbf A \mathbf x )^{-}\mathbf p . \end{aligned} \end{aligned}$$
(6)

A solution given in [9] uses a sparsification technique to provide the next result.

Lemma 2

Let \(\mathbf{{A}}=(a_{ij})\) be a row-regular matrix, \(\mathbf{{p}}=(p_{i})\) be nonzero and \(\mathbf{{q}}=(q_{j})\) be regular vectors, and \(\varDelta =(\mathbf {A}\mathbf {q})^{-}\mathbf {p}\). Let \({\widehat{\mathbf{A}}}=(\widehat{a}_{ij})\) denote the matrix with entries \(\widehat{a}_{ij}=a_{ij}\) if \(a_{ij}\ge \varDelta ^{-1}p_{i}q_{j}^{-1}\), and \(\widehat{a}_{ij}=0\) otherwise. Let \(\mathscr {A}\) be the set of matrices obtained from \(\widehat{\mathbf{A}}\) by fixing one nonzero entry in each row and setting the others to 0.

Then, the minimum value in problem (6) is equal to \(\varDelta =(\mathbf {A}\mathbf {q})^{-}\mathbf {p}\), and all regular solutions are given by the conditions \(\mathbf {x}=(\mathbf {I}\oplus \varDelta ^{-1}\mathbf {A}_{1}^{-}\mathbf {p}\mathbf {q}^{-})\mathbf {u}\), \(\mathbf {u}\ne \mathbf {0}\), \(\mathbf {A}_{1}\in \mathscr {A}\).

Finally, we consider a maximization version of problem (6) to find vectors \(\mathbf x \) that

$$\begin{aligned} \begin{aligned}&\text {maximize}&\mathbf q ^{-}\mathbf x (\mathbf A \mathbf x )^{-}\mathbf p . \end{aligned} \end{aligned}$$
(7)

A complete solution to the problem is obtained in [10]. Below, we describe this solution in a more compact vector form using the representation lemma in [9].

Lemma 3

Let \(\mathbf{{A}}=(\mathbf{{a}}_{j})\) be a matrix with regular columns \(\mathbf{{a}}_{j}=(a_{ij})\), and \(\mathbf{{p}}=(p_{i})\) and \(\mathbf{{q}}=(q_{j})\) be regular vectors. Let \(\mathbf{{A}}_{sk}\) denote the matrix obtained from \(\mathbf {A}\) by fixing the entry \(a_{sk}\) for some indices s and k, and replacing the other entries by 0.

Then, the maximum value in (7) is equal to \(\varDelta =\mathbf{{q}}^{-}\mathbf{{A}}^{-}\mathbf{{p}}\), and all regular solutions are given by \(\mathbf{{x}}=(\mathbf{{I}}\oplus \mathbf{{A}}_{sk}^{-}\mathbf{{A}})\mathbf{{u}}\), \( \mathbf{{u}}\ne \mathbf{{0}}\), \(k=\arg \max _{j}q_{j}^{-1}\mathbf{{a}}_{j}^{-}\mathbf{{p}}\), \(s=\arg \max _{i}a_{ik}^{-1}p_{i}\).

4 Application to Rating Alternatives

We are now in a position to represent optimization problems (1) and (2) stated above in the tropical mathematics setting, and then to solve them in an explicit form.

Consider problem (1) of evaluating the score vector based on the log-Chebyshev approximation of a pairwise comparison matrix \(\mathbf A \). In terms of the max-algebra \(\mathbb {R}_{\max }\), the problem takes the form (5). Application of Lemma 1 yields the following result.

Theorem 1

Let \(\mathbf {A}\) be a pairwise comparison matrix with spectral radius \(\lambda \), and denote \(\mathbf{{A}}_{\lambda }=\lambda ^{-1}\mathbf{{A}}\) and \(\mathbf{{B}}=\mathbf{{A}}_{\lambda }^{*}\). Then, all score vectors are given by \(\mathbf{{x}}=\mathbf{{B}}\mathbf{{u}}\), \(\mathbf{{u}}\ne \mathbf{{0}}\).

Example 1

Suppose the result of comparing \(n=4\) alternatives is given by the matrix

$$\begin{aligned} \mathbf A =\left( \begin{matrix} 1 & 1/3 & 1/2 & 1/3 \\ 3 & 1 & 4 & 1 \\ 2 & 1/4 & 1 & 2 \\ 3 & 1 & 1/2 & 1 \end{matrix} \right) . \end{aligned}$$
(8)

To apply Theorem 1, we use (3) to find \(\lambda =(a_{23}a_{34}a_{42})^{1/3}=2\), and calculate \(\mathbf A _{\lambda } = \left( {\begin{matrix} 1/2 & 1/6 & 1/4 & 1/6 \\ 3/2 & 1/2 & 2 & 1/2 \\ 1 & 1/8 & 1/2 & 1 \\ 3/2 & 1/2 & 1/4 & 1/2 \end{matrix}} \right) \). Then, we follow (4) to compute \( \mathbf A _{\lambda }^{*} = \left( {\begin{matrix} 1 & 1/6 & 1/3 & 1/3 \\ 3 & 1 & 2 & 2 \\ 3/2 & 1/2 & 1 & 1 \\ 3/2 & 1/2 & 1 & 1 \end{matrix}} \right) \).
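
These calculations are easy to verify numerically; a sketch using the helpers defined earlier (repeated here so the snippet runs on its own):

```python
import numpy as np

def mt_mul(A, B):
    # Max-times matrix product: (A ⊗ B)_ij = max_k (a_ik * b_kj).
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def spectral_radius(A):
    # Formula (3) via diagonals of max-times powers A^k.
    n = A.shape[0]
    lam, P = 0.0, np.eye(n)
    for k in range(1, n + 1):
        P = mt_mul(P, A)
        lam = max(lam, np.max(np.diag(P)) ** (1.0 / k))
    return lam

def kleene_star(A):
    # Formula (4): A* = I ⊕ A ⊕ ... ⊕ A^(n-1).
    n = A.shape[0]
    S, P = np.eye(n), np.eye(n)
    for _ in range(n - 1):
        P = mt_mul(P, A)
        S = np.maximum(S, P)
    return S

A = np.array([[1, 1/3, 1/2, 1/3],
              [3, 1, 4, 1],
              [2, 1/4, 1, 2],
              [3, 1, 1/2, 1]])
lam = spectral_radius(A)       # 2.0
B_star = kleene_star(A / lam)  # reproduces the matrix A_lambda^* above
```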

As the last three columns of the matrix \(\mathbf A _{\lambda }^{*}\) are collinear, we take one of them, say, the second. Combining it with the first column multiplied by \(1/3\) leads to the solution

$$\begin{aligned} \mathbf x = \mathbf B \mathbf u , \qquad \mathbf B = \left( \begin{matrix} 1/3 & 1/6 \\ 1 & 1 \\ 1/2 & 1/2 \\ 1/2 & 1/2 \end{matrix} \right) , \qquad \mathbf u = (u_{1},u_{2})^{T}, \qquad u_{1},u_{2} \ne 0. \end{aligned}$$
(9)

Note that all the solutions assign the highest score to the second alternative and the lowest to the first. Moreover, the solutions that least and most differentiate between these alternatives are the first and the second columns of the matrix \(\mathbf B \).

In the general case, the least and most differentiating solutions from a set of vectors, given in the form \(\mathbf x =\mathbf B \mathbf u \), are determined by solving problems (2). The problems are to minimize and maximize the contrast ratio for the elements of the vector \(\mathbf x \), which, in terms of tropical mathematics, takes the form \(\mathbf 1 ^{T}\mathbf x \mathbf x ^{-}\mathbf 1 =\mathbf 1 ^{T}\mathbf B \mathbf u (\mathbf B \mathbf u )^{-}\mathbf 1 \).
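
As a quick numerical check of this identity (an illustrative snippet; the vector taken is the first column of \(\mathbf B \) from Example 1):

```python
import numpy as np

# In max-times terms, 1^T x = max(x) and x^- 1 = max(1/x), so their product
# is exactly the contrast ratio max(x) / min(x).
x = np.array([1/3, 1.0, 1/2, 1/2])
assert np.isclose(x.max() * (1.0 / x).max(), x.max() / x.min())
```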

To find a vector \(\mathbf x =\mathbf B \mathbf u \) with the least differentiation between scores, we solve the problem

$$\begin{aligned} \begin{aligned}&\text {minimize}&\mathbf 1 ^{T}\mathbf B \mathbf u (\mathbf B \mathbf u )^{-}\mathbf 1 . \end{aligned} \end{aligned}$$

Assuming the matrix \(\mathbf B \) is obtained as in Theorem 1, we have the next result.

Theorem 2

Let \(\widehat{\mathbf{B}}\) be a sparsified matrix derived from \(\mathbf {B}\) by setting to 0 all entries below \(\varDelta ^{-1}=((\mathbf{{B}}(\mathbf{{1}}^{T}\mathbf{{B}})^{-})^{-}\mathbf{{1}})^{-1}\), and \(\mathscr {B}\) be the set of matrices obtained from \(\widehat{\mathbf{B}}\) by fixing one nonzero entry in each row and setting the others to 0. Then, the least differentiating score vectors are given by \(\mathbf{{x}}=\mathbf{{B}}(\mathbf{{I}}\oplus \varDelta ^{-1}\mathbf{{B}}_{1}^{-}\mathbf{{1}}\mathbf{{1}}^{T}\mathbf{{B}})\mathbf{{v}}\), \(\mathbf{{v}}\ne \mathbf{{0}}\), \(\mathbf{{B}}_{1}\in \mathscr {B}\).

Proof

We reduce the problem under study to (6) by the substitutions \(\mathbf q ^{-}=\mathbf 1 ^{T}\mathbf B \), \(\mathbf A =\mathbf B \), \(\mathbf p =\mathbf 1 \) and \(\mathbf x =\mathbf u \). Since the matrix \(\mathbf B \) has only nonzero entries, the regularity conditions of Lemma 2 are satisfied. Application of this lemma involves evaluating the minimum value \(\varDelta =(\mathbf B (\mathbf 1 ^{T}\mathbf B )^{-})^{-}\mathbf 1 \), calculating the sparsified matrix \(\widehat{\mathbf{B}}\), and forming the matrix set \(\mathscr {B}\). The solution is given by \(\mathbf u =(\mathbf I \oplus \varDelta ^{-1}\mathbf B _{1}^{-}\mathbf 1 \mathbf 1 ^{T}\mathbf B )\mathbf v \), where \(\mathbf v \ne \mathbf 0 \) and \(\mathbf B _{1}\in \mathscr {B}\). Turning back to the vector \(\mathbf x =\mathbf B \mathbf u \) yields the desired result. \(\square \)

Example 2

Consider the solution obtained in the form (9) in Example 1 for the matrix (8). To apply the result of Theorem 2, we successively calculate \( \mathbf 1 ^{T}\mathbf B = \left( \begin{array}{cc} 1&1 \end{array} \right) \), \( \mathbf B (\mathbf 1 ^{T}\mathbf B )^{-} = \left( {\begin{matrix} 1/3 \\ 1 \\ 1/2 \\ 1/2 \end{matrix}} \right) \), \( \varDelta = (\mathbf B (\mathbf 1 ^{T}\mathbf B )^{-})^{-}\mathbf 1 = 3\), and \( \widehat{\mathbf{B}} = \left( {\begin{matrix} 1/3 & 0 \\ 1 & 1 \\ 1/2 & 1/2 \\ 1/2 & 1/2 \end{matrix}} \right) \).

We now examine the matrices obtained from \(\widehat{\mathbf{B}}\) by leaving one nonzero entry in each row. For instance, consider the matrix \( \mathbf B _{1} = \left( {\begin{matrix} 1/3 & 0 \\ 1 & 0 \\ 1/2 & 0 \\ 1/2 & 0 \end{matrix}} \right) \), which leaves the first column in \(\widehat{\mathbf{B}}\) unchanged, and has all zero entries in the second. We have \( \mathbf B _{1}^{-}\mathbf 1 = \left( {\begin{matrix} 3 \\ 0 \end{matrix}} \right) \), \( \mathbf B _{1}^{-}\mathbf 1 \mathbf 1 ^{T}\mathbf B = \left( {\begin{matrix} 3 & 3 \\ 0 & 0 \end{matrix}} \right) \), \( \mathbf I \oplus \varDelta ^{-1}\mathbf B _{1}^{-}\mathbf 1 \mathbf 1 ^{T}\mathbf B = \left( {\begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix}} \right) \), and \( \mathbf B (\mathbf I \oplus \varDelta ^{-1}\mathbf B _{1}^{-}\mathbf 1 \mathbf 1 ^{T}\mathbf B ) = \left( {\begin{matrix} 1/3 & 1/3 \\ 1 & 1 \\ 1/2 & 1/2 \\ 1/2 & 1/2 \end{matrix}} \right) . \)

As both columns in the last matrix coincide, we take one to write the least differentiating solution in the form \(\mathbf x =\left( \begin{array}{cccc} 1/3&1&1/2&1/2 \end{array} \right) ^{T} v\), \( v \ne 0\). Calculations with the other matrices obtained from \(\widehat{\mathbf{B}}\) yield the same result, and are thus omitted.
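
The steps of Example 2 can be reproduced mechanically. A sketch of Theorem 2 applied to this example, with the max-times helpers repeated; here \(\mathbf p =\mathbf 1 \) and \(\mathbf q ^{-}=\mathbf 1 ^{T}\mathbf B \):

```python
import numpy as np

def mt_mul(A, B):
    # Max-times matrix product: (A ⊗ B)_ij = max_k (a_ik * b_kj).
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def conj(A):
    # Multiplicative conjugate transpose.
    At = np.asarray(A, float).T
    out = np.zeros_like(At)
    np.divide(1.0, At, out=out, where=(At != 0))
    return out

B = np.array([[1/3, 1/6], [1, 1], [1/2, 1/2], [1/2, 1/2]])
one = np.ones((4, 1))

q_minus = mt_mul(one.T, B)                                 # 1^T B = (1, 1)
Delta = mt_mul(conj(mt_mul(B, conj(q_minus))), one)[0, 0]  # Delta = 3
B_hat = np.where(B >= (one @ q_minus) / Delta, B, 0.0)     # sparsified matrix

B1 = B_hat.copy()
B1[:, 1] = 0.0  # keep the first column of B_hat, zero out the second
M = np.maximum(np.eye(2), mt_mul(mt_mul(conj(B1), one), q_minus) / Delta)
X = mt_mul(B, M)  # both columns equal (1/3, 1, 1/2, 1/2)^T
```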

To obtain the most differentiating score vectors we need to solve the problem

$$\begin{aligned} \begin{aligned}&\text {maximize}&\mathbf 1 ^{T}\mathbf B \mathbf u (\mathbf B \mathbf u )^{-}\mathbf 1 . \end{aligned} \end{aligned}$$

As before, we reduce this problem to (7), conclude that the conditions of Lemma 3 are fulfilled, and finally apply this lemma to obtain the next solution.

Theorem 3

Let \(\mathbf{{B}}=(\mathbf{{b}}_{j})\) be a matrix with columns \(\mathbf{{b}}_{j}=(b_{ij})\), and \(\mathbf{{B}}_{sk}\) denote the matrix obtained from \(\mathbf {B}\) by fixing the entry \(b_{sk}\) and replacing the others by 0.

Then, the most differentiating score vectors are given by \(\mathbf{{x}}=\mathbf{{B}}(\mathbf{{I}}\oplus \mathbf{{B}}_{sk}^{-}\mathbf{{B}})\mathbf{{v}}\), \(\mathbf{{v}}\ne \mathbf{{0}}\), \(k=\arg \max _{j}\mathbf{{1}}^{T}\mathbf{{b}}_{j}\mathbf{{b}}_{j}^{-}\mathbf{{1}}\), \(s=\arg \max _{i}b_{ik}^{-1}\).

Example 3

We start with the solution given by (9), and compute \( \mathbf 1 ^{T}\mathbf b _{1} = 1\), \( \mathbf 1 ^{T}\mathbf b _{2} = 1\), \( \mathbf b _{1}^{-}\mathbf 1 = 3\), and \( \mathbf b _{2}^{-}\mathbf 1 = 6\). Since \( \mathbf 1 ^{T}\mathbf b _{1}\mathbf b _{1}^{-}\mathbf 1 = 3\) and \( \mathbf 1 ^{T}\mathbf b _{2}\mathbf b _{2}^{-}\mathbf 1 = 6\), we take \( k=2\), \(s=1\).

We have \( \mathbf B _{12} = \left( {\begin{matrix} 0 & 1/6 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{matrix}} \right) \), \( \mathbf I \oplus \mathbf B _{12}^{-}\mathbf B = \left( {\begin{matrix} 1 & 0 \\ 2 & 1 \end{matrix}} \right) \), and \( \mathbf B (\mathbf I \oplus \mathbf B _{12}^{-}\mathbf B ) = \left( {\begin{matrix} 1/3 & 1/6 \\ 2 & 1 \\ 1 & 1/2 \\ 1 & 1/2 \end{matrix}} \right) \).

Since the columns in the last matrix are collinear, we take one of them, say, the second, to write the most differentiating vector as \( \mathbf x = \left( \begin{array}{cccc} 1/6&1&1/2&1/2 \end{array} \right) ^{T} v\), \(v\ne 0\).
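
Likewise, the most differentiating vector can be reproduced by a direct transcription of Theorem 3 (same helper conventions as above):

```python
import numpy as np

def mt_mul(A, B):
    # Max-times matrix product: (A ⊗ B)_ij = max_k (a_ik * b_kj).
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def conj(A):
    # Multiplicative conjugate transpose.
    At = np.asarray(A, float).T
    out = np.zeros_like(At)
    np.divide(1.0, At, out=out, where=(At != 0))
    return out

B = np.array([[1/3, 1/6], [1, 1], [1/2, 1/2], [1/2, 1/2]])

# k maximizes 1^T b_j * b_j^- 1 = max(b_j) * max(1/b_j); s maximizes b_ik^{-1}.
k = int(np.argmax([B[:, j].max() * (1.0 / B[:, j]).max() for j in range(2)]))
s = int(np.argmax(1.0 / B[:, k]))  # k = 1, s = 0 here, i.e. k = 2, s = 1 one-based

B_sk = np.zeros_like(B)
B_sk[s, k] = B[s, k]
X = mt_mul(B, np.maximum(np.eye(2), mt_mul(conj(B_sk), B)))
# the columns of X are collinear; the second is (1/6, 1, 1/2, 1/2)^T
```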