1 Introduction

Tropical (idempotent) mathematics, the area concerned with the theory and applications of algebraic systems with idempotent addition, incorporates tropical optimization as an important research domain. Since the first studies in the early 1960s, real-world optimization problems have often served to motivate and illustrate the developments in tropical mathematics. Tropical optimization problems are formulated and solved in the tropical mathematics setting, and appear in many recent works in the area, which include the monographs and textbooks by Golan (2003), Heidergott et al. (2006), McEneaney (2006), Gondran and Minoux (2008), Butkovič (2010), Maclagan and Sturmfels (2015), and a great many contributed papers.

Applications of tropical optimization cover various problems in project scheduling, location analysis, decision making, and other fields. Some related examples can be found, e.g., in Krivulin (2015a, c, 2016, 2017a, b, c) and Krivulin and Sergeev (2017). Some multidimensional tropical optimization problems can be solved directly, with all solutions described in a compact closed vector form, whereas for other problems only algorithmic solutions are available, which offer iterative numerical procedures to find a solution if one exists. For a brief overview of tropical optimization problems, one can see, e.g., Krivulin (2014, 2015b).

Multi-criteria decision problems, in which one needs to rate alternatives by evaluating their scores from the results of pairwise comparisons under several criteria, constitute a theoretically interesting and practically important class of problems in decision analysis [see, e.g., Saaty (1990) and Gavalec et al. (2015)]. The most common solution to these problems is based on the Analytic Hierarchy Process (AHP) method, developed in Saaty (1977, 1990, 2013), which involves calculating the principal eigenvectors of pairwise comparison matrices. Available solutions include the fuzzy AHP, interval AHP, and other techniques as in van Laarhoven and Pedrycz (1983), Gavalec et al. (2015), Kubler et al. (2016) and Ahn (2017).

In the context of tropical mathematics, the decision problems of rating alternatives are examined in Elsner and van den Driessche (2004, 2010), Gursoy et al. (2013) and Tran (2013), which offer solutions that follow the AHP method with the tropical eigenvectors or subeigenvectors used instead of the conventional principal eigenvectors.

Another approach to the solution of the problems is proposed and developed in Krivulin (2015c, 2016) and Krivulin and Sergeev (2017), which is based on the minimax log-Chebyshev approximation of pairwise comparison matrices. The approach involves the representation of the approximation problems in terms of tropical mathematics as tropical optimization problems, and the direct solution of these problems using methods and techniques of tropical optimization.

In this paper, we further develop the above approach to solve the problem of rating alternatives from pairwise comparisons under two equally weighted (unweighted) criteria. Given pairwise comparison matrices for each criterion, the problem is to find the overall scores of the alternatives. We offer a solution that involves the minimax approximation of the comparison matrices by a common consistent matrix of unit rank in terms of the Chebyshev metric in logarithmic scale. The approximation problem reduces to a bi-objective optimization problem to minimize the approximation errors simultaneously for both comparison matrices.

Furthermore, we formulate the problem in terms of tropical mathematics as a tropical optimization problem. To solve the optimization problem obtained, we apply methods and results of tropical optimization to derive a complete Pareto-optimal solution in a direct explicit form ready for further analysis and straightforward computation. We then exploit this result to solve the bi-criteria decision problem of interest. As illustrations, we present examples of the solution of two-dimensional optimization problems in general form, and of a decision problem with four alternatives in numerical form.

The paper is organized as follows. We start in Sect. 2 with a brief overview of the single-criterion and bi-criteria decision problems under consideration and of their representation as problems of log-Chebyshev approximation of pairwise comparison matrices. In Sect. 3, we give an introduction to basic definitions and notation, and present some preliminary results of tropical algebra to provide an analytical framework for solving a bi-objective tropical optimization problem. Section 4 includes the main result, which offers a direct complete Pareto-optimal solution to the bi-objective problem. In Sect. 5, we illustrate the obtained result with examples of the complete solution of two-dimensional optimization problems in general form. Finally, in Sect. 6 we demonstrate the application of the solution to a bi-criteria decision problem.

2 Single-criterion and bi-criteria decision problems

The method of pairwise comparisons finds wide application in decision making to estimate scores (rates, preferences) of alternatives (choices, decisions) when a direct rating of the alternatives is impossible or infeasible. The method uses the results of pairwise comparisons of alternatives on an appropriate scale under one or several criteria to evaluate the individual score of each alternative [see, e.g., Thurstone (1927) and Saaty (1977, 1990) for further details and application examples].

2.1 Rating by pairwise comparison

Consider a problem to rate n alternatives from a given pairwise comparison matrix \(\varvec{A}=(a_{ij})\), where the entry \(a_{ij}\) shows the relative priority of alternative i over j. The matrix is symmetrically reciprocal, which implies that the equality \(a_{ij}=1/a_{ji}>0\) holds for all \(i,j=1,\ldots ,n\). A pairwise comparison matrix \(\varvec{A}\) is called consistent if its entries are transitive in the sense of the condition \(a_{ij}=a_{ik}a_{kj}\), which must hold for all \(i,j,k=1,\ldots ,n\).

Furthermore, each consistent matrix \(\varvec{A}\) is of unit rank, and has entries \(a_{ij}=x_{i}/x_{j}\) given by a positive vector \(\varvec{x}=(x_{i})\) that entirely specifies \(\varvec{A}\). If a pairwise comparison matrix \(\varvec{A}\) is consistent, its related vector \(\varvec{x}\) defines, up to a positive factor, the individual scores of the alternatives. If the matrix \(\varvec{A}\) is inconsistent, as is usually the case in practice, an approximation problem arises: find an approximating consistent matrix \(\varvec{X}=(x_{ij})\) with \(x_{ij}=x_{i}/x_{j}\), or, equivalently, the corresponding vector \(\varvec{x}=(x_{i})\).
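To make the correspondence concrete, here is a small Python sketch (our own illustration, not part of the original development) that builds a consistent matrix from a given score vector and checks transitivity:

```python
import numpy as np

x = np.array([4.0, 2.0, 1.0])     # a positive vector of individual scores
X = np.outer(x, 1.0 / x)          # consistent matrix with entries x_ij = x_i / x_j

# Transitivity x_ij = x_ik * x_kj; since every entry of X is a ratio of
# components of x, verifying it for k = 0 suffices.
assert np.allclose(X, X[:, [0]] * X[[0], :])
```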

The commonly used approach to the problem is based on the approximation in the Frobenius (or spectral) norm [see, e.g., Saaty (1977, 1990, 2013)], which results in the principal (Perron) eigenvector method, where the principal eigenvector of the pairwise comparison matrix is taken as the vector of scores \(\varvec{x}\). Other solutions, proposed and examined in a range of works, including Saaty and Vargas (1984), Barzilai (1997), Chu (1998), Farkas et al. (2003) and Gavalec et al. (2015), employ least squares and logarithmic least squares methods, Chebyshev approximation and some other techniques.

Another approach, which applies the best approximation of matrices in the Chebyshev sense on a logarithmic scale, is proposed in Krivulin (2015c, 2016), where the minimax log-Chebyshev approximation is represented as a tropical optimization problem that can be completely solved in an exact vector form.

2.2 Minimax approximation in log-Chebyshev sense

Consider the problem to approximate a pairwise comparison matrix \(\varvec{A}=(a_{ij})\) by a consistent matrix \(\varvec{X}=(x_{ij})\), where \(a_{ij}=1/a_{ji}\) and \(x_{ij}=x_{i}/x_{j}\) for all \(i,j=1,\ldots ,n\). Observing that the matrix \(\varvec{A}\) is assumed to be positive, we can measure the approximation error by the Chebyshev distance in logarithmic scale.

Let \(\log \) denote a logarithmic function with a base greater than one. Since this function is monotone increasing, the log-Chebyshev distance between \(\varvec{A}\) and \(\varvec{X}\) to be minimized can be written as

$$\begin{aligned} \max _{1\le i,j\le n}|\log a_{ij}-\log x_{ij}| = \log \max _{1\le i,j\le n}\max \left( \frac{a_{ij}}{x_{ij}},\frac{x_{ij}}{a_{ij}}\right) . \end{aligned}$$

Moreover, the monotonicity property makes the problem of minimizing the last logarithm equivalent to minimizing its argument. As a result, under the conditions \(a_{ij}=1/a_{ji}\) and \(x_{ij}=x_{i}/x_{j}\), the approximation problem reduces to minimizing

$$\begin{aligned} \max _{1\le i,j\le n}\max \left( \frac{a_{ij}}{x_{ij}},\frac{x_{ij}}{a_{ij}}\right) = \max _{1\le i,j\le n}\max \left( \frac{a_{ij}x_{j}}{x_{i}},\frac{a_{ji}x_{i}}{x_{j}}\right) = \max _{1\le i,j\le n}\frac{a_{ij}x_{j}}{x_{i}}, \end{aligned}$$

where the functions on the right are minimized over all positive vectors \(\varvec{x}=(x_{i})\).

Note that, in approximating reciprocal matrices by consistent matrices of unit rank, minimizing the log-Chebyshev approximation error is equivalent to minimizing the relative error

$$\begin{aligned} \max _{1\le i,j\le n}\frac{|a_{ij}-x_{i}/x_{j}|}{a_{ij}}. \end{aligned}$$

To verify the equivalence [see also Elsner and van den Driessche (2004)], we represent the relative error as

$$\begin{aligned} \max _{1\le i,j\le n}\frac{|a_{ij}-x_{i}/x_{j}|}{a_{ij}}= & {} \max _{i<j}\max \left\{ \left| \frac{x_{i}}{a_{ij}x_{j}}-1\right| ,\left| \frac{a_{ij}x_{j}}{x_{i}}-1\right| \right\} \\= & {} \max _{i<j}\max \left\{ \frac{x_{i}}{a_{ij}x_{j}}-1,\frac{a_{ij}x_{j}}{x_{i}}-1\right\} = \max _{1\le i,j\le n}\frac{a_{ij}x_{j}}{x_{i}} - 1. \end{aligned}$$

Since these error functions differ only by an additive constant, we conclude that both approximation problems with the log-Chebyshev and relative errors are equivalent to the problem of finding vectors \(\varvec{x}=(x_{i})\) that

$$\begin{aligned} \begin{aligned}&\text {minimize}&\max _{1\le i,j\le n}\frac{a_{ij}x_{j}}{x_{i}}. \end{aligned} \end{aligned}$$

Complete solutions to this and related problems in the context of rating alternatives on the basis of pairwise comparisons are given in Krivulin (2015c, 2016), Krivulin and Sergeev (2017) and Krivulin (2018).
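To make the objective function concrete, the following Python/NumPy sketch (our own illustration; the function name is an assumption) evaluates \(\max a_{ij}x_{j}/x_{i}\) for a candidate score vector:

```python
import numpy as np

def approximation_error(A, x):
    # Evaluate max_{i,j} a_ij * x_j / x_i for a positive vector x; minimizing
    # this over positive x minimizes both the log-Chebyshev and relative errors.
    A, x = np.asarray(A, float), np.asarray(x, float)
    return float(np.max(A * np.outer(1.0 / x, x)))

A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 0.5],
              [0.25, 2.0, 1.0]])               # a reciprocal comparison matrix
print(approximation_error(A, [2.0, 1.0, 1.0]))  # 2.0
```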

2.3 Pairwise comparison under two criteria

Suppose n alternatives are compared in pairs under two equally weighted (unweighted) criteria, which results in two pairwise comparison matrices \(\varvec{A}=(a_{ij})\) and \(\varvec{B}=(b_{ij})\). The problem is formulated as a bi-criteria problem of finding vectors \(\varvec{x}=(x_{i})\) such that the matrix \(\varvec{X}=(x_{i}/x_{j})\) simultaneously approximates both matrices \(\varvec{A}\) and \(\varvec{B}\).

A solution to the problem can be obtained by applying the AHP method [see, e.g., Saaty (1977, 1990, 2013)]. In the weighted case, the AHP solution is based on separate approximation of each matrix \(\varvec{A}\) and \(\varvec{B}\) by consistent matrices using their principal eigenvectors. The vector of individual scores of alternatives is calculated as a weighted sum of normalized principal eigenvectors, where the weights (priorities) of the criteria can be found by the principal eigenvector method from a pairwise comparison matrix of criteria, or obtained in other ways. Under the assumption of equal weights, the weighted sum is reduced to the ordinary (unweighted) sum of normalized principal eigenvectors.

In the framework of the minimax log-Chebyshev approximation, the problem can be formulated as the bi-objective problem of finding positive vectors \(\varvec{x}=(x_{i})\) to

$$\begin{aligned} \begin{aligned}&\text {minimize}&\left( \max _{1\le i,j\le n}\frac{a_{ij}x_{j}}{x_{i}},\ \max _{1\le i,j\le n}\frac{b_{ij}x_{j}}{x_{i}} \right) . \end{aligned} \end{aligned}$$
(1)

The common way to handle this problem, which has two competing objectives in conflict with each other, is to find a compromise solution that cannot be improved in both objectives at once. The set of solutions in which one objective can be improved only at the expense of the other is usually considered the best compromise, and is called the nondominated or Pareto-optimal solution [see, e.g., Ehrgott (2005), Luc (2008), Pappalardo (2008), Benson (2009)].

Note that problem (1) can be solved using the technique described in Krivulin (2015c, 2016) and Krivulin and Sergeev (2017). The solution involves a matrix \(\varvec{C}=(c_{ij})\) with the entries \(c_{ij}=\max \{a_{ij},b_{ij}\}\) to find vectors \(\varvec{x}=(x_{i})\) that

$$\begin{aligned} \begin{aligned}&\text {minimize}&\max _{1\le i,j\le n}\frac{c_{ij}x_{j}}{x_{i}}. \end{aligned} \end{aligned}$$

This approach presents an analogue of the AHP decision scheme, which applies the minimax log-Chebyshev approximation instead of the principal eigenvector method and the direct calculation of weighted sums. The solution is based on methods and techniques of tropical optimization, and offers the result in a compact vector form. However, this solution involves a scalarization of the bi-criteria problem, and hence can hardly provide a way to obtain all Pareto-optimal solutions.

Below, we further develop the tropical optimization approach to provide a direct, explicit representation for all Pareto-optimal solutions of problem (1), which is given in a form ready for further analysis and straightforward computation.

3 Algebraic definitions and preliminary results

We start with a brief overview of the algebraic definitions and preliminary results of tropical mathematics from Krivulin (2015a, b, 2017a, b), which provide an analytical framework for the formulation and solution of the bi-objective tropical optimization problem to be considered in the next section. For further details at both basic and advanced levels, and for application examples, one can consult, e.g., the recent books by Golan (2003), Heidergott et al. (2006), McEneaney (2006), Gondran and Minoux (2008), Butkovič (2010) and Maclagan and Sturmfels (2015).

3.1 Idempotent semifield

Consider a nonempty set \({\mathbb {X}}\) equipped with addition \(\oplus \) and multiplication \(\otimes \) such that both operations are associative and commutative, addition is idempotent and has zero \({\mathbb {0}}\), whereas multiplication distributes over addition, has identity \({\mathbb {1}}\), and is invertible for all nonzero elements. The system \(({\mathbb {X}},\oplus ,\otimes ,{\mathbb {0}},{\mathbb {1}})\), which is an idempotent commutative semigroup with zero under addition and an Abelian group under multiplication, is usually called an idempotent semifield.

Idempotent addition \(\oplus \) conforms to the rule \(x\oplus x=x\) for all \(x\in {\mathbb {X}}\), and induces a partial order on \({\mathbb {X}}\) such that \(x\le y\) if and only if \(x\oplus y=y\). This partial order is assumed extended to a compatible total order. Invertible multiplication \(\otimes \) provides an inverse \(x^{-1}\) for any \(x\ne {\mathbb {0}}\) to satisfy the identity \(xx^{-1}={\mathbb {1}}\). (Here and henceforth, the multiplication symbol \(\otimes \) is omitted to save writing.) The powers with integer exponents indicate iterated products, and are defined as \(x^{p}=xx^{p-1}\), \(x^{-p}=(x^{-1})^{p}\), and \(x^{0}={\mathbb {1}}\) for all nonzero \(x\in {\mathbb {X}}\) and integer \(p\ge 1\). Moreover, the equation \(x^{p}=a\) is assumed solvable with respect to x for all \(a\in {\mathbb {X}}\) and integer \(p\ge 1\), which extends the power notation to rational exponents.

The operations in the semifield have the following properties with respect to the order relation induced by the idempotent addition. First, the inequalities \(x\le x\oplus y\) and \(y\le x\oplus y\) hold. Furthermore, the inequality \(x\oplus y\le z\) is equivalent to the pair of inequalities \(x\le z\) and \(y\le z\). Both operations \(\oplus \) and \(\otimes \) are monotone in each argument, which implies that the inequality \(x\le y\) yields the inequalities \(x\oplus z\le y\oplus z\) and \(xz\le yz\) for all \(z\in {\mathbb {X}}\). The inversion is antitone, which means that \(x\le y\) results in \(x^{-1}\ge y^{-1}\) for all \(x,y\ne {\mathbb {0}}\). Finally, the exponential inequalities \(x^{q}\ge x^{r}\) if \(x\le {\mathbb {1}}\) and \(x^{q}\le x^{r}\) if \(x\ge {\mathbb {1}}\) are valid under the condition \(q\le r\), where q and r are positive rationals.

As examples of the idempotent semifield under study, consider the real semifields \({\mathbb {R}}_{\max ,+}=({\mathbb {R}}\cup \{-\infty \},\max ,+,-\infty ,0)\) and \({\mathbb {R}}_{\max ,\times }=({\mathbb {R}}_{+},\max ,\times ,0,1)\), where \({\mathbb {R}}\) is the set of reals, and \({\mathbb {R}}_{+}=\{x\in {\mathbb {R}}\mid x\ge 0\}\). In the semifield \({\mathbb {R}}_{\max ,+}\), which is typically called the max-plus algebra, addition \(\oplus \) is defined as \(\max \) and multiplication \(\otimes \) as \(+\). The zero \({\mathbb {0}}\) and identity \({\mathbb {1}}\) are given by \(-\infty \) and 0. The inverse \(x^{-1}\) coincides with the opposite number \(-x\) in the conventional arithmetic. The power \(x^{y}\) corresponds to the arithmetic product yx, and is well defined for any \(x,y\in {\mathbb {R}}\). The partial order induced by the idempotent addition is compatible with the natural linear order on \({\mathbb {R}}\).

The semifield \({\mathbb {R}}_{\max ,\times }\) is commonly referred to as the max-algebra, and has the operations defined as \(\oplus =\max \) and \(\otimes =\times \), and neutral elements as \({\mathbb {0}}=0\) and \({\mathbb {1}}=1\). The inverse and power notations have the standard meaning, and the partial order extends to the natural linear order. The max-algebra will serve below as the basis for the application of tropical optimization to the bi-criteria decision problem of interest.
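As a quick computational reading of these definitions (a minimal sketch of our own; the names are assumptions), the scalar operations of the two semifields can be coded directly:

```python
# Max-plus algebra R_max,+ : x (+) y = max(x, y), x (x) y = x + y.
MP_ZERO, MP_ONE = float('-inf'), 0.0
def mp_add(x, y): return max(x, y)
def mp_mul(x, y): return x + y

# Max-algebra R_max,x : x (+) y = max(x, y), x (x) y = x * y.
MT_ZERO, MT_ONE = 0.0, 1.0
def mt_add(x, y): return max(x, y)
def mt_mul(x, y): return x * y

assert mp_add(3.0, 3.0) == 3.0           # idempotent addition
assert mp_mul(3.0, -3.0) == MP_ONE       # the inverse of x is -x in max-plus
assert mt_mul(3.0, 1.0 / 3.0) == MT_ONE  # the inverse of x is 1/x in max-algebra
```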

3.2 Matrices and vectors

We now consider matrices with entries in \({\mathbb {X}}\), and denote the set of the matrices with m rows and n columns by \({\mathbb {X}}^{m\times n}\). Idempotent algebra of matrices over \({\mathbb {X}}\) is routinely defined, where the matrix operations follow the standard rules with the scalar operations \(\oplus \) and \(\otimes \) in place of the ordinary arithmetic addition and multiplication. Specifically, for any matrices \(\varvec{A}=(a_{ij})\in {\mathbb {X}}^{m\times n}\), \(\varvec{B}=(b_{ij})\in {\mathbb {X}}^{m\times n}\) and \(\varvec{C}=(c_{ij})\in {\mathbb {X}}^{n\times l}\), and a scalar \(x\in {\mathbb {X}}\), matrix addition and multiplication, and scalar multiplication are given by the entry-wise formulas

$$\begin{aligned} \{\varvec{A}\oplus \varvec{B}\}_{ij} = a_{ij}\oplus b_{ij}, \quad \{\varvec{A}\varvec{C}\}_{ij} = \bigoplus _{k=1}^{n}a_{ik}c_{kj}, \quad \{x\varvec{A}\}_{ij} = xa_{ij}. \end{aligned}$$

The properties of the scalar operations \(\oplus \) and \(\otimes \) with respect to the order relation extend to the matrix operations, where the inequalities are understood entry-wise. A matrix that has all entries equal to \({\mathbb {0}}\) is the zero matrix denoted by \(\varvec{0}\). A matrix with at least one nonzero entry in each column is called column-regular.

A square matrix with all diagonal entries equal to \({\mathbb {1}}\) and the off-diagonal entries to \({\mathbb {0}}\) is the identity matrix denoted by \(\varvec{I}\). The power notation serves to indicate repeated multiplication of a matrix with itself, defined as \(\varvec{A}^{0}=\varvec{I}\) and \(\varvec{A}^{p}=\varvec{A}\varvec{A}^{p-1}\) for any square matrix \(\varvec{A}\) and integer \(p\ge 1\).

Consider a square matrix \(\varvec{A}=(a_{ij})\in {\mathbb {X}}^{n\times n}\). The trace of \(\varvec{A}\) is given by

$$\begin{aligned} \mathrm {tr}\varvec{A} = a_{11}\oplus \cdots \oplus a_{nn} = \bigoplus _{k=1}^{n}a_{kk}. \end{aligned}$$

For any matrices \(\varvec{A},\varvec{B}\in {\mathbb {X}}^{n\times n}\) and scalar \(x\in {\mathbb {X}}\), the following identities hold:

$$\begin{aligned} \mathrm {tr}(\varvec{A}\oplus \varvec{B}) = \mathrm {tr}\varvec{A} \oplus \mathrm {tr}\varvec{B}, \quad \mathrm {tr}(\varvec{A}\varvec{B}) = \mathrm {tr}(\varvec{B}\varvec{A}), \quad \mathrm {tr}(x\varvec{A}) = x\mathrm {tr}\varvec{A}. \end{aligned}$$

To describe the solutions of the optimization problems in the sequel, we need the function that assigns to the matrix \(\varvec{A}\) the scalar

$$\begin{aligned} \mathrm {Tr}(\varvec{A}) = \mathrm {tr}\varvec{A} \oplus \cdots \oplus \mathrm {tr}\varvec{A}^{n} = \bigoplus _{k=1}^{n}\mathrm {tr}\varvec{A}^{k}. \end{aligned}$$
(2)

Provided that the condition \(\mathrm {Tr}(\varvec{A})\le {\mathbb {1}}\) holds, we apply the asterate operator, also known as the Kleene star, which yields the matrix

$$\begin{aligned} \varvec{A}^{*} = \varvec{I}\oplus \varvec{A}\oplus \cdots \oplus \varvec{A}^{n-1} = \bigoplus _{k=0}^{n-1}\varvec{A}^{k}. \end{aligned}$$

Finally, we use the spectral radius of the matrix \(\varvec{A}\), which is given by

$$\begin{aligned} \lambda = \mathrm {tr}\varvec{A} \oplus \cdots \oplus \mathop {\mathrm {tr}}\nolimits ^{1/n}(\varvec{A}^{n}) = \bigoplus _{k=1}^{n}\mathop {\mathrm {tr}}\nolimits ^{1/k}(\varvec{A}^{k}). \end{aligned}$$
(3)

A matrix with one column or row forms a vector over \({\mathbb {X}}\). The vectors are considered as column vectors unless otherwise specified. The set of column vectors with n elements is denoted by \({\mathbb {X}}^{n}\). A vector with all entries equal to \({\mathbb {0}}\) is the zero vector denoted by \(\varvec{0}\). A vector is called regular if it has no zero elements.

For any regular vector \(\varvec{x}=(x_{i})\), the multiplicative conjugate transposition yields the row vector \(\varvec{x}^{-}=(x_{i}^{-})\) with the elements \(x_{i}^{-}=x_{i}^{-1}\) if \(x_{i}\ne {\mathbb {0}}\), and \(x_{i}^{-}={\mathbb {0}}\) otherwise.
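Since the max-algebra realization \({\mathbb {R}}_{\max ,\times }\) is used throughout the examples below, we collect the matrix-level notions of this subsection in a short Python toolkit (a sketch of our own; all function names are assumptions), which the later sketches reuse:

```python
import numpy as np

def trop_mul(A, B):
    # Max-algebra matrix product: entry (i, j) is max_k a_ik * b_kj.
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def trop_tr(A):
    # Tropical trace: tr A = max_k a_kk.
    return float(np.max(np.diag(A)))

def trop_Tr(A):
    # Tr(A) = tr A (+) ... (+) tr(A^n), as in (2).
    n, P, val = A.shape[0], A.copy(), trop_tr(A)
    for _ in range(n - 1):
        P = trop_mul(P, A)
        val = max(val, trop_tr(P))
    return val

def kleene_star(A):
    # A* = I (+) A (+) ... (+) A^(n-1); valid when Tr(A) <= 1.
    n = A.shape[0]
    S = P = np.eye(n)  # np.eye is exactly the max-algebra identity matrix
    for _ in range(n - 1):
        P = trop_mul(P, A)
        S = np.maximum(S, P)
    return S

def spectral_radius(A):
    # lambda = max_k tr^(1/k)(A^k), as in (3).
    n, P, lam = A.shape[0], A.copy(), trop_tr(A)
    for k in range(2, n + 1):
        P = trop_mul(P, A)
        lam = max(lam, trop_tr(P) ** (1.0 / k))
    return lam

def conj(x):
    # Conjugate x^- of a regular (all-positive) vector: elementwise 1/x.
    return 1.0 / np.asarray(x, float)
```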

3.3 Vector inequalities

Given a matrix \(\varvec{A}\in {\mathbb {X}}^{m\times n}\) and a vector \(\varvec{d}\in {\mathbb {X}}^{m}\), consider the problem to solve, with respect to the unknown vector \(\varvec{x}\in {\mathbb {X}}^{n}\), the inequality

$$\begin{aligned} \varvec{A}\varvec{x} \le \varvec{d}. \end{aligned}$$
(4)

A direct solution to the problem is described as follows [see, e.g., Krivulin (2015a)].

Lemma 1

For any column-regular matrix \(\varvec{A}\) and regular vector \(\varvec{d}\), all solutions to inequality (4) are given by

$$\begin{aligned} \varvec{x} \le (\varvec{d}^{-}\varvec{A})^{-}. \end{aligned}$$
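In the max-algebra realization, this maximal solution is immediate to compute; a sketch reusing `trop_mul` and `conj` from the toolkit above:

```python
def max_solution(A, d):
    # Greatest solution x = (d^- A)^- of A x <= d (Lemma 1), for a
    # column-regular A and regular d; every solution of inequality (4)
    # is dominated entrywise by this vector.
    return conj(trop_mul(conj(d)[None, :], A)).ravel()
```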

Now suppose that, given a matrix \(\varvec{A}\in {\mathbb {X}}^{n\times n}\), the problem is to find regular vectors \(\varvec{x}\in {\mathbb {X}}^{n}\) to satisfy the inequality

$$\begin{aligned} \varvec{A}\varvec{x} \le \varvec{x}. \end{aligned}$$
(5)

The next result, obtained in Krivulin (2015b), offers a direct solution.

Theorem 1

For any matrix \(\varvec{A}\), the following statements hold:

  1. If \(\mathrm {Tr}(\varvec{A})\le {\mathbb {1}}\), then all regular solutions to (5) are given by \(\varvec{x}=\varvec{A}^{*}\varvec{u}\), where \(\varvec{u}\) is any regular vector.

  2. If \(\mathrm {Tr}(\varvec{A})>{\mathbb {1}}\), then there is only the trivial solution \(\varvec{x}=\varvec{0}\).
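In computational terms, Theorem 1 amounts to an existence test followed by a generator of solutions; a sketch reusing `trop_Tr`, `kleene_star` and `trop_mul` from the toolkit above:

```python
import numpy as np

def subeigenvectors(A, u):
    # Regular solutions x = A* u of A x <= x (Theorem 1); such solutions
    # exist if and only if Tr(A) <= 1 in the max-algebra realization.
    if trop_Tr(A) > 1.0:
        raise ValueError("Tr(A) > 1: only the trivial solution x = 0 exists")
    return trop_mul(kleene_star(A), np.asarray(u, float)[:, None]).ravel()
```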

3.4 Identities and inequalities for traces

We start with an obvious binomial identity that is valid for any square matrices \(\varvec{A},\varvec{B}\in {\mathbb {X}}^{n\times n}\) and positive integer m in the following form [see also Krivulin (2017a)]:

$$\begin{aligned} (\varvec{A}\oplus \varvec{B})^{m} = \varvec{A}^{m} \oplus \bigoplus _{k=1}^{m}\bigoplus _{i_{0}+i_{1}+\cdots +i_{k}=m-k} \varvec{A}^{i_{0}}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}). \end{aligned}$$

Taking the trace of both sides and using the properties of traces yield

$$\begin{aligned} \mathrm {tr}(\varvec{A}\oplus \varvec{B})^{m} = \mathrm {tr}\varvec{A}^{m} \oplus \bigoplus _{k=1}^{m}\bigoplus _{i_{1}+\cdots +i_{k}=m-k} \mathrm {tr}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}). \end{aligned}$$

After summing over \(m=1,\ldots ,n\), and rearranging terms, we obtain the identity

$$\begin{aligned} \mathrm {Tr}(\varvec{A}\oplus \varvec{B}) = \bigoplus _{k=1}^{n}\mathrm {tr}\varvec{A}^{k} \oplus \bigoplus _{k=1}^{n} \bigoplus _{m=0}^{n-k} \bigoplus _{i_{1}+\cdots +i_{k}=m} \mathrm {tr}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}). \end{aligned}$$
(6)

Furthermore, we assume \(s,t>{\mathbb {0}}\), and introduce two functions

$$\begin{aligned} \begin{aligned} G(s)&= \bigoplus _{k=1}^{n-1} \bigoplus _{m=1}^{n-k} \bigoplus _{i_{1}+\cdots +i_{k}=m} s^{-k/m}\mathop {\mathrm {tr}}\nolimits ^{1/m}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}), \\ H(t)&= \bigoplus _{k=1}^{n-1} \bigoplus _{m=1}^{n-k} \bigoplus _{i_{1}+\cdots +i_{k}=m} t^{-m/k}\mathop {\mathrm {tr}}\nolimits ^{1/k}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}). \end{aligned} \end{aligned}$$
(7)

The functions G and H possess a duality property: the inequalities \(G(s)\le t\) and \(H(t)\le s\) are equivalent. To verify this property, suppose that, for some s and t, the first inequality \(G(s)\le t\) is valid. This inequality is equivalent to the system of inequalities

$$\begin{aligned} \bigoplus _{i_{1}+\cdots +i_{k}=m} s^{-k/m}\mathop {\mathrm {tr}}\nolimits ^{1/m}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}) \le t, \quad m=1,\ldots ,n-k; \quad k=1,\ldots ,n-1. \end{aligned}$$

Multiplication of both sides of the inequalities by \(t^{-1}s^{k/m}\), followed by raising to the power \(m/k\), leads to the system

$$\begin{aligned} \bigoplus _{i_{1}+\cdots +i_{k}=m} t^{-m/k}\mathop {\mathrm {tr}}\nolimits ^{1/k}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}) \le s, \quad m=1,\ldots ,n-k; \quad k=1,\ldots ,n-1. \end{aligned}$$

By summing up the inequalities in the system, we obtain the second inequality

$$\begin{aligned} H(t) = \bigoplus _{k=1}^{n-1} \bigoplus _{m=1}^{n-k} \bigoplus _{i_{1}+\cdots +i_{k}=m} t^{-m/k}\mathop {\mathrm {tr}}\nolimits ^{1/k}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}) \le s. \end{aligned}$$

Observing that all transformations performed are invertible, we conclude that the inequalities \(G(s)\le t\) and \(H(t)\le s\) are equivalent.

Let us discuss the computational complexity of the functions G and H. Consider the function G, and rewrite it in equivalent form as

$$\begin{aligned} G(s) = \bigoplus _{k=1}^{n-1} \bigoplus _{m=1}^{n-k} s^{-k/m} \mathop {\mathrm {tr}}\nolimits ^{1/m}(\varvec{R}_{km}), \quad \varvec{R}_{km} = \bigoplus _{i_{1}+\cdots +i_{k}=m} \varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}, \end{aligned}$$

which shows that the computational time required to calculate G is determined by the time taken to calculate the matrices \(\varvec{R}_{km}\) for all \(k=1,\ldots ,n-1\) and \(m=1,\ldots ,n-k\).

Note that \(\varvec{R}_{km}\) is defined as the sum of products \(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}\) over all nonnegative integers \(i_{1},\ldots ,i_{k}\) such that \(i_{1}+\cdots +i_{k}=m\). Since the recurrence equation

$$\begin{aligned} \varvec{R}_{km} = \varvec{B}\varvec{R}_{k-1,m}\oplus \varvec{R}_{k,m-1}\varvec{A}, \quad \varvec{R}_{0m} = \varvec{A}^{m}, \quad \varvec{R}_{k0} = \varvec{B}^{k}, \quad \varvec{R}_{00} = \varvec{I}, \end{aligned}$$

is valid for all \(k,m\ge 1\), each matrix \(\varvec{R}_{km}\) can be obtained from the matrices \(\varvec{R}_{k-1,m}\) and \(\varvec{R}_{k,m-1}\) through two matrix multiplications and one matrix addition. Observing that one matrix multiplication takes at most \(O(n^{3})\) scalar operations, and the number of matrices \(\varvec{R}_{km}\) used in the coefficients of the function G is \(n(n-1)/2\), the overall time needed to calculate G(s) is \(O(n^{5})\). The calculation of H(t) takes the same time.
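The recurrence maps directly onto a dynamic-programming tabulation; the following sketch (our illustration, reusing `trop_mul` and `trop_tr` from the toolkit above) computes the matrices \(\varvec{R}_{km}\) and evaluates G and H:

```python
import numpy as np

def build_R(A, B):
    # Tabulate the matrices R_km needed in (7) via the recurrence
    # R_km = B R_{k-1,m} (+) R_{k,m-1} A, with R_0m = A^m and R_k0 = B^k.
    n = A.shape[0]
    R = {(0, 0): np.eye(n)}
    for m in range(1, n):
        R[(0, m)] = trop_mul(R[(0, m - 1)], A)
    for k in range(1, n):
        R[(k, 0)] = trop_mul(B, R[(k - 1, 0)])
        for m in range(1, n - k + 1):
            R[(k, m)] = np.maximum(trop_mul(B, R[(k - 1, m)]),
                                   trop_mul(R[(k, m - 1)], A))
    return R

def G(s, A, B):
    # G(s) from (7): the greatest of s^(-k/m) tr^(1/m)(R_km).
    n, R = A.shape[0], build_R(A, B)
    return max(s ** (-k / m) * trop_tr(R[(k, m)]) ** (1.0 / m)
               for k in range(1, n) for m in range(1, n - k + 1))

def H(t, A, B):
    # H(t) from (7): the same traces with exponents -m/k and 1/k.
    n, R = A.shape[0], build_R(A, B)
    return max(t ** (-m / k) * trop_tr(R[(k, m)]) ** (1.0 / k)
               for k in range(1, n) for m in range(1, n - k + 1))
```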

4 Unconstrained bi-objective optimization problem

We are now in a position to describe our main result, which offers a complete solution to a tropical bi-objective optimization problem in the form of a direct, explicit representation of both the Pareto frontier and the related Pareto-optimal solutions of the problem.

Suppose that, given square matrices \(\varvec{A},\varvec{B}\in {\mathbb {X}}^{n\times n}\), we need to find regular vectors \(\varvec{x}\in {\mathbb {X}}^{n}\) that solve the bi-objective optimization problem

$$\begin{aligned} \begin{aligned}&\text {minimize}&(\varvec{x}^{-}\varvec{A}\varvec{x},\ \varvec{x}^{-}\varvec{B}\varvec{x}). \end{aligned} \end{aligned}$$
(8)

To cope with this problem, we implement an approach based on the introduction of parameters that represent the optimal values of the objective functions on the Pareto frontier. We reduce the problem to a parametrized vector inequality, and then exploit the existence condition for solutions of the inequality to evaluate the parameters and construct the Pareto frontier. Finally, the solutions of the inequality that correspond to the parameters in the Pareto frontier are taken as a complete Pareto-optimal solution to the problem.

The next statement offers a complete solution to problem (8).

Theorem 2

Let \(\varvec{A}\) be a matrix with spectral radius \(\mu >{\mathbb {0}}\), let \(\varvec{B}\) be a matrix with spectral radius \(\nu >{\mathbb {0}}\), and let G(s) and H(t) be the corresponding functions defined by (7).

Then the following statements hold:

  1. If \(\mu <G(\nu )\), then the Pareto frontier of problem (8) is the set of points \((\alpha ,\beta )\) defined by the conditions

    $$\begin{aligned} \mu \le \alpha \le G(\nu ), \quad \beta = H(\alpha ), \end{aligned}$$
    (9)

    and all regular Pareto-optimal solutions are given by

    $$\begin{aligned} \varvec{x} = (\alpha ^{-1}\varvec{A} \oplus \beta ^{-1}\varvec{B})^{*} \varvec{u}, \quad \varvec{u} > \varvec{0}; \end{aligned}$$
  2. If \(\mu \ge G(\nu )\), then the Pareto frontier is reduced to the single point

    $$\begin{aligned} \alpha = \mu , \quad \beta = \nu , \end{aligned}$$

    and all regular solutions are given by

    $$\begin{aligned} \varvec{x} = (\mu ^{-1}\varvec{A} \oplus \nu ^{-1}\varvec{B})^{*} \varvec{u}, \quad \varvec{u} > \varvec{0}. \end{aligned}$$

Proof

Denote the minimum values of the objective functions \(\varvec{x}^{-}\varvec{A}\varvec{x}\) and \(\varvec{x}^{-}\varvec{B}\varvec{x}\) in the Pareto frontier of problem (8) by \(\alpha \) and \(\beta \). Then, all solutions are defined by the system of equations

$$\begin{aligned} \varvec{x}^{-}\varvec{A}\varvec{x} = \alpha , \quad \varvec{x}^{-}\varvec{B}\varvec{x} = \beta . \end{aligned}$$

Since we assume \(\alpha \) and \(\beta \) to be the minimum values, the set of corresponding regular solutions does not change if the equalities are replaced by the inequalities

$$\begin{aligned} \varvec{x}^{-}\varvec{A}\varvec{x} \le \alpha , \quad \varvec{x}^{-}\varvec{B}\varvec{x} \le \beta . \end{aligned}$$

By using Lemma 1, we solve the first inequality with respect to \(\varvec{A}\varvec{x}\) and the second with respect to \(\varvec{B}\varvec{x}\) to rewrite the system in the equivalent form

$$\begin{aligned} \alpha ^{-1}\varvec{A}\varvec{x} \le \varvec{x}, \quad \beta ^{-1}\varvec{B}\varvec{x} \le \varvec{x}, \end{aligned}$$

which then combine into one inequality

$$\begin{aligned} (\alpha ^{-1}\varvec{A} \oplus \beta ^{-1}\varvec{B}) \varvec{x} \le \varvec{x}. \end{aligned}$$

According to Theorem 1, regular solutions of the last inequality exist if and only if the following condition holds:

$$\begin{aligned} \mathrm {Tr}(\alpha ^{-1}\varvec{A}\oplus \beta ^{-1}\varvec{B}) \le {\mathbb {1}}, \end{aligned}$$
(10)

and all solutions are given, through a vector of parameters \(\varvec{u}\), by

$$\begin{aligned} \varvec{x} = (\alpha ^{-1}\varvec{A} \oplus \beta ^{-1}\varvec{B})^{*} \varvec{u}, \quad \varvec{u} > \varvec{0}. \end{aligned}$$
(11)

To examine the existence condition, we first use (6) for calculating

$$\begin{aligned} \mathrm {Tr}(\alpha ^{-1}\varvec{A}\oplus \beta ^{-1}\varvec{B}) = \bigoplus _{k=1}^{n}\alpha ^{-k}\mathrm {tr}\varvec{A}^{k} \oplus \bigoplus _{k=1}^{n} \bigoplus _{m=0}^{n-k} \bigoplus _{i_{1}+\cdots +i_{k}=m} \alpha ^{-m}\beta ^{-k} \mathrm {tr}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}). \end{aligned}$$

In this case, inequality (10) is equivalent to the system of inequalities

$$\begin{aligned} \alpha ^{-k}\mathrm {tr}\varvec{A}^{k}&\le {\mathbb {1}},\\ \beta ^{-k} \bigoplus _{m=0}^{n-k} \bigoplus _{i_{1}+\cdots +i_{k}=m} \alpha ^{-m} \mathrm {tr}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}})&\le {\mathbb {1}}, \quad k=1,\ldots ,n. \end{aligned}$$

By rearranging the terms to isolate powers of \(\alpha \) and \(\beta \) on the right-hand side, and taking roots, we rewrite the system as

$$\begin{aligned} \mathop {\mathrm {tr}}\nolimits ^{1/k}(\varvec{A}^{k})&\le \alpha ,\\ \bigoplus _{m=0}^{n-k} \bigoplus _{i_{1}+\cdots +i_{k}=m} \alpha ^{-m/k} \mathop {\mathrm {tr}}\nolimits ^{1/k}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}})&\le \beta , \quad k=1,\ldots ,n. \end{aligned}$$

We aggregate all these inequalities into two inequalities

$$\begin{aligned} \alpha&\ge \bigoplus _{k=1}^{n}\mathop {\mathrm {tr}}\nolimits ^{1/k}(\varvec{A}^{k}),\\ \beta&\ge \bigoplus _{k=1}^{n}\mathop {\mathrm {tr}}\nolimits ^{1/k}(\varvec{B}^{k}) \oplus \bigoplus _{k=1}^{n-1}\bigoplus _{m=1}^{n-k}\bigoplus _{i_{1}+\cdots +i_{k}=m} \alpha ^{-m/k} \mathop {\mathrm {tr}}\nolimits ^{1/k}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}). \end{aligned}$$

With the spectral radii of the matrices \(\varvec{A}\) and \(\varvec{B}\) given by

$$\begin{aligned} \mu = \bigoplus _{k=1}^{n}\mathop {\mathrm {tr}}\nolimits ^{1/k}(\varvec{A}^{k}), \quad \nu = \bigoplus _{k=1}^{n}\mathop {\mathrm {tr}}\nolimits ^{1/k}(\varvec{B}^{k}), \end{aligned}$$

and the notation

$$\begin{aligned} H(\alpha ) = \bigoplus _{k=1}^{n-1}\bigoplus _{m=1}^{n-k}\bigoplus _{i_{1}+\cdots +i_{k}=m} \alpha ^{-m/k} \mathop {\mathrm {tr}}\nolimits ^{1/k}(\varvec{B}\varvec{A}^{i_{1}}\cdots \varvec{B}\varvec{A}^{i_{k}}), \end{aligned}$$

the last inequalities take the more compact form

$$\begin{aligned} \alpha \ge \mu , \quad \beta \ge \nu \oplus H(\alpha ). \end{aligned}$$
(12)

We now consider the feasible area in the \(\alpha \beta \)-coordinate system, which is defined by these inequalities. Our aim is to determine Pareto-efficient points \((\alpha ,\beta )\) in the feasible area, such that no coordinate of the point can be decreased without increasing the other, and thus to construct the Pareto frontier for the problem.

Since no interior point can be Pareto-efficient, we need only examine the points on the boundary of the area, which consists of an open left vertical segment where \(\alpha =\mu \), a lower-left segment where \(\beta =\nu \oplus H(\alpha )\), and an open lower horizontal segment where \(\beta =\nu \). Observing that the points of both the vertical and horizontal segments are obviously not Pareto-efficient, we conclude that the Pareto frontier coincides with the lower-left boundary segment of the feasible area.

To represent the Pareto frontier in a more convenient form, we examine the second inequality in (12). First, assume that the condition

$$\begin{aligned} H(\alpha ) \le \nu \end{aligned}$$

is valid. In this case, the Pareto frontier, which is now the lower-left boundary of the area given by the inequalities \(\alpha \ge \mu \) and \(\beta \ge \nu \), degenerates into one point

$$\begin{aligned} \alpha = \mu , \quad \beta = \nu . \end{aligned}$$

We solve the inequality \(H(\alpha )\le \nu \) with respect to \(\alpha \) by turning to the equivalent inequality \(G(\nu )\le \alpha \). Taking into account that the inequality \(\alpha \ge \mu \) holds, we conclude that, under the condition

$$\begin{aligned} \mu \ge G(\nu ), \end{aligned}$$

solution (11), which simultaneously minimizes both criteria, is reduced to

$$\begin{aligned} \varvec{x} = (\mu ^{-1}\varvec{A} \oplus \nu ^{-1}\varvec{B})^{*} \varvec{u}, \quad \varvec{u} > \varvec{0}. \end{aligned}$$

Otherwise, the Pareto frontier is given by the conditions

$$\begin{aligned} \mu \le \alpha \le G(\nu ), \quad \beta = H(\alpha ), \end{aligned}$$

whereas the Pareto-optimal solution takes the general form of (11). \(\square \)

We conclude this section with the observation that the solution obtained has a polynomial time complexity. Indeed, the evaluation of the parameters \(\alpha \) and \(\beta \) by using the functions G(s) and H(t) requires at most \(O(n^{5})\) scalar operations. At the same time, given the parameters \(\alpha \) and \(\beta \), the calculation of the Kleene star matrix to represent the solution vector \(\varvec{x}\) takes at most \(O(n^{4})\) operations, which leads to the overall polynomial time complexity of order \(O(n^{5})\).
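Combining Theorem 2 with the helpers sketched in Sect. 3 yields the whole procedure in a few lines; a max-algebra sketch (the function name and interface are ours):

```python
import numpy as np

def pareto_solution(A, B, alpha=None):
    # Return a point (alpha, beta) of the Pareto frontier of problem (8)
    # and the matrix K whose columns generate all solutions x = K u, u > 0.
    mu, nu = spectral_radius(A), spectral_radius(B)
    g = G(nu, A, B)
    if mu >= g:                        # frontier degenerates to (mu, nu)
        alpha, beta = mu, nu
    else:
        alpha = mu if alpha is None else alpha
        assert mu <= alpha <= g, "alpha must lie in [mu, G(nu)]"
        beta = H(alpha, A, B)          # frontier: beta = H(alpha), see (9)
    return alpha, beta, kleene_star(np.maximum(A / alpha, B / beta))
```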

5 Examples of two-dimensional problems

In this section, we consider illustrative examples of bi-objective two-dimensional problems, and provide complete Pareto-optimal solutions of these problems. The purpose of this section is to demonstrate the computational technique involved in the solution, and to give illuminating geometric illustrations of the results obtained.

Example 1

Consider problem (8) with \(n=2\) and the matrices

$$\begin{aligned} \varvec{A} = \begin{pmatrix} a_{11} &{} a_{12} \\ a_{21} &{} a_{22} \end{pmatrix}, \quad \varvec{B} = \begin{pmatrix} b_{11} &{} b_{12} \\ b_{21} &{} b_{22} \end{pmatrix}, \end{aligned}$$

and assume the entries of both matrices to be nonzero.

To apply Theorem 2, we first evaluate the spectral radii \(\mu \) and \(\nu \) of the matrices \(\varvec{A}\) and \(\varvec{B}\). With the matrix powers, given by

$$\begin{aligned} \varvec{A}^{2} = \begin{pmatrix} a_{11}^{2}\oplus a_{12}a_{21} &{} a_{12}(a_{11}\oplus a_{22}) \\ a_{21}(a_{11}\oplus a_{22}) &{} a_{12}a_{21}\oplus a_{22}^{2} \end{pmatrix}, \quad \varvec{B}^{2} = \begin{pmatrix} b_{11}^{2}\oplus b_{12}b_{21} &{} b_{12}(b_{11}\oplus b_{22}) \\ b_{21}(b_{11}\oplus b_{22}) &{} b_{12}b_{21}\oplus b_{22}^{2} \end{pmatrix}, \end{aligned}$$

we apply (3) for \(n=2\) to obtain

$$\begin{aligned} \mu = a_{11}\oplus a_{12}^{1/2}a_{21}^{1/2}\oplus a_{22}, \quad \nu = b_{11}\oplus b_{12}^{1/2}b_{21}^{1/2}\oplus b_{22}. \end{aligned}$$

Furthermore, we use (7) to construct the functions

$$\begin{aligned} G(s) = s^{-1}\mathrm {tr}(\varvec{B}\varvec{A}), \quad H(t) = t^{-1}\mathrm {tr}(\varvec{B}\varvec{A}), \end{aligned}$$

where the trace of the matrix

$$\begin{aligned} \varvec{B}\varvec{A} = \begin{pmatrix} a_{11}b_{11}\oplus a_{12}b_{21} &{} a_{11}b_{12}\oplus a_{12}b_{22} \\ a_{21}b_{11}\oplus a_{22}b_{21} &{} a_{21}b_{12}\oplus a_{22}b_{22} \end{pmatrix} \end{aligned}$$

on the right-hand sides is calculated as

$$\begin{aligned} \mathrm {tr}(\varvec{B}\varvec{A}) = a_{11}b_{11}\oplus a_{12}b_{21} \oplus a_{21}b_{12}\oplus a_{22}b_{22}. \end{aligned}$$

Next, we expand the Kleene star matrix \((\alpha ^{-1}\varvec{A}\oplus \beta ^{-1}\varvec{B})^{*}\), whose columns generate the solution vectors. Taking into account that \(\alpha \ge \mu \ge a_{ii}\) and \(\beta \ge \nu \ge b_{ii}\) for \(i=1,2\), we obtain

$$\begin{aligned} (\alpha ^{-1}\varvec{A}\oplus \beta ^{-1}\varvec{B})^{*} = \varvec{I} \oplus \alpha ^{-1}\varvec{A}\oplus \beta ^{-1}\varvec{B} = \begin{pmatrix} {\mathbb {1}} &{} \alpha ^{-1}a_{12}\oplus \beta ^{-1}b_{12} \\ \alpha ^{-1}a_{21}\oplus \beta ^{-1}b_{21} &{} {\mathbb {1}} \end{pmatrix}. \end{aligned}$$

We are now in a position to rewrite the statement of Theorem 2 in terms of the two-dimensional problem under consideration. With the notation \(c=\mathrm {tr}(\varvec{B}\varvec{A})\), the solution to the problem is given as follows. If \(\mu \nu <c\), then the Pareto frontier \((\alpha ,\beta )\) of problem (8) is defined by the conditions

$$\begin{aligned} \mu \le \alpha \le \nu ^{-1}c, \quad \beta = \alpha ^{-1}c. \end{aligned}$$
(13)

All regular Pareto-optimal solutions are written, using a vector \(\varvec{u}=(u_{1},u_{2})^{T}\), as

$$\begin{aligned} \varvec{x} = \left( \begin{array}{cc} {\mathbb {1}} &{} \alpha ^{-1}a_{12}\oplus \beta ^{-1}b_{12} \\ \alpha ^{-1}a_{21}\oplus \beta ^{-1}b_{21} &{} {\mathbb {1}} \end{array} \right) \varvec{u}, \quad \varvec{u} > \varvec{0}. \end{aligned}$$
(14)

Let us examine the collinearity of the columns in the generating matrix, given by

$$\begin{aligned} \begin{pmatrix} {\mathbb {1}} \\ \alpha ^{-1}a_{21}\oplus \beta ^{-1}b_{21} \end{pmatrix}, \quad \begin{pmatrix} \alpha ^{-1}a_{12}\oplus \beta ^{-1}b_{12} \\ {\mathbb {1}} \end{pmatrix}. \end{aligned}$$

It is easy to see that these two columns are collinear if and only if the equality condition \((\alpha ^{-1}a_{12}\oplus \beta ^{-1}b_{12})(\alpha ^{-1}a_{21}\oplus \beta ^{-1}b_{21})={\mathbb {1}}\) holds. After expanding the left-hand side, the condition becomes

$$\begin{aligned} \alpha ^{-2}a_{12}a_{21} \oplus \alpha ^{-1}\beta ^{-1} (a_{12}b_{21} \oplus a_{21}b_{12}) \oplus \beta ^{-2}b_{12}b_{21} = {\mathbb {1}}. \end{aligned}$$
(15)

It follows from the conditions on \(\alpha \) and \(\beta \) at (13) that \(\alpha \ge \mu \ge (a_{12}a_{21})^{1/2}\), \(\alpha \beta =c\ge a_{12}b_{21}\oplus a_{21}b_{12}\), and \(\beta =\alpha ^{-1}c\ge \nu \ge (b_{12}b_{21})^{1/2}\).

As a direct consequence, we obtain the following inequalities: \(\alpha ^{-2}a_{12}a_{21}\le {\mathbb {1}}\), \(\alpha ^{-1}\beta ^{-1}(a_{12}b_{21}\oplus a_{21}b_{12})\le {\mathbb {1}}\), and \(\beta ^{-2}b_{12}b_{21}\le {\mathbb {1}}\). Note that the first inequality holds as an equality if and only if \(\alpha =\mu \) and \(\mu =(a_{12}a_{21})^{1/2}\). The second inequality becomes an equality if \(c=a_{12}b_{21}\oplus a_{21}b_{12}\), and the third becomes an equality if \(\alpha =\nu ^{-1}c\) and \(\nu =(b_{12}b_{21})^{1/2}\).

The inequalities obtained combine into one inequality, which has the same left-hand side as the collinearity condition in (15),

$$\begin{aligned} \alpha ^{-2}a_{12}a_{21} \oplus \alpha ^{-1}\beta ^{-1} (a_{12}b_{21} \oplus a_{21}b_{12}) \oplus \beta ^{-2}b_{12}b_{21} \le {\mathbb {1}}. \end{aligned}$$
(16)

This composite inequality holds as equality, resulting in collinear columns in the generating matrix, if and only if at least one component inequality holds as equality.

We now summarize the above discussion on the collinearity of columns to refine the solution by dropping one of the columns, say the first, if they are collinear. We consider the following conditions, which yield collinear columns.

First suppose that the condition \(\alpha =\mu =(a_{12}a_{21})^{1/2}\) holds. Then \(\beta =\mu ^{-1}c\), and the Pareto-optimal solution is given, up to a positive factor, by the vector

$$\begin{aligned} \begin{pmatrix} a_{12}^{1/2}a_{21}^{-1/2}\oplus a_{12}^{1/2}a_{21}^{1/2}b_{12}c^{-1} \\ {\mathbb {1}} \end{pmatrix}. \end{aligned}$$

Under the condition \(c=a_{12}b_{21}\oplus a_{21}b_{12}\), we have \(\beta =\alpha ^{-1}(a_{12}b_{21}\oplus a_{21}b_{12})\), and the solutions take the form of the vector

$$\begin{aligned} \begin{pmatrix} \alpha ^{-1}a_{12}\oplus \alpha b_{12}(a_{12}b_{21}\oplus a_{21}b_{12})^{-1} \\ {\mathbb {1}} \end{pmatrix}, \quad \mu \le \alpha \le \nu ^{-1}(a_{12}b_{21}\oplus a_{21}b_{12}). \end{aligned}$$

Note that these solutions are generated by the matrix with columns obtained from the above vector by setting \(\alpha =\mu \) and \(\alpha =\nu ^{-1}(a_{12}b_{21}\oplus a_{21}b_{12})\), which is given by

$$\begin{aligned} \begin{pmatrix} \mu ^{-1}a_{12}\oplus \mu b_{12}(a_{12}b_{21}\oplus a_{21}b_{12})^{-1} &{}\;\;&{} \nu a_{12}(a_{12}b_{21}\oplus a_{21}b_{12})^{-1}\oplus \nu ^{-1}b_{12} \\ {\mathbb {1}} &{} &{} {\mathbb {1}} \end{pmatrix}. \end{aligned}$$

Furthermore, with \(\beta =\nu =(b_{12}b_{21})^{1/2}\), we have \(\alpha =(b_{12}b_{21})^{-1/2}c\), which yields the solution

$$\begin{aligned} \begin{pmatrix} a_{12}b_{12}^{1/2}b_{21}^{1/2}c^{-1} \oplus b_{12}^{1/2}b_{21}^{-1/2} \\ {\mathbb {1}} \end{pmatrix}. \end{aligned}$$

If none of the above conditions is valid, inequality (16) becomes strict, which means that the columns in the generating matrix are non-collinear. In this case, the solution retains the general form (14), where the matrix cannot be reduced to a vector.

Consider the Pareto frontier, which forms a segment with the endpoints

$$\begin{aligned} (\mu ,\ \mu ^{-1}c), \quad (\nu ^{-1}c,\ \nu ). \end{aligned}$$

The first endpoint corresponds to the solution

$$\begin{aligned} \varvec{x}_{\mu } = \begin{pmatrix} {\mathbb {1}} &{} \mu ^{-1}a_{12} \oplus \mu b_{12}c^{-1} \\ \mu ^{-1}a_{21} \oplus \mu b_{21}c^{-1} &{} {\mathbb {1}} \end{pmatrix} \varvec{u}, \quad \varvec{u} > \varvec{0}. \end{aligned}$$

It follows from the above discussion that, if either of the conditions \(\mu =(a_{12}a_{21})^{1/2}\) or \(c=a_{12}b_{21}\oplus a_{21}b_{12}\) is satisfied, then the solution takes the form of unique (up to a positive factor) vectors, given respectively by

$$\begin{aligned} \begin{pmatrix} a_{12}^{1/2}a_{21}^{-1/2} \oplus a_{12}^{1/2}a_{21}^{1/2}b_{12}c^{-1} \\ {\mathbb {1}} \end{pmatrix}, \quad \begin{pmatrix} \mu ^{-1}a_{12} \oplus \mu b_{12}(a_{12}b_{21}\oplus a_{21}b_{12})^{-1} \\ {\mathbb {1}} \end{pmatrix}. \end{aligned}$$

The second endpoint yields the solution in the form

$$\begin{aligned} \varvec{x}_{\nu } = \begin{pmatrix} {\mathbb {1}} &{} \nu a_{12}c^{-1} \oplus \nu ^{-1}b_{12} \\ \nu a_{21}c^{-1} \oplus \nu ^{-1}b_{21} &{} {\mathbb {1}} \end{pmatrix} \varvec{v}, \quad \varvec{v} > \varvec{0}, \end{aligned}$$

which, under the conditions \(c=a_{12}b_{21}\oplus a_{21}b_{12}\) or \(\nu =(b_{12}b_{21})^{1/2}\), reduces to unique vectors, defined respectively as

$$\begin{aligned} \begin{pmatrix} \nu a_{12}(a_{12}b_{21}\oplus a_{21}b_{12})^{-1} \oplus \nu ^{-1}b_{12} \\ {\mathbb {1}} \end{pmatrix}, \quad \begin{pmatrix} a_{12}b_{12}^{1/2}b_{21}^{1/2}c^{-1} \oplus b_{12}^{1/2}b_{21}^{-1/2} \\ {\mathbb {1}} \end{pmatrix}. \end{aligned}$$

In the case that \(\mu \nu \ge c\), the Pareto frontier shrinks into the single point

$$\begin{aligned} \alpha = \mu , \quad \beta = \nu , \end{aligned}$$

and all regular solutions are given by

$$\begin{aligned} \varvec{x} = \begin{pmatrix} {\mathbb {1}} &{} \mu ^{-1}a_{12}\oplus \nu ^{-1}b_{12} \\ \mu ^{-1}a_{21}\oplus \nu ^{-1}b_{21} &{} {\mathbb {1}} \end{pmatrix} \varvec{u}, \quad \varvec{u} > \varvec{0}. \end{aligned}$$

If one of the conditions \(\mu =(a_{12}a_{21})^{1/2}\), \(c=a_{12}b_{21}\oplus a_{21}b_{12}\), and \(\nu =(b_{12}b_{21})^{1/2}\) holds, the solution reduces to unique (up to a positive factor) vectors, given by

$$\begin{aligned} \begin{pmatrix} a_{12}^{1/2}a_{21}^{-1/2} \oplus \nu ^{-1}b_{12} \\ {\mathbb {1}} \end{pmatrix}, \quad \begin{pmatrix} \mu ^{-1}a_{12}\oplus \nu ^{-1}b_{12} \\ {\mathbb {1}} \end{pmatrix}, \quad \begin{pmatrix} \mu ^{-1}a_{12} \oplus b_{12}^{1/2}b_{21}^{-1/2} \\ {\mathbb {1}} \end{pmatrix}. \end{aligned}$$

Note that, in the case when the Pareto frontier degenerates into one point, the solution does not have to be a unique (up to a positive factor) vector, and can be a cone formed by two non-collinear vectors, if the above conditions do not hold.

In Figs. 1 and 2, we provide a graphical illustration of the discussion, given in the framework of the \({\mathbb {R}}_{\max ,\times }\) semifield (max-algebra). Figure 1 offers examples of Pareto frontiers in the form of a segment (left), and in the degenerate form of a point (right).

In Fig. 2, we demonstrate examples of the Pareto-optimal solutions in the form of a cone (left), and of a vector (right).

Fig. 1 Examples of Pareto frontiers with \(\mu \nu <c\) (left), and \(\mu \nu \ge c\) (right), where \(c=\mathrm {tr}(\varvec{B}\varvec{A})\)

Fig. 2 Examples of Pareto-optimal solution cone (left), and single-vector solution (right)

Example 2

Suppose that the matrices in problem (8) are symmetrically reciprocal, and given by

$$\begin{aligned} \varvec{A} = \begin{pmatrix} {\mathbb {1}} &{} a \\ a^{-1} &{} {\mathbb {1}} \end{pmatrix}, \quad \varvec{B} = \begin{pmatrix} {\mathbb {1}} &{} b \\ b^{-1} &{} {\mathbb {1}} \end{pmatrix}, \quad a\ne b. \end{aligned}$$

By using results of the previous example, we obtain

$$\begin{aligned} \mu = \nu = {\mathbb {1}}, \quad \mathrm {tr}(\varvec{B}\varvec{A}) = ab^{-1} \oplus a^{-1}b = c. \end{aligned}$$

Furthermore, we represent the matrix, which generates the solutions, as follows:

$$\begin{aligned} (\alpha ^{-1}\varvec{A}\oplus \beta ^{-1}\varvec{B})^{*} = \begin{pmatrix} {\mathbb {1}} &{} \alpha ^{-1}a\oplus \beta ^{-1}b \\ \alpha ^{-1}a^{-1}\oplus \beta ^{-1}b^{-1} &{} {\mathbb {1}} \end{pmatrix}. \end{aligned}$$

Since \(a\ne b\), we have \(c=a^{-1}b\,\oplus \,ab^{-1}>{\mathbb {1}}\), and hence the condition \(\mu \nu <\mathrm {tr}(\varvec{B}\varvec{A})\) holds. The Pareto frontier \((\alpha ,\beta )\) is defined as

$$\begin{aligned} {\mathbb {1}} \le \alpha \le c, \quad \beta = \alpha ^{-1}c, \end{aligned}$$

and all regular Pareto-optimal solutions are given by

$$\begin{aligned} \varvec{x} = \begin{pmatrix} {\mathbb {1}} &{} \alpha ^{-1}a\oplus \beta ^{-1}b \\ \alpha ^{-1}a^{-1}\oplus \beta ^{-1}b^{-1} &{} {\mathbb {1}} \end{pmatrix} \varvec{u}, \quad \varvec{u} > \varvec{0}. \end{aligned}$$

Observing that \(\beta =\alpha ^{-1}c\), \(c=a^{-1}b\oplus ab^{-1}\) and \(\alpha \le c\), we have

$$\begin{aligned} (\alpha ^{-1}a\oplus \beta ^{-1}b)(\alpha ^{-1}a^{-1}\oplus \beta ^{-1}b^{-1}) = \alpha ^{-2}\oplus (ab^{-1}\oplus a^{-1}b)c^{-1}\oplus \alpha ^{2}c^{-2} = {\mathbb {1}}, \end{aligned}$$

which implies that the columns in the generating matrix are collinear. Taking one of them, say the second, we reduce the generating matrix to one parametrized column

$$\begin{aligned} \begin{pmatrix} \alpha ^{-1}a \oplus \alpha bc^{-1} \\ {\mathbb {1}} \end{pmatrix}, \quad {\mathbb {1}} \le \alpha \le c. \end{aligned}$$

Then, we see that, as \(\alpha \) passes from \({\mathbb {1}}\) to c, the value of \(\alpha ^{-1}a\oplus \alpha bc^{-1}\) changes from a to b. As a result, all solutions can be generated by the columns \((a,{\mathbb {1}})^{T}\) and \((b,{\mathbb {1}})^{T}\) to provide a new matrix representation, where the parameter \(\alpha \) is eliminated,

$$\begin{aligned} \varvec{x} = \begin{pmatrix} a &{} b \\ {\mathbb {1}} &{} {\mathbb {1}} \end{pmatrix} \varvec{u}, \quad \varvec{u} > \varvec{0}. \end{aligned}$$

The endpoints of the frontier segment are given by

$$\begin{aligned} ({\mathbb {1}},\ c), \quad (c,\ {\mathbb {1}}), \end{aligned}$$

which correspond to the solutions

$$\begin{aligned} \varvec{x}_{\mu } = \begin{pmatrix} a \\ {\mathbb {1}} \end{pmatrix} u, \quad u>{\mathbb {0}}; \quad \varvec{x}_{\nu } = \begin{pmatrix} b \\ {\mathbb {1}} \end{pmatrix} v, \quad v>{\mathbb {0}}. \end{aligned}$$

In Fig. 3, we give a graphical illustration in terms of the semifield \({\mathbb {R}}_{\max ,\times }\) for the example considered. The Pareto frontier is shown on the left as the lower-left segment of the hatched border of the feasible area for the criteria. The Pareto-optimal solutions \(\varvec{x}\) are depicted on the right as the cone with hatched boundaries, generated by the vectors \(\varvec{x}_{\mu }\) and \(\varvec{x}_{\nu }\).

It is clear from the illustration that, as b tends to a, the frontier degenerates into the point (1, 1), whereas the solution cone collapses to the single (up to a positive factor) vector \(\varvec{x}=(a,1)^{T}\).

Fig. 3 Examples of Pareto frontier segment (left) and corresponding solution cone (right)

6 Application to bi-criteria decision problem

We now turn back to the problem of evaluating the scores of alternatives based on pairwise comparisons under two equally weighted (unweighted) criteria, and apply the results of the previous sections to solve this problem. We use the solution approach that involves the minimax log-Chebyshev approximation of pairwise comparison matrices and leads to the bi-objective problem in the form of (1).

It is not difficult to see that the representation of problem (1) in terms of the semifield \({\mathbb {R}}_{\max ,\times }\) (max-algebra) yields problem (8). In this case, a complete Pareto-optimal solution to the problem is given by the direct application of Theorem 2. Below, we demonstrate the use of the theorem by an illustrative numerical example.

Example 3

Consider a problem to rate \(n=4\) alternatives using pairwise comparison data obtained according to two equally weighted criteria, and given by the matrices

$$\begin{aligned} \varvec{A} = \begin{pmatrix} 1 &{} 3 &{} 4 &{} 2 \\ 1/3 &{} 1 &{} 1/2 &{} 1/3 \\ 1/4 &{} 2 &{} 1 &{} 4 \\ 1/2 &{} 3 &{} 1/4 &{} 1 \end{pmatrix}, \quad \varvec{B} = \begin{pmatrix} 1 &{} 2 &{} 4 &{} 2 \\ 1/2 &{} 1 &{} 1/3 &{} 1/2 \\ 1/4 &{} 3 &{} 1 &{} 4 \\ 1/2 &{} 2 &{} 1/4 &{} 1 \end{pmatrix}. \end{aligned}$$

To find the solution vector \(\varvec{x}=(x_{1},x_{2},x_{3},x_{4})^{T}\) by applying Theorem 2, we start by evaluating the spectral radii \(\mu \) and \(\nu \) and constructing the functions G and H. First, we calculate the matrix powers

$$\begin{aligned} \varvec{A}^{2} = \begin{pmatrix} 1 &{} 8 &{} 4 &{} 16 \\ 1/3 &{} 1 &{} 4/3 &{} 2 \\ 2 &{} 12 &{} 1 &{} 4 \\ 1 &{} 3 &{} 2 &{} 1 \end{pmatrix}, \quad \varvec{A}^{3} = \begin{pmatrix} 8 &{} 48 &{} 4 &{} 16 \\ 1 &{} 6 &{} 4/3 &{} 16/3 \\ 4 &{} 12 &{} 8 &{} 4 \\ 1 &{} 4 &{} 4 &{} 8 \end{pmatrix}, \quad \varvec{A}^{4} = \begin{pmatrix} 16 &{} 48 &{} 32 &{} 16 \\ 8/3 &{} 16 &{} 4 &{} 16/3 \\ 4 &{} 16 &{} 16 &{} 32 \\ 4 &{} 24 &{} 4 &{} 16 \end{pmatrix}, \end{aligned}$$

and then use (3) to obtain the spectral radius of \(\varvec{A}\) as follows:

$$\begin{aligned} \mu = \mathrm {tr}\varvec{A} \oplus \mathop {\mathrm {tr}}\nolimits ^{1/2}(\varvec{A}^{2}) \oplus \mathop {\mathrm {tr}}\nolimits ^{1/3}(\varvec{A}^{3}) \oplus \mathop {\mathrm {tr}}\nolimits ^{1/4}(\varvec{A}^{4}) = 2. \end{aligned}$$

In the same way, we form the matrices

$$\begin{aligned} \varvec{B}^{2} = \begin{pmatrix} 1 &{} 12 &{} 4 &{} 16 \\ 1/2 &{} 1 &{} 2 &{} 4/3 \\ 2 &{} 8 &{} 1 &{} 4 \\ 1 &{} 2 &{} 2 &{} 1 \end{pmatrix}, \quad \varvec{B}^{3} = \begin{pmatrix} 8 &{} 32 &{} 4 &{} 16 \\ 2/3 &{} 6 &{} 2 &{} 8 \\ 4 &{} 8 &{} 8 &{} 4 \\ 1 &{} 6 &{} 4 &{} 8 \end{pmatrix}, \quad \varvec{B}^{4} = \begin{pmatrix} 16 &{} 32 &{} 32 &{} 16 \\ 4 &{} 16 &{} 8/3 &{} 8 \\ 4 &{} 24 &{} 16 &{} 32 \\ 4 &{} 16 &{} 4 &{} 16 \end{pmatrix}, \end{aligned}$$

to calculate their traces, and thus find the spectral radius of \(\varvec{B}\) to be

$$\begin{aligned} \nu = 2. \end{aligned}$$

Furthermore, with \(n=4\), the function G defined by (7) takes the form

$$\begin{aligned} G(s)= & {} s^{-1}\mathrm {tr}(\varvec{B}\varvec{A}) \oplus s^{-1/2}\mathop {\mathrm {tr}}\nolimits ^{1/2}(\varvec{B}\varvec{A}^{2}) \oplus s^{-1/3}\mathop {\mathrm {tr}}\nolimits ^{1/3}(\varvec{B}\varvec{A}^{3}) \\&\oplus \,s^{-2}\mathrm {tr}(\varvec{B}^{2}\varvec{A}) \oplus s^{-1}\mathop {\mathrm {tr}}\nolimits ^{1/2}(\varvec{B}^{2}\varvec{A}^{2}) \oplus s^{-1}\mathop {\mathrm {tr}}\nolimits ^{1/2}((\varvec{B}\varvec{A})^{2}) \oplus s^{-3}\mathrm {tr}(\varvec{B}^{3}\varvec{A}). \end{aligned}$$

To evaluate the coefficients in the function, we calculate the matrices

$$\begin{aligned} \varvec{B}\varvec{A}= & {} \begin{pmatrix} 1 &{} 8 &{} 4 &{} 16 \\ 1/2 &{} 3/2 &{} 2 &{} 4/3 \\ 2 &{} 12 &{} 3/2 &{} 4 \\ 2/3 &{} 3 &{} 2 &{} 1 \end{pmatrix}, \quad \varvec{B}\varvec{A}^{2} = \begin{pmatrix} 8 &{} 48 &{} 4 &{} 16 \\ 2/3 &{} 4 &{} 2 &{} 8 \\ 4 &{} 12 &{} 8 &{} 6 \\ 1 &{} 4 &{} 8/3 &{} 8 \end{pmatrix}, \\ \varvec{B}\varvec{A}^{3}= & {} \begin{pmatrix} 16 &{} 48 &{} 32 &{} 16 \\ 4 &{} 24 &{} 8/3 &{} 8 \\ 4 &{} 18 &{} 16 &{} 32 \\ 4 &{} 24 &{} 4 &{} 32/3 \end{pmatrix}, \quad \varvec{B}^{2}\varvec{A} = \begin{pmatrix} 8 &{} 48 &{} 6 &{} 16 \\ 2/3 &{} 4 &{} 2 &{} 8 \\ 8/3 &{} 12 &{} 8 &{} 4 \\ 1 &{} 4 &{} 4 &{} 8 \end{pmatrix}, \\ \varvec{B}^{2}\varvec{A}^{2}= & {} \begin{pmatrix} 16 &{} 48 &{} 32 &{} 24 \\ 4 &{} 24 &{} 8/3 &{} 8 \\ 4 &{} 16 &{} 32/3 &{} 32 \\ 4 &{} 24 &{} 8/3 &{} 16 \end{pmatrix}, \quad (\varvec{B}\varvec{A})^{2} = \begin{pmatrix} 32/3 &{} 48 &{} 32 &{} 16 \\ 4 &{} 24 &{} 3 &{} 8 \\ 6 &{} 18 &{} 24 &{} 32 \\ 4 &{} 24 &{} 6 &{} 32/3 \end{pmatrix}, \\ \varvec{B}^{3}\varvec{A}= & {} \begin{pmatrix} 32/3 &{} 48 &{} 32 &{} 16 \\ 4 &{} 24 &{} 3 &{} 8 \\ 4 &{} 16 &{} 16 &{} 32 \\ 4 &{} 24 &{} 4 &{} 16 \end{pmatrix}. \end{aligned}$$

Evaluating the traces of the matrices obtained yields

$$\begin{aligned}&\mathrm {tr}(\varvec{B}\varvec{A}) = 3/2, \quad \mathrm {tr}(\varvec{B}\varvec{A}^{2}) = 8, \quad \mathrm {tr}(\varvec{B}\varvec{A}^{3}) = 24, \quad \mathrm {tr}(\varvec{B}^{2}\varvec{A}) = 8, \\&\quad \mathrm {tr}(\varvec{B}^{2}\varvec{A}^{2}) = 24, \quad \mathrm {tr}((\varvec{B}\varvec{A})^{2}) = 24, \quad \mathrm {tr}(\varvec{B}^{3}\varvec{A}) = 24. \end{aligned}$$

After substitution of traces and rearrangement of terms, the function becomes

$$\begin{aligned} G(s) = 24s^{-3} \oplus 8s^{-2} \oplus 24^{1/2}s^{-1} \oplus 8^{1/2}s^{-1/2} \oplus 24^{1/3}s^{-1/3}. \end{aligned}$$

Similarly, we construct the function

$$\begin{aligned} H(t)= & {} t^{-1}\mathrm {tr}(\varvec{B}\varvec{A}) \oplus t^{-2}\mathrm {tr}(\varvec{B}\varvec{A}^{2}) \oplus t^{-3}\mathrm {tr}(\varvec{B}\varvec{A}^{3}) \oplus t^{-1/2}\mathop {\mathrm {tr}}\nolimits ^{1/2}(\varvec{B}^{2}\varvec{A}) \\&\oplus \, t^{-1}\mathop {\mathrm {tr}}\nolimits ^{1/2}(\varvec{B}^{2}\varvec{A}^{2}) \oplus t^{-1}\mathop {\mathrm {tr}}\nolimits ^{1/2}((\varvec{B}\varvec{A})^{2}) \oplus t^{-1/3}\mathop {\mathrm {tr}}\nolimits ^{1/3}(\varvec{B}^{3}\varvec{A}), \end{aligned}$$

and then reduce it to

$$\begin{aligned} H(t) = 24t^{-3} \oplus 8t^{-2} \oplus 24^{1/2}t^{-1} \oplus 8^{1/2}t^{-1/2} \oplus 24^{1/3}t^{-1/3}. \end{aligned}$$

We now construct the Pareto frontier for the problem as the set of points \((\alpha ,\beta )\) given by (9). We start by establishing the range of the parameter \(\alpha \), which is defined by the inequality \(\mu \le \alpha \le G(\nu )\). Since the substitution \(\nu =2\) yields \(G(2)=3\), the inequality becomes \(2\le \alpha \le 3\).
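
These boundary values admit a quick numerical check; continuing the sketch, we evaluate the reduced polynomial form of G obtained above (which, as the reductions show, coincides with that of H):

\begin{verbatim}
# The common reduced form of G and H found above.
def G(s):
    return max(24 * s**-3, 8 * s**-2, 24**0.5 / s,
               8**0.5 * s**-0.5, 24**(1/3) * s**(-1/3))

H = G  # H(t) reduces to the same expression
print(G(2.0), H(2.0), H(3.0))  # 3.0 3.0 2.0
\end{verbatim}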

Next, we consider the equality \(\beta =H(\alpha )\), and refine the function H(t) on the right-hand side by taking into account the range \(2\le t\le 3\). First, we note that the condition \(t\ge 2\) leads to the inequalities \(8t^{-2}\le 24^{1/3}t^{-1/3}\) and \(8^{1/2}t^{-1/2}\le 24^{1/3}t^{-1/3}\). At the same time, it follows from the inequality \(t\le 3\) that \(24^{1/2}t^{-1}\ge 8^{1/2}t^{-1/2}\). As a result, if \(2\le t\le 3\), the function reduces to \(H(t)=24t^{-3}\oplus 24^{1/2}t^{-1}\oplus 24^{1/3}t^{-1/3}\).
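
Each of these comparisons reduces to an inequality between powers of \(t\). For instance, the last of them is verified by the equivalences

$$\begin{aligned} 24^{1/2}t^{-1} \ge 8^{1/2}t^{-1/2} \Longleftrightarrow (24/8)^{1/2} \ge t^{1/2} \Longleftrightarrow t \le 3, \end{aligned}$$

and the other two follow in the same manner.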

We denote \(\theta =24^{1/4}\approx 2.2134\), and observe that the inequality \(24^{1/2}t^{-1}\le 24^{1/3}t^{-1/3}\) holds for all \(t\ge \theta \), whereas \(24^{1/2}t^{-1}\le 24t^{-3}\) holds for all \(t\le \theta \). The term \(24^{1/2}t^{-1}\) is therefore dominated over the whole interval, and the function becomes \(H(t)=24t^{-3}\oplus 24^{1/3}t^{-1/3}\), which, together with the boundary condition on \(\alpha \), yields the description of the Pareto frontier in the form

$$\begin{aligned} 2 \le \alpha \le 3, \quad \beta = 24\alpha ^{-3}\oplus 24^{1/3}\alpha ^{-1/3}. \end{aligned}$$
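
For illustration only, one can sample the frontier numerically, continuing the Python sketch; the reduced two-term expression for \(\beta \) agrees with the full function H throughout the interval:

\begin{verbatim}
# Sampling the Pareto frontier on 2 <= alpha <= 3.
for a in np.linspace(2.0, 3.0, 5):
    b = max(24 * a**-3, 24**(1/3) * a**(-1/3))
    print(round(a, 4), round(b, 4), np.isclose(b, H(a)))
# Endpoints: (2, 3) and (3, 2); the two monomials switch roles
# at a = 24**0.25, the balanced point with alpha = beta.
\end{verbatim}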

With \(\alpha \) and \(\beta \) satisfying these frontier conditions, and a vector \(\varvec{u}=(u_{1},u_{2},u_{3},u_{4})^{T}\) of parameters, the Pareto-optimal solution to the problem is represented as

$$\begin{aligned} \varvec{x} = (\alpha ^{-1}\varvec{A}\oplus \beta ^{-1}\varvec{B})^{*} \varvec{u}, \quad \varvec{u} > \varvec{0}. \end{aligned}$$
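
In computational terms, the generating matrix is the Kleene star of \(\alpha ^{-1}\varvec{A}\oplus \beta ^{-1}\varvec{B}\), which for an \(n\times n\) matrix with spectral radius at most one equals the max-times sum of the identity and the first \(n-1\) powers. A sketch continuing the code above, with hypothetical helper names star and solution:

\begin{verbatim}
# Kleene star and the Pareto-optimal scores x = (A/alpha + B/beta)* u,
# where + denotes the entrywise maximum.
def star(C):
    # I + C + C^2 + ... + C^(n-1) in the max-times sense; valid when
    # the spectral radius of C does not exceed 1.
    n = C.shape[0]
    S, P = np.eye(n), np.eye(n)
    for _ in range(n - 1):
        P = mtprod(P, C)
        S = np.maximum(S, P)
    return S

def solution(alpha, beta, u):
    S = star(np.maximum(A / alpha, B / beta))
    return np.max(S * u[None, :], axis=1)  # max-times matrix-vector product
\end{verbatim}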

We conclude with the computation of two extreme solutions corresponding to the endpoints of the Pareto frontier, and an intermediate solution for an inner point of the frontier segment. After calculating \(H(2)=3\) and \(H(3)=2\), we represent the endpoints of the Pareto frontier segment as

$$\begin{aligned} (\mu ,H(\mu )) = (2,3), \quad (G(\nu ),H(G(\nu ))) = (3,2). \end{aligned}$$

To obtain the matrix that generates the solution under the conditions \(\alpha =2\) and \(\beta =3\), we take the matrix

$$\begin{aligned} 2^{-1}\varvec{A}\oplus 3^{-1}\varvec{B} = \begin{pmatrix} 1/2 &{} 3/2 &{} 2 &{} 1 \\ 1/6 &{} 1/2 &{} 1/4 &{} 1/6 \\ 1/8 &{} 1 &{} 1/2 &{} 2 \\ 1/4 &{} 3/2 &{} 1/8 &{} 1/2 \end{pmatrix}, \end{aligned}$$

and calculate its second and third powers

$$\begin{aligned} \begin{pmatrix} 1/4 &{} 2 &{} 1 &{} 4 \\ 1/12 &{} 1/4 &{} 1/3 &{} 1/2 \\ 1/2 &{} 3 &{} 1/4 &{} 1 \\ 1/4 &{} 3/4 &{} 1/2 &{} 1/4 \end{pmatrix}, \quad \begin{pmatrix} 1 &{} 6 &{} 1/2 &{} 2 \\ 1/8 &{} 3/4 &{} 1/6 &{} 2/3 \\ 1/2 &{} 3/2 &{} 1 &{} 1/2 \\ 1/8 &{} 1/2 &{} 1/2 &{} 1 \end{pmatrix}. \end{aligned}$$

With these powers, we obtain the generating matrix in the form

$$\begin{aligned} (2^{-1}\varvec{A}\oplus 3^{-1}\varvec{B})^{*} = \begin{pmatrix} 1 &{} 6 &{} 2 &{} 4 \\ 1/6 &{} 1 &{} 1/3 &{} 2/3 \\ 1/2 &{} 3 &{} 1 &{} 2 \\ 1/4 &{} 3/2 &{} 1/2 &{} 1 \end{pmatrix}. \end{aligned}$$

It is easy to verify that all columns of this matrix are collinear: multiplying the first column by 6, 2 and 4 yields the second, third and fourth columns, respectively. Therefore, we can drop all columns except one, say the first, and write the solution corresponding to the first endpoint of the Pareto frontier as

$$\begin{aligned} \varvec{x}_{\mu } = \left( \begin{array}{c} 1 \\ 1/6 \\ 1/2 \\ 1/4 \end{array} \right) u, \quad u>0. \end{aligned}$$

By setting \(u=1\), we have the vector of rates \(\varvec{x}_{\mu }\approx (1,0.1667,0.5,0.25)^{T}\).
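
This extreme solution is reproduced by the sketch above; the collinearity of the columns can also be tested mechanically, since the outer-product identity below holds precisely when every column is a multiple of the first (whose first entry equals one):

\begin{verbatim}
S = star(np.maximum(A / 2.0, B / 3.0))
print(S[:, 0])                                     # (1, 1/6, 1/2, 1/4)
print(np.allclose(S, np.outer(S[:, 0], S[0, :])))  # True: unit rank
\end{verbatim}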

In a similar way, we examine the second endpoint with \(\alpha =3\) and \(\beta =2\). We consider the matrix

$$\begin{aligned} 3^{-1}\varvec{A}\oplus 2^{-1}\varvec{B} = \begin{pmatrix} 1/2 &{} 1 &{} 2 &{} 1 \\ 1/4 &{} 1/2 &{} 1/6 &{} 1/4 \\ 1/8 &{} 3/2 &{} 1/2 &{} 2 \\ 1/4 &{} 1 &{} 1/8 &{} 1/2 \end{pmatrix}, \end{aligned}$$

and calculate its powers to obtain the generating matrix

$$\begin{aligned} (3^{-1}\varvec{A}\oplus 2^{-1}\varvec{B})^{*} = \begin{pmatrix} 1 &{} 4 &{} 2 &{} 4 \\ 1/4 &{} 1 &{} 1/2 &{} 1 \\ 1/2 &{} 2 &{} 1 &{} 2 \\ 1/4 &{} 1 &{} 1/2 &{} 1 \end{pmatrix}. \end{aligned}$$

Since all columns in this matrix are collinear, we take the first column, and write the solution as

$$\begin{aligned} \varvec{x}_{\nu } = \begin{pmatrix} 1 \\ 1/4 \\ 1/2 \\ 1/4 \end{pmatrix} v, \quad v>0. \end{aligned}$$

With \(v=1\), we have \(\varvec{x}_{\nu }=(1,0.25,0.5,0.25)^{T}\).

Finally, we assume that \(\alpha =\theta \), where \(\theta =24^{1/4}\), and note that \(\beta =H(\theta )=\theta \). We form the matrix

$$\begin{aligned} \theta ^{-1}(\varvec{A}\oplus \varvec{B}) = \theta ^{-1} \begin{pmatrix} 1 &{} 3 &{} 4 &{} 2 \\ 1/2 &{} 1 &{} 1/2 &{} 1/2 \\ 1/4 &{} 3 &{} 1 &{} 4 \\ 1/2 &{} 3 &{} 1/4 &{} 1 \end{pmatrix}, \end{aligned}$$

and then find its second and third powers

$$\begin{aligned} \theta ^{-2} \begin{pmatrix} 3/2 &{} 12 &{} 4 &{} 16 \\ 1/2 &{} 3/2 &{} 2 &{} 2 \\ 2 &{} 12 &{} 3/2 &{} 4 \\ 3/2 &{} 3 &{} 2 &{} 3/2 \end{pmatrix}, \quad \theta ^{-3} \begin{pmatrix} 8 &{} 48 &{} 6 &{} 16 \\ 1 &{} 6 &{} 2 &{} 8 \\ 6 &{} 12 &{} 8 &{} 6 \\ 3/2 &{} 6 &{} 6 &{} 8 \end{pmatrix}. \end{aligned}$$

Calculation of the generating matrix yields

$$\begin{aligned} (\theta ^{-1}(\varvec{A}\oplus \varvec{B}))^{*} = \begin{pmatrix} 1 &{} 2\theta &{} 4\theta ^{-1} &{} 2\theta ^{2}/3 \\ \theta ^{-1}/2 &{} 1 &{} \theta ^{2}/12 &{} \theta /3 \\ \theta /4 &{} \theta ^{2}/2 &{} 1 &{} 4\theta ^{-1} \\ \theta ^{2}/16 &{} 3\theta ^{-1} &{} \theta /4 &{} 1 \end{pmatrix}. \end{aligned}$$

Since all columns are collinear, we take the first column to write the solution corresponding to \(\alpha =\beta =\theta \) as

$$\begin{aligned} \varvec{x}_{\theta } = \begin{pmatrix} 1 \\ \theta ^{-1}/2 \\ \theta /4 \\ \theta ^{2}/16 \end{pmatrix} w, \quad \theta = 24^{1/4}, \quad w>0. \end{aligned}$$

If \(w=1\), the solution can be represented as \(\varvec{x}_{\theta }\approx (1,0.2259,0.5533,0.3062)^{T}\).
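
All three representative solutions can be reproduced at once with the solution helper sketched earlier; normalizing each vector by its first component recovers the columns written above:

\begin{verbatim}
theta = 24 ** 0.25
u = np.ones(4)
for a, b in [(2.0, 3.0), (theta, theta), (3.0, 2.0)]:
    x = solution(a, b, u)
    print(np.round(x / x[0], 4))
# [1. 0.1667 0.5 0.25], [1. 0.2259 0.5533 0.3062], [1. 0.25 0.5 0.25]
\end{verbatim}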

Let us now compare the three solutions obtained:

$$\begin{aligned} \varvec{x}_{\mu } \approx \begin{pmatrix} 1 \\ 0.1667 \\ 0.5 \\ 0.25 \end{pmatrix}, \quad \varvec{x}_{\theta } \approx \begin{pmatrix} 1 \\ 0.2259 \\ 0.5533 \\ 0.3062 \end{pmatrix}, \quad \varvec{x}_{\nu } = \begin{pmatrix} 1 \\ 0.25 \\ 0.5 \\ 0.25 \end{pmatrix}. \end{aligned}$$

All of these solutions assign the highest rate to the first alternative, followed by the third and then the fourth. The second alternative receives the lowest rate, which coincides with that of the fourth alternative only in the extreme solution \(\varvec{x}_{\nu }\).

7 Conclusions

In this paper, we have developed an analytical solution to a bi-criteria decision problem of rating alternatives on the basis of pairwise comparisons under two criteria. The problem was first formulated as a bi-objective optimization problem, in which the objective functions are defined as the errors of the log-Chebyshev approximation of two symmetrically reciprocal matrices by a common reciprocal matrix of unit rank. Then, we represented and solved the bi-objective problem in terms of a general idempotent semifield as a tropical optimization problem.

The solution approach is based on the introduction of two parameters that represent the optimal values of the objective functions, and on the reduction of the bi-objective problem to a parametrized vector inequality. The conditions for the existence of solutions to the inequality serve to describe the Pareto frontier of the optimization problem, whereas the corresponding solutions of the inequality constitute the Pareto-optimal solutions. We used this approach to derive a complete solution of the optimization problem in a form that provides a direct description of both the Pareto frontier and the corresponding Pareto-optimal solutions, and that can be computed in polynomial time.

We have applied the solution of the optimization problem to solve the bi-criteria decision problem of rating alternatives in a compact vector form, which is ready for formal analysis and practical implementation. Examples of solving optimization and decision-making problems were given to illustrate the results obtained.

Possible lines of future research include the application of tropical optimization to bi-criteria decision problems in which the criteria have different weights, as well as to multi-criteria decision problems with both equal and unequal weights.