The method of paired comparisons is one of the most frequently encountered methods of performing subjective measurements and ranking alternatives of different types [1, 2]. The solution of problems by means of paired comparisons proceeds in two stages: 1) the measurement proper, which reduces to a comparison of all the alternatives and the construction of a matrix of paired comparisons; 2) the processing of the results entered into this matrix.

While the process of direct measurement does not present any difficulty, existing methods of processing the results of these measurements lead to a number of problems, which the author proposes to consider before presenting yet another method of processing [1–6]. Note that all the methods of processing the results of paired comparisons are based on an analysis and comparison of two matrices. The first of these is a matrix of paired comparisons constructed from the conclusions of an expert who may make errors in carrying out the assessment, while the second is written under the condition that the expert makes no errors in the course of the assessment. Let us consider these matrices:

$$ \nabla = \begin{pmatrix} \alpha_{11} & \alpha_{12} & \dots & \alpha_{1n} \\ \alpha_{21} & \alpha_{22} & \dots & \alpha_{2n} \\ \dots & \dots & \dots & \dots \\ \alpha_{n1} & \alpha_{n2} & \dots & \alpha_{nn} \end{pmatrix}; \qquad \Delta = \begin{pmatrix} \nu_1/\nu_1 & \nu_1/\nu_2 & \dots & \nu_1/\nu_n \\ \nu_2/\nu_1 & \nu_2/\nu_2 & \dots & \nu_2/\nu_n \\ \dots & \dots & \dots & \dots \\ \nu_n/\nu_1 & \nu_n/\nu_2 & \dots & \nu_n/\nu_n \end{pmatrix}, $$

where ∇ and Δ are, respectively, the first and second matrix; αij is an element of matrix ∇ that specifies how many times (in the expert’s opinion) the ith alternative exceeds the jth alternative; α11 = α22 = ... = αnn = 1 and αij = 1/αji for any i, j; and ν1, ν2, ..., νn are the unknown estimates of the quality of the alternatives, according to whose values the ranking must be performed.
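For concreteness, the relation between the two matrices can be sketched numerically. The following minimal NumPy sketch uses an arbitrarily chosen illustrative quality vector ν (an assumption of the sketch, not data from the article): Δ is built exactly from ν, while ∇ imitates the expert’s judgments by perturbing Δ without violating the conditions α11 = ... = αnn = 1 and αij = 1/αji.

```python
import numpy as np

rng = np.random.default_rng(0)

nu = np.array([3.0, 1.0, 2.0])       # illustrative "true" quality estimates (assumed)
n = len(nu)
Delta = np.outer(nu, 1.0 / nu)       # Delta[i, j] = nu_i / nu_j, the error-free matrix

# Nabla: the expert's matrix, a perturbation of Delta that preserves
# alpha_ii = 1 and alpha_ij = 1/alpha_ji.
Nabla = Delta.copy()
for i in range(n):
    for j in range(i + 1, n):
        Nabla[i, j] *= 1.0 + 0.1 * rng.standard_normal()   # random expert error
        Nabla[j, i] = 1.0 / Nabla[i, j]
```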

Assumptions are presented in [1] to the effect that an expert possesses sufficient skill and is objective (a disinterested individual), that his errors are random, and that the mathematical expectations of these errors are equal to zero. In this case it may be assumed that matrix ∇ is approximately equal to matrix Δ and, correspondingly, that for any integers i, j with 0 < i ≤ n and 0 < j ≤ n the approximate equalities αij ≈ νi/νj are valid. Then the following approximate equalities will be satisfied for any i, 0 < i ≤ n:

$$ \nu_i \approx \alpha_{i1}\nu_1; \quad \nu_i \approx \alpha_{i2}\nu_2; \quad \nu_i \approx \alpha_{i3}\nu_3; \quad \dots; \quad \nu_i \approx \alpha_{in}\nu_n. \tag{1} $$

Summing the approximate equalities (1), i.e., determining the mean value of νi from them, we obtain the expression

$$ \sum_{k=1}^{n} \alpha_{ik}\nu_k \approx n\nu_i, \qquad i = 1, 2, \dots, n. \tag{2} $$

Since the roots of any polynomial (including the characteristic polynomial) depend continuously on its coefficients, on the basis of (2) we propose that the vector ν = (ν1, ν2, ..., νn)T may be assumed to be close to the eigenvector of matrix ∇ corresponding to the eigenvalue of this matrix that is close to n. It may be proved by mathematical induction that the characteristic polynomial of the matrix Δ has the form

$$ P(\lambda) = (-1)^n \lambda^{n-1} (\lambda - n). $$

In other words, all the eigenvalues of Δ are equal to zero, other than one eigenvalue, which is equal to n. Therefore, the eigenvalue of ∇ closest to n is assumed to be its maximal eigenvalue.
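This claim is easy to check numerically; a small sketch with the same illustrative ν as above:

```python
import numpy as np

nu = np.array([3.0, 1.0, 2.0])
Delta = np.outer(nu, 1.0 / nu)                       # error-free matrix for n = 3
print(np.round(np.linalg.eigvals(Delta).real, 10))   # one eigenvalue equals n = 3, the rest are 0
```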

Based on these heuristic considerations, a number of recommendations on the ranking of alternatives from a matrix of paired comparisons may be stated in the form of the following traditional algorithm [1] (a code sketch follows the list):

  • find the maximal eigenvalue of ∇ and the eigenvector corresponding to this eigenvalue;

  • use the coordinates of the eigenvector found on the preceding step to rank alternatives (the greater the coordinate, the “better” is the alternative).
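A minimal sketch of this traditional algorithm, assuming NumPy and an illustrative (here fully consistent) third-order matrix:

```python
import numpy as np

def rank_by_eigenvector(nabla: np.ndarray) -> np.ndarray:
    """Rank alternatives by the eigenvector at the maximal eigenvalue of nabla."""
    eigvals, eigvecs = np.linalg.eig(nabla)
    k = np.argmax(eigvals.real)        # index of the maximal eigenvalue
    v = np.abs(eigvecs[:, k].real)     # corresponding eigenvector, taken positive
    return np.argsort(-v)              # indices of alternatives, best first

nabla = np.array([[1.0, 3.0, 1.5],
                  [1/3, 1.0, 0.5],
                  [2/3, 2.0, 1.0]])
print(rank_by_eigenvector(nabla))      # -> [0 2 1]
```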

In addition, the transition to the use of eigenvectors for purposes of ranking imposes an additional requirement on the matrix ∇: the estimate of its consistency (the ratio of its consistency index to a random index) must not exceed 0.1 [1]. In actual practice, however, this requirement fails to hold in most cases. Since the recommendation presented here is nevertheless quite significant, new studies [3, 4] have been published that propose methods of correcting a matrix of paired comparisons ∇ so that the necessary level of consistency may be attained. However, we do not believe that the procedures proposed for improving consistency are entirely correct, since the analysts who process the experts’ estimates are not specialists in the fields in which the assessment is performed and, consequently, cannot legitimately alter the estimates of the experts.
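For reference, the consistency check just mentioned can be sketched as follows, under the standard definitions CI = (λmax − n)/(n − 1) and CR = CI/RI; the random-index values in the table below are the commonly cited ones and should be treated as an assumption of this sketch rather than as the exact table of [1].

```python
import numpy as np

# Commonly cited random-index (RI) values for n = 1..10 (assumed here).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(nabla: np.ndarray) -> float:
    """Estimate of consistency: the ratio of the consistency index to RI."""
    n = nabla.shape[0]
    lam_max = np.linalg.eigvals(nabla).real.max()
    ci = (lam_max - n) / (n - 1)       # consistency index
    return ci / RI[n]                  # must not exceed 0.1 for the traditional algorithm
```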

Note that the determinant of the coefficient matrix of the system from which the eigenvector is found is most often not equal to zero, since the eigenvalues are found only approximately rather than precisely. Consequently, that matrix is not singular, but it is ill-conditioned [7]. Therefore, in solving the corresponding system by Gaussian elimination, even the sequence of operations in the elimination of variables influences the final result, which may lead to ambiguity in the determination of the eigenvector. This effect may be verified on third-order matrices.

It must also be taken into account that the roots of polynomials are highly sensitive to any change in the coefficients. A situation is considered in [7] in which changing one coefficient of a polynomial from the value −210 to the value −210 − 2^−23 causes 10 real roots to become complex. Of course, the case discussed in the present article concerns not an arbitrary polynomial, but the characteristic polynomial of an inverse-symmetric matrix. Even in this case, however, there is no guarantee that the roots of the characteristic polynomial will not become complex when small errors are made by the experts. Such an example is presented in [6] for a third-order inverse-symmetric matrix: with an expert error equal to −1/20 (entirely realistic in the solution of practical problems), two of the three roots of the characteristic polynomial of the matrix become complex. Consequently, it makes no sense to select the greatest root (maximal eigenvalue) in this situation.
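The polynomial in question is Wilkinson’s classic example (the reference to [7] is assumed here to mean that example), and the effect is easy to reproduce:

```python
import numpy as np

# Wilkinson's polynomial has roots 1, 2, ..., 20 and -210 as the
# coefficient of x^19; perturbing that single coefficient by 2**-23
# drives ten of the twenty real roots off the real axis.
coeffs = np.poly(np.arange(1, 21, dtype=float))   # coefficients of prod (x - k), k = 1..20
coeffs[1] -= 2.0 ** -23                           # -210 becomes -210 - 2^-23
roots = np.roots(coeffs)
print(np.sum(np.abs(roots.imag) > 1e-3))          # -> 10 complex roots
```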

An explanation of these problems is to be found not in the choice of an algorithm for finding the roots of the characteristic polynomial, nor even in computational errors, but in the sensitivity of the values of the roots to variations in the coefficients of the polynomial, that is, in the sensitivity of the eigenvalues to variations in the elements of the matrix of paired comparisons. It follows that the ranking procedure (method of processing the results) proposed by Saaty cannot be considered robust relative to oscillations in the initial data [1, 7]. It is this that constitutes the vulnerability of Saaty’s technique, as a consequence of which it becomes necessary to abandon the use of the eigenvectors of the matrix of paired comparisons for the purpose of ranking alternatives.

Such attempts have been undertaken previously [5]. However, the method of optimal group strategies considered in that study proved too complex for extensive use in practical applications. It proposed that the distribution laws of the random variables be recovered by the method of maximum likelihood, that a law of distribution of the errors of the expert (or experts) be established, and that the law thus found then be applied to construct an optimal strategy in the form of a vector whose coordinates serve to rank the alternatives (just as the coordinates of an eigenvector are used in the preceding case). Each of these problems, namely, determining the distribution law of the experts’ errors and constructing a mathematical model of the optimal strategy as an optimization problem together with its subsequent solution, is an independent, technically complex problem.

In the present article, I propose an extraordinarily simple and reliable technique, together with a corresponding algorithm, for processing the results of paired comparisons. The technique and algorithm do not burden the matrix of paired comparisons with additional requirements beyond the natural requirements formulated earlier. We will use the matrices ∇ and Δ to construct this technique. A simple analysis of Δ shows that each of its columns constitutes an estimate of the alternatives expressed in its own unit of measurement. For example, in constructing the first column the first alternative is selected as the reference, that is, the other alternatives are compared with it. In this case the estimate of the quality of the first alternative serves as the unit of measurement for the estimates of the quality of the other alternatives. The columns of matrix ∇ have the same sense, with the only difference that all the indicated estimates are approximate.

Thus, in the columns of the matrix ∇ of paired comparisons the expert in fact presents n sets of approximate estimates of the alternatives, expressed in different units of measurement. Since it is not easy to work with estimates specified in different units of measurement, it is best to convert the expert’s estimates represented in the columns of the matrices ∇ and Δ to unified dimensionless units. This is achieved by normalizing each column of the matrices, that is, dividing it by the sum of its elements.

It is easily established by direct verification that normalization of each column of Δ results in each column vector of the matrix having the form

$$ \gamma = \left( \nu_1 \Big/ \sum_{k=1}^{n}\nu_k,\; \nu_2 \Big/ \sum_{k=1}^{n}\nu_k,\; \dots,\; \nu_n \Big/ \sum_{k=1}^{n}\nu_k \right)^{\mathrm{T}}. $$

For the sake of brevity, the coordinates of vector γ are denoted γ1, γ2, ..., γn. Then,

$$ \gamma_i = \nu_i \Big/ \sum_{k=1}^{n}\nu_k, \qquad i = 1, 2, \dots, n. $$

Hence the coordinates of γ rank the alternatives in the same way as the coordinates of ν.
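Both facts are easy to verify numerically; a small sketch with the same illustrative ν:

```python
import numpy as np

def normalize_columns(matrix: np.ndarray) -> np.ndarray:
    """Divide each column of the matrix by the sum of its elements."""
    return matrix / matrix.sum(axis=0, keepdims=True)

nu = np.array([3.0, 1.0, 2.0])
Delta = np.outer(nu, 1.0 / nu)
# Every column of the normalized Delta equals gamma = nu / nu.sum(),
# i.e. (0.5, 1/6, 1/3), which ranks the alternatives exactly as nu does.
print(normalize_columns(Delta))
```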

Thus, matrix Δ is transformed into the matrix

$$ \Delta' = \begin{pmatrix} \gamma_1 & \gamma_1 & \dots & \gamma_1 \\ \gamma_2 & \gamma_2 & \dots & \gamma_2 \\ \dots & \dots & \dots & \dots \\ \gamma_n & \gamma_n & \dots & \gamma_n \end{pmatrix}. $$

Since matrix ∇ is close to matrix Δ, after normalization of its columns it may be represented in the form of the matrix

$$ \nabla' = \begin{pmatrix} \gamma_1 + \mu_{11} & \gamma_1 + \mu_{12} & \dots & \gamma_1 + \mu_{1n} \\ \gamma_2 + \mu_{21} & \gamma_2 + \mu_{22} & \dots & \gamma_2 + \mu_{2n} \\ \dots & \dots & \dots & \dots \\ \gamma_n + \mu_{n1} & \gamma_n + \mu_{n2} & \dots & \gamma_n + \mu_{nn} \end{pmatrix}, $$

where μij, i, j = 1, ..., n, are quantities determined by the errors of the experts (for the sake of brevity, we will refer to them as the normalized errors of the experts).

Note that corresponding elements of matrices Δ′ and ∇′ are close to each other, as follows from the closeness of the corresponding elements of matrices Δ and ∇. We denote by ξ the random variable that determines the normalized errors of the experts. Then, each row of the matrix

$$ \Omega = \begin{pmatrix} \mu_{11} & \mu_{12} & \dots & \mu_{1n} \\ \mu_{21} & \mu_{22} & \dots & \mu_{2n} \\ \dots & \dots & \dots & \dots \\ \mu_{n1} & \mu_{n2} & \dots & \mu_{nn} \end{pmatrix} $$

will constitute a realization (a sample of values) of the random variable ξ. If the assumption that the mathematical expectation M[ξ] of the random variable ξ is equal to zero is satisfied, i.e., if the expert does not commit systematic errors, then it may be assumed that the following approximate equalities are valid:

$$ \frac{1}{n} \sum_{k=1}^{n} \mu_{ik} \approx 0, \qquad i = 1, 2, \dots, n. \tag{3} $$

Let us find the average values of the elements of the rows of matrix ∇′ and take these values as the corresponding coordinates of the vector Λ. We then obtain the expression

$$ \Lambda = \left( \gamma_1 + \frac{1}{n}\sum_{k=1}^{n}\mu_{1k},\; \gamma_2 + \frac{1}{n}\sum_{k=1}^{n}\mu_{2k},\; \dots,\; \gamma_n + \frac{1}{n}\sum_{k=1}^{n}\mu_{nk} \right)^{\mathrm{T}}. $$

In light of (3), it may be assumed that the coordinates of Λ rank alternatives in the same way as do the coordinates of γ and, consequently, in the same way as the coordinates of ν = (ν1, ν2, ..., νn)T. From these arguments, there follows a simple algorithm for ranking alternatives on the basis of a matrix of paired comparisons.

1. Normalize the columns of ∇, dividing each element of a column of this matrix by the sum of the elements of the column in which the indicated element occurs; construct the matrix ∇′ consisting of the normalized columns.

2. Find the average values (by row) of the elements of the matrix ∇′ obtained in Step 1 and construct the vector Λ, whose coordinates are then used to rank the alternatives (a code sketch of both steps follows).
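A minimal sketch of the two steps (NumPy; the example matrix is the same illustrative one used above):

```python
import numpy as np

def rank_by_column_normalization(nabla: np.ndarray) -> np.ndarray:
    """Rank alternatives by the row means of the column-normalized matrix."""
    nabla_prime = nabla / nabla.sum(axis=0, keepdims=True)   # Step 1
    lam = nabla_prime.mean(axis=1)                           # Step 2: the vector Lambda
    return np.argsort(-lam)                                  # indices, best first

nabla = np.array([[1.0, 3.0, 1.5],
                  [1/3, 1.0, 0.5],
                  [2/3, 2.0, 1.0]])
print(rank_by_column_normalization(nabla))   # -> [0 2 1], matching the eigenvector ranking
```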

The effectiveness of the method has been verified on a sufficiently large number of problems. The results obtained by the proposed algorithm coincide completely with the results calculated by means of the traditional algorithm using the eigenvector in those cases in which the latter is applicable (the estimate of consistency of the matrix of paired comparisons does not exceed 0.1) and under the condition that the maximal eigenvalue of the matrix is determined with an error not exceeding 0.0001.

It should be noted that a whole group of experts may also be employed. This makes it possible to decrease the error of the estimates of the analyzed alternatives, since the number of realizations of the random variable ξ grows with the number of experts. For example, if m experts participate in the assessment and each of them presents a matrix of paired comparisons, the length of the samples of the random variable ξ will be equal to nm. In this case, we rewrite (3) as follows:

$$ \frac{1}{nm} \sum_{k=1}^{nm} \mu_{ik} \approx 0, \qquad i = 1, 2, \dots, n. \tag{4} $$

It is natural to expect that, in accordance with the law of large numbers, equality (4) will be more precise than equality (3) [8]. Consequently, the coordinates of the vector

$$ \Lambda = \left( \gamma_1 + \frac{1}{nm}\sum_{k=1}^{nm}\mu_{1k},\; \gamma_2 + \frac{1}{nm}\sum_{k=1}^{nm}\mu_{2k},\; \dots,\; \gamma_n + \frac{1}{nm}\sum_{k=1}^{nm}\mu_{nk} \right)^{\mathrm{T}} $$

will, in turn, estimate and rank the alternatives more precisely.
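A sketch of the group version follows; the pooling below (stacking the normalized columns of all m matrices so that each row mean averages nm normalized errors, as in (4)) is one natural reading of the construction, not the article’s prescribed implementation.

```python
import numpy as np

def rank_by_group(matrices: list[np.ndarray]) -> np.ndarray:
    """Rank alternatives from the paired-comparison matrices of m experts."""
    # Pool the normalized columns of all m matrices: n*m columns in total,
    # so each row mean averages n*m normalized errors, as in (4).
    pooled = np.hstack([m_ / m_.sum(axis=0, keepdims=True) for m_ in matrices])
    lam = pooled.mean(axis=1)      # the vector Lambda of the group version
    return np.argsort(-lam)        # indices of alternatives, best first
```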

Thus, a simple and reliable algorithm for processing the results of paired comparisons in order to estimate the quality of alternatives has been proposed. The new algorithm makes it possible to reject, for these purposes, the use of the eigenvector corresponding to the maximal eigenvalue of the matrix of paired comparisons, a use which burdens the matrix with additional requirements. In practical applications, the proposed algorithm thereby eliminates the additional constraints on ranking alternatives by the method of paired comparisons that arise from the use of the traditional algorithm.