1 Introduction

In a recent paper in this journal, Xu and Cai (2012) described a method to determine expert weights in the group multiple attribute decision making (GMADM) problem, hereafter called the XC-model. They first normalize all individual decision matrices of the experts. The collective decision matrix is then constructed as the weighted arithmetic average of the individual decision matrices, and a nonlinear optimization model is defined, based on a deviation function between the individual decision matrices and the collective decision matrix. Using the derived expert weights, they aggregate the individual decision matrices into the collective matrix, and the weighted sum of the attribute values of each alternative is taken as the score of that alternative. The best alternative and the ranking of all alternatives are then computed from these scores.

The authors of the XC-model employ a genetic algorithm (GA) to find the solution. GAs are typically used to provide approximate solutions to problems that cannot easily be solved by analytical techniques, and many optimization problems fall into this category: finding an exact solution may be too computationally intensive, while a near-optimal solution is often good enough. Due to their random nature, however, GAs are never guaranteed to find the optimal solution of a given problem. On the other hand, the well-known simplex method for linear programming (LP) assumes that the objective function and the constraints are linear functions of the variables; a linear function expresses proportionality, and its graph is a straight line. For such problems the simplex method is exact and very fast in practice, and it returns a globally optimal solution.

The purpose of this short paper is to present an improved version of the XC-model for determining the expert weights in the GMADM problem. For this purpose, we convert the nonlinear programming model of the XC-model into a linear one and apply the simplex method to solve the problem instead of using a GA.

2 XC-Model (Xu and Cai 2012)

Assume that the GMADM problem is composed of n alternatives \(\{x_{1},x_{2},\ldots ,x_{n}\}\) and m attributes \(\{u_{1},u_{2},\ldots ,u_{m}\}\) with weights \(\{w_{1},w_{2},\ldots ,w_{m}\}\), where \(w_{i}\ge 0,~i=1,2,\ldots ,m,\) and \(\displaystyle \sum \nolimits _{i=1}^{m} {w_{i}}=1\). Also suppose that there is a group of s experts \(\{e_{1},e_{2},\ldots ,e_{s}\}\) with weights \(\{\lambda _{1},\lambda _{2},\ldots ,\lambda _{s}\}\), where \(\lambda _{k}\ge 0,~k=1,2,\ldots ,s,\) and \(\displaystyle \sum \nolimits _{k=1}^{s} {\lambda _{k}}=1\). The preferences of the kth expert are collected in the matrix \(A^{k}\), whose (i, j)th entry \(a_{ij}^{k}\) is the preference of the kth expert for the jth alternative with respect to the ith attribute.

In the GMADM problem there are two attribute types, benefit attributes and cost attributes. To compare and combine the attribute values, all values must first be normalized so that they are free of dimensions and units. In the XC-model the authors use the following transformation to normalize the data:

$$\begin{aligned} \begin{array}{lcc} r_{ij}^{k}=a_{ij}^{k}\bigg / \displaystyle \sum _{h=1}^{n} {a_{ih}^{k}}, &{} \text { for benefit attribute } u_{i}, &{} j=1,2,\ldots ,n\\ r_{ij}^{k}=(1/a_{ij}^{k})\bigg / \displaystyle \sum _{h=1}^{n} {(1/a_{ih}^{k})}, &{} \text { for cost attribute } u_{i}, &{} j=1,2,\ldots ,n\\ \end{array} \end{aligned}$$
(1)
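For concreteness, the following is a minimal NumPy sketch of this normalization; the function name normalize and the array layout (attributes in rows, alternatives in columns) are our own illustrative choices, not part of the XC-model:

```python
import numpy as np

def normalize(A, is_benefit):
    """Normalize an m x n decision matrix A according to Eq. (1).

    A[i, j] is the preference for alternative j under attribute i;
    is_benefit[i] is True for a benefit attribute, False for a cost one.
    """
    R = np.empty_like(A, dtype=float)
    for i in range(A.shape[0]):
        row = A[i] if is_benefit[i] else 1.0 / A[i]
        R[i] = row / row.sum()  # each attribute row of R sums to 1
    return R
```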

Next, the deviation variable \(e_{ij}^{k}\) is defined in the XC-model as follows:

$$\begin{aligned} e_{ij}^{k}=\left| r_{ij}^{k}-\sum _{t=1}^{s} {\lambda _{t}r_{ij}^{t}}\right| , \quad \text { for all } \quad i=1,2,\ldots ,m, \quad j=1,2,\ldots ,n, \quad k=1,2,\ldots ,s \end{aligned}$$
(2)

where \(\displaystyle \sum \nolimits _{t=1}^{s} {\lambda _{t}r_{ij}^{t}}\) is the (i, j)th entry of the collective decision matrix obtained from all individual decision matrices. Now construct the following deviation function:

$$\begin{aligned} F(\lambda )={\sum _{k=1}^{s}}~ {\sum _{j=1}^{n}}~ {\sum _{i=1}^{m}} w_{i} e_{ij}^{k}={\sum _{k=1}^{s}}~ {\sum _{j=1}^{n}}~ {\sum _{i=1}^{m}} w_{i}\left| r_{ij}^{k}-\sum _{t=1}^{s} {\lambda _{t}r_{ij}^{t}}\right| \end{aligned}$$
(3)

Clearly, the above deviation should be as small as possible, so the following nonlinear model must be solved:

$$\begin{aligned} \begin{array}{ll} \min F(\lambda )= &{} \displaystyle {\sum _{k=1}^{s}}~ {\sum _{j=1}^{n}}~ {\sum _{i=1}^{m}} w_{i}\left| r_{ij}^{k}-\sum _{t=1}^{s} {\lambda _{t}r_{ij}^{t}}\right| \\ s.t. &{} \lambda _{k}>0, ~\quad k=1,2,\ldots ,s, ~\quad \displaystyle \sum _{k=1}^{s} {\lambda _{k}}=1 \end{array} \end{aligned}$$
(4)
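To make the objective of model (4) concrete, here is a small sketch of the deviation function \(F(\lambda)\), continuing the NumPy sketch above and assuming the normalized matrices \(R^{k}\) are stacked in a single array R of shape (s, m, n), a layout chosen here purely for illustration:

```python
def deviation(lam, R, w):
    """Eq. (3): weighted sum of absolute deviations of the individual
    matrices from the collective matrix sum_t lam[t] * R[t].

    lam : (s,) expert weights; R : (s, m, n) normalized matrices;
    w : (m,) attribute weights.
    """
    collective = np.tensordot(lam, R, axes=1)   # (m, n) collective matrix
    dev = np.abs(R - collective)                # (s, m, n) deviations e_ij^k
    return float((w[None, :, None] * dev).sum())
```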

To solve the above model, Xu and Cai adopt a GA, which can be described as follows.

Step 1 Predefine the maximum iteration number \(t^{*}\), and randomly generate an initial population \(\Theta ^{(t)}=\{\lambda ^{(1)},\lambda ^{(2)},\ldots ,\lambda ^{(p)}\}\), where \(t = 0\), and \(\lambda ^{(l)}=\{\lambda _{1}^{(l)},\lambda _{2}^{(l)},\ldots ,\lambda _{s}^{(l)}\} ~(l=1,2,\ldots ,p)\) are the expert weight vectors (or chromosomes). Then input the attribute weights \(w_{i}~(i=1,2,\ldots ,m)\) and all the normalized individual decision matrices \(R^{k}=(r_{ij}^{k})_{m\times n} ~(k=1,2,\ldots ,s)\).

Step 2 By the optimization model (4), define the fitness function as:

$$\begin{aligned} F(\lambda ^{(l)})= \displaystyle {\sum _{k=1}^{s}}~ {\sum _{j=1}^{n}}~ {\sum _{i=1}^{m}} w_{i}\left|r_{ij}^{k}-\sum _{t=1}^{s} {\lambda _{t}^{(l)} r_{ij}^{t}}\right| \end{aligned}$$
(5)

and then compute the fitness value \(F(\lambda ^{(l)})\) of each \(\lambda ^{(l)}\) in the current population \(\Theta ^{(t)}\), where \(\lambda _{k}^{(l)}\ge 0, ~k=1,2,\ldots ,s\), and \(\displaystyle \sum \nolimits _{k=1}^{s} {\lambda _{k}^{(l)}}=1\).

Step 3 Create new weight vectors (or chromosomes) by mating the current weight vectors, and apply mutation and recombination as the parent chromosomes mate.

Step 4 Delete members of the current population \(\Theta ^{(t)}\) to make room for the new weight vectors.

Step 5 Utilize (5) to compute the fitness values of the new weight vectors, and insert these vectors into the current population \(\Theta ^{(t)}\).

Step 6 If there is no further decrease of the minimum fitness value, or \(t = t^{*}\), then go to Step 7; otherwise, let \(t=t +1\), and go to Step 3.

Step 7 Output the minimum fitness value \(F(\lambda ^{*})\) and the corresponding weight vector \(\lambda ^{*}\).
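For illustration, the following is a minimal sketch of such a GA loop in the spirit of Steps 1–7, reusing the deviation function above; the population size, the Gaussian mutation, and the renormalization onto the simplex are our own choices, not Xu and Cai's exact implementation:

```python
def ga_weights(R, w, pop_size=50, t_max=200, sigma=0.05, seed=None):
    """Toy GA for model (4): evolve expert-weight vectors on the simplex."""
    rng = np.random.default_rng(seed)
    s = R.shape[0]
    pop = rng.dirichlet(np.ones(s), size=pop_size)    # Step 1: random feasible weights
    for _ in range(t_max):                            # Step 6: stop after t_max rounds
        fit = np.array([deviation(lam, R, w) for lam in pop])        # Step 2
        parents = pop[np.argsort(fit)[: pop_size // 2]]              # Step 4: keep fitter half
        children = parents + rng.normal(0.0, sigma, parents.shape)   # Step 3: mutate
        children = np.clip(children, 1e-9, None)
        children /= children.sum(axis=1, keepdims=True)  # project back onto the simplex
        pop = np.vstack([parents, children])             # Step 5
    fit = np.array([deviation(lam, R, w) for lam in pop])
    return pop[fit.argmin()]                             # Step 7
```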

Based on the optimal weight vector \(\lambda ^{*}\), they obtain the collective decision matrix \(R = (r_{ij})_{m \times n}\):

$$\begin{aligned} r_{ij}=\displaystyle \sum _{k=1}^{s} {\lambda _{k}^{*}r_{ij}^{k}}, ~\text {for all} ~i=1,2,\ldots ,m, ~j=1,2,\ldots ,n \end{aligned}$$
(6)

Then utilize the weighted arithmetic averaging operator:

$$\begin{aligned} r_{j}=\displaystyle \sum _{i=1}^{m} {w_{i}r_{ij}}, \quad \text {for all} ~j=1,2,\ldots ,n \end{aligned}$$
(7)

to aggregate all the attribute values in the jth column of R into the overall attribute value \(r_{j}\) corresponding to the alternative \(x_{j}\). After that, they rank all the alternatives \(x_{j}~ (j =1,2,\ldots ,n)\) and select the best one according to the values \(r_{j}~ (j =1,2,\ldots ,n)\).
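Continuing the array conventions of the sketches above, the aggregation and ranking steps (6)–(7) can be written as:

```python
def rank_alternatives(lam, R, w):
    """Eqs. (6)-(7): collective matrix, overall attribute values, ranking."""
    collective = np.tensordot(lam, R, axes=1)  # (m, n), Eq. (6)
    scores = w @ collective                    # (n,) overall values r_j, Eq. (7)
    order = np.argsort(-scores)                # alternatives, best first
    return scores, order
```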

3 The Proposed Method

As mentioned before, Xu and Cai solve model (4) using a genetic algorithm. The main disadvantage of a GA, however, is that there is no guarantee of finding the globally optimal solution. Moreover, the quality of the solution found by a GA depends heavily on the fitness function, which must therefore be specified carefully; there is no standard method for defining a fitness function, and this task is left entirely to the user. In addition, premature convergence may occur, in which case the diversity of the population, one of the main assets of a GA, is lost. Finally, note that the termination criteria of a GA are not standardized either; so far no single effective termination criterion has been identified. To address these issues, we recommend another strategy for solving model (4): we convert it into an LP model, which can be solved easily by any LP solver. To show that the nonlinear model can be linearized, let

$$\begin{aligned} \begin{array}{c} \alpha _{ij}^{k}=\frac{1}{2}\left( \left| r_{ij}^{k}-\displaystyle \sum _{t=1}^{s}{\lambda _{t}r_{ij}^{t}}\right| +r_{ij}^{k}-\displaystyle \sum _{t=1}^{s}{\lambda _{t}r_{ij}^{t}}\right) , \\ \beta _{ij}^{k}=\frac{1}{2}\left( \left| r_{ij}^{k}-\displaystyle \sum _{t=1}^{s}{\lambda _{t}r_{ij}^{t}}\right| -\left( r_{ij}^{k}-\displaystyle \sum _{t=1}^{s}{\lambda _{t}r_{ij}^{t}}\right) \right) \end{array} \end{aligned}$$
(8)

By construction, \(\alpha _{ij}^{k}\ge 0\), \(\beta _{ij}^{k}\ge 0\), \(\alpha _{ij}^{k}+\beta _{ij}^{k}=\big | r_{ij}^{k}-\sum _{t=1}^{s}{\lambda _{t}r_{ij}^{t}}\big |\), and \(\alpha _{ij}^{k}-\beta _{ij}^{k}=r_{ij}^{k}-\sum _{t=1}^{s}{\lambda _{t}r_{ij}^{t}}\); this is the standard device for rewriting absolute values in linear form. Model (4) is thus transformed into the following LP model:

$$\begin{aligned} \begin{array}{ll} \min F(\lambda )= &{} \displaystyle {\sum _{k=1}^{s}}~ {\sum _{j=1}^{n}}~ {\sum _{i=1}^{m}} w_{i}(\alpha _{ij}^{k}+\beta _{ij}^{k}) \\ s.t. &{}\displaystyle \sum _{k=1}^{s} {\lambda _{k}}=1 \\ &{} \alpha _{ij}^{k}-\beta _{ij}^{k}=r_{ij}^{k}-\displaystyle \sum _{t=1}^{s}{\lambda _{t}r_{ij}^{t}},~i=1,2,\ldots ,m,~ j=1,2,\ldots ,n,~ k=1,2,\ldots ,s \\ &{} \alpha _{ij}^{k}\ge 0, ~\beta _{ij}^{k} \ge 0, ~\lambda _{k}>0, ~k=1,2,\ldots ,s, ~i=1,2,\ldots ,m, ~j=1,2,\ldots ,n. \end{array} \end{aligned}$$
(9)

Model (9) is linear and can be solved easily. Moreover, at any optimal solution at most one of \(\alpha _{ij}^{k}\) and \(\beta _{ij}^{k}\) is positive (otherwise both could be reduced without violating the constraints), so \(\alpha _{ij}^{k}+\beta _{ij}^{k}\) equals the absolute deviation and models (4) and (9) share the same optimal weight vectors. Solving (9) with an LP solver therefore yields the exact, globally optimal solution.
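As an illustration, model (9) can be assembled and handed to an off-the-shelf LP solver. The sketch below uses scipy.optimize.linprog (our choice of solver; any LP code would do), continues the array conventions of the earlier sketches, stacks the variables as \((\lambda _{1},\ldots ,\lambda _{s},\alpha ,\beta )\), and replaces the strict constraint \(\lambda _{k}>0\) by a small lower bound \(\varepsilon \):

```python
import numpy as np
from scipy.optimize import linprog

def lp_weights(R, w, eps=1e-9):
    """Solve the linearized model (9) for the expert weights.

    R : (s, m, n) normalized decision matrices; w : (m,) attribute weights.
    Variable vector: lam (s entries), then alpha (s*m*n), then beta (s*m*n),
    with the alpha and beta blocks ordered by (k, i, j).
    """
    s, m, n = R.shape
    p = s * m * n
    wa = np.tile(np.repeat(w, n), s)           # objective weight w_i on alpha_ij^k
    c = np.concatenate([np.zeros(s), wa, wa])  # ... and the same on beta_ij^k
    A_eq = np.zeros((1 + p, s + 2 * p))
    b_eq = np.zeros(1 + p)
    A_eq[0, :s] = 1.0                          # sum_k lam_k = 1
    b_eq[0] = 1.0
    row = 1
    for k in range(s):
        for i in range(m):
            for j in range(n):
                A_eq[row, :s] = R[:, i, j]         # sum_t lam_t r_ij^t ...
                A_eq[row, s + row - 1] = 1.0       # ... + alpha_ij^k
                A_eq[row, s + p + row - 1] = -1.0  # ... - beta_ij^k
                b_eq[row] = R[k, i, j]             # = r_ij^k
                row += 1
    bounds = [(eps, None)] * s + [(0, None)] * (2 * p)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[:s]
```

Because any optimum of an LP is global, this route needs no fitness function, population, mutation rate, or ad hoc termination criterion; in exchange, model (9) introduces \(2smn\) auxiliary variables \(\alpha _{ij}^{k},\beta _{ij}^{k}\).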

4 Illustrative Example

We applied our method to the same GMADM problem as discussed in Xu and Cai (2012). An investment company is planning to develop a new model of car, and there are four feasible alternatives \(x_{j},~(j = 1,2,3,4)\). When making the decision, the following attributes are considered: \(u_{1}\): investment amount \((\$100{,}000)\); \(u_{2}\): expected net-profit amount \((\$100{,}000)\); \(u_{3}\): venture profit amount \((\$100{,}000)\); and \(u_{4}\): venture-loss amount \((\$100{,}000)\). Among these four attributes, \(u_{2}\) and \(u_{3}\) are of benefit type, while \(u_{1}\) and \(u_{4}\) are of cost type. The weight vector of the attributes \(u_{i},~(i =1,2,3,4)\) is \(w = (0.3,0.2,0.2,0.3)\). An expert group consisting of three experts \(e_{k}~(k = 1,2,3)\) is formed. These experts evaluate the alternatives \(x_{j}~(j = 1,2,3,4)\) with respect to the attributes \(u_{i}~(i =1,2,3,4)\) and construct the following three decision matrices (see Tables 1, 2, 3):

Table 1 Decision matrix \(A_{1}\)
Table 2 Decision matrix \(A_{2}\)
Table 3 Decision matrix \(A_{3}\)

By (1), we first normalize the decision matrices \(A_{k}~(k = 1,2,3)\) into the normalized decision matrices \(R_{k}~(k =1,2,3)\) (see Tables 4, 5, 6):

Table 4 Decision matrix \(R_{1}\)
Table 5 Decision matrix \(R_{2}\)
Table 6 Decision matrix \(R_{3}\)

Based on the normalized decision matrices \(R_{k}~(k = 1,2,3)\), the weight vector obtained by the XC-model is \(\lambda ^{*} = (0.445,0.318,0.237)\), while solving our proposed linear model gives the expert weights \(\lambda ^{*} = (0.7006,0.1437,0.1557)\). With both methods the first expert \(e_{1}\) receives the maximum weight, so \(e_{1}\) plays the most important role in the decision making process. In the XC-model, \(e_{2}\) and \(e_{3}\) are ranked second and third, respectively, whereas with the proposed method they are ranked third and second, respectively. We now rank the alternatives based on the derived weights.

The collective decision matrix obtained with our model is shown in Table 7. The overall attribute values \(r_{j}~ (j =1,2,3,4)\) are \(r_{1}=0.287, ~r_{2} =0.215, ~r_{3} = 0.299\) and \(r_{4} =0.199\), from which we get the ranking of the alternatives \(x_{j}~(j =1,2,3,4)\) as \(x_{3}\succ x_{1} \succ x_{2} \succ x_{4}\), the same as that of the XC-model.

Table 7 Collective decision matrix R with proposed model

5 Conclusion

In this paper we presented an improvement of the XC-model for the group multiple attribute decision making problem. The contribution of this paper is a model for deriving the expert weights via a linear optimization problem that can be solved easily; the proposed model provides an exact, globally optimal solution. An illustrative example was presented to compare our model with the XC-model.