Abstract
This paper presents an accelerated Promethee (Preference Ranking Organization METHod for Enrichment Evaluations) multi-criteria algorithm based on dimensionality reduction in large scale environments. In our context, the Promethee algorithm is used to select, from a large set of objects, one object or a small set of objects offering a good compromise between several qualitative and quantitative criteria. An exact solution can be obtained by applying the exact multi-criteria Promethee algorithm. The drawback of this exact algorithm, however, is its long execution time, due to the combinatorial aspect of the problem. The exact Promethee computing time is linked both to the dimension of the problem (number of qualitative and quantitative criteria) and to its size (number of objects). To address this drawback, we propose to accelerate the Promethee algorithm by combining it with an algorithm inherited from the Machine Learning (ML) field. The experiments demonstrate the potential of our approach to accelerate the response time under different scenarios.
1 Introduction
Several multi-criteria algorithms have been proposed in the literature [8, 9, 14]. They have been applied with success to many problems [2], such as scheduling [11] or profile recommendation [10].
Among these algorithms, we focus on Promethee (Preference Ranking Organization METHod for Enrichment Evaluations), a multi-criteria decision aid method [14]. Its benefit is that it makes it possible to select, from a large set of objects, one object or a small set of objects with a 'good' compromise between several qualitative and/or quantitative criteria. However, the main drawback of Promethee is the long execution time needed to solve a problem of large size (number of objects) and large dimension (number of qualitative and quantitative criteria). The exact Promethee computing time is linked to the problem dimension, the problem size, and the use of a sorting step in the overall process.
To improve the computing time, we propose in this paper a new accelerated Promethee algorithm based on a Machine Learning method. The returned solution is now an approximation of the exact one. One challenge is therefore to estimate the quality of the solution, that is, the gap between the exact and the approximated solutions.
The accelerated Promethee algorithm is based on dimensionality reduction. The principle consists in reducing the number of candidate objects by applying an algorithm inherited from the Machine Learning field; in our approach, the K-means algorithm is used to reduce the size of the problem. In a second step, we apply the exact Promethee algorithm on the reduced set of objects to get a solution in a reasonable computing time and with a certain quality.
The organization of the paper is as follows. Section 2 presents some related works. Section 3 is divided into Subsect. 3.1, which describes the exact Promethee multi-criteria algorithm, and Subsect. 3.2, which describes the K-means clustering algorithm. Section 4 describes our accelerated Promethee algorithm, based on a combination of the exact Promethee and K-means algorithms. Section 5 presents extensive experiments that validate our accelerated Promethee algorithm. Finally, a conclusion and some future works are given in Sect. 6.
2 Related Work
In this section, we briefly present some multi-criteria algorithms, then give a short overview of multi-criteria related problems and an example of a large scale multi-criteria study. We conclude the section with a positioning.
2.1 Short Overview of Multi-criteria Algorithms
In this subsection we present two multi-criteria algorithms: (i) TOPSIS [9] and (ii) Kung [8].
TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) is a multi-criteria algorithm that chooses, from a large set of objects, the solution with the shortest distance to the best solution and the furthest distance from the worst solution [12]. The use of the TOPSIS algorithm requires, for each criterion, a weight and the information of whether the criterion is to be minimized or maximized. To find a solution, the TOPSIS algorithm goes through the following seven steps:
-
Fill a Decision Matrix (DM) of n lines (number of objects) and c columns (number of criteria). Each value \(f_{ij}\) in DM represents the value of the object \(o_i\) in \(criterion_j\).
-
Calculate the Normalized Decision Matrix (NDM). The normalized value \(r_{ij}=f_{ij}/ \sqrt{\sum _{i=1}^{n} f_{ij}^2}\), for \(i=1, ..., n\) and \(j=1,\cdots , c\).
-
Calculate the Weighted Normalized Decision Matrix (WNDM). The weighted normalized value \(v_{ij}=r_{ij}\times w_{j}\), for \(i=1, ..., n\) and \(j=1,\cdots , c\).
where \(w_j\) is the weight of the \(j^{th}\) criterion and \(\sum _{j=1}^{c}w_j=1\).
-
Determine the best (\(A^{+}\)) and the worst (\(A^{-}\)) solutions.
$$ A^{+} = \{v_{1}^{+}, \ldots , v_{c}^{+}\} = \{(\max (v_{ij}|i=1,\ldots ,n)|j\in I'),\ (\min (v_{ij}|i=1,\ldots ,n)|j\in I'')\} $$
$$ A^{-} = \{v_{1}^{-}, \ldots , v_{c}^{-}\} = \{(\min (v_{ij}|i=1,\ldots ,n)|j\in I'),\ (\max (v_{ij}|i=1,\ldots ,n)|j\in I'')\} $$
\(I'\) is associated with the criteria having a positive impact, and \(I''\) with the criteria having a negative impact.
-
Calculate the Separation Measures (SM), using the n-dimensional Euclidean distance. The separation of each object from the best solution is given by the following \(SM_{i}^{+}\) formula:
\(SM_{i}^{+} = \sqrt{\sum _{j=1}^{c} (v_{ij}-v_{j}^{+})^2}\), for \(i = 1, \cdots , n\).
The separation of each object from the worst solution is given by the following \(SM_{i}^{-}\) formula:
\(SM_{i}^{-} = \sqrt{\sum _{j=1}^{c} (v_{ij}-v_{j}^{-})^2}\), for \(i = 1, \cdots , n\).
-
Calculate the Relative Closeness (RC) to the best solution. For each object \(o_i\), the \(RC_i\) is defined by the following \(RC_i\) formula:
\(RC_{i} = SM_{i}^{-}/ (SM_{i}^{+}+SM_{i}^{-})\), \(i=1, \cdots , n\).
-
Rank objects in descending order of their RC value.
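As an illustration of the seven steps above, here is a sketch in Go (the language used later for the paper's implementation). This is our own illustrative code, not the authors'; the function and parameter names are ours, with `benefit[j]` indicating whether criterion j is maximized:

```go
package main

import (
	"fmt"
	"math"
)

// topsis ranks n objects described by a decision matrix dm (n rows, c columns).
// weights are assumed to sum to 1; benefit[j] is true when criterion j is
// maximized (set I') and false when it is minimized (set I'').
func topsis(dm [][]float64, weights []float64, benefit []bool) []float64 {
	n, c := len(dm), len(dm[0])

	// Step 2: vector normalization, column by column.
	norm := make([]float64, c)
	for j := 0; j < c; j++ {
		for i := 0; i < n; i++ {
			norm[j] += dm[i][j] * dm[i][j]
		}
		norm[j] = math.Sqrt(norm[j])
	}

	// Step 3: weighted normalized decision matrix v[i][j].
	v := make([][]float64, n)
	for i := range v {
		v[i] = make([]float64, c)
		for j := 0; j < c; j++ {
			v[i][j] = weights[j] * dm[i][j] / norm[j]
		}
	}

	// Step 4: best (A+) and worst (A-) virtual solutions.
	best, worst := make([]float64, c), make([]float64, c)
	for j := 0; j < c; j++ {
		mn, mx := v[0][j], v[0][j]
		for i := 1; i < n; i++ {
			mn = math.Min(mn, v[i][j])
			mx = math.Max(mx, v[i][j])
		}
		if benefit[j] {
			best[j], worst[j] = mx, mn
		} else {
			best[j], worst[j] = mn, mx
		}
	}

	// Steps 5-6: Euclidean separations and relative closeness.
	rc := make([]float64, n)
	for i := 0; i < n; i++ {
		var sp, sm float64
		for j := 0; j < c; j++ {
			sp += (v[i][j] - best[j]) * (v[i][j] - best[j])
			sm += (v[i][j] - worst[j]) * (v[i][j] - worst[j])
		}
		rc[i] = math.Sqrt(sm) / (math.Sqrt(sp) + math.Sqrt(sm))
	}
	return rc // step 7: rank objects by decreasing relative closeness
}

func main() {
	rc := topsis([][]float64{{3, 2}, {1, 1}, {2, 3}}, []float64{0.5, 0.5}, []bool{true, true})
	fmt.Println(rc) // the second object is dominated and scores 0
}
```

Note that an object equal to the worst solution on every criterion gets \(SM^{-}=0\) and hence \(RC=0\), as the small example shows.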
The Kung algorithm [8] is another algorithm used in the multi-criteria decision context [5]. As presented in [5], the Kung algorithm first sorts objects in descending order according to the first criterion. The set of objects is then recursively halved into a Top half (T) and a Bottom half (B). Since T is better than B on the first criterion, B is checked for domination against T. The solutions of B that are not dominated by any solution of T are merged with the members of T to form a merged set of objects M. In the case of a minimization function, a solution \(x_1\) is better than another solution \(x_2\) if the value of \(x_1\) is smaller than the value of \(x_2\). The algorithm, called Front(P), can be summarized in two steps:
-
Sort objects according to the order of importance in the first criterion and rename the population of objects as P of size N.
-
Front(P): if |P| = 1, return P as the output of Front(P). Otherwise, compute T = Front(\(P^1 - P^{|P|/2}\)) and B = Front(\(P^{|P|/2+1} - P^{|P|}\)). If the \(i^{th}\) non-dominated solution of B is not dominated by any non-dominated solution of T, add it to the merged set M = T \(\cup \) \(\{i\}\). Finally, return M as the output of Front(P).
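As an illustration, the two steps of Front(P) can be sketched in Go for the minimization case described above. This is our own sketch, not the code from [5] or [8]:

```go
package main

import (
	"fmt"
	"sort"
)

// dominates reports whether p dominates q when every criterion is minimized:
// p is no worse than q everywhere and strictly better somewhere.
func dominates(p, q []float64) bool {
	strict := false
	for j := range p {
		if p[j] > q[j] {
			return false
		}
		if p[j] < q[j] {
			strict = true
		}
	}
	return strict
}

// front implements Kung's divide-and-conquer step on a population already
// sorted by the first criterion: recurse on both halves, then keep the
// members of the bottom half that no member of the top half dominates.
func front(p [][]float64) [][]float64 {
	if len(p) <= 1 {
		return p
	}
	t := front(p[:len(p)/2])
	b := front(p[len(p)/2:])
	m := append([][]float64(nil), t...) // copy so recursion levels stay independent
	for _, x := range b {
		dominated := false
		for _, y := range t {
			if dominates(y, x) {
				dominated = true
				break
			}
		}
		if !dominated {
			m = append(m, x)
		}
	}
	return m
}

func main() {
	pts := [][]float64{{4, 1}, {2, 3}, {1, 5}, {3, 4}}
	// Step 1: sort by the first criterion (ascending here, since we minimize).
	sort.Slice(pts, func(i, j int) bool { return pts[i][0] < pts[j][0] })
	fmt.Println(front(pts)) // {3,4} is dominated by {2,3} and is filtered out
}
```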
2.2 Short Overview of Multi-criteria Related Studies
In this subsection, we present some multi-criteria related studies [10, 11].
In [11], a new scheduling strategy based on a multi-criteria decision algorithm is proposed. The principle of this strategy consists in choosing, from the set of nodes able to execute a newly submitted container, the node with a good compromise between several criteria. According to the results presented in [11], this multi-criteria scheduling strategy improves performance compared to other scheduling strategies.
In [10], a multi-criteria algorithm is also used to recommend profiles. The goal of the work presented in [10] is to score a set of profiles, configured according to multiple criteria and saved in a database, according to their similarity to a new profile. In this study, the multi-criteria algorithm is used to compute a distance score between each profile saved in the database and the new profile.
2.3 Large Scale Multi-criteria Algorithm
In [4], two practical Large Scale Multi-Objectives Scheduling (LSMOS) strategies are proposed. The goal of the work proposed in [4] is to show how the computing time obtained with the Kung multi-criteria algorithm can be improved.
2.4 Positioning
The novelty of this paper is to propose a new accelerated Promethee multi-criteria algorithm adapted to large scale environments with thousands of objects. The benefit of our approach is that it yields a solution faster than the exact Promethee method. The difference between this study and the one presented in [4] is that ours is based on the Promethee multi-criteria algorithm, whereas [4] is based on the Kung multi-criteria algorithm.
3 Key Algorithms Used by the Accelerated Promethee Algorithm
In this section, we present the exact Promethee and K-means algorithms which will be used as key building blocks of our approach depicted in Sect. 4.
3.1 Exact Promethee Algorithm
The exact Promethee (Preference Ranking Organization METHod for Enrichment Evaluations) algorithm is a multi-criteria decision aid method [14]. It returns a good compromise between several qualitative and quantitative criteria. The Promethee algorithm compares objects pair by pair along the different criteria. Each criterion is evaluated according to one of two functions, minimization or maximization; that is, each criterion can be minimized or maximized. The use of the Promethee algorithm requires two pieces of information for each criterion: a weight and a preference function. In our context, we suppose that the weights of all criteria are the same and equal to 1. The preference function translates the difference between the evaluations obtained by two objects on a criterion into a preference degree ranging from 0 to 1. In [7], six basic preference functions have been proposed; in this work we use the usual preference function described below. To summarize, the Promethee algorithm is composed of four steps [15]:
-
1.
It computes for each pair of possible objects (\(Object_a\) and \(Object_b\)) and for each criterion, the value of the preference degree. Let \(g_j(Object_a)\) be the value of a criterion j for \(Object_a\). We note \(d_{j}(Object_a, Object_b)\) (\(d_j(Object_a, Object_b) = g_j(Object_a) - g_j(Object_b)\)), the difference of value of a criterion j for \(Object_a\) and \(Object_b\). \(P_j(Object_a, Object_b)\) is the value of the preference degree of a criterion j for \(Object_a\) and \(Object_b\). The preference function used to compute these preference degrees is defined such as:
$$ P_j(d_j)= \left\{ \begin{array}{rc} 0 &amp; d_j \le 0 \\ 1 &amp; d_j > 0 \end{array} \right. $$ -
2.
It computes for each pair of possible objects, a global preference index. Let C be the set of considered criteria (qualitative and/or quantitative) and \(w_j\) the weight associated to the criterion j. The global preference index for a pair of possible \(Object_a\) and \(Object_b\) is calculated as follows:
$$ \uppi (Object_a,Object_b)= \sum _{j \in C} w_j \times P_j(Object_a,Object_b)$$ -
3.
It computes for each possible object the positive outranking flow \(\upphi ^+(Object_a)\) and the negative outranking flow \(\upphi ^-(Object_a)\). Let A be the set of objects with size of n. The positive and negative outranking flow of objects are calculated by the following formula:
$$\upphi ^+(Object_a)=\frac{1}{n-1}\sum _{x \in A}\uppi (Object_a,x)$$
and
$$\upphi ^-(Object_a)=\frac{1}{n-1}\sum _{x \in A}\uppi (x,Object_a)$$ -
4.
It uses the outranking flows to establish a complete ranking between the objects. The ranking is based on the net outranking flow \(\upphi (Object_a)\), which is calculated as follows: \(\upphi (Object_a)= \upphi ^+(Object_a) - \upphi ^-(Object_a) \). In our work, the first objects returned by the Promethee algorithm are those with the highest net outranking values.
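As an illustration (not the authors' implementation), the four steps can be sketched in Go for the setting assumed in this section: the usual preference function, unit weights, and all criteria maximized:

```go
package main

import "fmt"

// netFlows implements the four Promethee steps above with the usual preference
// function and unit weights. obj[i][j] is the value of criterion j for object
// i; every criterion is maximized in this sketch.
func netFlows(obj [][]float64) []float64 {
	n, c := len(obj), len(obj[0])
	phi := make([]float64, n)
	for a := 0; a < n; a++ {
		var plus, minus float64 // sums of pi(a,x) and pi(x,a) over all x
		for b := 0; b < n; b++ {
			if a == b {
				continue
			}
			// Steps 1-2: preference degrees and global preference indices.
			for j := 0; j < c; j++ {
				if obj[a][j] > obj[b][j] { // usual function: P_j = 1 iff d_j > 0
					plus++
				} else if obj[b][j] > obj[a][j] {
					minus++
				}
			}
		}
		// Step 3: positive and negative flows; step 4: net outranking flow.
		phi[a] = (plus - minus) / float64(n-1)
	}
	return phi
}

func main() {
	phi := netFlows([][]float64{{3, 2}, {1, 1}, {2, 3}})
	fmt.Println(phi) // [1 -2 1]: the dominated middle object ranks last
}
```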
Example of How the Exact Promethee Algorithm Works. Assume that at time \(t_0\), three objects exist (\(Object_a\), \(Object_b\), \(Object_c\)) with different criteria, as presented in Table 1. In our context, the Promethee algorithm is used to select, from the set of objects, the object with a good compromise between several criteria. As explained before, the exact Promethee algorithm starts by computing, for each pair of objects, a difference value on each criterion (\(d_x(Object_i,Object_j)\)) and the preference degree (\(P_x(Object_i,Object_j)\)). Then, the system calculates the global preference index \(\uppi (Object_i,Object_j)\). As shown in Table 2, for the first pair of objects (\(Object_a\), \(Object_b\)), the difference value for criterion 1 is d(\(Object_a,Object_b\)) = 3 − 1 = 2. As this difference is positive, with the usual preference function the preference degree equals 1 (as presented in Table 3).
As in our work we suppose that the weights of all criteria equal 1, the global preference index of the first pair of objects is (\(Object_a,Object_b\)) = \(2 \times \textit{1} + 1 \times \textit{1} = 3\) (as presented in Table 3). Finally, to rank the objects and select the one with the maximum rank, the system calculates the positive and negative outranking flows and the net outranking flow. Table 4 shows how these parameters are calculated. For example, for \(Object_a\), the positive outranking flow (\(\upphi ^+\)) is \(\frac{1}{2}(3+2)=2.5\), the negative outranking flow (\(\upphi ^-\)) is \(\frac{1}{2}(0+0)=0\), and the net outranking flow (\(\upphi = \upphi ^+ - \upphi ^-\)) is 2.5 − 0 = 2.5.
Using the Promethee algorithm, \(Object_a\) is selected first because it has the maximum net outranking flow. This result matches our expectation: as presented in Table 1, \(Object_a\) has the highest value on each criterion.
3.2 K-means Clustering Algorithm
The K-means clustering algorithm is an unsupervised learning algorithm [13]. Its goal is to build clusters of data, the number of clusters being represented by the variable K [13]. The algorithm works iteratively to assign each data point to one of the K clusters, based on feature similarity. The principle of the K-means algorithm is as follows:
-
1.
Randomly choose k objects as center objects;
-
2.
Calculate the distance between each object and each cluster center. Then, assign each object to the cluster with the nearest center;
-
3.
Recalculate the new center object of each cluster;
-
4.
Repeat steps 2 and 3 until the clusters are stable.
Example of How the K-means Algorithm Works. Let us consider the set of objects depicted in Fig. 1, taken from [13], and suppose that the number of clusters we want is 3 (k = 3). We arbitrarily choose three objects, each representing an initial cluster center; in Fig. 1(a), each initial center is marked by the symbol (+). Each object from the set is then assigned to the cluster whose center it is nearest to. Next, the cluster centers are updated: each cluster recomputes its center based on its current objects. Using the new cluster centers, objects are redistributed to the clusters based on the nearest center (Fig. 1(b)). This process of iteratively reassigning objects to clusters to improve the partitioning is referred to as iterative relocation (Fig. 1(c)). Eventually, no reassignment occurs in any cluster and the process terminates, returning the resulting clusters.
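The clustering steps above can be sketched in Go. This is our own minimal sketch; for determinism, the first k points serve as initial centers, whereas step 1 of the algorithm would normally pick them at random:

```go
package main

import (
	"fmt"
	"math"
)

// kMeans clusters points in R^d into k clusters and returns, for each point,
// the index of its cluster. Initial centers are the first k points (a
// simplification of the random choice in step 1).
func kMeans(points [][]float64, k, maxIter int) []int {
	centers := make([][]float64, k)
	for i := range centers {
		centers[i] = append([]float64(nil), points[i]...)
	}
	assign := make([]int, len(points))
	for i := range assign {
		assign[i] = -1
	}
	for iter := 0; iter < maxIter; iter++ {
		// Step 2: assign every point to its nearest center.
		changed := false
		for i, p := range points {
			best, bestD := 0, math.MaxFloat64
			for c, ctr := range centers {
				var d float64
				for j := range p {
					d += (p[j] - ctr[j]) * (p[j] - ctr[j])
				}
				if d < bestD {
					best, bestD = c, d
				}
			}
			if assign[i] != best {
				assign[i], changed = best, true
			}
		}
		// Step 4: stop when no point changes cluster (stability).
		if !changed {
			break
		}
		// Step 3: recompute each center as the mean of its cluster.
		for c := range centers {
			sum, count := make([]float64, len(points[0])), 0
			for i, p := range points {
				if assign[i] == c {
					for j := range p {
						sum[j] += p[j]
					}
					count++
				}
			}
			if count > 0 {
				for j := range sum {
					centers[c][j] = sum[j] / float64(count)
				}
			}
		}
	}
	return assign
}

func main() {
	assign := kMeans([][]float64{{0, 0}, {0, 1}, {10, 10}, {10, 11}}, 2, 100)
	fmt.Println(assign) // [0 0 1 1]: two well-separated clusters are recovered
}
```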
4 Accelerated Promethee Algorithm Based on K-means
The novelty of this paper is the introduction of a new accelerated Promethee algorithm that reduces the exact Promethee computing time. The idea is to combine the exact Promethee with the K-means algorithm to find quickly, from a large set of objects, one object or a small set of objects with a good compromise between multiple criteria.
As shown in Fig. 2 and Algorithm 1, the proposed accelerated Promethee algorithm starts by applying the K-means algorithm on the whole set of objects to form K clusters of objects with similar characteristics. K can be chosen equal to \(2 \times \log _2(n)\), with n being the size of the problem, i.e. the number of objects. This step reduces the state space of the problem. We then choose a set of objects in each cluster, for instance randomly; it is also possible to choose objects at a certain distance from the centroid, or by applying the well-known K-Nearest Neighbors (KNN) algorithm [3]. The set of resulting objects is called E; the objective is to keep enough information to preserve the accuracy of the result. Finally, the exact Promethee algorithm is applied on the set E in order to obtain a result set of objects R. The objects returned by our accelerated Promethee algorithm are ranked according to their net outranking score.
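The overall pipeline can be sketched in Go as follows. The clustering and ranking stages are passed in as functions, so the sketch stays agnostic about the concrete K-means and Promethee implementations; all names are ours, and the stand-ins used in `main` are hypothetical placeholders, not the paper's code:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
	"sort"
)

// acceleratedSelect wires the two stages together: a clustering function
// reduces the candidate set, then an exact ranking function scores the
// survivors, which are returned best-first.
func acceleratedSelect(
	objects [][]float64,
	cluster func(points [][]float64, k int) []int, // e.g. k-means labels
	rank func(points [][]float64) []float64, // e.g. exact Promethee net flows
	perClusterFraction float64,
) [][]float64 {
	n := len(objects)
	k := int(2 * math.Log2(float64(n))) // K = 2*log2(n), as suggested above
	assign := cluster(objects, k)

	// Group object indices by cluster, then keep a random subset of each
	// cluster (points near the centroid, or a KNN choice, also work).
	byCluster := make(map[int][]int)
	for i, c := range assign {
		byCluster[c] = append(byCluster[c], i)
	}
	var sample [][]float64
	for _, idx := range byCluster {
		rand.Shuffle(len(idx), func(a, b int) { idx[a], idx[b] = idx[b], idx[a] })
		keep := int(math.Ceil(perClusterFraction * float64(len(idx))))
		for _, i := range idx[:keep] {
			sample = append(sample, objects[i])
		}
	}

	// Exact stage: rank the reduced set E and return it best-first (R).
	scores := rank(sample)
	order := make([]int, len(sample))
	for i := range order {
		order[i] = i
	}
	sort.Slice(order, func(a, b int) bool { return scores[order[a]] > scores[order[b]] })
	ranked := make([][]float64, len(sample))
	for r, i := range order {
		ranked[r] = sample[i]
	}
	return ranked
}

func main() {
	objects := [][]float64{{1, 1}, {5, 5}, {2, 2}, {4, 4}}
	twoGroups := func(points [][]float64, k int) []int { // stand-in for k-means
		labels := make([]int, len(points))
		for i := range labels {
			labels[i] = i % 2
		}
		return labels
	}
	sum := func(points [][]float64) []float64 { // stand-in for Promethee scores
		s := make([]float64, len(points))
		for i, p := range points {
			for _, v := range p {
				s[i] += v
			}
		}
		return s
	}
	r := acceleratedSelect(objects, twoGroups, sum, 1.0)
	fmt.Println(r[0]) // [5 5]: the best object survives sampling and ranks first
}
```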
5 Experimental Evaluation
In this section, we present an experimental evaluation that demonstrates the potential of our work. The following experiments are run on an Intel Xeon machine booked from the Grid5000 platform [6], an experimental large-scale testbed for distributed computing in France. Our approach is implemented in the Go programming language, an open source language that makes it easy to build simple, reliable, and efficient software.
In this experimental evaluation, we validate our approach with 6, 8, 10 and 12 criteria. Each criterion has a random value between 0 and 100. The goal of our experiments is not only to measure the computation time of our algorithm but also to assess the quality of its approximated solution.
The focus of our paper is on providing a methodology that combines an exact decision method, a clustering method and a dimensionality reduction method. Finding a realistic workload in a context of many criteria and many objects is an issue in itself. In the following, the k-means algorithm is used with \(k=2\times \log _2(n)\) (n being the number of input objects), and the number of objects returned by k-means is equal to \(\frac{n}{2}\).
5.1 Comparison Between the Exact and the Accelerated Promethee Algorithms
Figure 3 (respectively Figs. 4, 5 and 6) shows a comparison between the computing times obtained with the exact and the accelerated Promethee algorithms using 6 (respectively 8, 10 and 12) criteria. We note, from Figs. 3, 4, 5 and 6, that the computing time of the accelerated Promethee is very small compared to that of the exact Promethee algorithm. Table 5 shows the speedup obtained with the accelerated Promethee algorithm over the exact one. The speedup increases with the problem size: the larger the problem, the more important the speedup. This is a nice property of our implementation.
5.2 Metrics of Performance
We now propose to compare the exact ranking returned by the Promethee algorithm with the approximated ranking returned by our accelerated Promethee algorithm. For this comparison, we use the Sorensen-Dice index [1]. Note that the comparison is done between the first 50 and 100 objects returned by the accelerated and the exact Promethee algorithms, and that each time the Promethee algorithm is used it returns objects ranked according to their net outranking flow. The Sorensen-Dice index [1] is a statistic used for comparing the similarity of sample sets. It is defined as
$$ QS = \frac{2|A \cap B|}{|A| + |B|} $$
where \(A =(a_{1},a_{2},\ldots ,a_{n})\) and \(B =(b_{1},b_{2},\ldots ,b_{n})\) are two vectors composed from sets of objects.
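Treating the two rankings as sets of object identifiers, the Sorensen-Dice index \(2|A \cap B| / (|A| + |B|)\) can be computed as in this short sketch (our code, for illustration):

```go
package main

import "fmt"

// dice computes the Sorensen-Dice index 2|A∩B| / (|A| + |B|) between two
// rankings treated as sets of object identifiers.
func dice(a, b []int) float64 {
	inA := make(map[int]bool, len(a))
	for _, x := range a {
		inA[x] = true
	}
	common := 0
	for _, x := range b {
		if inA[x] {
			common++
		}
	}
	return 2 * float64(common) / float64(len(a) + len(b))
}

func main() {
	exact := []int{1, 2, 3, 4}
	approx := []int{3, 4, 5, 6}
	fmt.Println(dice(exact, approx)) // 0.5: two of the four objects agree
}
```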
Table 6 (resp. Table 7) presents the Sorensen-Dice indexes obtained by comparing the exact and the accelerated Promethee algorithms with 6, 8, 10 and 12 criteria, over the first 50 objects (resp. 100 objects) returned by each algorithm. The Sorensen-Dice index values in Tables 6 and 7 vary between 0.46 and 0.61. These results confirm the potential of our approach to approximate the exact Promethee result.
5.3 General Discussion
In our approach, each criterion may have a weight depending on user needs. In this series of experiments, we do not investigate the impact of the criteria weights on the performance and quality of the result. The originality of Promethee lies in mixing qualitative and quantitative criteria. In this paper, we prefer to focus on the originality of our work, which is the coupling between methods inherited from the field of combinatorial optimization and from the field of machine learning. The context is to find a solution, with a good response time, for problems of large size (number of objects) and large dimension (number of criteria).
6 Conclusion
We presented in this paper an accelerated Promethee algorithm adapted to large scale environments, which reduces the exact Promethee computing time. The proposed algorithm relies on dimensionality reduction to choose an object or a small set of objects with a good compromise in a multi-criteria context. The principle is to use the K-means algorithm to reduce the number of candidate objects, then to apply the exact Promethee algorithm on the reduced set. We discussed the conditions to obtain a quick answer versus a precise answer: a balance between the distribution law of the data (i.e. its shape) and the expected precision should be identified, and we have shown the impact of such considerations on the obtained results in practical cases. As a perspective, we plan to investigate the quality metrics in more depth, to observe the solution from different points of view regarding the data.
References
Jackson, D.A., Somers, K., Harvey, H.H.: Similarity coefficients: measures of co-occurrence and association or simply measures of occurrence? Am. Nat. 133(03), 436–453 (1989)
Behzadian, M., Kazemzadeh, R., Albadvi, A., Aghdasi, M.: Promethee: a comprehensive literature review on methodologies and applications. Eur. J. Oper. Res. 200(1), 198–215 (2010)
Cover, T., Hart, P.: Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 13(1), 21–27 (1967)
Cérin, C., Menouer, T., Lebbah, M.: Accelerating the computation of multi-objectives scheduling solutions for cloud computing. In: 2018 IEEE 8th International Symposium on Cloud and Service Computing (SC2), pp. 49–56, November 2018
Ding, L., Zeng, S., Kang, L.: A fast algorithm on finding the non-dominated set in multi-objective optimization. In: The 2003 Congress on Evolutionary Computation 2003, CEC 2003, vol. 4, pp. 2565–2571, December 2003
Grid5000: https://www.grid5000.fr/
Brans, J.P., Mareschal, B.: Promethee Methods. In: Figueira, J., Greco, S., Ehrogott, M. (eds.) Multiple Criteria Decision Analysis: State of the Art Surveys. International Series in Operations Research & Management Science, vol. 78, pp. 163–186. Springer, New York (2005). https://doi.org/10.1007/0-387-23081-5_5
Kung, H.T., Luccio, F., Preparata, F.P.: On finding the maxima of a set of vectors. J. ACM 22(4), 469–476 (1975)
Lai, Y.-J., Liu, T.-Y., Hwang, C.-L.: Topsis for MODM. Eur. J. Oper. Res. 76(3), 486–500 (1994). Facility Location Models for Distribution Planning
Menouer, T., Darmon, P.: New profile recommendation approach based on multi-criteria algorithm. In: 2018 IEEE International Conference on Big Data (Big Data), pp. 4961–4966, December 2018
Menouer, T., Darmon, P.: New scheduling strategy based on multi-criteria decision algorithm. In: 2019 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), pp. 101–107, February 2019
Opricovic, S., Tzeng, G.-H.: Compromise solution by MCDM methods: a comparative analysis of VIKOR and TOPSIS. Eur. J. Oper. Res. 156(2), 445–455 (2004)
Han, J., Pei, J., Kamber, M.: Data Mining: Concepts and Techniques, 3rd edn. Elsevier, Amsterdam (2011)
Deshmukh, S.C.: Preference ranking organization method of enrichment evaluation (promethee). Int. J. Eng. Sci. Invent. 2, 28–34 (2013)
Taillandier, P., Stinckwich, S.: Using the Promethee multi-criteria decision making method to define new exploration strategies for rescue robots. In: International Symposium on Safety, Security, and Rescue Robotics (2011)
Acknowledgments
We thank the Grid5000 team for their help to use the testbed. Grid’5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several universities as well as other organizations.
© 2020 Springer Nature Switzerland AG
Menouer, T., Cérin, C., Darmon, P. (2020). Accelerated Promethee Algorithm Based on Dimensionality Reduction. In: Hsu, CH., Kallel, S., Lan, KC., Zheng, Z. (eds) Internet of Vehicles. Technologies and Services Toward Smart Cities. IOV 2019. Lecture Notes in Computer Science(), vol 11894. Springer, Cham. https://doi.org/10.1007/978-3-030-38651-1_17
Print ISBN: 978-3-030-38650-4
Online ISBN: 978-3-030-38651-1